Merge pull request #39838 from thaJeztah/bump_gcplogs

Bump gcplogs and dependencies to v0.44.3
Tibor Vass 2020-10-02 06:30:48 -07:00 committed by GitHub
commit 1a5b7f50bc
157 changed files with 17200 additions and 1549 deletions


@@ -115,10 +115,11 @@ github.com/bsphere/le_go 7a984a84b5492ae539b79b62fb4a
# gcplogs deps
golang.org/x/oauth2 bf48bf16ab8d622ce64ec6ce98d2c98f916b6303
google.golang.org/api de943baf05a022a8f921b544b7827bacaba1aed5
go.opencensus.io c3ed530f775d85e577ca652cb052a52c078aad26 # v0.11.0
cloud.google.com/go 0fd7230b2a7505833d5f69b75cbd6c9582401479 # v0.23.0
github.com/googleapis/gax-go 317e0006254c44a0ac427cc52a0e083ff0b9622f # v2.0.0
google.golang.org/api dec2ee309f5b09fc59bc40676447c15736284d78 # v0.8.0
github.com/golang/groupcache 869f871628b6baa9cfbc11732cdf6546b17c1298
go.opencensus.io d835ff86be02193d324330acdb7d65546b05f814 # v0.22.3
cloud.google.com/go ceeb313ad77b789a7fa5287b36a1d127b69b7093 # v0.44.3
github.com/googleapis/gax-go bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2 # v2.0.5
google.golang.org/genproto 3f1135a288c9a07e340ae8ba4cc6c7065a3160e8
# containerd

vendor/cloud.google.com/go/README.md generated vendored (617 changed lines)

@@ -8,7 +8,7 @@ Go packages for [Google Cloud Platform](https://cloud.google.com) services.
import "cloud.google.com/go"
```
To install the packages on your system,
To install the packages on your system, *do not clone the repo*. Instead use
```
$ go get -u cloud.google.com/go/...
@@ -19,263 +19,44 @@ make backwards-incompatible changes.
**NOTE:** Github repo is a mirror of [https://code.googlesource.com/gocloud](https://code.googlesource.com/gocloud).
* [News](#news)
* [Supported APIs](#supported-apis)
* [Go Versions Supported](#go-versions-supported)
* [Authorization](#authorization)
* [Cloud Datastore](#cloud-datastore-)
* [Cloud Storage](#cloud-storage-)
* [Cloud Pub/Sub](#cloud-pub-sub-)
* [Cloud BigQuery](#cloud-bigquery-)
* [Stackdriver Logging](#stackdriver-logging-)
* [Cloud Spanner](#cloud-spanner-)
## News
_May 18, 2018_
*v0.23.0*
- bigquery: Add DDL stats to query statistics.
- bigtable:
- cbt: Add cells-per-column limit for row lookup.
- cbt: Make it possible to combine read filters.
- dlp: v2beta2 client removed. Use the v2 client instead.
- firestore, spanner: Fix compilation errors due to protobuf changes.
_May 8, 2018_
*v0.22.0*
- bigtable:
- cbt: Support cells per column limit for row read.
- bttest: Correctly handle empty RowSet.
- Fix ReadModifyWrite operation in emulator.
- Fix API path in GetCluster.
- bigquery:
- BEHAVIOR CHANGE: Retry on 503 status code.
- Add dataset.DeleteWithContents.
- Add SchemaUpdateOptions for query jobs.
- Add Timeline to QueryStatistics.
- Add more stats to ExplainQueryStage.
- Support Parquet data format.
- datastore:
- Support omitempty for times.
- dlp:
- **BREAKING CHANGE:** Remove v1beta1 client. Please migrate to the v2 client,
which is now out of beta.
- Add v2 client.
- firestore:
- BEHAVIOR CHANGE: Treat set({}, MergeAll) as valid.
- iam:
- Support JWT signing via SignJwt callopt.
- profiler:
- BEHAVIOR CHANGE: PollForSerialOutput returns an error when context.Done.
- BEHAVIOR CHANGE: Increase the initial backoff to 1 minute.
- Avoid returning empty serial port output.
- pubsub:
- BEHAVIOR CHANGE: Don't backoff during next retryable error once stream is healthy.
- BEHAVIOR CHANGE: Don't backoff on EOF.
- pstest: Support Acknowledge and ModifyAckDeadline RPCs.
- redis:
- Add v1 beta Redis client.
- spanner:
- Support SessionLabels.
- speech:
- Add api v1 beta1 client.
- storage:
- BEHAVIOR CHANGE: Retry reads when retryable error occurs.
- Fix delete of object in requester-pays bucket.
- Support KMS integration.
_April 9, 2018_
*v0.21.0*
- bigquery:
- Add OpenCensus tracing.
- firestore:
- **BREAKING CHANGE:** If a document does not exist, return a DocumentSnapshot
whose Exists method returns false. DocumentRef.Get and Transaction.Get
return the non-nil DocumentSnapshot in addition to a NotFound error.
**DocumentRef.GetAll and Transaction.GetAll return a non-nil
DocumentSnapshot instead of nil.**
- Add DocumentIterator.Stop. **Call Stop whenever you are done with a
DocumentIterator.**
- Added Query.Snapshots and DocumentRef.Snapshots, which provide realtime
notification of updates. See https://cloud.google.com/firestore/docs/query-data/listen.
- Canceling an RPC now always returns a grpc.Status with codes.Canceled.
- spanner:
- Add `CommitTimestamp`, which supports inserting the commit timestamp of a
transaction into a column.
_March 22, 2018_
*v0.20.0*
- bigquery: Support SchemaUpdateOptions for load jobs.
- bigtable:
- Add SampleRowKeys.
- cbt: Support union, intersection GCPolicy.
- Retry admin RPCS.
- Add trace spans to retries.
- datastore: Add OpenCensus tracing.
- firestore:
- Fix queries involving Null and NaN.
- Allow Timestamp protobuffers for time values.
- logging: Add a WriteTimeout option.
- spanner: Support Batch API.
- storage: Add OpenCensus tracing.
_February 26, 2018_
*v0.19.0*
- bigquery:
- Support customer-managed encryption keys.
- bigtable:
- Improved emulator support.
- Support GetCluster.
- datastore:
- Add general mutations.
- Support pointer struct fields.
- Support transaction options.
- firestore:
- Add Transaction.GetAll.
- Support document cursors.
- logging:
- Support concurrent RPCs to the service.
- Support per-entry resources.
- profiler:
- Add config options to disable heap and thread profiling.
- Read the project ID from $GOOGLE_CLOUD_PROJECT when it's set.
- pubsub:
- BEHAVIOR CHANGE: Release flow control after ack/nack (instead of after the
callback returns).
- Add SubscriptionInProject.
- Add OpenCensus instrumentation for streaming pull.
- storage:
- Support CORS.
_January 18, 2018_
*v0.18.0*
- bigquery:
- Marked stable.
- Schema inference of nullable fields supported.
- Added TimePartitioning to QueryConfig.
- firestore: Data provided to DocumentRef.Set with a Merge option can contain
Delete sentinels.
- logging: Clients can accept parent resources other than projects.
- pubsub:
- pubsub/pstest: A lightweight fake for pubsub. Experimental; feedback welcome.
- Support updating more subscription metadata: AckDeadline,
RetainAckedMessages and RetentionDuration.
- oslogin/apiv1beta: New client for the Cloud OS Login API.
- rpcreplay: A package for recording and replaying gRPC traffic.
- spanner:
- Add a ReadWithOptions that supports a row limit, as well as an index.
- Support query plan and execution statistics.
- Added [OpenCensus](http://opencensus.io) support.
- storage: Clarify checksum validation for gzipped files (it is not validated
when the file is served uncompressed).
_December 11, 2017_
*v0.17.0*
- firestore BREAKING CHANGES:
- Remove UpdateMap and UpdateStruct; rename UpdatePaths to Update.
Change
`docref.UpdateMap(ctx, map[string]interface{}{"a.b": 1})`
to
`docref.Update(ctx, []firestore.Update{{Path: "a.b", Value: 1}})`
Change
`docref.UpdateStruct(ctx, []string{"Field"}, aStruct)`
to
`docref.Update(ctx, []firestore.Update{{Path: "Field", Value: aStruct.Field}})`
- Rename MergePaths to Merge; require args to be FieldPaths
- A value stored as an integer can be read into a floating-point field, and vice versa.
- bigtable/cmd/cbt:
- Support deleting a column.
- Add regex option for row read.
- spanner: Mark stable.
- storage:
- Add Reader.ContentEncoding method.
- Fix handling of SignedURL headers.
- bigquery:
- If Uploader.Put is called with no rows, it returns nil without making a
call.
- Schema inference supports the "nullable" option in struct tags for
non-required fields.
- TimePartitioning supports "Field".
[Older news](https://github.com/GoogleCloudPlatform/google-cloud-go/blob/master/old-news.md)
## Supported APIs
Google API | Status | Package
---------------------------------|--------------|-----------------------------------------------------------
[BigQuery][cloud-bigquery] | stable | [`cloud.google.com/go/bigquery`][cloud-bigquery-ref]
[Bigtable][cloud-bigtable] | stable | [`cloud.google.com/go/bigtable`][cloud-bigtable-ref]
[Container][cloud-container] | alpha | [`cloud.google.com/go/container/apiv1`][cloud-container-ref]
[Data Loss Prevention][cloud-dlp]| alpha | [`cloud.google.com/go/dlp/apiv2beta1`][cloud-dlp-ref]
[Datastore][cloud-datastore] | stable | [`cloud.google.com/go/datastore`][cloud-datastore-ref]
[Debugger][cloud-debugger] | alpha | [`cloud.google.com/go/debugger/apiv2`][cloud-debugger-ref]
[ErrorReporting][cloud-errors] | alpha | [`cloud.google.com/go/errorreporting`][cloud-errors-ref]
[Firestore][cloud-firestore] | beta | [`cloud.google.com/go/firestore`][cloud-firestore-ref]
[Language][cloud-language] | stable | [`cloud.google.com/go/language/apiv1`][cloud-language-ref]
[Logging][cloud-logging] | stable | [`cloud.google.com/go/logging`][cloud-logging-ref]
[Monitoring][cloud-monitoring] | beta | [`cloud.google.com/go/monitoring/apiv3`][cloud-monitoring-ref]
[OS Login][cloud-oslogin] | alpha | [`cloud.google.com/compute/docs/oslogin/rest`][cloud-oslogin-ref]
[Pub/Sub][cloud-pubsub] | beta | [`cloud.google.com/go/pubsub`][cloud-pubsub-ref]
[Spanner][cloud-spanner] | stable | [`cloud.google.com/go/spanner`][cloud-spanner-ref]
[Speech][cloud-speech] | stable | [`cloud.google.com/go/speech/apiv1`][cloud-speech-ref]
[Storage][cloud-storage] | stable | [`cloud.google.com/go/storage`][cloud-storage-ref]
[Translation][cloud-translation] | stable | [`cloud.google.com/go/translate`][cloud-translation-ref]
[Video Intelligence][cloud-video]| beta | [`cloud.google.com/go/videointelligence/apiv1beta1`][cloud-video-ref]
[Vision][cloud-vision] | stable | [`cloud.google.com/go/vision/apiv1`][cloud-vision-ref]
Google API | Status | Package
------------------------------------------------|--------------|-----------------------------------------------------------
[Asset][cloud-asset] | alpha | [`cloud.google.com/go/asset/v1beta`][cloud-asset-ref]
[BigQuery][cloud-bigquery] | stable | [`cloud.google.com/go/bigquery`][cloud-bigquery-ref]
[Bigtable][cloud-bigtable] | stable | [`cloud.google.com/go/bigtable`][cloud-bigtable-ref]
[Cloudtasks][cloud-tasks] | stable | [`cloud.google.com/go/cloudtasks/apiv2`][cloud-tasks-ref]
[Container][cloud-container] | stable | [`cloud.google.com/go/container/apiv1`][cloud-container-ref]
[ContainerAnalysis][cloud-containeranalysis] | beta | [`cloud.google.com/go/containeranalysis/apiv1beta1`][cloud-containeranalysis-ref]
[Dataproc][cloud-dataproc] | stable | [`cloud.google.com/go/dataproc/apiv1`][cloud-dataproc-ref]
[Datastore][cloud-datastore] | stable | [`cloud.google.com/go/datastore`][cloud-datastore-ref]
[Debugger][cloud-debugger] | alpha | [`cloud.google.com/go/debugger/apiv2`][cloud-debugger-ref]
[Dialogflow][cloud-dialogflow] | alpha | [`cloud.google.com/go/dialogflow/apiv2`][cloud-dialogflow-ref]
[Data Loss Prevention][cloud-dlp] | alpha | [`cloud.google.com/go/dlp/apiv2`][cloud-dlp-ref]
[ErrorReporting][cloud-errors] | alpha | [`cloud.google.com/go/errorreporting`][cloud-errors-ref]
[Firestore][cloud-firestore] | stable | [`cloud.google.com/go/firestore`][cloud-firestore-ref]
[IAM][cloud-iam] | stable | [`cloud.google.com/go/iam`][cloud-iam-ref]
[IoT][cloud-iot] | alpha | [`cloud.google.com/iot/apiv1`][cloud-iot-ref]
[KMS][cloud-kms] | stable | [`cloud.google.com/go/kms`][cloud-kms-ref]
[Natural Language][cloud-natural-language] | stable | [`cloud.google.com/go/language/apiv1`][cloud-natural-language-ref]
[Logging][cloud-logging] | stable | [`cloud.google.com/go/logging`][cloud-logging-ref]
[Monitoring][cloud-monitoring] | alpha | [`cloud.google.com/go/monitoring/apiv3`][cloud-monitoring-ref]
[OS Login][cloud-oslogin] | alpha | [`cloud.google.com/go/oslogin/apiv1`][cloud-oslogin-ref]
[Pub/Sub][cloud-pubsub] | stable | [`cloud.google.com/go/pubsub`][cloud-pubsub-ref]
[Phishing Protection][cloud-phishingprotection] | alpha | [`cloud.google.com/go/phishingprotection/apiv1beta1`][cloud-phishingprotection-ref]
[reCAPTCHA Enterprise][cloud-recaptcha] | alpha | [`cloud.google.com/go/recaptchaenterprise/apiv1beta1`][cloud-recaptcha-ref]
[Memorystore][cloud-memorystore] | alpha | [`cloud.google.com/go/redis/apiv1`][cloud-memorystore-ref]
[Scheduler][cloud-scheduler] | stable | [`cloud.google.com/go/scheduler/apiv1`][cloud-scheduler-ref]
[Spanner][cloud-spanner] | stable | [`cloud.google.com/go/spanner`][cloud-spanner-ref]
[Speech][cloud-speech] | stable | [`cloud.google.com/go/speech/apiv1`][cloud-speech-ref]
[Storage][cloud-storage] | stable | [`cloud.google.com/go/storage`][cloud-storage-ref]
[Talent][cloud-talent] | alpha | [`cloud.google.com/go/talent/apiv4beta1`][cloud-talent-ref]
[Text To Speech][cloud-texttospeech] | alpha | [`cloud.google.com/go/texttospeech/apiv1`][cloud-texttospeech-ref]
[Trace][cloud-trace] | alpha | [`cloud.google.com/go/trace/apiv2`][cloud-trace-ref]
[Translate][cloud-translate] | stable | [`cloud.google.com/go/translate`][cloud-translate-ref]
[Video Intelligence][cloud-video] | alpha | [`cloud.google.com/go/videointelligence/apiv1beta1`][cloud-video-ref]
[Vision][cloud-vision] | stable | [`cloud.google.com/go/vision/apiv1`][cloud-vision-ref]
> **Alpha status**: the API is still being actively developed. As a
> result, it might change in backward-incompatible ways and is not recommended
@@ -288,23 +69,16 @@ Google API | Status | Package
> **Stable status**: the API is mature and ready for production use. We will
> continue addressing bugs and feature requests.
Documentation and examples are available at
https://godoc.org/cloud.google.com/go
Visit or join the
[google-api-go-announce group](https://groups.google.com/forum/#!forum/google-api-go-announce)
for updates on these packages.
Documentation and examples are available at [godoc.org/cloud.google.com/go](https://godoc.org/cloud.google.com/go)
## Go Versions Supported
We support the two most recent major versions of Go. If Google App Engine uses
an older version, we support that as well. You can see which versions are
currently supported by looking at the lines following `go:` in
[`.travis.yml`](.travis.yml).
an older version, we support that as well.
## Authorization
By default, each API will use [Google Application Default Credentials][default-creds]
By default, each API will use [Google Application Default Credentials](https://developers.google.com/identity/protocols/application-default-credentials)
for authorization credentials used in calling the API endpoints. This will allow your
application to run in many environments without requiring explicit configuration.
@@ -316,12 +90,12 @@ client, err := storage.NewClient(ctx)
To authorize using a
[JSON key file](https://cloud.google.com/iam/docs/managing-service-account-keys),
pass
[`option.WithServiceAccountFile`](https://godoc.org/google.golang.org/api/option#WithServiceAccountFile)
[`option.WithCredentialsFile`](https://godoc.org/google.golang.org/api/option#WithCredentialsFile)
to the `NewClient` function of the desired package. For example:
[snip]:# (auth-JSON)
```go
client, err := storage.NewClient(ctx, option.WithServiceAccountFile("path/to/keyfile.json"))
client, err := storage.NewClient(ctx, option.WithCredentialsFile("path/to/keyfile.json"))
```
You can exert more control over authorization by using the
@@ -335,249 +109,6 @@ tokenSource := ...
client, err := storage.NewClient(ctx, option.WithTokenSource(tokenSource))
```
## Cloud Datastore [![GoDoc](https://godoc.org/cloud.google.com/go/datastore?status.svg)](https://godoc.org/cloud.google.com/go/datastore)
- [About Cloud Datastore][cloud-datastore]
- [Activating the API for your project][cloud-datastore-activation]
- [API documentation][cloud-datastore-docs]
- [Go client documentation](https://godoc.org/cloud.google.com/go/datastore)
- [Complete sample program](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/datastore/tasks)
### Example Usage
First create a `datastore.Client` to use throughout your application:
[snip]:# (datastore-1)
```go
client, err := datastore.NewClient(ctx, "my-project-id")
if err != nil {
log.Fatal(err)
}
```
Then use that client to interact with the API:
[snip]:# (datastore-2)
```go
type Post struct {
Title string
Body string `datastore:",noindex"`
PublishedAt time.Time
}
keys := []*datastore.Key{
datastore.NameKey("Post", "post1", nil),
datastore.NameKey("Post", "post2", nil),
}
posts := []*Post{
{Title: "Post 1", Body: "...", PublishedAt: time.Now()},
{Title: "Post 2", Body: "...", PublishedAt: time.Now()},
}
if _, err := client.PutMulti(ctx, keys, posts); err != nil {
log.Fatal(err)
}
```
## Cloud Storage [![GoDoc](https://godoc.org/cloud.google.com/go/storage?status.svg)](https://godoc.org/cloud.google.com/go/storage)
- [About Cloud Storage][cloud-storage]
- [API documentation][cloud-storage-docs]
- [Go client documentation](https://godoc.org/cloud.google.com/go/storage)
- [Complete sample programs](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/storage)
### Example Usage
First create a `storage.Client` to use throughout your application:
[snip]:# (storage-1)
```go
client, err := storage.NewClient(ctx)
if err != nil {
log.Fatal(err)
}
```
[snip]:# (storage-2)
```go
// Read the object1 from bucket.
rc, err := client.Bucket("bucket").Object("object1").NewReader(ctx)
if err != nil {
log.Fatal(err)
}
defer rc.Close()
body, err := ioutil.ReadAll(rc)
if err != nil {
log.Fatal(err)
}
```
## Cloud Pub/Sub [![GoDoc](https://godoc.org/cloud.google.com/go/pubsub?status.svg)](https://godoc.org/cloud.google.com/go/pubsub)
- [About Cloud Pubsub][cloud-pubsub]
- [API documentation][cloud-pubsub-docs]
- [Go client documentation](https://godoc.org/cloud.google.com/go/pubsub)
- [Complete sample programs](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/pubsub)
### Example Usage
First create a `pubsub.Client` to use throughout your application:
[snip]:# (pubsub-1)
```go
client, err := pubsub.NewClient(ctx, "project-id")
if err != nil {
log.Fatal(err)
}
```
Then use the client to publish and subscribe:
[snip]:# (pubsub-2)
```go
// Publish "hello world" on topic1.
topic := client.Topic("topic1")
res := topic.Publish(ctx, &pubsub.Message{
Data: []byte("hello world"),
})
// The publish happens asynchronously.
// Later, you can get the result from res:
...
msgID, err := res.Get(ctx)
if err != nil {
log.Fatal(err)
}
// Use a callback to receive messages via subscription1.
sub := client.Subscription("subscription1")
err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
fmt.Println(m.Data)
m.Ack() // Acknowledge that we've consumed the message.
})
if err != nil {
log.Println(err)
}
```
## Cloud BigQuery [![GoDoc](https://godoc.org/cloud.google.com/go/bigquery?status.svg)](https://godoc.org/cloud.google.com/go/bigquery)
- [About Cloud BigQuery][cloud-bigquery]
- [API documentation][cloud-bigquery-docs]
- [Go client documentation][cloud-bigquery-ref]
- [Complete sample programs](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/bigquery)
### Example Usage
First create a `bigquery.Client` to use throughout your application:
[snip]:# (bq-1)
```go
c, err := bigquery.NewClient(ctx, "my-project-ID")
if err != nil {
// TODO: Handle error.
}
```
Then use that client to interact with the API:
[snip]:# (bq-2)
```go
// Construct a query.
q := c.Query(`
SELECT year, SUM(number)
FROM [bigquery-public-data:usa_names.usa_1910_2013]
WHERE name = "William"
GROUP BY year
ORDER BY year
`)
// Execute the query.
it, err := q.Read(ctx)
if err != nil {
// TODO: Handle error.
}
// Iterate through the results.
for {
var values []bigquery.Value
err := it.Next(&values)
if err == iterator.Done {
break
}
if err != nil {
// TODO: Handle error.
}
fmt.Println(values)
}
```
## Stackdriver Logging [![GoDoc](https://godoc.org/cloud.google.com/go/logging?status.svg)](https://godoc.org/cloud.google.com/go/logging)
- [About Stackdriver Logging][cloud-logging]
- [API documentation][cloud-logging-docs]
- [Go client documentation][cloud-logging-ref]
- [Complete sample programs](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/logging)
### Example Usage
First create a `logging.Client` to use throughout your application:
[snip]:# (logging-1)
```go
ctx := context.Background()
client, err := logging.NewClient(ctx, "my-project")
if err != nil {
// TODO: Handle error.
}
```
Usually, you'll want to add log entries to a buffer to be periodically flushed
(automatically and asynchronously) to the Stackdriver Logging service.
[snip]:# (logging-2)
```go
logger := client.Logger("my-log")
logger.Log(logging.Entry{Payload: "something happened!"})
```
Close your client before your program exits, to flush any buffered log entries.
[snip]:# (logging-3)
```go
err = client.Close()
if err != nil {
// TODO: Handle error.
}
```
## Cloud Spanner [![GoDoc](https://godoc.org/cloud.google.com/go/spanner?status.svg)](https://godoc.org/cloud.google.com/go/spanner)
- [About Cloud Spanner][cloud-spanner]
- [API documentation][cloud-spanner-docs]
- [Go client documentation](https://godoc.org/cloud.google.com/go/spanner)
### Example Usage
First create a `spanner.Client` to use throughout your application:
[snip]:# (spanner-1)
```go
client, err := spanner.NewClient(ctx, "projects/P/instances/I/databases/D")
if err != nil {
log.Fatal(err)
}
```
[snip]:# (spanner-2)
```go
// Simple Reads And Writes
_, err = client.Apply(ctx, []*spanner.Mutation{
spanner.Insert("Users",
[]string{"name", "email"},
[]interface{}{"alice", "a@example.com"})})
if err != nil {
log.Fatal(err)
}
row, err := client.Single().ReadRow(ctx, "Users",
spanner.Key{"alice"}, []string{"email"})
if err != nil {
log.Fatal(err)
}
```
## Contributing
Contributions are welcome. Please, see the
@@ -592,32 +123,23 @@ for more information.
[cloud-datastore]: https://cloud.google.com/datastore/
[cloud-datastore-ref]: https://godoc.org/cloud.google.com/go/datastore
[cloud-datastore-docs]: https://cloud.google.com/datastore/docs
[cloud-datastore-activation]: https://cloud.google.com/datastore/docs/activate
[cloud-firestore]: https://cloud.google.com/firestore/
[cloud-firestore-ref]: https://godoc.org/cloud.google.com/go/firestore
[cloud-firestore-docs]: https://cloud.google.com/firestore/docs
[cloud-firestore-activation]: https://cloud.google.com/firestore/docs/activate
[cloud-pubsub]: https://cloud.google.com/pubsub/
[cloud-pubsub-ref]: https://godoc.org/cloud.google.com/go/pubsub
[cloud-pubsub-docs]: https://cloud.google.com/pubsub/docs
[cloud-storage]: https://cloud.google.com/storage/
[cloud-storage-ref]: https://godoc.org/cloud.google.com/go/storage
[cloud-storage-docs]: https://cloud.google.com/storage/docs
[cloud-storage-create-bucket]: https://cloud.google.com/storage/docs/cloud-console#_creatingbuckets
[cloud-bigtable]: https://cloud.google.com/bigtable/
[cloud-bigtable-ref]: https://godoc.org/cloud.google.com/go/bigtable
[cloud-bigquery]: https://cloud.google.com/bigquery/
[cloud-bigquery-docs]: https://cloud.google.com/bigquery/docs
[cloud-bigquery-ref]: https://godoc.org/cloud.google.com/go/bigquery
[cloud-logging]: https://cloud.google.com/logging/
[cloud-logging-docs]: https://cloud.google.com/logging/docs
[cloud-logging-ref]: https://godoc.org/cloud.google.com/go/logging
[cloud-monitoring]: https://cloud.google.com/monitoring/
@@ -630,17 +152,16 @@ for more information.
[cloud-language-ref]: https://godoc.org/cloud.google.com/go/language/apiv1
[cloud-oslogin]: https://cloud.google.com/compute/docs/oslogin/rest
[cloud-oslogin-ref]: https://cloud.google.com/compute/docs/oslogin/rest
[cloud-oslogin-ref]: https://cloud.google.com/go/oslogin/apiv1
[cloud-speech]: https://cloud.google.com/speech
[cloud-speech-ref]: https://godoc.org/cloud.google.com/go/speech/apiv1
[cloud-spanner]: https://cloud.google.com/spanner/
[cloud-spanner-ref]: https://godoc.org/cloud.google.com/go/spanner
[cloud-spanner-docs]: https://cloud.google.com/spanner/docs
[cloud-translation]: https://cloud.google.com/translation
[cloud-translation-ref]: https://godoc.org/cloud.google.com/go/translation
[cloud-translate]: https://cloud.google.com/translate
[cloud-translate-ref]: https://godoc.org/cloud.google.com/go/translate
[cloud-video]: https://cloud.google.com/video-intelligence/
[cloud-video-ref]: https://godoc.org/cloud.google.com/go/videointelligence/apiv1beta1
@@ -657,4 +178,50 @@ for more information.
[cloud-dlp]: https://cloud.google.com/dlp/
[cloud-dlp-ref]: https://godoc.org/cloud.google.com/go/dlp/apiv2beta1
[default-creds]: https://developers.google.com/identity/protocols/application-default-credentials
[cloud-dataproc]: https://cloud.google.com/dataproc/
[cloud-dataproc-ref]: https://godoc.org/cloud.google.com/go/dataproc/apiv1
[cloud-iam]: https://cloud.google.com/iam/
[cloud-iam-ref]: https://godoc.org/cloud.google.com/go/iam
[cloud-kms]: https://cloud.google.com/kms/
[cloud-kms-ref]: https://godoc.org/cloud.google.com/go/kms/apiv1
[cloud-natural-language]: https://cloud.google.com/natural-language/
[cloud-natural-language-ref]: https://godoc.org/cloud.google.com/go/language/apiv1
[cloud-memorystore]: https://cloud.google.com/memorystore/
[cloud-memorystore-ref]: https://godoc.org/cloud.google.com/go/redis/apiv1
[cloud-texttospeech]: https://cloud.google.com/texttospeech/
[cloud-texttospeech-ref]: https://godoc.org/cloud.google.com/go/texttospeech/apiv1
[cloud-trace]: https://cloud.google.com/trace/
[cloud-trace-ref]: https://godoc.org/cloud.google.com/go/trace/apiv2
[cloud-dialogflow]: https://cloud.google.com/dialogflow-enterprise/
[cloud-dialogflow-ref]: https://godoc.org/cloud.google.com/go/dialogflow/apiv2
[cloud-containeranalysis]: https://cloud.google.com/container-registry/docs/container-analysis
[cloud-containeranalysis-ref]: https://godoc.org/cloud.google.com/go/devtools/containeranalysis/apiv1beta1
[cloud-asset]: https://cloud.google.com/security-command-center/docs/how-to-asset-inventory
[cloud-asset-ref]: https://godoc.org/cloud.google.com/go/asset/apiv1
[cloud-tasks]: https://cloud.google.com/tasks/
[cloud-tasks-ref]: https://godoc.org/cloud.google.com/go/cloudtasks/apiv2
[cloud-scheduler]: https://cloud.google.com/scheduler
[cloud-scheduler-ref]: https://godoc.org/cloud.google.com/go/scheduler/apiv1
[cloud-iot]: https://cloud.google.com/iot-core/
[cloud-iot-ref]: https://godoc.org/cloud.google.com/go/iot/apiv1
[cloud-phishingprotection]: https://cloud.google.com/phishing-protection/
[cloud-phishingprotection-ref]: https://godoc.org/cloud.google.com/go/phishingprotection/apiv1beta1
[cloud-recaptcha]: https://cloud.google.com/recaptcha-enterprise/
[cloud-recaptcha-ref]: https://godoc.org/cloud.google.com/go/recaptchaenterprise/apiv1beta1
[cloud-talent]: https://cloud.google.com/solutions/talent-solution/
[cloud-talent-ref]: https://godoc.org/cloud.google.com/go/talent/apiv4beta1

vendor/cloud.google.com/go/cloud.go generated vendored Normal file (100 changed lines)

@@ -0,0 +1,100 @@
// Copyright 2014 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/*
Package cloud is the root of the packages used to access Google Cloud
Services. See https://godoc.org/cloud.google.com/go for a full list
of sub-packages.
Client Options
All clients in sub-packages are configurable via client options. These options are
described here: https://godoc.org/google.golang.org/api/option.
Authentication and Authorization
All the clients in sub-packages support authentication via Google Application Default
Credentials (see https://cloud.google.com/docs/authentication/production), or
by providing a JSON key file for a Service Account. See the authentication examples
in this package for details.
Timeouts and Cancellation
By default, all requests in sub-packages will run indefinitely, retrying on transient
errors when correctness allows. To set timeouts or arrange for cancellation, use
contexts. See the examples for details.
Do not attempt to control the initial connection (dialing) of a service by setting a
timeout on the context passed to NewClient. Dialing is non-blocking, so timeouts
would be ineffective and would only interfere with credential refreshing, which uses
the same context.
Connection Pooling
Connection pooling differs in clients based on their transport. Cloud
clients either rely on HTTP or gRPC transports to communicate
with Google Cloud.
Cloud clients that use HTTP (bigquery, compute, storage, and translate) rely on the
underlying HTTP transport to cache connections for later re-use. These are cached to
the default http.MaxIdleConns and http.MaxIdleConnsPerHost settings in
http.DefaultTransport.
For gRPC clients (all others in this repo), connection pooling is configurable. Users
of cloud client libraries may specify option.WithGRPCConnectionPool(n) as a client
option to NewClient calls. This configures the underlying gRPC connections to be
pooled and addressed in a round robin fashion.
Using the Libraries with Docker
Minimal docker images like Alpine lack CA certificates. This causes RPCs to appear to
hang, because gRPC retries indefinitely. See https://github.com/googleapis/google-cloud-go/issues/928
for more information.
Debugging
To see gRPC logs, set the environment variable GRPC_GO_LOG_SEVERITY_LEVEL. See
https://godoc.org/google.golang.org/grpc/grpclog for more information.
For HTTP logging, set the GODEBUG environment variable to "http2debug=1" or "http2debug=2".
Client Stability
Clients in this repository are considered alpha or beta unless otherwise
marked as stable in the README.md. Semver is not used to communicate stability
of clients.
Alpha and beta clients may change or go away without notice.
Clients marked stable will maintain compatibility with future versions for as
long as we can reasonably sustain. Incompatible changes might be made in some
situations, including:
- Security bugs may prompt backwards-incompatible changes.
- Situations in which components are no longer feasible to maintain without
making breaking changes, including removal.
- Parts of the client surface may be outright unstable and subject to change.
These parts of the surface will be labeled with the note, "It is EXPERIMENTAL
and subject to change or removal without notice."
*/
package cloud // import "cloud.google.com/go"
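The Timeouts and Connection Pooling sections of the doc comment above are prose-only. Here is a minimal sketch of both patterns together, assuming an illustrative project ID, topic name, and pool size (Pub/Sub is one of the gRPC-transport clients):

```go
package main

import (
	"context"
	"log"
	"time"

	"cloud.google.com/go/pubsub"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()

	// Pub/Sub uses the gRPC transport, so its connections can be
	// pooled via a client option (the pool size here is illustrative).
	client, err := pubsub.NewClient(ctx, "my-project-id",
		option.WithGRPCConnectionPool(4))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Per the doc comment: set timeouts on individual requests via
	// context, not on the context passed to NewClient (dialing is
	// non-blocking, so a timeout there would only hurt).
	callCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()
	ok, err := client.Topic("topic1").Exists(callCtx)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("topic exists:", ok)
}
```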


@@ -0,0 +1,9 @@
// +build ignore
// Empty include file to generate z symbols
// EOF


@@ -0,0 +1,472 @@
// Copyright 2018 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/*
* Line tables
*/
package gosym
import (
"encoding/binary"
"sync"
)
// A LineTable is a data structure mapping program counters to line numbers.
//
// In Go 1.1 and earlier, each function (represented by a Func) had its own LineTable,
// and the line number corresponded to a numbering of all source lines in the
// program, across all files. That absolute line number would then have to be
// converted separately to a file name and line number within the file.
//
// In Go 1.2, the format of the data changed so that there is a single LineTable
// for the entire program, shared by all Funcs, and there are no absolute line
// numbers, just line numbers within specific files.
//
// For the most part, LineTable's methods should be treated as an internal
// detail of the package; callers should use the methods on Table instead.
type LineTable struct {
Data []byte
PC uint64
Line int
// Go 1.2 state
mu sync.Mutex
go12 int // is this in Go 1.2 format? -1 no, 0 unknown, 1 yes
binary binary.ByteOrder
quantum uint32
ptrsize uint32
functab []byte
nfunctab uint32
filetab []byte
nfiletab uint32
fileMap map[string]uint32
}
// NOTE(rsc): This is wrong for GOARCH=arm, which uses a quantum of 4,
// but we have no idea whether we're using arm or not. This only
// matters in the old (pre-Go 1.2) symbol table format, so it's not worth
// fixing.
const oldQuantum = 1
func (t *LineTable) parse(targetPC uint64, targetLine int) (b []byte, pc uint64, line int) {
// The PC/line table can be thought of as a sequence of
// <pc update>* <line update>
// batches. Each update batch results in a (pc, line) pair,
// where line applies to every PC from pc up to but not
// including the pc of the next pair.
//
// Here we process each update individually, which simplifies
// the code, but makes the corner cases more confusing.
b, pc, line = t.Data, t.PC, t.Line
for pc <= targetPC && line != targetLine && len(b) > 0 {
code := b[0]
b = b[1:]
switch {
case code == 0:
if len(b) < 4 {
b = b[0:0]
break
}
val := binary.BigEndian.Uint32(b)
b = b[4:]
line += int(val)
case code <= 64:
line += int(code)
case code <= 128:
line -= int(code - 64)
default:
pc += oldQuantum * uint64(code-128)
continue
}
pc += oldQuantum
}
return b, pc, line
}
func (t *LineTable) slice(pc uint64) *LineTable {
data, pc, line := t.parse(pc, -1)
return &LineTable{Data: data, PC: pc, Line: line}
}
// PCToLine returns the line number for the given program counter.
// Callers should use Table's PCToLine method instead.
func (t *LineTable) PCToLine(pc uint64) int {
if t.isGo12() {
return t.go12PCToLine(pc)
}
_, _, line := t.parse(pc, -1)
return line
}
// LineToPC returns the program counter for the given line number,
// considering only program counters before maxpc.
// Callers should use Table's LineToPC method instead.
func (t *LineTable) LineToPC(line int, maxpc uint64) uint64 {
if t.isGo12() {
return 0
}
_, pc, line1 := t.parse(maxpc, line)
if line1 != line {
return 0
}
// Subtract quantum from PC to account for post-line increment
return pc - oldQuantum
}
// NewLineTable returns a new PC/line table
// corresponding to the encoded data.
// Text must be the start address of the
// corresponding text segment.
func NewLineTable(data []byte, text uint64) *LineTable {
return &LineTable{Data: data, PC: text, Line: 0}
}
// Go 1.2 symbol table format.
// See golang.org/s/go12symtab.
//
// A general note about the methods here: rather than try to avoid
// index out of bounds errors, we trust Go to detect them, and then
// we recover from the panics and treat them as indicative of a malformed
// or incomplete table.
//
// The methods called by symtab.go, which begin with "go12" prefixes,
// are expected to have that recovery logic.
// isGo12 reports whether this is a Go 1.2 (or later) symbol table.
func (t *LineTable) isGo12() bool {
t.go12Init()
return t.go12 == 1
}
const go12magic = 0xfffffffb
// uintptr returns the pointer-sized value encoded at b.
// The pointer size is dictated by the table being read.
func (t *LineTable) uintptr(b []byte) uint64 {
if t.ptrsize == 4 {
return uint64(t.binary.Uint32(b))
}
return t.binary.Uint64(b)
}
// go12init initializes the Go 1.2 metadata if t is a Go 1.2 symbol table.
func (t *LineTable) go12Init() {
t.mu.Lock()
defer t.mu.Unlock()
if t.go12 != 0 {
return
}
defer func() {
// If we panic parsing, assume it's not a Go 1.2 symbol table.
recover()
}()
// Check header: 4-byte magic, two zeros, pc quantum, pointer size.
t.go12 = -1 // not Go 1.2 until proven otherwise
if len(t.Data) < 16 || t.Data[4] != 0 || t.Data[5] != 0 ||
(t.Data[6] != 1 && t.Data[6] != 4) || // pc quantum
(t.Data[7] != 4 && t.Data[7] != 8) { // pointer size
return
}
switch uint32(go12magic) {
case binary.LittleEndian.Uint32(t.Data):
t.binary = binary.LittleEndian
case binary.BigEndian.Uint32(t.Data):
t.binary = binary.BigEndian
default:
return
}
t.quantum = uint32(t.Data[6])
t.ptrsize = uint32(t.Data[7])
t.nfunctab = uint32(t.uintptr(t.Data[8:]))
t.functab = t.Data[8+t.ptrsize:]
functabsize := t.nfunctab*2*t.ptrsize + t.ptrsize
fileoff := t.binary.Uint32(t.functab[functabsize:])
t.functab = t.functab[:functabsize]
t.filetab = t.Data[fileoff:]
t.nfiletab = t.binary.Uint32(t.filetab)
t.filetab = t.filetab[:t.nfiletab*4]
t.go12 = 1 // so far so good
}
// go12Funcs returns a slice of Funcs derived from the Go 1.2 pcln table.
func (t *LineTable) go12Funcs() []Func {
// Assume it is malformed and return nil on error.
defer func() {
recover()
}()
n := len(t.functab) / int(t.ptrsize) / 2
funcs := make([]Func, n)
for i := range funcs {
f := &funcs[i]
f.Entry = uint64(t.uintptr(t.functab[2*i*int(t.ptrsize):]))
f.End = uint64(t.uintptr(t.functab[(2*i+2)*int(t.ptrsize):]))
info := t.Data[t.uintptr(t.functab[(2*i+1)*int(t.ptrsize):]):]
f.LineTable = t
f.FrameSize = int(t.binary.Uint32(info[t.ptrsize+2*4:]))
f.Sym = &Sym{
Value: f.Entry,
Type: 'T',
Name: t.string(t.binary.Uint32(info[t.ptrsize:])),
GoType: 0,
Func: f,
}
}
return funcs
}
// findFunc returns the func corresponding to the given program counter.
func (t *LineTable) findFunc(pc uint64) []byte {
if pc < t.uintptr(t.functab) || pc >= t.uintptr(t.functab[len(t.functab)-int(t.ptrsize):]) {
return nil
}
// The function table is a list of 2*nfunctab+1 uintptrs,
// alternating program counters and offsets to func structures.
f := t.functab
nf := t.nfunctab
for nf > 0 {
m := nf / 2
fm := f[2*t.ptrsize*m:]
if t.uintptr(fm) <= pc && pc < t.uintptr(fm[2*t.ptrsize:]) {
return t.Data[t.uintptr(fm[t.ptrsize:]):]
} else if pc < t.uintptr(fm) {
nf = m
} else {
f = f[(m+1)*2*t.ptrsize:]
nf -= m + 1
}
}
return nil
}
// readvarint reads, removes, and returns a varint from *pp.
func (t *LineTable) readvarint(pp *[]byte) uint32 {
var v, shift uint32
p := *pp
for shift = 0; ; shift += 7 {
b := p[0]
p = p[1:]
v |= (uint32(b) & 0x7F) << shift
if b&0x80 == 0 {
break
}
}
*pp = p
return v
}
// string returns a Go string found at off.
func (t *LineTable) string(off uint32) string {
for i := off; ; i++ {
if t.Data[i] == 0 {
return string(t.Data[off:i])
}
}
}
// step advances to the next pc, value pair in the encoded table.
func (t *LineTable) step(p *[]byte, pc *uint64, val *int32, first bool) bool {
uvdelta := t.readvarint(p)
if uvdelta == 0 && !first {
return false
}
if uvdelta&1 != 0 {
uvdelta = ^(uvdelta >> 1)
} else {
uvdelta >>= 1
}
vdelta := int32(uvdelta)
pcdelta := t.readvarint(p) * t.quantum
*pc += uint64(pcdelta)
*val += vdelta
return true
}
// pcvalue reports the value associated with the target pc.
// off is the offset to the beginning of the pc-value table,
// and entry is the start PC for the corresponding function.
func (t *LineTable) pcvalue(off uint32, entry, targetpc uint64) int32 {
if off == 0 {
return -1
}
p := t.Data[off:]
val := int32(-1)
pc := entry
for t.step(&p, &pc, &val, pc == entry) {
if targetpc < pc {
return val
}
}
return -1
}
// findFileLine scans one function in the binary looking for a
// program counter in the given file on the given line.
// It does so by running the pc-value tables mapping program counter
// to file number. Since most functions come from a single file, these
// are usually short and quick to scan. If a file match is found, then the
// code goes to the expense of looking for a simultaneous line number match.
func (t *LineTable) findFileLine(entry uint64, filetab, linetab uint32, filenum, line int32) uint64 {
if filetab == 0 || linetab == 0 {
return 0
}
fp := t.Data[filetab:]
fl := t.Data[linetab:]
fileVal := int32(-1)
filePC := entry
lineVal := int32(-1)
linePC := entry
fileStartPC := filePC
for t.step(&fp, &filePC, &fileVal, filePC == entry) {
if fileVal == filenum && fileStartPC < filePC {
// fileVal is in effect starting at fileStartPC up to
// but not including filePC, and it's the file we want.
// Run the PC table looking for a matching line number
// or until we reach filePC.
lineStartPC := linePC
for linePC < filePC && t.step(&fl, &linePC, &lineVal, linePC == entry) {
// lineVal is in effect until linePC, and lineStartPC < filePC.
if lineVal == line {
if fileStartPC <= lineStartPC {
return lineStartPC
}
if fileStartPC < linePC {
return fileStartPC
}
}
lineStartPC = linePC
}
}
fileStartPC = filePC
}
return 0
}
// go12PCToLine maps program counter to line number for the Go 1.2 pcln table.
func (t *LineTable) go12PCToLine(pc uint64) (line int) {
return t.go12PCToVal(pc, t.ptrsize+5*4)
}
// go12PCToSPAdj maps program counter to Stack Pointer adjustment for the Go 1.2 pcln table.
func (t *LineTable) go12PCToSPAdj(pc uint64) (spadj int) {
return t.go12PCToVal(pc, t.ptrsize+3*4)
}
func (t *LineTable) go12PCToVal(pc uint64, fOffset uint32) (val int) {
defer func() {
if recover() != nil {
val = -1
}
}()
f := t.findFunc(pc)
if f == nil {
return -1
}
entry := t.uintptr(f)
linetab := t.binary.Uint32(f[fOffset:])
return int(t.pcvalue(linetab, entry, pc))
}
// go12PCToFile maps program counter to file name for the Go 1.2 pcln table.
func (t *LineTable) go12PCToFile(pc uint64) (file string) {
defer func() {
if recover() != nil {
file = ""
}
}()
f := t.findFunc(pc)
if f == nil {
return ""
}
entry := t.uintptr(f)
filetab := t.binary.Uint32(f[t.ptrsize+4*4:])
fno := t.pcvalue(filetab, entry, pc)
if fno <= 0 {
return ""
}
return t.string(t.binary.Uint32(t.filetab[4*fno:]))
}
// go12LineToPC maps a (file, line) pair to a program counter for the Go 1.2 pcln table.
func (t *LineTable) go12LineToPC(file string, line int) (pc uint64) {
defer func() {
if recover() != nil {
pc = 0
}
}()
t.initFileMap()
filenum := t.fileMap[file]
if filenum == 0 {
return 0
}
// Scan all functions.
// If this turns out to be a bottleneck, we could build a map[int32][]int32
// mapping file number to a list of functions with code from that file.
for i := uint32(0); i < t.nfunctab; i++ {
f := t.Data[t.uintptr(t.functab[2*t.ptrsize*i+t.ptrsize:]):]
entry := t.uintptr(f)
filetab := t.binary.Uint32(f[t.ptrsize+4*4:])
linetab := t.binary.Uint32(f[t.ptrsize+5*4:])
pc := t.findFileLine(entry, filetab, linetab, int32(filenum), int32(line))
if pc != 0 {
return pc
}
}
return 0
}
// initFileMap initializes the map from file name to file number.
func (t *LineTable) initFileMap() {
t.mu.Lock()
defer t.mu.Unlock()
if t.fileMap != nil {
return
}
m := make(map[string]uint32)
for i := uint32(1); i < t.nfiletab; i++ {
s := t.string(t.binary.Uint32(t.filetab[4*i:]))
m[s] = i
}
t.fileMap = m
}
// go12MapFiles adds to m a key for every file in the Go 1.2 LineTable.
// Every key maps to obj. That's not a very interesting map, but it provides
// a way for callers to obtain the list of files in the program.
func (t *LineTable) go12MapFiles(m map[string]*Obj, obj *Obj) {
defer func() {
recover()
}()
t.initFileMap()
for file := range t.fileMap {
m[file] = obj
}
}


@@ -0,0 +1,731 @@
// Copyright 2018 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package gosym implements access to the Go symbol
// and line number tables embedded in Go binaries generated
// by the gc compilers.
package gosym
// The table format is a variant of the format used in Plan 9's a.out
// format, documented at http://plan9.bell-labs.com/magic/man2html/6/a.out.
// The best reference for the differences between the Plan 9 format
// and the Go format is the runtime source, specifically ../../runtime/symtab.c.
import (
"bytes"
"encoding/binary"
"fmt"
"strconv"
"strings"
)
/*
* Symbols
*/
// A Sym represents a single symbol table entry.
type Sym struct {
Value uint64
Type byte
Name string
GoType uint64
// If this symbol is a function symbol, the corresponding Func
Func *Func
}
// Static reports whether this symbol is static (not visible outside its file).
func (s *Sym) Static() bool { return s.Type >= 'a' }
// PackageName returns the package part of the symbol name,
// or the empty string if there is none.
func (s *Sym) PackageName() string {
if i := strings.Index(s.Name, "."); i != -1 {
return s.Name[0:i]
}
return ""
}
// ReceiverName returns the receiver type name of this symbol,
// or the empty string if there is none.
func (s *Sym) ReceiverName() string {
l := strings.Index(s.Name, ".")
r := strings.LastIndex(s.Name, ".")
if l == -1 || r == -1 || l == r {
return ""
}
return s.Name[l+1 : r]
}
// BaseName returns the symbol name without the package or receiver name.
func (s *Sym) BaseName() string {
if i := strings.LastIndex(s.Name, "."); i != -1 {
return s.Name[i+1:]
}
return s.Name
}
// A Func collects information about a single function.
type Func struct {
Entry uint64
*Sym
End uint64
Params []*Sym
Locals []*Sym
FrameSize int
LineTable *LineTable
Obj *Obj
}
// An Obj represents a collection of functions in a symbol table.
//
// The exact method of division of a binary into separate Objs is an internal detail
// of the symbol table format.
//
// In early versions of Go each source file became a different Obj.
//
// In Go 1 and Go 1.1, each package produced one Obj for all Go sources
// and one Obj per C source file.
//
// In Go 1.2, there is a single Obj for the entire program.
type Obj struct {
// Funcs is a list of functions in the Obj.
Funcs []Func
// In Go 1.1 and earlier, Paths is a list of symbols corresponding
// to the source file names that produced the Obj.
// In Go 1.2, Paths is nil.
// Use the keys of Table.Files to obtain a list of source files.
Paths []Sym // meta
}
/*
* Symbol tables
*/
// Table represents a Go symbol table. It stores all of the
// symbols decoded from the program and provides methods to translate
// between symbols, names, and addresses.
type Table struct {
Syms []Sym
Funcs []Func
Files map[string]*Obj // nil for Go 1.2 and later binaries
Objs []Obj // nil for Go 1.2 and later binaries
go12line *LineTable // Go 1.2 line number table
}
type sym struct {
value uint64
gotype uint64
typ byte
name []byte
}
var (
littleEndianSymtab = []byte{0xFD, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00}
bigEndianSymtab = []byte{0xFF, 0xFF, 0xFF, 0xFD, 0x00, 0x00, 0x00}
oldLittleEndianSymtab = []byte{0xFE, 0xFF, 0xFF, 0xFF, 0x00, 0x00}
)
func walksymtab(data []byte, fn func(sym) error) error {
if len(data) == 0 { // missing symtab is okay
return nil
}
var order binary.ByteOrder = binary.BigEndian
newTable := false
switch {
case bytes.HasPrefix(data, oldLittleEndianSymtab):
// Same as Go 1.0, but little endian.
// Format was used during interim development between Go 1.0 and Go 1.1.
// Should not be widespread, but easy to support.
data = data[6:]
order = binary.LittleEndian
case bytes.HasPrefix(data, bigEndianSymtab):
newTable = true
case bytes.HasPrefix(data, littleEndianSymtab):
newTable = true
order = binary.LittleEndian
}
var ptrsz int
if newTable {
if len(data) < 8 {
return &DecodingError{len(data), "unexpected EOF", nil}
}
ptrsz = int(data[7])
if ptrsz != 4 && ptrsz != 8 {
return &DecodingError{7, "invalid pointer size", ptrsz}
}
data = data[8:]
}
var s sym
p := data
for len(p) >= 4 {
var typ byte
if newTable {
// Symbol type, value, Go type.
typ = p[0] & 0x3F
wideValue := p[0]&0x40 != 0
goType := p[0]&0x80 != 0
if typ < 26 {
typ += 'A'
} else {
typ += 'a' - 26
}
s.typ = typ
p = p[1:]
if wideValue {
if len(p) < ptrsz {
return &DecodingError{len(data), "unexpected EOF", nil}
}
// fixed-width value
if ptrsz == 8 {
s.value = order.Uint64(p[0:8])
p = p[8:]
} else {
s.value = uint64(order.Uint32(p[0:4]))
p = p[4:]
}
} else {
// varint value
s.value = 0
shift := uint(0)
for len(p) > 0 && p[0]&0x80 != 0 {
s.value |= uint64(p[0]&0x7F) << shift
shift += 7
p = p[1:]
}
if len(p) == 0 {
return &DecodingError{len(data), "unexpected EOF", nil}
}
s.value |= uint64(p[0]) << shift
p = p[1:]
}
if goType {
if len(p) < ptrsz {
return &DecodingError{len(data), "unexpected EOF", nil}
}
// fixed-width go type
if ptrsz == 8 {
s.gotype = order.Uint64(p[0:8])
p = p[8:]
} else {
s.gotype = uint64(order.Uint32(p[0:4]))
p = p[4:]
}
}
} else {
// Value, symbol type.
s.value = uint64(order.Uint32(p[0:4]))
if len(p) < 5 {
return &DecodingError{len(data), "unexpected EOF", nil}
}
typ = p[4]
if typ&0x80 == 0 {
return &DecodingError{len(data) - len(p) + 4, "bad symbol type", typ}
}
typ &^= 0x80
s.typ = typ
p = p[5:]
}
// Name.
var i int
var nnul int
for i = 0; i < len(p); i++ {
if p[i] == 0 {
nnul = 1
break
}
}
switch typ {
case 'z', 'Z':
p = p[i+nnul:]
for i = 0; i+2 <= len(p); i += 2 {
if p[i] == 0 && p[i+1] == 0 {
nnul = 2
break
}
}
}
if len(p) < i+nnul {
return &DecodingError{len(data), "unexpected EOF", nil}
}
s.name = p[0:i]
i += nnul
p = p[i:]
if !newTable {
if len(p) < 4 {
return &DecodingError{len(data), "unexpected EOF", nil}
}
// Go type.
s.gotype = uint64(order.Uint32(p[:4]))
p = p[4:]
}
fn(s)
}
return nil
}
// NewTable decodes the Go symbol table in data,
// returning an in-memory representation.
func NewTable(symtab []byte, pcln *LineTable) (*Table, error) {
var n int
err := walksymtab(symtab, func(s sym) error {
n++
return nil
})
if err != nil {
return nil, err
}
var t Table
if pcln.isGo12() {
t.go12line = pcln
}
fname := make(map[uint16]string)
t.Syms = make([]Sym, 0, n)
nf := 0
nz := 0
lasttyp := uint8(0)
err = walksymtab(symtab, func(s sym) error {
n := len(t.Syms)
t.Syms = t.Syms[0 : n+1]
ts := &t.Syms[n]
ts.Type = s.typ
ts.Value = uint64(s.value)
ts.GoType = uint64(s.gotype)
switch s.typ {
default:
// rewrite name to use . instead of · (c2 b7)
w := 0
b := s.name
for i := 0; i < len(b); i++ {
if b[i] == 0xc2 && i+1 < len(b) && b[i+1] == 0xb7 {
i++
b[i] = '.'
}
b[w] = b[i]
w++
}
ts.Name = string(s.name[0:w])
case 'z', 'Z':
if lasttyp != 'z' && lasttyp != 'Z' {
nz++
}
for i := 0; i < len(s.name); i += 2 {
eltIdx := binary.BigEndian.Uint16(s.name[i : i+2])
elt, ok := fname[eltIdx]
if !ok {
return &DecodingError{-1, "bad filename code", eltIdx}
}
if n := len(ts.Name); n > 0 && ts.Name[n-1] != '/' {
ts.Name += "/"
}
ts.Name += elt
}
}
switch s.typ {
case 'T', 't', 'L', 'l':
nf++
case 'f':
fname[uint16(s.value)] = ts.Name
}
lasttyp = s.typ
return nil
})
if err != nil {
return nil, err
}
t.Funcs = make([]Func, 0, nf)
t.Files = make(map[string]*Obj)
var obj *Obj
if t.go12line != nil {
// Put all functions into one Obj.
t.Objs = make([]Obj, 1)
obj = &t.Objs[0]
t.go12line.go12MapFiles(t.Files, obj)
} else {
t.Objs = make([]Obj, 0, nz)
}
// Count text symbols and attach frame sizes, parameters, and
// locals to them. Also, find object file boundaries.
lastf := 0
for i := 0; i < len(t.Syms); i++ {
sym := &t.Syms[i]
switch sym.Type {
case 'Z', 'z': // path symbol
if t.go12line != nil {
// Go 1.2 binaries have the file information elsewhere. Ignore.
break
}
// Finish the current object
if obj != nil {
obj.Funcs = t.Funcs[lastf:]
}
lastf = len(t.Funcs)
// Start new object
n := len(t.Objs)
t.Objs = t.Objs[0 : n+1]
obj = &t.Objs[n]
// Count & copy path symbols
var end int
for end = i + 1; end < len(t.Syms); end++ {
if c := t.Syms[end].Type; c != 'Z' && c != 'z' {
break
}
}
obj.Paths = t.Syms[i:end]
i = end - 1 // loop will i++
// Record file names
depth := 0
for j := range obj.Paths {
s := &obj.Paths[j]
if s.Name == "" {
depth--
} else {
if depth == 0 {
t.Files[s.Name] = obj
}
depth++
}
}
case 'T', 't', 'L', 'l': // text symbol
if n := len(t.Funcs); n > 0 {
t.Funcs[n-1].End = sym.Value
}
if sym.Name == "etext" {
continue
}
// Count parameter and local (auto) syms
var np, na int
var end int
countloop:
for end = i + 1; end < len(t.Syms); end++ {
switch t.Syms[end].Type {
case 'T', 't', 'L', 'l', 'Z', 'z':
break countloop
case 'p':
np++
case 'a':
na++
}
}
// Fill in the function symbol
n := len(t.Funcs)
t.Funcs = t.Funcs[0 : n+1]
fn := &t.Funcs[n]
sym.Func = fn
fn.Params = make([]*Sym, 0, np)
fn.Locals = make([]*Sym, 0, na)
fn.Sym = sym
fn.Entry = sym.Value
fn.Obj = obj
if t.go12line != nil {
// All functions share the same line table.
// It knows how to narrow down to a specific
// function quickly.
fn.LineTable = t.go12line
} else if pcln != nil {
fn.LineTable = pcln.slice(fn.Entry)
pcln = fn.LineTable
}
for j := i; j < end; j++ {
s := &t.Syms[j]
switch s.Type {
case 'm':
fn.FrameSize = int(s.Value)
case 'p':
n := len(fn.Params)
fn.Params = fn.Params[0 : n+1]
fn.Params[n] = s
case 'a':
n := len(fn.Locals)
fn.Locals = fn.Locals[0 : n+1]
fn.Locals[n] = s
}
}
i = end - 1 // loop will i++
}
}
if t.go12line != nil && nf == 0 {
t.Funcs = t.go12line.go12Funcs()
}
if obj != nil {
obj.Funcs = t.Funcs[lastf:]
}
return &t, nil
}
// PCToFunc returns the function containing the program counter pc,
// or nil if there is no such function.
func (t *Table) PCToFunc(pc uint64) *Func {
funcs := t.Funcs
for len(funcs) > 0 {
m := len(funcs) / 2
fn := &funcs[m]
switch {
case pc < fn.Entry:
funcs = funcs[0:m]
case fn.Entry <= pc && pc < fn.End:
return fn
default:
funcs = funcs[m+1:]
}
}
return nil
}
// PCToLine looks up line number information for a program counter.
// If there is no information, it returns fn == nil.
func (t *Table) PCToLine(pc uint64) (file string, line int, fn *Func) {
if fn = t.PCToFunc(pc); fn == nil {
return
}
if t.go12line != nil {
file = t.go12line.go12PCToFile(pc)
line = t.go12line.go12PCToLine(pc)
} else {
file, line = fn.Obj.lineFromAline(fn.LineTable.PCToLine(pc))
}
return
}
// PCToSPAdj returns the stack pointer adjustment for a program counter.
func (t *Table) PCToSPAdj(pc uint64) (spadj int) {
if fn := t.PCToFunc(pc); fn == nil {
return 0
}
if t.go12line != nil {
return t.go12line.go12PCToSPAdj(pc)
}
return 0
}
// LineToPC looks up the first program counter on the given line in
// the named file. It returns UnknownPathError or UnknownLineError if
// there is an error looking up this line.
func (t *Table) LineToPC(file string, line int) (pc uint64, fn *Func, err error) {
obj, ok := t.Files[file]
if !ok {
return 0, nil, UnknownFileError(file)
}
if t.go12line != nil {
pc := t.go12line.go12LineToPC(file, line)
if pc == 0 {
return 0, nil, &UnknownLineError{file, line}
}
return pc, t.PCToFunc(pc), nil
}
abs, err := obj.alineFromLine(file, line)
if err != nil {
return
}
for i := range obj.Funcs {
f := &obj.Funcs[i]
pc := f.LineTable.LineToPC(abs, f.End)
if pc != 0 {
return pc, f, nil
}
}
return 0, nil, &UnknownLineError{file, line}
}
// LookupSym returns the text, data, or bss symbol with the given name,
// or nil if no such symbol is found.
func (t *Table) LookupSym(name string) *Sym {
// TODO(austin) Maybe make a map
for i := range t.Syms {
s := &t.Syms[i]
switch s.Type {
case 'T', 't', 'L', 'l', 'D', 'd', 'B', 'b':
if s.Name == name {
return s
}
}
}
return nil
}
// LookupFunc returns the text, data, or bss symbol with the given name,
// or nil if no such symbol is found.
func (t *Table) LookupFunc(name string) *Func {
for i := range t.Funcs {
f := &t.Funcs[i]
if f.Sym.Name == name {
return f
}
}
return nil
}
// SymByAddr returns the text, data, or bss symbol starting at the given address.
func (t *Table) SymByAddr(addr uint64) *Sym {
for i := range t.Syms {
s := &t.Syms[i]
switch s.Type {
case 'T', 't', 'L', 'l', 'D', 'd', 'B', 'b':
if s.Value == addr {
return s
}
}
}
return nil
}
/*
* Object files
*/
// This is legacy code for Go 1.1 and earlier, which used the
// Plan 9 format for pc-line tables. This code was never quite
// correct. It's probably very close, and it's usually correct, but
// we never quite found all the corner cases.
//
// Go 1.2 and later use a simpler format, documented at golang.org/s/go12symtab.
func (o *Obj) lineFromAline(aline int) (string, int) {
type stackEnt struct {
path string
start int
offset int
prev *stackEnt
}
noPath := &stackEnt{"", 0, 0, nil}
tos := noPath
pathloop:
for _, s := range o.Paths {
val := int(s.Value)
switch {
case val > aline:
break pathloop
case val == 1:
// Start a new stack
tos = &stackEnt{s.Name, val, 0, noPath}
case s.Name == "":
// Pop
if tos == noPath {
return "<malformed symbol table>", 0
}
tos.prev.offset += val - tos.start
tos = tos.prev
default:
// Push
tos = &stackEnt{s.Name, val, 0, tos}
}
}
if tos == noPath {
return "", 0
}
return tos.path, aline - tos.start - tos.offset + 1
}
func (o *Obj) alineFromLine(path string, line int) (int, error) {
if line < 1 {
return 0, &UnknownLineError{path, line}
}
for i, s := range o.Paths {
// Find this path
if s.Name != path {
continue
}
// Find this line at this stack level
depth := 0
var incstart int
line += int(s.Value)
pathloop:
for _, s := range o.Paths[i:] {
val := int(s.Value)
switch {
case depth == 1 && val >= line:
return line - 1, nil
case s.Name == "":
depth--
if depth == 0 {
break pathloop
} else if depth == 1 {
line += val - incstart
}
default:
if depth == 1 {
incstart = val
}
depth++
}
}
return 0, &UnknownLineError{path, line}
}
return 0, UnknownFileError(path)
}
/*
* Errors
*/
// UnknownFileError represents a failure to find the specific file in
// the symbol table.
type UnknownFileError string
func (e UnknownFileError) Error() string { return "unknown file: " + string(e) }
// UnknownLineError represents a failure to map a line to a program
// counter, either because the line is beyond the bounds of the file
// or because there is no code on the given line.
type UnknownLineError struct {
File string
Line int
}
func (e *UnknownLineError) Error() string {
return "no code at " + e.File + ":" + strconv.Itoa(e.Line)
}
// DecodingError represents an error during the decoding of
// the symbol table.
type DecodingError struct {
off int
msg string
val interface{}
}
func (e *DecodingError) Error() string {
msg := e.msg
if e.val != nil {
msg += fmt.Sprintf(" '%v'", e.val)
}
msg += fmt.Sprintf(" at byte %#x", e.off)
return msg
}

View file

@ -1,4 +1,4 @@
// Copyright 2014 Google Inc. All Rights Reserved.
// Copyright 2014 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -20,6 +20,7 @@
package metadata // import "cloud.google.com/go/compute/metadata"
import (
"context"
"encoding/json"
"fmt"
"io/ioutil"
@ -31,9 +32,6 @@ import (
"strings"
"sync"
"time"
"golang.org/x/net/context"
"golang.org/x/net/context/ctxhttp"
)
const (
@ -64,7 +62,7 @@ var (
)
var (
metaClient = &http.Client{
defaultClient = &Client{hc: &http.Client{
Transport: &http.Transport{
Dial: (&net.Dialer{
Timeout: 2 * time.Second,
@ -72,15 +70,15 @@ var (
}).Dial,
ResponseHeaderTimeout: 2 * time.Second,
},
}
subscribeClient = &http.Client{
}}
subscribeClient = &Client{hc: &http.Client{
Transport: &http.Transport{
Dial: (&net.Dialer{
Timeout: 2 * time.Second,
KeepAlive: 30 * time.Second,
}).Dial,
},
}
}}
)
// NotDefinedError is returned when requested metadata is not defined.
@ -95,74 +93,16 @@ func (suffix NotDefinedError) Error() string {
return fmt.Sprintf("metadata: GCE metadata %q not defined", string(suffix))
}
// Get returns a value from the metadata service.
// The suffix is appended to "http://${GCE_METADATA_HOST}/computeMetadata/v1/".
//
// If the GCE_METADATA_HOST environment variable is not defined, a default of
// 169.254.169.254 will be used instead.
//
// If the requested metadata is not defined, the returned error will
// be of type NotDefinedError.
func Get(suffix string) (string, error) {
val, _, err := getETag(metaClient, suffix)
return val, err
}
// getETag returns a value from the metadata service as well as the associated
// ETag using the provided client. This func is otherwise equivalent to Get.
func getETag(client *http.Client, suffix string) (value, etag string, err error) {
// Using a fixed IP makes it very difficult to spoof the metadata service in
// a container, which is an important use-case for local testing of cloud
// deployments. To enable spoofing of the metadata service, the environment
// variable GCE_METADATA_HOST is first inspected to decide where metadata
// requests shall go.
host := os.Getenv(metadataHostEnv)
if host == "" {
// Using 169.254.169.254 instead of "metadata" here because Go
// binaries built with the "netgo" tag and without cgo won't
// know the search suffix for "metadata" is
// ".google.internal", and this IP address is documented as
// being stable anyway.
host = metadataIP
}
url := "http://" + host + "/computeMetadata/v1/" + suffix
req, _ := http.NewRequest("GET", url, nil)
req.Header.Set("Metadata-Flavor", "Google")
req.Header.Set("User-Agent", userAgent)
res, err := client.Do(req)
if err != nil {
return "", "", err
}
defer res.Body.Close()
if res.StatusCode == http.StatusNotFound {
return "", "", NotDefinedError(suffix)
}
if res.StatusCode != 200 {
return "", "", fmt.Errorf("status code %d trying to fetch %s", res.StatusCode, url)
}
all, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", "", err
}
return string(all), res.Header.Get("Etag"), nil
}
func getTrimmed(suffix string) (s string, err error) {
s, err = Get(suffix)
s = strings.TrimSpace(s)
return
}
func (c *cachedValue) get() (v string, err error) {
func (c *cachedValue) get(cl *Client) (v string, err error) {
defer c.mu.Unlock()
c.mu.Lock()
if c.v != "" {
return c.v, nil
}
if c.trim {
v, err = getTrimmed(c.k)
v, err = cl.getTrimmed(c.k)
} else {
v, err = Get(c.k)
v, err = cl.Get(c.k)
}
if err == nil {
c.v = v
@ -197,11 +137,11 @@ func testOnGCE() bool {
resc := make(chan bool, 2)
// Try two strategies in parallel.
// See https://github.com/GoogleCloudPlatform/google-cloud-go/issues/194
// See https://github.com/googleapis/google-cloud-go/issues/194
go func() {
req, _ := http.NewRequest("GET", "http://"+metadataIP, nil)
req.Header.Set("User-Agent", userAgent)
res, err := ctxhttp.Do(ctx, metaClient, req)
res, err := defaultClient.hc.Do(req.WithContext(ctx))
if err != nil {
resc <- false
return
@ -266,6 +206,255 @@ func systemInfoSuggestsGCE() bool {
return name == "Google" || name == "Google Compute Engine"
}
// Subscribe calls Client.Subscribe on a client designed for subscribing (one with no
// ResponseHeaderTimeout).
func Subscribe(suffix string, fn func(v string, ok bool) error) error {
return subscribeClient.Subscribe(suffix, fn)
}
// Get calls Client.Get on the default client.
func Get(suffix string) (string, error) { return defaultClient.Get(suffix) }
// ProjectID returns the current instance's project ID string.
func ProjectID() (string, error) { return defaultClient.ProjectID() }
// NumericProjectID returns the current instance's numeric project ID.
func NumericProjectID() (string, error) { return defaultClient.NumericProjectID() }
// InternalIP returns the instance's primary internal IP address.
func InternalIP() (string, error) { return defaultClient.InternalIP() }
// ExternalIP returns the instance's primary external (public) IP address.
func ExternalIP() (string, error) { return defaultClient.ExternalIP() }
// Hostname returns the instance's hostname. This will be of the form
// "<instanceID>.c.<projID>.internal".
func Hostname() (string, error) { return defaultClient.Hostname() }
// InstanceTags returns the list of user-defined instance tags,
// assigned when initially creating a GCE instance.
func InstanceTags() ([]string, error) { return defaultClient.InstanceTags() }
// InstanceID returns the current VM's numeric instance ID.
func InstanceID() (string, error) { return defaultClient.InstanceID() }
// InstanceName returns the current VM's instance ID string.
func InstanceName() (string, error) { return defaultClient.InstanceName() }
// Zone returns the current VM's zone, such as "us-central1-b".
func Zone() (string, error) { return defaultClient.Zone() }
// InstanceAttributes calls Client.InstanceAttributes on the default client.
func InstanceAttributes() ([]string, error) { return defaultClient.InstanceAttributes() }
// ProjectAttributes calls Client.ProjectAttributes on the default client.
func ProjectAttributes() ([]string, error) { return defaultClient.ProjectAttributes() }
// InstanceAttributeValue calls Client.InstanceAttributeValue on the default client.
func InstanceAttributeValue(attr string) (string, error) {
return defaultClient.InstanceAttributeValue(attr)
}
// ProjectAttributeValue calls Client.ProjectAttributeValue on the default client.
func ProjectAttributeValue(attr string) (string, error) {
return defaultClient.ProjectAttributeValue(attr)
}
// Scopes calls Client.Scopes on the default client.
func Scopes(serviceAccount string) ([]string, error) { return defaultClient.Scopes(serviceAccount) }
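Taken together, a minimal sketch of the package-level convenience API above, assuming the code runs on a GCE VM (the attribute name is hypothetical):

```go
package main

import (
	"fmt"
	"log"

	"cloud.google.com/go/compute/metadata"
)

func main() {
	if !metadata.OnGCE() {
		log.Fatal("not running on GCE")
	}
	zone, err := metadata.Zone()
	if err != nil {
		log.Fatal(err)
	}
	val, err := metadata.InstanceAttributeValue("my-flag") // hypothetical attribute
	if _, notDefined := err.(metadata.NotDefinedError); notDefined {
		val = "(unset)"
	} else if err != nil {
		log.Fatal(err)
	}
	fmt.Println(zone, val)
}
```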
func strsContains(ss []string, s string) bool {
for _, v := range ss {
if v == s {
return true
}
}
return false
}
// A Client provides metadata.
type Client struct {
hc *http.Client
}
// NewClient returns a Client that can be used to fetch metadata. All HTTP requests
// will use the given http.Client instead of the default client.
func NewClient(c *http.Client) *Client {
return &Client{hc: c}
}
// getETag returns a value from the metadata service as well as the associated ETag.
// This func is otherwise equivalent to Get.
func (c *Client) getETag(suffix string) (value, etag string, err error) {
// Using a fixed IP makes it very difficult to spoof the metadata service in
// a container, which is an important use-case for local testing of cloud
// deployments. To enable spoofing of the metadata service, the environment
// variable GCE_METADATA_HOST is first inspected to decide where metadata
// requests shall go.
host := os.Getenv(metadataHostEnv)
if host == "" {
// Using 169.254.169.254 instead of "metadata" here because Go
// binaries built with the "netgo" tag and without cgo won't
// know the search suffix for "metadata" is
// ".google.internal", and this IP address is documented as
// being stable anyway.
host = metadataIP
}
u := "http://" + host + "/computeMetadata/v1/" + suffix
req, _ := http.NewRequest("GET", u, nil)
req.Header.Set("Metadata-Flavor", "Google")
req.Header.Set("User-Agent", userAgent)
res, err := c.hc.Do(req)
if err != nil {
return "", "", err
}
defer res.Body.Close()
if res.StatusCode == http.StatusNotFound {
return "", "", NotDefinedError(suffix)
}
all, err := ioutil.ReadAll(res.Body)
if err != nil {
return "", "", err
}
if res.StatusCode != 200 {
return "", "", &Error{Code: res.StatusCode, Message: string(all)}
}
return string(all), res.Header.Get("Etag"), nil
}
// Get returns a value from the metadata service.
// The suffix is appended to "http://${GCE_METADATA_HOST}/computeMetadata/v1/".
//
// If the GCE_METADATA_HOST environment variable is not defined, a default of
// 169.254.169.254 will be used instead.
//
// If the requested metadata is not defined, the returned error will
// be of type NotDefinedError.
func (c *Client) Get(suffix string) (string, error) {
val, _, err := c.getETag(suffix)
return val, err
}
func (c *Client) getTrimmed(suffix string) (s string, err error) {
s, err = c.Get(suffix)
s = strings.TrimSpace(s)
return
}
func (c *Client) lines(suffix string) ([]string, error) {
j, err := c.Get(suffix)
if err != nil {
return nil, err
}
s := strings.Split(strings.TrimSpace(j), "\n")
for i := range s {
s[i] = strings.TrimSpace(s[i])
}
return s, nil
}
// ProjectID returns the current instance's project ID string.
func (c *Client) ProjectID() (string, error) { return projID.get(c) }
// NumericProjectID returns the current instance's numeric project ID.
func (c *Client) NumericProjectID() (string, error) { return projNum.get(c) }
// InstanceID returns the current VM's numeric instance ID.
func (c *Client) InstanceID() (string, error) { return instID.get(c) }
// InternalIP returns the instance's primary internal IP address.
func (c *Client) InternalIP() (string, error) {
return c.getTrimmed("instance/network-interfaces/0/ip")
}
// ExternalIP returns the instance's primary external (public) IP address.
func (c *Client) ExternalIP() (string, error) {
return c.getTrimmed("instance/network-interfaces/0/access-configs/0/external-ip")
}
// Hostname returns the instance's hostname. This will be of the form
// "<instanceID>.c.<projID>.internal".
func (c *Client) Hostname() (string, error) {
return c.getTrimmed("instance/hostname")
}
// InstanceTags returns the list of user-defined instance tags,
// assigned when initially creating a GCE instance.
func (c *Client) InstanceTags() ([]string, error) {
var s []string
j, err := c.Get("instance/tags")
if err != nil {
return nil, err
}
if err := json.NewDecoder(strings.NewReader(j)).Decode(&s); err != nil {
return nil, err
}
return s, nil
}
// InstanceName returns the current VM's instance ID string.
func (c *Client) InstanceName() (string, error) {
host, err := c.Hostname()
if err != nil {
return "", err
}
return strings.Split(host, ".")[0], nil
}
// Zone returns the current VM's zone, such as "us-central1-b".
func (c *Client) Zone() (string, error) {
zone, err := c.getTrimmed("instance/zone")
// zone is of the form "projects/<projNum>/zones/<zoneName>".
if err != nil {
return "", err
}
return zone[strings.LastIndex(zone, "/")+1:], nil
}
// InstanceAttributes returns the list of user-defined attributes,
// assigned when initially creating a GCE VM instance. The value of an
// attribute can be obtained with InstanceAttributeValue.
func (c *Client) InstanceAttributes() ([]string, error) { return c.lines("instance/attributes/") }
// ProjectAttributes returns the list of user-defined attributes
// applying to the project as a whole, not just this VM. The value of
// an attribute can be obtained with ProjectAttributeValue.
func (c *Client) ProjectAttributes() ([]string, error) { return c.lines("project/attributes/") }
// InstanceAttributeValue returns the value of the provided VM
// instance attribute.
//
// If the requested attribute is not defined, the returned error will
// be of type NotDefinedError.
//
// InstanceAttributeValue may return ("", nil) if the attribute was
// defined to be the empty string.
func (c *Client) InstanceAttributeValue(attr string) (string, error) {
return c.Get("instance/attributes/" + attr)
}
// ProjectAttributeValue returns the value of the provided
// project attribute.
//
// If the requested attribute is not defined, the returned error will
// be of type NotDefinedError.
//
// ProjectAttributeValue may return ("", nil) if the attribute was
// defined to be the empty string.
func (c *Client) ProjectAttributeValue(attr string) (string, error) {
return c.Get("project/attributes/" + attr)
}
// Scopes returns the service account scopes for the given account.
// The account may be empty or the string "default" to use the instance's
// main account.
func (c *Client) Scopes(serviceAccount string) ([]string, error) {
if serviceAccount == "" {
serviceAccount = "default"
}
return c.lines("instance/service-accounts/" + serviceAccount + "/scopes")
}
// Subscribe subscribes to a value from the metadata service.
// The suffix is appended to "http://${GCE_METADATA_HOST}/computeMetadata/v1/".
// The suffix may contain query parameters.
@ -275,11 +464,11 @@ func systemInfoSuggestsGCE() bool {
// and ok false. Subscribe blocks until fn returns a non-nil error or the value
// is deleted. Subscribe returns the error value returned from the last call to
// fn, which may be nil when ok == false.
func Subscribe(suffix string, fn func(v string, ok bool) error) error {
func (c *Client) Subscribe(suffix string, fn func(v string, ok bool) error) error {
const failedSubscribeSleep = time.Second * 5
// First check to see if the metadata value exists at all.
val, lastETag, err := getETag(subscribeClient, suffix)
val, lastETag, err := c.getETag(suffix)
if err != nil {
return err
}
@ -295,7 +484,7 @@ func Subscribe(suffix string, fn func(v string, ok bool) error) error {
suffix += "?wait_for_change=true&last_etag="
}
for {
val, etag, err := getETag(subscribeClient, suffix+url.QueryEscape(lastETag))
val, etag, err := c.getETag(suffix + url.QueryEscape(lastETag))
if err != nil {
if _, deleted := err.(NotDefinedError); !deleted {
time.Sleep(failedSubscribeSleep)
@ -311,127 +500,14 @@ func Subscribe(suffix string, fn func(v string, ok bool) error) error {
}
}
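A usage sketch for Subscribe (the attribute suffix is hypothetical); per the doc comment above, returning nil from the callback keeps the subscription alive:

```go
package main

import (
	"errors"
	"fmt"
	"log"

	"cloud.google.com/go/compute/metadata"
)

func main() {
	// Blocks, invoking the callback each time the value changes.
	err := metadata.Subscribe("instance/attributes/my-flag", func(v string, ok bool) error {
		if !ok {
			return errors.New("value was deleted")
		}
		fmt.Println("new value:", v)
		return nil // keep watching
	})
	log.Fatal(err)
}
```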
// ProjectID returns the current instance's project ID string.
func ProjectID() (string, error) { return projID.get() }
// NumericProjectID returns the current instance's numeric project ID.
func NumericProjectID() (string, error) { return projNum.get() }
// InternalIP returns the instance's primary internal IP address.
func InternalIP() (string, error) {
return getTrimmed("instance/network-interfaces/0/ip")
// Error contains an error response from the server.
type Error struct {
// Code is the HTTP response status code.
Code int
// Message is the server response message.
Message string
}
// ExternalIP returns the instance's primary external (public) IP address.
func ExternalIP() (string, error) {
return getTrimmed("instance/network-interfaces/0/access-configs/0/external-ip")
}
// Hostname returns the instance's hostname. This will be of the form
// "<instanceID>.c.<projID>.internal".
func Hostname() (string, error) {
return getTrimmed("instance/hostname")
}
// InstanceTags returns the list of user-defined instance tags,
// assigned when initially creating a GCE instance.
func InstanceTags() ([]string, error) {
var s []string
j, err := Get("instance/tags")
if err != nil {
return nil, err
}
if err := json.NewDecoder(strings.NewReader(j)).Decode(&s); err != nil {
return nil, err
}
return s, nil
}
// InstanceID returns the current VM's numeric instance ID.
func InstanceID() (string, error) {
return instID.get()
}
// InstanceName returns the current VM's instance ID string.
func InstanceName() (string, error) {
host, err := Hostname()
if err != nil {
return "", err
}
return strings.Split(host, ".")[0], nil
}
// Zone returns the current VM's zone, such as "us-central1-b".
func Zone() (string, error) {
zone, err := getTrimmed("instance/zone")
// zone is of the form "projects/<projNum>/zones/<zoneName>".
if err != nil {
return "", err
}
return zone[strings.LastIndex(zone, "/")+1:], nil
}
// InstanceAttributes returns the list of user-defined attributes,
// assigned when initially creating a GCE VM instance. The value of an
// attribute can be obtained with InstanceAttributeValue.
func InstanceAttributes() ([]string, error) { return lines("instance/attributes/") }
// ProjectAttributes returns the list of user-defined attributes
// applying to the project as a whole, not just this VM. The value of
// an attribute can be obtained with ProjectAttributeValue.
func ProjectAttributes() ([]string, error) { return lines("project/attributes/") }
func lines(suffix string) ([]string, error) {
j, err := Get(suffix)
if err != nil {
return nil, err
}
s := strings.Split(strings.TrimSpace(j), "\n")
for i := range s {
s[i] = strings.TrimSpace(s[i])
}
return s, nil
}
// InstanceAttributeValue returns the value of the provided VM
// instance attribute.
//
// If the requested attribute is not defined, the returned error will
// be of type NotDefinedError.
//
// InstanceAttributeValue may return ("", nil) if the attribute was
// defined to be the empty string.
func InstanceAttributeValue(attr string) (string, error) {
return Get("instance/attributes/" + attr)
}
// ProjectAttributeValue returns the value of the provided
// project attribute.
//
// If the requested attribute is not defined, the returned error will
// be of type NotDefinedError.
//
// ProjectAttributeValue may return ("", nil) if the attribute was
// defined to be the empty string.
func ProjectAttributeValue(attr string) (string, error) {
return Get("project/attributes/" + attr)
}
// Scopes returns the service account scopes for the given account.
// The account may be empty or the string "default" to use the instance's
// main account.
func Scopes(serviceAccount string) ([]string, error) {
if serviceAccount == "" {
serviceAccount = "default"
}
return lines("instance/service-accounts/" + serviceAccount + "/scopes")
}
func strsContains(ss []string, s string) bool {
for _, v := range ss {
if v == s {
return true
}
}
return false
func (e *Error) Error() string {
return fmt.Sprintf("compute: Received %d `%s`", e.Code, e.Message)
}
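Pulling the refactor together: a sketch of the Client-based API plus the new Error type, with an arbitrary timeout and a hypothetical attribute name:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"

	"cloud.google.com/go/compute/metadata"
)

func main() {
	c := metadata.NewClient(&http.Client{Timeout: 3 * time.Second})
	val, err := c.Get("instance/attributes/my-flag") // hypothetical attribute
	switch e := err.(type) {
	case nil:
		fmt.Println("value:", val)
	case metadata.NotDefinedError:
		fmt.Println("attribute not set")
	case *metadata.Error:
		// Non-404 HTTP failures now surface the status code and body.
		log.Fatalf("metadata server returned %d: %s", e.Code, e.Message)
	default:
		log.Fatal(err)
	}
}
```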

29
vendor/cloud.google.com/go/go.mod generated vendored Normal file
View file

@ -0,0 +1,29 @@
module cloud.google.com/go
go 1.9
require (
cloud.google.com/go/datastore v1.0.0
github.com/golang/mock v1.3.1
github.com/golang/protobuf v1.3.2
github.com/google/btree v1.0.0
github.com/google/go-cmp v0.3.0
github.com/google/martian v2.1.0+incompatible
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f
github.com/googleapis/gax-go/v2 v2.0.5
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024
go.opencensus.io v0.22.0
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522
golang.org/x/lint v0.0.0-20190409202823-959b441ac422
golang.org/x/net v0.0.0-20190620200207-3b0461eec859
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45
golang.org/x/sync v0.0.0-20190423024810-112230192c58
golang.org/x/text v0.3.2
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0
google.golang.org/api v0.8.0
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64
google.golang.org/grpc v1.21.1
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a
rsc.io/binaryregexp v0.2.0
)

View file

@ -1,4 +1,4 @@
// Copyright 2016 Google Inc. All Rights Reserved.
// Copyright 2016 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -26,7 +26,7 @@ import (
// Repo is the current version of the client libraries in this
// repo. It should be a date in YYYYMMDD format.
const Repo = "20180226"
const Repo = "20190802"
// Go returns the Go runtime version. The returned string
// has no whitespace.
@ -67,5 +67,5 @@ func goVer(s string) string {
}
func notSemverRune(r rune) bool {
return strings.IndexRune("0123456789.", r) < 0
return !strings.ContainsRune("0123456789.", r)
}

35
vendor/cloud.google.com/go/logging/README.md generated vendored Normal file
View file

@ -0,0 +1,35 @@
## Stackdriver Logging [![GoDoc](https://godoc.org/cloud.google.com/go/logging?status.svg)](https://godoc.org/cloud.google.com/go/logging)
- [About Stackdriver Logging](https://cloud.google.com/logging/)
- [API documentation](https://cloud.google.com/logging/docs)
- [Go client documentation](https://godoc.org/cloud.google.com/go/logging)
- [Complete sample programs](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/logging)
### Example Usage
First create a `logging.Client` to use throughout your application:
[snip]:# (logging-1)
```go
ctx := context.Background()
client, err := logging.NewClient(ctx, "my-project")
if err != nil {
// TODO: Handle error.
}
```
Usually, you'll want to add log entries to a buffer to be periodically flushed
(automatically and asynchronously) to the Stackdriver Logging service.
[snip]:# (logging-2)
```go
logger := client.Logger("my-log")
logger.Log(logging.Entry{Payload: "something happened!"})
```
Close your client before your program exits, to flush any buffered log entries.
[snip]:# (logging-3)
```go
err = client.Close()
if err != nil {
// TODO: Handle error.
}
```

View file

@ -1,4 +1,4 @@
// Copyright 2018 Google LLC
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -12,17 +12,19 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// AUTO-GENERATED CODE. DO NOT EDIT.
// Code generated by gapic-generator. DO NOT EDIT.
package logging
import (
"context"
"fmt"
"math"
"net/url"
"time"
"cloud.google.com/go/internal/version"
gax "github.com/googleapis/gax-go"
"golang.org/x/net/context"
"github.com/golang/protobuf/proto"
gax "github.com/googleapis/gax-go/v2"
"google.golang.org/api/iterator"
"google.golang.org/api/option"
"google.golang.org/api/transport"
@ -63,8 +65,8 @@ func defaultConfigCallOptions() *ConfigCallOptions {
codes.Unavailable,
}, gax.Backoff{
Initial: 100 * time.Millisecond,
Max: 1000 * time.Millisecond,
Multiplier: 1.2,
Max: 60000 * time.Millisecond,
Multiplier: 1.3,
})
}),
},
@ -73,7 +75,7 @@ func defaultConfigCallOptions() *ConfigCallOptions {
ListSinks: retry[[2]string{"default", "idempotent"}],
GetSink: retry[[2]string{"default", "idempotent"}],
CreateSink: retry[[2]string{"default", "non_idempotent"}],
UpdateSink: retry[[2]string{"default", "non_idempotent"}],
UpdateSink: retry[[2]string{"default", "idempotent"}],
DeleteSink: retry[[2]string{"default", "idempotent"}],
ListExclusions: retry[[2]string{"default", "idempotent"}],
GetExclusion: retry[[2]string{"default", "idempotent"}],
@ -84,6 +86,8 @@ func defaultConfigCallOptions() *ConfigCallOptions {
}
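The ceiling above (60s, multiplier 1.3) is only the new default; the same gax primitives used in this file allow a per-call override. A sketch with placeholder request values:

```go
package main

import (
	"context"
	"log"
	"time"

	logging "cloud.google.com/go/logging/apiv2"
	gax "github.com/googleapis/gax-go/v2"
	loggingpb "google.golang.org/genproto/googleapis/logging/v2"
	"google.golang.org/grpc/codes"
)

func main() {
	ctx := context.Background()
	c, err := logging.NewConfigClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	sink, err := c.GetSink(ctx,
		&loggingpb.GetSinkRequest{SinkName: "projects/my-project/sinks/my-sink"},
		gax.WithRetry(func() gax.Retryer {
			// Tighter retry than the defaults: only on Unavailable.
			return gax.OnCodes([]codes.Code{codes.Unavailable}, gax.Backoff{
				Initial:    200 * time.Millisecond,
				Max:        10 * time.Second,
				Multiplier: 2,
			})
		}))
	if err != nil {
		log.Fatal(err)
	}
	log.Println(sink.Name)
}
```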
// ConfigClient is a client for interacting with Stackdriver Logging API.
//
// Methods, except Close, may be called concurrently. However, fields must not be modified concurrently with method calls.
type ConfigClient struct {
// The connection to the service.
conn *grpc.ClientConn
@ -100,8 +104,8 @@ type ConfigClient struct {
// NewConfigClient creates a new config service v2 client.
//
// Service for configuring sinks used to export log entries outside of
// Stackdriver Logging.
// Service for configuring sinks used to export log entries out of
// Logging.
func NewConfigClient(ctx context.Context, opts ...option.ClientOption) (*ConfigClient, error) {
conn, err := transport.DialGRPC(ctx, append(defaultConfigClientOptions(), opts...)...)
if err != nil {
@ -132,16 +136,18 @@ func (c *ConfigClient) Close() error {
// the `x-goog-api-client` header passed on each request. Intended for
// use by Google-written clients.
func (c *ConfigClient) SetGoogleClientInfo(keyval ...string) {
kv := append([]string{"gl-go", version.Go()}, keyval...)
kv = append(kv, "gapic", version.Repo, "gax", gax.Version, "grpc", grpc.Version)
kv := append([]string{"gl-go", versionGo()}, keyval...)
kv = append(kv, "gapic", versionClient, "gax", gax.Version, "grpc", grpc.Version)
c.xGoogMetadata = metadata.Pairs("x-goog-api-client", gax.XGoogHeader(kv...))
}
// ListSinks lists sinks.
func (c *ConfigClient) ListSinks(ctx context.Context, req *loggingpb.ListSinksRequest, opts ...gax.CallOption) *LogSinkIterator {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "parent", url.QueryEscape(req.GetParent())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.ListSinks[0:len(c.CallOptions.ListSinks):len(c.CallOptions.ListSinks)], opts...)
it := &LogSinkIterator{}
req = proto.Clone(req).(*loggingpb.ListSinksRequest)
it.InternalFetch = func(pageSize int, pageToken string) ([]*loggingpb.LogSink, string, error) {
var resp *loggingpb.ListSinksResponse
req.PageToken = pageToken
@ -169,12 +175,15 @@ func (c *ConfigClient) ListSinks(ctx context.Context, req *loggingpb.ListSinksRe
return nextPageToken, nil
}
it.pageInfo, it.nextFunc = iterator.NewPageInfo(fetch, it.bufLen, it.takeBuf)
it.pageInfo.MaxSize = int(req.PageSize)
it.pageInfo.Token = req.PageToken
return it
}
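Consuming the iterator is unchanged by the PageSize/PageToken plumbing added here; a sketch with a placeholder parent:

```go
import (
	"context"
	"fmt"

	logging "cloud.google.com/go/logging/apiv2"
	"google.golang.org/api/iterator"
	loggingpb "google.golang.org/genproto/googleapis/logging/v2"
)

func printSinks(ctx context.Context, c *logging.ConfigClient) error {
	it := c.ListSinks(ctx, &loggingpb.ListSinksRequest{Parent: "projects/my-project"})
	for {
		sink, err := it.Next()
		if err == iterator.Done {
			return nil
		}
		if err != nil {
			return err
		}
		fmt.Println(sink.Name)
	}
}
```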
// GetSink gets a sink.
func (c *ConfigClient) GetSink(ctx context.Context, req *loggingpb.GetSinkRequest, opts ...gax.CallOption) (*loggingpb.LogSink, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "sink_name", url.QueryEscape(req.GetSinkName())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.GetSink[0:len(c.CallOptions.GetSink):len(c.CallOptions.GetSink)], opts...)
var resp *loggingpb.LogSink
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
@ -193,7 +202,8 @@ func (c *ConfigClient) GetSink(ctx context.Context, req *loggingpb.GetSinkReques
// writer_identity is not permitted to write to the destination. A sink can
// export log entries only from the resource owning the sink.
func (c *ConfigClient) CreateSink(ctx context.Context, req *loggingpb.CreateSinkRequest, opts ...gax.CallOption) (*loggingpb.LogSink, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "parent", url.QueryEscape(req.GetParent())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.CreateSink[0:len(c.CallOptions.CreateSink):len(c.CallOptions.CreateSink)], opts...)
var resp *loggingpb.LogSink
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
@ -212,7 +222,8 @@ func (c *ConfigClient) CreateSink(ctx context.Context, req *loggingpb.CreateSink
// The updated sink might also have a new writer_identity; see the
// unique_writer_identity field.
func (c *ConfigClient) UpdateSink(ctx context.Context, req *loggingpb.UpdateSinkRequest, opts ...gax.CallOption) (*loggingpb.LogSink, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "sink_name", url.QueryEscape(req.GetSinkName())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.UpdateSink[0:len(c.CallOptions.UpdateSink):len(c.CallOptions.UpdateSink)], opts...)
var resp *loggingpb.LogSink
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
@ -229,7 +240,8 @@ func (c *ConfigClient) UpdateSink(ctx context.Context, req *loggingpb.UpdateSink
// DeleteSink deletes a sink. If the sink has a unique writer_identity, then that
// service account is also deleted.
func (c *ConfigClient) DeleteSink(ctx context.Context, req *loggingpb.DeleteSinkRequest, opts ...gax.CallOption) error {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "sink_name", url.QueryEscape(req.GetSinkName())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.DeleteSink[0:len(c.CallOptions.DeleteSink):len(c.CallOptions.DeleteSink)], opts...)
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
@ -241,9 +253,11 @@ func (c *ConfigClient) DeleteSink(ctx context.Context, req *loggingpb.DeleteSink
// ListExclusions lists all the exclusions in a parent resource.
func (c *ConfigClient) ListExclusions(ctx context.Context, req *loggingpb.ListExclusionsRequest, opts ...gax.CallOption) *LogExclusionIterator {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "parent", url.QueryEscape(req.GetParent())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.ListExclusions[0:len(c.CallOptions.ListExclusions):len(c.CallOptions.ListExclusions)], opts...)
it := &LogExclusionIterator{}
req = proto.Clone(req).(*loggingpb.ListExclusionsRequest)
it.InternalFetch = func(pageSize int, pageToken string) ([]*loggingpb.LogExclusion, string, error) {
var resp *loggingpb.ListExclusionsResponse
req.PageToken = pageToken
@ -271,12 +285,15 @@ func (c *ConfigClient) ListExclusions(ctx context.Context, req *loggingpb.ListEx
return nextPageToken, nil
}
it.pageInfo, it.nextFunc = iterator.NewPageInfo(fetch, it.bufLen, it.takeBuf)
it.pageInfo.MaxSize = int(req.PageSize)
it.pageInfo.Token = req.PageToken
return it
}
// GetExclusion gets the description of an exclusion.
func (c *ConfigClient) GetExclusion(ctx context.Context, req *loggingpb.GetExclusionRequest, opts ...gax.CallOption) (*loggingpb.LogExclusion, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "name", url.QueryEscape(req.GetName())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.GetExclusion[0:len(c.CallOptions.GetExclusion):len(c.CallOptions.GetExclusion)], opts...)
var resp *loggingpb.LogExclusion
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
@ -294,7 +311,8 @@ func (c *ConfigClient) GetExclusion(ctx context.Context, req *loggingpb.GetExclu
// Only log entries belonging to that resource can be excluded.
// You can have up to 10 exclusions in a resource.
func (c *ConfigClient) CreateExclusion(ctx context.Context, req *loggingpb.CreateExclusionRequest, opts ...gax.CallOption) (*loggingpb.LogExclusion, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "parent", url.QueryEscape(req.GetParent())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.CreateExclusion[0:len(c.CallOptions.CreateExclusion):len(c.CallOptions.CreateExclusion)], opts...)
var resp *loggingpb.LogExclusion
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
@ -310,7 +328,8 @@ func (c *ConfigClient) CreateExclusion(ctx context.Context, req *loggingpb.Creat
// UpdateExclusion changes one or more properties of an existing exclusion.
func (c *ConfigClient) UpdateExclusion(ctx context.Context, req *loggingpb.UpdateExclusionRequest, opts ...gax.CallOption) (*loggingpb.LogExclusion, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "name", url.QueryEscape(req.GetName())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.UpdateExclusion[0:len(c.CallOptions.UpdateExclusion):len(c.CallOptions.UpdateExclusion)], opts...)
var resp *loggingpb.LogExclusion
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
@ -326,7 +345,8 @@ func (c *ConfigClient) UpdateExclusion(ctx context.Context, req *loggingpb.Updat
// DeleteExclusion deletes an exclusion.
func (c *ConfigClient) DeleteExclusion(ctx context.Context, req *loggingpb.DeleteExclusionRequest, opts ...gax.CallOption) error {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "name", url.QueryEscape(req.GetName())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.DeleteExclusion[0:len(c.CallOptions.DeleteExclusion):len(c.CallOptions.DeleteExclusion)], opts...)
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error

View file

@ -1,4 +1,4 @@
// Copyright 2018 Google LLC
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -12,20 +12,35 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// AUTO-GENERATED CODE. DO NOT EDIT.
// Code generated by gapic-generator. DO NOT EDIT.
// Package logging is an auto-generated package for the
// Stackdriver Logging API.
//
// NOTE: This package is in alpha. It is not stable, and is likely to change.
//
// Writes log entries and manages your Stackdriver Logging configuration.
// Writes log entries and manages your Logging configuration.
//
// Use of Context
//
// The ctx passed to NewClient is used for authentication requests and
// for creating the underlying connection, but is not used for subsequent calls.
// Individual methods on the client use the ctx given to them.
//
// To close the open connection, use the Close() method.
//
// For information about setting deadlines, reusing contexts, and more
// please visit godoc.org/cloud.google.com/go.
//
// Use the client at cloud.google.com/go/logging in preference to this.
package logging // import "cloud.google.com/go/logging/apiv2"
import (
"golang.org/x/net/context"
"context"
"runtime"
"strings"
"unicode"
"google.golang.org/grpc/metadata"
)
@ -50,3 +65,42 @@ func DefaultAuthScopes() []string {
"https://www.googleapis.com/auth/logging.write",
}
}
// versionGo returns the Go runtime version. The returned string
// has no whitespace, suitable for reporting in header.
func versionGo() string {
const develPrefix = "devel +"
s := runtime.Version()
if strings.HasPrefix(s, develPrefix) {
s = s[len(develPrefix):]
if p := strings.IndexFunc(s, unicode.IsSpace); p >= 0 {
s = s[:p]
}
return s
}
notSemverRune := func(r rune) bool {
return strings.IndexRune("0123456789.", r) < 0
}
if strings.HasPrefix(s, "go1") {
s = s[2:]
var prerelease string
if p := strings.IndexFunc(s, notSemverRune); p >= 0 {
s, prerelease = s[:p], s[p:]
}
if strings.HasSuffix(s, ".") {
s += "0"
} else if strings.Count(s, ".") < 2 {
s += ".0"
}
if prerelease != "" {
s += "-" + prerelease
}
return s
}
return "UNKNOWN"
}
const versionClient = "20190801"

View file

@ -1,4 +1,4 @@
// Copyright 2018 Google LLC
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -12,17 +12,19 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// AUTO-GENERATED CODE. DO NOT EDIT.
// Code generated by gapic-generator. DO NOT EDIT.
package logging
import (
"context"
"fmt"
"math"
"net/url"
"time"
"cloud.google.com/go/internal/version"
gax "github.com/googleapis/gax-go"
"golang.org/x/net/context"
"github.com/golang/protobuf/proto"
gax "github.com/googleapis/gax-go/v2"
"google.golang.org/api/iterator"
"google.golang.org/api/option"
"google.golang.org/api/transport"
@ -59,35 +61,24 @@ func defaultCallOptions() *CallOptions {
codes.Unavailable,
}, gax.Backoff{
Initial: 100 * time.Millisecond,
Max: 1000 * time.Millisecond,
Multiplier: 1.2,
})
}),
},
{"list", "idempotent"}: {
gax.WithRetry(func() gax.Retryer {
return gax.OnCodes([]codes.Code{
codes.DeadlineExceeded,
codes.Internal,
codes.Unavailable,
}, gax.Backoff{
Initial: 100 * time.Millisecond,
Max: 1000 * time.Millisecond,
Multiplier: 1.2,
Max: 60000 * time.Millisecond,
Multiplier: 1.3,
})
}),
},
}
return &CallOptions{
DeleteLog: retry[[2]string{"default", "idempotent"}],
WriteLogEntries: retry[[2]string{"default", "non_idempotent"}],
ListLogEntries: retry[[2]string{"list", "idempotent"}],
WriteLogEntries: retry[[2]string{"default", "idempotent"}],
ListLogEntries: retry[[2]string{"default", "idempotent"}],
ListMonitoredResourceDescriptors: retry[[2]string{"default", "idempotent"}],
ListLogs: retry[[2]string{"default", "idempotent"}],
ListLogs: retry[[2]string{"default", "idempotent"}],
}
}
// Client is a client for interacting with Stackdriver Logging API.
//
// Methods, except Close, may be called concurrently. However, fields must not be modified concurrently with method calls.
type Client struct {
// The connection to the service.
conn *grpc.ClientConn
@ -135,8 +126,8 @@ func (c *Client) Close() error {
// the `x-goog-api-client` header passed on each request. Intended for
// use by Google-written clients.
func (c *Client) SetGoogleClientInfo(keyval ...string) {
kv := append([]string{"gl-go", version.Go()}, keyval...)
kv = append(kv, "gapic", version.Repo, "gax", gax.Version, "grpc", grpc.Version)
kv := append([]string{"gl-go", versionGo()}, keyval...)
kv = append(kv, "gapic", versionClient, "gax", gax.Version, "grpc", grpc.Version)
c.xGoogMetadata = metadata.Pairs("x-goog-api-client", gax.XGoogHeader(kv...))
}
@ -145,7 +136,8 @@ func (c *Client) SetGoogleClientInfo(keyval ...string) {
// Log entries written shortly before the delete operation might not be
// deleted.
func (c *Client) DeleteLog(ctx context.Context, req *loggingpb.DeleteLogRequest, opts ...gax.CallOption) error {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "log_name", url.QueryEscape(req.GetLogName())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.DeleteLog[0:len(c.CallOptions.DeleteLog):len(c.CallOptions.DeleteLog)], opts...)
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
@ -155,13 +147,13 @@ func (c *Client) DeleteLog(ctx context.Context, req *loggingpb.DeleteLogRequest,
return err
}
// WriteLogEntries ## Log entry resources
//
// Writes log entries to Stackdriver Logging. This API method is the
// only way to send log entries to Stackdriver Logging. This method
// is used, directly or indirectly, by the Stackdriver Logging agent
// (fluentd) and all logging libraries configured to use Stackdriver
// Logging.
// WriteLogEntries writes log entries to Logging. This API method is the
// only way to send log entries to Logging. This method
// is used, directly or indirectly, by the Logging agent
// (fluentd) and all logging libraries configured to use Logging.
// A single request may contain log entries for a maximum of 1000
// different resources (projects, organizations, billing accounts or
// folders)
func (c *Client) WriteLogEntries(ctx context.Context, req *loggingpb.WriteLogEntriesRequest, opts ...gax.CallOption) (*loggingpb.WriteLogEntriesResponse, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.WriteLogEntries[0:len(c.CallOptions.WriteLogEntries):len(c.CallOptions.WriteLogEntries)], opts...)
@ -178,12 +170,13 @@ func (c *Client) WriteLogEntries(ctx context.Context, req *loggingpb.WriteLogEnt
}
// ListLogEntries lists log entries. Use this method to retrieve log entries from
// Stackdriver Logging. For ways to export log entries, see
// Logging. For ways to export log entries, see
// Exporting Logs (at /logging/docs/export).
func (c *Client) ListLogEntries(ctx context.Context, req *loggingpb.ListLogEntriesRequest, opts ...gax.CallOption) *LogEntryIterator {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.ListLogEntries[0:len(c.CallOptions.ListLogEntries):len(c.CallOptions.ListLogEntries)], opts...)
it := &LogEntryIterator{}
req = proto.Clone(req).(*loggingpb.ListLogEntriesRequest)
it.InternalFetch = func(pageSize int, pageToken string) ([]*loggingpb.LogEntry, string, error) {
var resp *loggingpb.ListLogEntriesResponse
req.PageToken = pageToken
@ -211,15 +204,17 @@ func (c *Client) ListLogEntries(ctx context.Context, req *loggingpb.ListLogEntri
return nextPageToken, nil
}
it.pageInfo, it.nextFunc = iterator.NewPageInfo(fetch, it.bufLen, it.takeBuf)
it.pageInfo.MaxSize = int(req.PageSize)
it.pageInfo.Token = req.PageToken
return it
}
// ListMonitoredResourceDescriptors lists the descriptors for monitored resource types used by Stackdriver
// Logging.
// ListMonitoredResourceDescriptors lists the descriptors for monitored resource types used by Logging.
func (c *Client) ListMonitoredResourceDescriptors(ctx context.Context, req *loggingpb.ListMonitoredResourceDescriptorsRequest, opts ...gax.CallOption) *MonitoredResourceDescriptorIterator {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.ListMonitoredResourceDescriptors[0:len(c.CallOptions.ListMonitoredResourceDescriptors):len(c.CallOptions.ListMonitoredResourceDescriptors)], opts...)
it := &MonitoredResourceDescriptorIterator{}
req = proto.Clone(req).(*loggingpb.ListMonitoredResourceDescriptorsRequest)
it.InternalFetch = func(pageSize int, pageToken string) ([]*monitoredrespb.MonitoredResourceDescriptor, string, error) {
var resp *loggingpb.ListMonitoredResourceDescriptorsResponse
req.PageToken = pageToken
@ -247,15 +242,19 @@ func (c *Client) ListMonitoredResourceDescriptors(ctx context.Context, req *logg
return nextPageToken, nil
}
it.pageInfo, it.nextFunc = iterator.NewPageInfo(fetch, it.bufLen, it.takeBuf)
it.pageInfo.MaxSize = int(req.PageSize)
it.pageInfo.Token = req.PageToken
return it
}
// ListLogs lists the logs in projects, organizations, folders, or billing accounts.
// Only logs that have entries are listed.
func (c *Client) ListLogs(ctx context.Context, req *loggingpb.ListLogsRequest, opts ...gax.CallOption) *StringIterator {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "parent", url.QueryEscape(req.GetParent())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.ListLogs[0:len(c.CallOptions.ListLogs):len(c.CallOptions.ListLogs)], opts...)
it := &StringIterator{}
req = proto.Clone(req).(*loggingpb.ListLogsRequest)
it.InternalFetch = func(pageSize int, pageToken string) ([]string, string, error) {
var resp *loggingpb.ListLogsResponse
req.PageToken = pageToken
@ -283,6 +282,8 @@ func (c *Client) ListLogs(ctx context.Context, req *loggingpb.ListLogsRequest, o
return nextPageToken, nil
}
it.pageInfo, it.nextFunc = iterator.NewPageInfo(fetch, it.bufLen, it.takeBuf)
it.pageInfo.MaxSize = int(req.PageSize)
it.pageInfo.Token = req.PageToken
return it
}

View file

@ -1,4 +1,4 @@
// Copyright 2018 Google LLC
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -12,17 +12,19 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// AUTO-GENERATED CODE. DO NOT EDIT.
// Code generated by gapic-generator. DO NOT EDIT.
package logging
import (
"context"
"fmt"
"math"
"net/url"
"time"
"cloud.google.com/go/internal/version"
gax "github.com/googleapis/gax-go"
"golang.org/x/net/context"
"github.com/golang/protobuf/proto"
gax "github.com/googleapis/gax-go/v2"
"google.golang.org/api/iterator"
"google.golang.org/api/option"
"google.golang.org/api/transport"
@ -58,8 +60,8 @@ func defaultMetricsCallOptions() *MetricsCallOptions {
codes.Unavailable,
}, gax.Backoff{
Initial: 100 * time.Millisecond,
Max: 1000 * time.Millisecond,
Multiplier: 1.2,
Max: 60000 * time.Millisecond,
Multiplier: 1.3,
})
}),
},
@ -68,12 +70,14 @@ func defaultMetricsCallOptions() *MetricsCallOptions {
ListLogMetrics: retry[[2]string{"default", "idempotent"}],
GetLogMetric: retry[[2]string{"default", "idempotent"}],
CreateLogMetric: retry[[2]string{"default", "non_idempotent"}],
UpdateLogMetric: retry[[2]string{"default", "non_idempotent"}],
UpdateLogMetric: retry[[2]string{"default", "idempotent"}],
DeleteLogMetric: retry[[2]string{"default", "idempotent"}],
}
}
// MetricsClient is a client for interacting with Stackdriver Logging API.
//
// Methods, except Close, may be called concurrently. However, fields must not be modified concurrently with method calls.
type MetricsClient struct {
// The connection to the service.
conn *grpc.ClientConn
@ -121,16 +125,18 @@ func (c *MetricsClient) Close() error {
// the `x-goog-api-client` header passed on each request. Intended for
// use by Google-written clients.
func (c *MetricsClient) SetGoogleClientInfo(keyval ...string) {
kv := append([]string{"gl-go", version.Go()}, keyval...)
kv = append(kv, "gapic", version.Repo, "gax", gax.Version, "grpc", grpc.Version)
kv := append([]string{"gl-go", versionGo()}, keyval...)
kv = append(kv, "gapic", versionClient, "gax", gax.Version, "grpc", grpc.Version)
c.xGoogMetadata = metadata.Pairs("x-goog-api-client", gax.XGoogHeader(kv...))
}
// ListLogMetrics lists logs-based metrics.
func (c *MetricsClient) ListLogMetrics(ctx context.Context, req *loggingpb.ListLogMetricsRequest, opts ...gax.CallOption) *LogMetricIterator {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "parent", url.QueryEscape(req.GetParent())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.ListLogMetrics[0:len(c.CallOptions.ListLogMetrics):len(c.CallOptions.ListLogMetrics)], opts...)
it := &LogMetricIterator{}
req = proto.Clone(req).(*loggingpb.ListLogMetricsRequest)
it.InternalFetch = func(pageSize int, pageToken string) ([]*loggingpb.LogMetric, string, error) {
var resp *loggingpb.ListLogMetricsResponse
req.PageToken = pageToken
@ -158,12 +164,15 @@ func (c *MetricsClient) ListLogMetrics(ctx context.Context, req *loggingpb.ListL
return nextPageToken, nil
}
it.pageInfo, it.nextFunc = iterator.NewPageInfo(fetch, it.bufLen, it.takeBuf)
it.pageInfo.MaxSize = int(req.PageSize)
it.pageInfo.Token = req.PageToken
return it
}
// GetLogMetric gets a logs-based metric.
func (c *MetricsClient) GetLogMetric(ctx context.Context, req *loggingpb.GetLogMetricRequest, opts ...gax.CallOption) (*loggingpb.LogMetric, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "metric_name", url.QueryEscape(req.GetMetricName())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.GetLogMetric[0:len(c.CallOptions.GetLogMetric):len(c.CallOptions.GetLogMetric)], opts...)
var resp *loggingpb.LogMetric
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
@ -179,7 +188,8 @@ func (c *MetricsClient) GetLogMetric(ctx context.Context, req *loggingpb.GetLogM
// CreateLogMetric creates a logs-based metric.
func (c *MetricsClient) CreateLogMetric(ctx context.Context, req *loggingpb.CreateLogMetricRequest, opts ...gax.CallOption) (*loggingpb.LogMetric, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "parent", url.QueryEscape(req.GetParent())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.CreateLogMetric[0:len(c.CallOptions.CreateLogMetric):len(c.CallOptions.CreateLogMetric)], opts...)
var resp *loggingpb.LogMetric
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
@ -195,7 +205,8 @@ func (c *MetricsClient) CreateLogMetric(ctx context.Context, req *loggingpb.Crea
// UpdateLogMetric creates or updates a logs-based metric.
func (c *MetricsClient) UpdateLogMetric(ctx context.Context, req *loggingpb.UpdateLogMetricRequest, opts ...gax.CallOption) (*loggingpb.LogMetric, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "metric_name", url.QueryEscape(req.GetMetricName())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.UpdateLogMetric[0:len(c.CallOptions.UpdateLogMetric):len(c.CallOptions.UpdateLogMetric)], opts...)
var resp *loggingpb.LogMetric
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
@ -211,7 +222,8 @@ func (c *MetricsClient) UpdateLogMetric(ctx context.Context, req *loggingpb.Upda
// DeleteLogMetric deletes a logs-based metric.
func (c *MetricsClient) DeleteLogMetric(ctx context.Context, req *loggingpb.DeleteLogMetricRequest, opts ...gax.CallOption) error {
ctx = insertMetadata(ctx, c.xGoogMetadata)
md := metadata.Pairs("x-goog-request-params", fmt.Sprintf("%s=%v", "metric_name", url.QueryEscape(req.GetMetricName())))
ctx = insertMetadata(ctx, c.xGoogMetadata, md)
opts = append(c.CallOptions.DeleteLogMetric[0:len(c.CallOptions.DeleteLogMetric):len(c.CallOptions.DeleteLogMetric)], opts...)
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error

View file

@ -1,4 +1,4 @@
// Copyright 2016 Google Inc. All Rights Reserved.
// Copyright 2016 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -21,9 +21,6 @@ This client uses Logging API v2.
See https://cloud.google.com/logging/docs/api/v2/ for an introduction to the API.
Note: This package is in beta. Some backwards-incompatible changes may occur.
Creating a Client
Use a Client to interact with the Stackdriver Logging API.
@ -65,7 +62,10 @@ For critical errors, you may want to send your log entries immediately.
LogSync is slow and will block until the log entry has been sent, so it is
not recommended for normal use.
lg.LogSync(ctx, logging.Entry{Payload: "ALERT! Something critical happened!"})
err = lg.LogSync(ctx, logging.Entry{Payload: "ALERT! Something critical happened!"})
if err != nil {
// TODO: Handle error.
}
Payloads
@ -85,11 +85,11 @@ If you have a []byte of JSON, wrap it in json.RawMessage:
lg.Log(logging.Entry{Payload: json.RawMessage(j)})
The Standard Logger Interface
The Standard Logger
You may want to use a standard log.Logger in your program.
// stdlg implements log.Logger
// stdlg is an instance of *log.Logger.
stdlg := lg.StandardLogger(logging.Info)
stdlg.Println("some info")
@ -113,5 +113,22 @@ running from a Google Cloud Platform VM, select "GCE VM Instance". Otherwise, se
accounts can be viewed on the command line with the "gcloud logging read" command.
Grouping Logs by Request
To group all the log entries written during a single HTTP request, create two
Loggers, a "parent" and a "child," with different log IDs. Both should be in the same
project, and have the same MonitoredResource type and labels.
- Parent entries must have HTTPRequest.Request populated. (Strictly speaking, only the URL is necessary.)
- A child entry's timestamp must be within the time interval covered by the parent request (i.e., older
than parent.Timestamp, and newer than parent.Timestamp - parent.HTTPRequest.Latency, assuming the
parent timestamp marks the end of the request).
- The trace field must be populated in all of the entries and match exactly.
You should observe the child log entries grouped under the parent on the console. The
parent entry will not inherit the severity of its children; you must update the
parent severity yourself.
*/
package logging // import "cloud.google.com/go/logging"
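A sketch of the parent/child grouping described above; the log IDs and trace value are placeholders, and real code would derive the trace from the incoming request's trace header:

```go
package main

import (
	"context"
	"log"
	"net/http"
	"time"

	"cloud.google.com/go/logging"
)

func main() {
	ctx := context.Background()
	client, err := logging.NewClient(ctx, "my-project")
	if err != nil {
		log.Fatal(err)
	}
	parent := client.Logger("request_log")
	child := client.Logger("request_details")

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		trace := "projects/my-project/traces/0123456789abcdef0123456789abcdef" // placeholder

		// Child entries carry only the trace.
		child.Log(logging.Entry{Trace: trace, Payload: "handling request"})
		w.Write([]byte("ok"))

		// The parent entry carries the HTTPRequest and the same trace.
		parent.Log(logging.Entry{
			Trace:       trace,
			HTTPRequest: &logging.HTTPRequest{Request: r, Latency: time.Since(start)},
		})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```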

15
vendor/cloud.google.com/go/logging/go.mod generated vendored Normal file
View file

@ -0,0 +1,15 @@
module cloud.google.com/go/logging
go 1.9
require (
cloud.google.com/go v0.43.0
github.com/golang/protobuf v1.3.1
github.com/google/go-cmp v0.3.0
github.com/googleapis/gax-go/v2 v2.0.5
go.opencensus.io v0.22.0
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45
google.golang.org/api v0.7.0
google.golang.org/genproto v0.0.0-20190708153700-3bdd9d9f5532
google.golang.org/grpc v1.21.1
)

View file

@ -1,4 +1,4 @@
// Copyright 2018 Google Inc. All Rights Reserved.
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -12,15 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// +build go1.8
// This file, and the cloud.google.com/go import, won't actually become part of
// the resultant binary.
// +build modhack
package grpc
package logging
import (
"go.opencensus.io/plugin/ocgrpc"
"google.golang.org/grpc"
)
func addOCStatsHandler(opts []grpc.DialOption) []grpc.DialOption {
return append(opts, grpc.WithStatsHandler(&ocgrpc.ClientHandler{}))
}
// Necessary for safely adding multi-module repo. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository
import _ "cloud.google.com/go"

View file

@ -1,4 +1,4 @@
// Copyright 2016 Google Inc. All Rights Reserved.
// Copyright 2016 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -20,15 +20,17 @@ import (
)
const (
// ProdAddr is the production address.
ProdAddr = "logging.googleapis.com:443"
Version = "0.2.0"
)
// LogPath creates a formatted path from a parent and a logID.
func LogPath(parent, logID string) string {
logID = strings.Replace(logID, "/", "%2F", -1)
return fmt.Sprintf("%s/logs/%s", parent, logID)
}
// LogIDFromPath parses and returns the ID from a log path.
func LogIDFromPath(parent, path string) string {
start := len(parent) + len("/logs/")
if len(path) < start {

View file

@ -1,4 +1,4 @@
// Copyright 2016 Google Inc. All Rights Reserved.
// Copyright 2016 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@ -25,16 +25,19 @@
package logging
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"log"
"math"
"net/http"
"regexp"
"strconv"
"strings"
"sync"
"time"
"unicode/utf8"
"cloud.google.com/go/compute/metadata"
"cloud.google.com/go/internal/version"
@ -44,7 +47,6 @@ import (
"github.com/golang/protobuf/ptypes"
structpb "github.com/golang/protobuf/ptypes/struct"
tspb "github.com/golang/protobuf/ptypes/timestamp"
"golang.org/x/net/context"
"google.golang.org/api/option"
"google.golang.org/api/support/bundler"
mrpb "google.golang.org/genproto/googleapis/api/monitoredres"
@ -53,13 +55,13 @@ import (
)
const (
// Scope for reading from the logging service.
// ReadScope is the scope for reading from the logging service.
ReadScope = "https://www.googleapis.com/auth/logging.read"
// Scope for writing to the logging service.
// WriteScope is the scope for writing to the logging service.
WriteScope = "https://www.googleapis.com/auth/logging.write"
// Scope for administrative actions on the logging service.
// AdminScope is the scope for administrative actions on the logging service.
AdminScope = "https://www.googleapis.com/auth/logging.admin"
)
@ -234,7 +236,7 @@ type Logger struct {
// Options
commonResource *mrpb.MonitoredResource
commonLabels map[string]string
writeTimeout time.Duration
ctxFunc func() (context.Context, func())
}
// A LoggerOption is a configuration option for a Logger.
@ -274,12 +276,17 @@ func detectResource() *mrpb.MonitoredResource {
if err != nil {
return
}
name, err := metadata.InstanceName()
if err != nil {
return
}
detectedResource.pb = &mrpb.MonitoredResource{
Type: "gce_instance",
Labels: map[string]string{
"project_id": projectID,
"instance_id": id,
"zone": zone,
"project_id": projectID,
"instance_id": id,
"instance_name": name,
"zone": zone,
},
}
})
@ -398,6 +405,23 @@ type bufferedByteLimit int
func (b bufferedByteLimit) set(l *Logger) { l.bundler.BufferedByteLimit = int(b) }
// ContextFunc is a function that will be called to obtain a context.Context for the
// WriteLogEntries RPC executed in the background for calls to Logger.Log. The
// default is a function that always returns context.Background. The second return
// value of the function is a function to call after the RPC completes.
//
// The function is not used for calls to Logger.LogSync, since the caller can pass
// in the context directly.
//
// This option is EXPERIMENTAL. It may be changed or removed.
func ContextFunc(f func() (ctx context.Context, afterCall func())) LoggerOption {
return contextFunc(f)
}
type contextFunc func() (ctx context.Context, afterCall func())
func (c contextFunc) set(l *Logger) { l.ctxFunc = c }
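A minimal sketch of wiring up the new ContextFunc option, assuming a hypothetical project and log name; afterCall (the second return value) runs once each background WriteLogEntries RPC completes:

```
package main

import (
	"context"
	"log"
	"time"

	"cloud.google.com/go/logging"
)

func main() {
	ctx := context.Background()
	client, err := logging.NewClient(ctx, "my-project") // hypothetical project ID
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Time each background write; the returned afterCall fires when the
	// WriteLogEntries RPC for a bundle of entries finishes.
	lg := client.Logger("my-log", logging.ContextFunc(func() (context.Context, func()) {
		start := time.Now()
		return context.Background(), func() { log.Printf("write took %v", time.Since(start)) }
	}))
	lg.Log(logging.Entry{Payload: "hello"})
}
```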
// Logger returns a Logger that will write entries with the given log ID, such as
// "syslog". A log ID must be less than 512 characters long and can only
// include the following characters: upper and lower case alphanumeric
@ -412,6 +436,7 @@ func (c *Client) Logger(logID string, opts ...LoggerOption) *Logger {
client: c,
logName: internal.LogPath(c.parent, logID),
commonResource: r,
ctxFunc: func() (context.Context, func()) { return context.Background(), nil },
}
l.bundler = bundler.NewBundler(&logpb.LogEntry{}, func(entries interface{}) {
l.writeLogEntries(entries.([]*logpb.LogEntry))
@ -578,6 +603,17 @@ type Entry struct {
// if any. If it contains a relative resource name, the name is assumed to
// be relative to //tracing.googleapis.com.
Trace string
// ID of the span within the trace associated with the log entry.
// The ID is a 16-character hexadecimal encoding of an 8-byte array.
SpanID string
// If set, symbolizes that this request was sampled.
TraceSampled bool
// Optional. Source code location information associated with the log entry,
// if any.
SourceLocation *logpb.LogEntrySourceLocation
}
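A short sketch of populating the new trace fields on an Entry (project, trace, and span values are hypothetical):

```
package main

import (
	"context"
	"log"

	"cloud.google.com/go/logging"
)

func main() {
	ctx := context.Background()
	client, err := logging.NewClient(ctx, "my-project") // hypothetical project ID
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	client.Logger("my-log").Log(logging.Entry{
		Payload:      "request handled",
		Trace:        "projects/my-project/traces/105445aa7843bc8bf206b120001000",
		SpanID:       "000000000000004a", // 16-char hex encoding of an 8-byte span ID
		TraceSampled: true,
	})
}
```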
// HTTPRequest contains an http.Request as well as additional
@ -631,7 +667,7 @@ func fromHTTPRequest(r *HTTPRequest) *logtypepb.HttpRequest {
u.Fragment = ""
pb := &logtypepb.HttpRequest{
RequestMethod: r.Request.Method,
RequestUrl: u.String(),
RequestUrl: fixUTF8(u.String()),
RequestSize: r.RequestSize,
Status: int32(r.Status),
ResponseSize: r.ResponseSize,
@ -648,6 +684,27 @@ func fromHTTPRequest(r *HTTPRequest) *logtypepb.HttpRequest {
return pb
}
// fixUTF8 is a helper that fixes an invalid UTF-8 string by replacing
// invalid UTF-8 runes with the Unicode replacement character (U+FFFD).
// See Issue https://github.com/googleapis/google-cloud-go/issues/1383.
func fixUTF8(s string) string {
if utf8.ValidString(s) {
return s
}
// Otherwise, rebuild the string, replacing invalid runes with U+FFFD.
buf := new(bytes.Buffer)
buf.Grow(len(s))
for _, r := range s {
if utf8.ValidRune(r) {
buf.WriteRune(r)
} else {
buf.WriteRune('\uFFFD')
}
}
return buf.String()
}
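Since fixUTF8 is unexported, here is a standalone copy demonstrating the behavior (note that ranging over a string already decodes invalid bytes to U+FFFD, which is itself a valid rune, so the replacement happens on the WriteRune path):

```
package main

import (
	"bytes"
	"fmt"
	"unicode/utf8"
)

// sanitize mirrors the vendored fixUTF8 helper: invalid UTF-8 is
// replaced with U+FFFD so the proto string field is always valid.
func sanitize(s string) string {
	if utf8.ValidString(s) {
		return s
	}
	buf := new(bytes.Buffer)
	buf.Grow(len(s))
	for _, r := range s {
		if utf8.ValidRune(r) {
			buf.WriteRune(r)
		} else {
			buf.WriteRune('\uFFFD')
		}
	}
	return buf.String()
}

func main() {
	fmt.Println(sanitize("ok"))          // ok
	fmt.Println(sanitize("bad\xffbyte")) // bad�byte
}
```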
// toProtoStruct converts v, which must marshal into a JSON object,
// into a Google Struct proto.
func toProtoStruct(v interface{}) (*structpb.Struct, error) {
@ -713,7 +770,7 @@ func jsonValueToStructValue(v interface{}) *structpb.Value {
// Prefer Log for most uses.
// TODO(jba): come up with a better name (LogNow?) or eliminate.
func (l *Logger) LogSync(ctx context.Context, e Entry) error {
ent, err := toLogEntry(e)
ent, err := l.toLogEntry(e)
if err != nil {
return err
}
@ -728,7 +785,7 @@ func (l *Logger) LogSync(ctx context.Context, e Entry) error {
// Log buffers the Entry for output to the logging service. It never blocks.
func (l *Logger) Log(e Entry) {
ent, err := toLogEntry(e)
ent, err := l.toLogEntry(e)
if err != nil {
l.client.error(err)
return
@ -756,12 +813,16 @@ func (l *Logger) writeLogEntries(entries []*logpb.LogEntry) {
Labels: l.commonLabels,
Entries: entries,
}
ctx, cancel := context.WithTimeout(context.Background(), defaultWriteTimeout)
ctx, afterCall := l.ctxFunc()
ctx, cancel := context.WithTimeout(ctx, defaultWriteTimeout)
defer cancel()
_, err := l.client.client.WriteLogEntries(ctx, req)
if err != nil {
l.client.error(err)
}
if afterCall != nil {
afterCall()
}
}
// StandardLogger returns a *log.Logger for the provided severity.
@ -771,14 +832,38 @@ func (l *Logger) writeLogEntries(entries []*logpb.LogEntry) {
// (for example by calling SetFlags or SetPrefix).
func (l *Logger) StandardLogger(s Severity) *log.Logger { return l.stdLoggers[s] }
func trunc32(i int) int32 {
if i > math.MaxInt32 {
i = math.MaxInt32
var reCloudTraceContext = regexp.MustCompile(`([a-f\d]+)/([a-f\d]+);o=(\d)`)
func deconstructXCloudTraceContext(s string) (traceID, spanID string, traceSampled bool) {
// As per the format described at https://cloud.google.com/trace/docs/troubleshooting#force-trace
// "X-Cloud-Trace-Context: TRACE_ID/SPAN_ID;o=TRACE_TRUE"
// for example:
// "X-Cloud-Trace-Context: 105445aa7843bc8bf206b120001000/0;o=1"
//
// We expect:
// * traceID: "105445aa7843bc8bf206b120001000"
// * spanID: ""
// * traceSampled: true
matches := reCloudTraceContext.FindAllStringSubmatch(s, -1)
if len(matches) != 1 {
return
}
return int32(i)
sub := matches[0]
if len(sub) != 4 {
return
}
traceID, spanID = sub[1], sub[2]
if spanID == "0" {
spanID = ""
}
traceSampled = sub[3] == "1"
return
}
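The helper is unexported, but the header format is easy to exercise with the same pattern (the sample header value comes from the comment above):

```
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as the vendored helper; matches
// "TRACE_ID/SPAN_ID;o=TRACE_TRUE" from X-Cloud-Trace-Context.
var reCloudTraceContext = regexp.MustCompile(`([a-f\d]+)/([a-f\d]+);o=(\d)`)

func main() {
	header := "105445aa7843bc8bf206b120001000/0;o=1"
	m := reCloudTraceContext.FindStringSubmatch(header)
	if m == nil {
		return
	}
	traceID, spanID, sampled := m[1], m[2], m[3] == "1"
	if spanID == "0" { // a span ID of zero means "no span"
		spanID = ""
	}
	fmt.Println(traceID, spanID, sampled)
	// Output: 105445aa7843bc8bf206b120001000  true
}
```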
func toLogEntry(e Entry) (*logpb.LogEntry, error) {
func (l *Logger) toLogEntry(e Entry) (*logpb.LogEntry, error) {
if e.LogName != "" {
return nil, errors.New("logging: Entry.LogName should be not be set when writing")
}
@ -790,15 +875,37 @@ func toLogEntry(e Entry) (*logpb.LogEntry, error) {
if err != nil {
return nil, err
}
if e.Trace == "" && e.HTTPRequest != nil && e.HTTPRequest.Request != nil {
traceHeader := e.HTTPRequest.Request.Header.Get("X-Cloud-Trace-Context")
if traceHeader != "" {
// Set to a relative resource name, as described at
// https://cloud.google.com/appengine/docs/flexible/go/writing-application-logs.
traceID, spanID, traceSampled := deconstructXCloudTraceContext(traceHeader)
if traceID != "" {
e.Trace = fmt.Sprintf("%s/traces/%s", l.client.parent, traceID)
}
if e.SpanID == "" {
e.SpanID = spanID
}
// If we previously hadn't set TraceSampled, let's retrieve it
// from the HTTP request's header, as per:
// https://cloud.google.com/trace/docs/troubleshooting#force-trace
e.TraceSampled = e.TraceSampled || traceSampled
}
}
ent := &logpb.LogEntry{
Timestamp: ts,
Severity: logtypepb.LogSeverity(e.Severity),
InsertId: e.InsertID,
HttpRequest: fromHTTPRequest(e.HTTPRequest),
Operation: e.Operation,
Labels: e.Labels,
Trace: e.Trace,
Resource: e.Resource,
Timestamp: ts,
Severity: logtypepb.LogSeverity(e.Severity),
InsertId: e.InsertID,
HttpRequest: fromHTTPRequest(e.HTTPRequest),
Operation: e.Operation,
Labels: e.Labels,
Trace: e.Trace,
SpanId: e.SpanID,
Resource: e.Resource,
SourceLocation: e.SourceLocation,
TraceSampled: e.TraceSampled,
}
switch p := e.Payload.(type) {
case string:

33
vendor/cloud.google.com/go/tools.go generated vendored Normal file
View file

@ -0,0 +1,33 @@
// +build tools
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// This package exists to cause `go mod` and `go get` to believe these tools
// are dependencies, even though they are not runtime dependencies of any
// package (these are tools used by our CI builds). This means they will appear
// in our `go.mod` file, but will not be a part of the build. Also, since the
// build target is something non-existent, these should not be included in any
// binaries.
package cloud
import (
_ "github.com/golang/protobuf/protoc-gen-go"
_ "github.com/jstemmer/go-junit-report"
_ "golang.org/x/exp/cmd/apidiff"
_ "golang.org/x/lint/golint"
_ "golang.org/x/tools/cmd/goimports"
_ "honnef.co/go/tools/cmd/staticcheck"
)

191
vendor/github.com/golang/groupcache/LICENSE generated vendored Normal file
View file

@ -0,0 +1,191 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright
owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities
that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, "control" means (i) the power, direct or
indirect, to cause the direction or management of such entity, whether by
contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising
permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including
but not limited to software source code, documentation source, and configuration
files.
"Object" form shall mean any form resulting from mechanical transformation or
translation of a Source form, including but not limited to compiled object code,
generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made
available under the License, as indicated by a copyright notice that is included
in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that
is based on (or derived from) the Work and for which the editorial revisions,
annotations, elaborations, or other modifications represent, as a whole, an
original work of authorship. For the purposes of this License, Derivative Works
shall not include works that remain separable from, or merely link (or bind by
name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version
of the Work and any modifications or additions to that Work or Derivative Works
thereof, that is intentionally submitted to Licensor for inclusion in the Work
by the copyright owner or by an individual or Legal Entity authorized to submit
on behalf of the copyright owner. For the purposes of this definition,
"submitted" means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems, and
issue tracking systems that are managed by, or on behalf of, the Licensor for
the purpose of discussing and improving the Work, but excluding communication
that is conspicuously marked or otherwise designated in writing by the copyright
owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf
of whom a Contribution has been received by Licensor and subsequently
incorporated within the Work.
2. Grant of Copyright License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the Work and such
Derivative Works in Source or Object form.
3. Grant of Patent License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free,
irrevocable (except as stated in this section) patent license to make, have
made, use, offer to sell, sell, import, and otherwise transfer the Work, where
such license applies only to those patent claims licensable by such Contributor
that are necessarily infringed by their Contribution(s) alone or by combination
of their Contribution(s) with the Work to which such Contribution(s) was
submitted. If You institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work or a
Contribution incorporated within the Work constitutes direct or contributory
patent infringement, then any patent licenses granted to You under this License
for that Work shall terminate as of the date such litigation is filed.
4. Redistribution.
You may reproduce and distribute copies of the Work or Derivative Works thereof
in any medium, with or without modifications, and in Source or Object form,
provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a copy of
this License; and
You must cause any modified files to carry prominent notices stating that You
changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute,
all copyright, patent, trademark, and attribution notices from the Source form
of the Work, excluding those notices that do not pertain to any part of the
Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution, then any
Derivative Works that You distribute must include a readable copy of the
attribution notices contained within such NOTICE file, excluding those notices
that do not pertain to any part of the Derivative Works, in at least one of the
following places: within a NOTICE text file distributed as part of the
Derivative Works; within the Source form or documentation, if provided along
with the Derivative Works; or, within a display generated by the Derivative
Works, if and wherever such third-party notices normally appear. The contents of
the NOTICE file are for informational purposes only and do not modify the
License. You may add Your own attribution notices within Derivative Works that
You distribute, alongside or as an addendum to the NOTICE text from the Work,
provided that such additional attribution notices cannot be construed as
modifying the License.
You may add Your own copyright statement to Your modifications and may provide
additional or different license terms and conditions for use, reproduction, or
distribution of Your modifications, or for any such Derivative Works as a whole,
provided Your use, reproduction, and distribution of the Work otherwise complies
with the conditions stated in this License.
5. Submission of Contributions.
Unless You explicitly state otherwise, any Contribution intentionally submitted
for inclusion in the Work by You to the Licensor shall be under the terms and
conditions of this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify the terms of
any separate license agreement you may have executed with Licensor regarding
such Contributions.
6. Trademarks.
This License does not grant permission to use the trade names, trademarks,
service marks, or product names of the Licensor, except as required for
reasonable and customary use in describing the origin of the Work and
reproducing the content of the NOTICE file.
7. Disclaimer of Warranty.
Unless required by applicable law or agreed to in writing, Licensor provides the
Work (and each Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE,
NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are
solely responsible for determining the appropriateness of using or
redistributing the Work and assume any risks associated with Your exercise of
permissions under this License.
8. Limitation of Liability.
In no event and under no legal theory, whether in tort (including negligence),
contract, or otherwise, unless required by applicable law (such as deliberate
and grossly negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special, incidental,
or consequential damages of any character arising as a result of this License or
out of the use or inability to use the Work (including but not limited to
damages for loss of goodwill, work stoppage, computer failure or malfunction, or
any and all other commercial damages or losses), even if such Contributor has
been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability.
While redistributing the Work or Derivative Works thereof, You may choose to
offer, and charge a fee for, acceptance of support, warranty, indemnity, or
other liability obligations and/or rights consistent with this License. However,
in accepting such obligations, You may act only on Your own behalf and on Your
sole responsibility, not on behalf of any other Contributor, and only if You
agree to indemnify, defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason of your
accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work
To apply the Apache License to your work, attach the following boilerplate
notice, with the fields enclosed by brackets "[]" replaced with your own
identifying information. (Don't include the brackets!) The text should be
enclosed in the appropriate comment syntax for the file format. We also
recommend that a file or class name and description of purpose be included on
the same "printed page" as the copyright notice for easier identification within
third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

73
vendor/github.com/golang/groupcache/README.md generated vendored Normal file
View file

@ -0,0 +1,73 @@
# groupcache
## Summary
groupcache is a caching and cache-filling library, intended as a
replacement for memcached in many cases.
For API docs and examples, see http://godoc.org/github.com/golang/groupcache
## Comparison to memcached
### **Like memcached**, groupcache:
* shards by key to select which peer is responsible for that key
### **Unlike memcached**, groupcache:
* does not require running a separate set of servers, thus massively
reducing deployment/configuration pain. groupcache is a client
library as well as a server. It connects to its own peers.
* comes with a cache filling mechanism. Whereas memcached just says
"Sorry, cache miss", often resulting in a thundering herd of
database (or whatever) loads from an unbounded number of clients
(which has resulted in several fun outages), groupcache coordinates
cache fills such that only one load in one process of an entire
replicated set of processes populates the cache, then multiplexes
the loaded value to all callers.
* does not support versioned values. If key "foo" is value "bar",
key "foo" must always be "bar". There are neither cache expiration
times, nor explicit cache evictions. Thus there is also no CAS,
nor Increment/Decrement. This also means that groupcache....
* ... supports automatic mirroring of super-hot items to multiple
processes. This prevents memcached hot spotting where a machine's
CPU and/or NIC are overloaded by very popular keys/values.
* is currently only available for Go. It's very unlikely that I
(bradfitz@) will port the code to any other language.
## Loading process
In a nutshell, a groupcache lookup of **Get("foo")** looks like:
(On machine #5 of a set of N machines running the same code)
1. Is the value of "foo" in local memory because it's super hot? If so, use it.
2. Is the value of "foo" in local memory because peer #5 (the current
peer) is the owner of it? If so, use it.
3. Amongst all the peers in my set of N, am I the owner of the key
"foo"? (e.g. does it consistent hash to 5?) If so, load it. If
other callers come in, via the same process or via RPC requests
from peers, they block waiting for the load to finish and get the
same answer. If not, RPC to the peer that's the owner and get
the answer. If the RPC fails, just load it locally (still with
local dup suppression).
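A minimal usage sketch, assuming a groupcache revision whose Getter takes a context.Context (earlier revisions used an opaque groupcache.Context); the group name and filler are hypothetical:

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/golang/groupcache"
)

func main() {
	// A 64 MB group; the getter is consulted only on cache misses, and
	// concurrent misses for the same key are coalesced into one load.
	group := groupcache.NewGroup("users", 64<<20, groupcache.GetterFunc(
		func(ctx context.Context, key string, dest groupcache.Sink) error {
			// Hypothetical slow lookup (database, RPC, ...).
			return dest.SetString("value-for-" + key)
		}))

	var val string
	if err := group.Get(context.Background(), "alice", groupcache.StringSink(&val)); err != nil {
		log.Fatal(err)
	}
	fmt.Println(val) // value-for-alice
}
```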
## Users
groupcache is in production use by dl.google.com (its original user),
parts of Blogger, parts of Google Code, parts of Google Fiber, parts
of Google production monitoring systems, etc.
## Presentations
See http://talks.golang.org/2013/oscon-dl.slide
## Help
Use the golang-nuts mailing list for any discussion or questions.

133
vendor/github.com/golang/groupcache/lru/lru.go generated vendored Normal file
View file

@ -0,0 +1,133 @@
/*
Copyright 2013 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package lru implements an LRU cache.
package lru
import "container/list"
// Cache is an LRU cache. It is not safe for concurrent access.
type Cache struct {
// MaxEntries is the maximum number of cache entries before
// an item is evicted. Zero means no limit.
MaxEntries int
// OnEvicted optionally specifies a callback function to be
// executed when an entry is purged from the cache.
OnEvicted func(key Key, value interface{})
ll *list.List
cache map[interface{}]*list.Element
}
// A Key may be any value that is comparable. See http://golang.org/ref/spec#Comparison_operators
type Key interface{}
type entry struct {
key Key
value interface{}
}
// New creates a new Cache.
// If maxEntries is zero, the cache has no limit and it's assumed
// that eviction is done by the caller.
func New(maxEntries int) *Cache {
return &Cache{
MaxEntries: maxEntries,
ll: list.New(),
cache: make(map[interface{}]*list.Element),
}
}
// Add adds a value to the cache.
func (c *Cache) Add(key Key, value interface{}) {
if c.cache == nil {
c.cache = make(map[interface{}]*list.Element)
c.ll = list.New()
}
if ee, ok := c.cache[key]; ok {
c.ll.MoveToFront(ee)
ee.Value.(*entry).value = value
return
}
ele := c.ll.PushFront(&entry{key, value})
c.cache[key] = ele
if c.MaxEntries != 0 && c.ll.Len() > c.MaxEntries {
c.RemoveOldest()
}
}
// Get looks up a key's value from the cache.
func (c *Cache) Get(key Key) (value interface{}, ok bool) {
if c.cache == nil {
return
}
if ele, hit := c.cache[key]; hit {
c.ll.MoveToFront(ele)
return ele.Value.(*entry).value, true
}
return
}
// Remove removes the provided key from the cache.
func (c *Cache) Remove(key Key) {
if c.cache == nil {
return
}
if ele, hit := c.cache[key]; hit {
c.removeElement(ele)
}
}
// RemoveOldest removes the oldest item from the cache.
func (c *Cache) RemoveOldest() {
if c.cache == nil {
return
}
ele := c.ll.Back()
if ele != nil {
c.removeElement(ele)
}
}
func (c *Cache) removeElement(e *list.Element) {
c.ll.Remove(e)
kv := e.Value.(*entry)
delete(c.cache, kv.key)
if c.OnEvicted != nil {
c.OnEvicted(kv.key, kv.value)
}
}
// Len returns the number of items in the cache.
func (c *Cache) Len() int {
if c.cache == nil {
return 0
}
return c.ll.Len()
}
// Clear purges all stored items from the cache.
func (c *Cache) Clear() {
if c.OnEvicted != nil {
for _, e := range c.cache {
kv := e.Value.(*entry)
c.OnEvicted(kv.key, kv.value)
}
}
c.ll = nil
c.cache = nil
}
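A quick usage example built only from the API above:

```
package main

import (
	"fmt"

	"github.com/golang/groupcache/lru"
)

func main() {
	c := lru.New(2) // at most two entries; Add evicts the oldest beyond that
	c.OnEvicted = func(key lru.Key, value interface{}) {
		fmt.Println("evicted:", key)
	}
	c.Add("a", 1)
	c.Add("b", 2)
	c.Add("c", 3) // evicts "a"
	if _, ok := c.Get("a"); !ok {
		fmt.Println(`"a" is gone`)
	}
	if v, ok := c.Get("c"); ok {
		fmt.Println("c =", v) // c = 3
	}
}
```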

File diff suppressed because it is too large

View file

@ -0,0 +1,117 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2017 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
/*
Package remap handles tracking the locations of Go tokens in a source text
across a rewrite by the Go formatter.
*/
package remap
import (
"fmt"
"go/scanner"
"go/token"
)
// A Location represents a span of byte offsets in the source text.
type Location struct {
Pos, End int // End is exclusive
}
// A Map represents a mapping between token locations in an input source text
// and locations in the corresponding output text.
type Map map[Location]Location
// Find reports whether the specified span is recorded by m, and if so returns
// the new location it was mapped to. If the input span was not found, the
// returned location is the same as the input.
func (m Map) Find(pos, end int) (Location, bool) {
key := Location{
Pos: pos,
End: end,
}
if loc, ok := m[key]; ok {
return loc, true
}
return key, false
}
func (m Map) add(opos, oend, npos, nend int) {
m[Location{Pos: opos, End: oend}] = Location{Pos: npos, End: nend}
}
// Compute constructs a location mapping from input to output. An error is
// reported if any of the tokens of output cannot be mapped.
func Compute(input, output []byte) (Map, error) {
itok := tokenize(input)
otok := tokenize(output)
if len(itok) != len(otok) {
return nil, fmt.Errorf("wrong number of tokens, %d ≠ %d", len(itok), len(otok))
}
m := make(Map)
for i, ti := range itok {
to := otok[i]
if ti.Token != to.Token {
return nil, fmt.Errorf("token %d type mismatch: %s ≠ %s", i+1, ti, to)
}
m.add(ti.pos, ti.end, to.pos, to.end)
}
return m, nil
}
// tokinfo records the span and type of a source token.
type tokinfo struct {
pos, end int
token.Token
}
func tokenize(src []byte) []tokinfo {
fs := token.NewFileSet()
var s scanner.Scanner
s.Init(fs.AddFile("src", fs.Base(), len(src)), src, nil, scanner.ScanComments)
var info []tokinfo
for {
pos, next, lit := s.Scan()
switch next {
case token.SEMICOLON:
continue
}
info = append(info, tokinfo{
pos: int(pos - 1),
end: int(pos + token.Pos(len(lit)) - 1),
Token: next,
})
if next == token.EOF {
break
}
}
return info
}
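remap is an internal generator package, so the import below only works from within the protobuf module itself, and the byte offsets are hand-computed for these two inputs. A sketch of Compute and Find:

```
package main

import (
	"fmt"
	"log"

	"github.com/golang/protobuf/protoc-gen-go/generator/internal/remap"
)

func main() {
	input := []byte("package main\nvar x  =   1\n")
	output := []byte("package main\n\nvar x = 1\n") // a gofmt-style rewrite
	m, err := remap.Compute(input, output)
	if err != nil {
		log.Fatal(err)
	}
	// The token "x" occupies bytes [17,18) of the input; where did it land?
	if loc, ok := m.Find(17, 18); ok {
		fmt.Printf("moved to [%d,%d)\n", loc.Pos, loc.End) // moved to [18,19)
	}
}
```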

View file

@ -0,0 +1,545 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2015 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Package grpc outputs gRPC service descriptions in Go code.
// It runs as a plugin for the Go protocol buffer compiler plugin.
// It is linked in to protoc-gen-go.
package grpc
import (
"fmt"
"strconv"
"strings"
pb "github.com/golang/protobuf/protoc-gen-go/descriptor"
"github.com/golang/protobuf/protoc-gen-go/generator"
)
// generatedCodeVersion indicates a version of the generated code.
// It is incremented whenever an incompatibility between the generated code and
// the grpc package is introduced; the generated code references
// a constant, grpc.SupportPackageIsVersionN (where N is generatedCodeVersion).
const generatedCodeVersion = 6
// Paths for packages used by code generated in this file,
// relative to the import_prefix of the generator.Generator.
const (
contextPkgPath = "context"
grpcPkgPath = "google.golang.org/grpc"
codePkgPath = "google.golang.org/grpc/codes"
statusPkgPath = "google.golang.org/grpc/status"
)
func init() {
generator.RegisterPlugin(new(grpc))
}
// grpc is an implementation of the Go protocol buffer compiler's
// plugin architecture. It generates bindings for gRPC support.
type grpc struct {
gen *generator.Generator
}
// Name returns the name of this plugin, "grpc".
func (g *grpc) Name() string {
return "grpc"
}
// The names for packages imported in the generated code.
// They may vary from the final path component of the import path
// if the name is used by other packages.
var (
contextPkg string
grpcPkg string
)
// Init initializes the plugin.
func (g *grpc) Init(gen *generator.Generator) {
g.gen = gen
}
// Given a type name defined in a .proto, return its object.
// Also record that we're using it, to guarantee the associated import.
func (g *grpc) objectNamed(name string) generator.Object {
g.gen.RecordTypeUse(name)
return g.gen.ObjectNamed(name)
}
// Given a type name defined in a .proto, return its name as we will print it.
func (g *grpc) typeName(str string) string {
return g.gen.TypeName(g.objectNamed(str))
}
// P forwards to g.gen.P.
func (g *grpc) P(args ...interface{}) { g.gen.P(args...) }
// Generate generates code for the services in the given file.
func (g *grpc) Generate(file *generator.FileDescriptor) {
if len(file.FileDescriptorProto.Service) == 0 {
return
}
contextPkg = string(g.gen.AddImport(contextPkgPath))
grpcPkg = string(g.gen.AddImport(grpcPkgPath))
g.P("// Reference imports to suppress errors if they are not otherwise used.")
g.P("var _ ", contextPkg, ".Context")
g.P("var _ ", grpcPkg, ".ClientConnInterface")
g.P()
// Assert version compatibility.
g.P("// This is a compile-time assertion to ensure that this generated file")
g.P("// is compatible with the grpc package it is being compiled against.")
g.P("const _ = ", grpcPkg, ".SupportPackageIsVersion", generatedCodeVersion)
g.P()
for i, service := range file.FileDescriptorProto.Service {
g.generateService(file, service, i)
}
}
// GenerateImports generates the import declaration for this file.
func (g *grpc) GenerateImports(file *generator.FileDescriptor) {
}
// reservedClientName records whether a client name is reserved on the client side.
var reservedClientName = map[string]bool{
// TODO: do we need any in gRPC?
}
func unexport(s string) string { return strings.ToLower(s[:1]) + s[1:] }
// deprecationComment is the standard comment added to deprecated
// messages, fields, enums, and enum values.
var deprecationComment = "// Deprecated: Do not use."
// generateService generates all the code for the named service.
func (g *grpc) generateService(file *generator.FileDescriptor, service *pb.ServiceDescriptorProto, index int) {
path := fmt.Sprintf("6,%d", index) // 6 means service.
origServName := service.GetName()
fullServName := origServName
if pkg := file.GetPackage(); pkg != "" {
fullServName = pkg + "." + fullServName
}
servName := generator.CamelCase(origServName)
deprecated := service.GetOptions().GetDeprecated()
g.P()
g.P(fmt.Sprintf(`// %sClient is the client API for %s service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.`, servName, servName))
// Client interface.
if deprecated {
g.P("//")
g.P(deprecationComment)
}
g.P("type ", servName, "Client interface {")
for i, method := range service.Method {
g.gen.PrintComments(fmt.Sprintf("%s,2,%d", path, i)) // 2 means method in a service.
if method.GetOptions().GetDeprecated() {
g.P("//")
g.P(deprecationComment)
}
g.P(g.generateClientSignature(servName, method))
}
g.P("}")
g.P()
// Client structure.
g.P("type ", unexport(servName), "Client struct {")
g.P("cc ", grpcPkg, ".ClientConnInterface")
g.P("}")
g.P()
// NewClient factory.
if deprecated {
g.P(deprecationComment)
}
g.P("func New", servName, "Client (cc ", grpcPkg, ".ClientConnInterface) ", servName, "Client {")
g.P("return &", unexport(servName), "Client{cc}")
g.P("}")
g.P()
var methodIndex, streamIndex int
serviceDescVar := "_" + servName + "_serviceDesc"
// Client method implementations.
for _, method := range service.Method {
var descExpr string
if !method.GetServerStreaming() && !method.GetClientStreaming() {
// Unary RPC method
descExpr = fmt.Sprintf("&%s.Methods[%d]", serviceDescVar, methodIndex)
methodIndex++
} else {
// Streaming RPC method
descExpr = fmt.Sprintf("&%s.Streams[%d]", serviceDescVar, streamIndex)
streamIndex++
}
g.generateClientMethod(servName, fullServName, serviceDescVar, method, descExpr)
}
// Server interface.
serverType := servName + "Server"
g.P("// ", serverType, " is the server API for ", servName, " service.")
if deprecated {
g.P("//")
g.P(deprecationComment)
}
g.P("type ", serverType, " interface {")
for i, method := range service.Method {
g.gen.PrintComments(fmt.Sprintf("%s,2,%d", path, i)) // 2 means method in a service.
if method.GetOptions().GetDeprecated() {
g.P("//")
g.P(deprecationComment)
}
g.P(g.generateServerSignature(servName, method))
}
g.P("}")
g.P()
// Server Unimplemented struct for forward compatibility.
if deprecated {
g.P(deprecationComment)
}
g.generateUnimplementedServer(servName, service)
// Server registration.
if deprecated {
g.P(deprecationComment)
}
g.P("func Register", servName, "Server(s *", grpcPkg, ".Server, srv ", serverType, ") {")
g.P("s.RegisterService(&", serviceDescVar, `, srv)`)
g.P("}")
g.P()
// Server handler implementations.
var handlerNames []string
for _, method := range service.Method {
hname := g.generateServerMethod(servName, fullServName, method)
handlerNames = append(handlerNames, hname)
}
// Service descriptor.
g.P("var ", serviceDescVar, " = ", grpcPkg, ".ServiceDesc {")
g.P("ServiceName: ", strconv.Quote(fullServName), ",")
g.P("HandlerType: (*", serverType, ")(nil),")
g.P("Methods: []", grpcPkg, ".MethodDesc{")
for i, method := range service.Method {
if method.GetServerStreaming() || method.GetClientStreaming() {
continue
}
g.P("{")
g.P("MethodName: ", strconv.Quote(method.GetName()), ",")
g.P("Handler: ", handlerNames[i], ",")
g.P("},")
}
g.P("},")
g.P("Streams: []", grpcPkg, ".StreamDesc{")
for i, method := range service.Method {
if !method.GetServerStreaming() && !method.GetClientStreaming() {
continue
}
g.P("{")
g.P("StreamName: ", strconv.Quote(method.GetName()), ",")
g.P("Handler: ", handlerNames[i], ",")
if method.GetServerStreaming() {
g.P("ServerStreams: true,")
}
if method.GetClientStreaming() {
g.P("ClientStreams: true,")
}
g.P("},")
}
g.P("},")
g.P("Metadata: \"", file.GetName(), "\",")
g.P("}")
g.P()
}
// generateUnimplementedServer creates the unimplemented server struct
func (g *grpc) generateUnimplementedServer(servName string, service *pb.ServiceDescriptorProto) {
serverType := servName + "Server"
g.P("// Unimplemented", serverType, " can be embedded to have forward compatible implementations.")
g.P("type Unimplemented", serverType, " struct {")
g.P("}")
g.P()
// Unimplemented<service_name>Server's concrete methods
for _, method := range service.Method {
g.generateServerMethodConcrete(servName, method)
}
g.P()
}
// generateServerMethodConcrete returns unimplemented methods which ensure forward compatibility
func (g *grpc) generateServerMethodConcrete(servName string, method *pb.MethodDescriptorProto) {
header := g.generateServerSignatureWithParamNames(servName, method)
g.P("func (*Unimplemented", servName, "Server) ", header, " {")
var nilArg string
if !method.GetServerStreaming() && !method.GetClientStreaming() {
nilArg = "nil, "
}
methName := generator.CamelCase(method.GetName())
statusPkg := string(g.gen.AddImport(statusPkgPath))
codePkg := string(g.gen.AddImport(codePkgPath))
g.P("return ", nilArg, statusPkg, `.Errorf(`, codePkg, `.Unimplemented, "method `, methName, ` not implemented")`)
g.P("}")
}
// generateClientSignature returns the client-side signature for a method.
func (g *grpc) generateClientSignature(servName string, method *pb.MethodDescriptorProto) string {
origMethName := method.GetName()
methName := generator.CamelCase(origMethName)
if reservedClientName[methName] {
methName += "_"
}
reqArg := ", in *" + g.typeName(method.GetInputType())
if method.GetClientStreaming() {
reqArg = ""
}
respName := "*" + g.typeName(method.GetOutputType())
if method.GetServerStreaming() || method.GetClientStreaming() {
respName = servName + "_" + generator.CamelCase(origMethName) + "Client"
}
return fmt.Sprintf("%s(ctx %s.Context%s, opts ...%s.CallOption) (%s, error)", methName, contextPkg, reqArg, grpcPkg, respName)
}
func (g *grpc) generateClientMethod(servName, fullServName, serviceDescVar string, method *pb.MethodDescriptorProto, descExpr string) {
sname := fmt.Sprintf("/%s/%s", fullServName, method.GetName())
methName := generator.CamelCase(method.GetName())
inType := g.typeName(method.GetInputType())
outType := g.typeName(method.GetOutputType())
if method.GetOptions().GetDeprecated() {
g.P(deprecationComment)
}
g.P("func (c *", unexport(servName), "Client) ", g.generateClientSignature(servName, method), "{")
if !method.GetServerStreaming() && !method.GetClientStreaming() {
g.P("out := new(", outType, ")")
// TODO: Pass descExpr to Invoke.
g.P(`err := c.cc.Invoke(ctx, "`, sname, `", in, out, opts...)`)
g.P("if err != nil { return nil, err }")
g.P("return out, nil")
g.P("}")
g.P()
return
}
streamType := unexport(servName) + methName + "Client"
g.P("stream, err := c.cc.NewStream(ctx, ", descExpr, `, "`, sname, `", opts...)`)
g.P("if err != nil { return nil, err }")
g.P("x := &", streamType, "{stream}")
if !method.GetClientStreaming() {
g.P("if err := x.ClientStream.SendMsg(in); err != nil { return nil, err }")
g.P("if err := x.ClientStream.CloseSend(); err != nil { return nil, err }")
}
g.P("return x, nil")
g.P("}")
g.P()
genSend := method.GetClientStreaming()
genRecv := method.GetServerStreaming()
genCloseAndRecv := !method.GetServerStreaming()
// Stream auxiliary types and methods.
g.P("type ", servName, "_", methName, "Client interface {")
if genSend {
g.P("Send(*", inType, ") error")
}
if genRecv {
g.P("Recv() (*", outType, ", error)")
}
if genCloseAndRecv {
g.P("CloseAndRecv() (*", outType, ", error)")
}
g.P(grpcPkg, ".ClientStream")
g.P("}")
g.P()
g.P("type ", streamType, " struct {")
g.P(grpcPkg, ".ClientStream")
g.P("}")
g.P()
if genSend {
g.P("func (x *", streamType, ") Send(m *", inType, ") error {")
g.P("return x.ClientStream.SendMsg(m)")
g.P("}")
g.P()
}
if genRecv {
g.P("func (x *", streamType, ") Recv() (*", outType, ", error) {")
g.P("m := new(", outType, ")")
g.P("if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err }")
g.P("return m, nil")
g.P("}")
g.P()
}
if genCloseAndRecv {
g.P("func (x *", streamType, ") CloseAndRecv() (*", outType, ", error) {")
g.P("if err := x.ClientStream.CloseSend(); err != nil { return nil, err }")
g.P("m := new(", outType, ")")
g.P("if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err }")
g.P("return m, nil")
g.P("}")
g.P()
}
}
// generateServerSignatureWithParamNames returns the server-side signature for a method with parameter names.
func (g *grpc) generateServerSignatureWithParamNames(servName string, method *pb.MethodDescriptorProto) string {
origMethName := method.GetName()
methName := generator.CamelCase(origMethName)
if reservedClientName[methName] {
methName += "_"
}
var reqArgs []string
ret := "error"
if !method.GetServerStreaming() && !method.GetClientStreaming() {
reqArgs = append(reqArgs, "ctx "+contextPkg+".Context")
ret = "(*" + g.typeName(method.GetOutputType()) + ", error)"
}
if !method.GetClientStreaming() {
reqArgs = append(reqArgs, "req *"+g.typeName(method.GetInputType()))
}
if method.GetServerStreaming() || method.GetClientStreaming() {
reqArgs = append(reqArgs, "srv "+servName+"_"+generator.CamelCase(origMethName)+"Server")
}
return methName + "(" + strings.Join(reqArgs, ", ") + ") " + ret
}
// generateServerSignature returns the server-side signature for a method.
func (g *grpc) generateServerSignature(servName string, method *pb.MethodDescriptorProto) string {
origMethName := method.GetName()
methName := generator.CamelCase(origMethName)
if reservedClientName[methName] {
methName += "_"
}
var reqArgs []string
ret := "error"
if !method.GetServerStreaming() && !method.GetClientStreaming() {
reqArgs = append(reqArgs, contextPkg+".Context")
ret = "(*" + g.typeName(method.GetOutputType()) + ", error)"
}
if !method.GetClientStreaming() {
reqArgs = append(reqArgs, "*"+g.typeName(method.GetInputType()))
}
if method.GetServerStreaming() || method.GetClientStreaming() {
reqArgs = append(reqArgs, servName+"_"+generator.CamelCase(origMethName)+"Server")
}
return methName + "(" + strings.Join(reqArgs, ", ") + ") " + ret
}
func (g *grpc) generateServerMethod(servName, fullServName string, method *pb.MethodDescriptorProto) string {
methName := generator.CamelCase(method.GetName())
hname := fmt.Sprintf("_%s_%s_Handler", servName, methName)
inType := g.typeName(method.GetInputType())
outType := g.typeName(method.GetOutputType())
if !method.GetServerStreaming() && !method.GetClientStreaming() {
g.P("func ", hname, "(srv interface{}, ctx ", contextPkg, ".Context, dec func(interface{}) error, interceptor ", grpcPkg, ".UnaryServerInterceptor) (interface{}, error) {")
g.P("in := new(", inType, ")")
g.P("if err := dec(in); err != nil { return nil, err }")
g.P("if interceptor == nil { return srv.(", servName, "Server).", methName, "(ctx, in) }")
g.P("info := &", grpcPkg, ".UnaryServerInfo{")
g.P("Server: srv,")
g.P("FullMethod: ", strconv.Quote(fmt.Sprintf("/%s/%s", fullServName, methName)), ",")
g.P("}")
g.P("handler := func(ctx ", contextPkg, ".Context, req interface{}) (interface{}, error) {")
g.P("return srv.(", servName, "Server).", methName, "(ctx, req.(*", inType, "))")
g.P("}")
g.P("return interceptor(ctx, in, info, handler)")
g.P("}")
g.P()
return hname
}
streamType := unexport(servName) + methName + "Server"
g.P("func ", hname, "(srv interface{}, stream ", grpcPkg, ".ServerStream) error {")
if !method.GetClientStreaming() {
g.P("m := new(", inType, ")")
g.P("if err := stream.RecvMsg(m); err != nil { return err }")
g.P("return srv.(", servName, "Server).", methName, "(m, &", streamType, "{stream})")
} else {
g.P("return srv.(", servName, "Server).", methName, "(&", streamType, "{stream})")
}
g.P("}")
g.P()
genSend := method.GetServerStreaming()
genSendAndClose := !method.GetServerStreaming()
genRecv := method.GetClientStreaming()
// Stream auxiliary types and methods.
g.P("type ", servName, "_", methName, "Server interface {")
if genSend {
g.P("Send(*", outType, ") error")
}
if genSendAndClose {
g.P("SendAndClose(*", outType, ") error")
}
if genRecv {
g.P("Recv() (*", inType, ", error)")
}
g.P(grpcPkg, ".ServerStream")
g.P("}")
g.P()
g.P("type ", streamType, " struct {")
g.P(grpcPkg, ".ServerStream")
g.P("}")
g.P()
if genSend {
g.P("func (x *", streamType, ") Send(m *", outType, ") error {")
g.P("return x.ServerStream.SendMsg(m)")
g.P("}")
g.P()
}
if genSendAndClose {
g.P("func (x *", streamType, ") SendAndClose(m *", outType, ") error {")
g.P("return x.ServerStream.SendMsg(m)")
g.P("}")
g.P()
}
if genRecv {
g.P("func (x *", streamType, ") Recv() (*", inType, ", error) {")
g.P("m := new(", inType, ")")
g.P("if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err }")
g.P("return m, nil")
g.P("}")
g.P()
}
return hname
}
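For orientation, the templates above expand to code of roughly the following shape for a hypothetical unary method; the message types are stand-ins, and this is an illustrative sketch rather than actual generator output:

```
// Hypothetical output for:
//   service Greeter { rpc SayHello(HelloRequest) returns (HelloReply); }
package helloworld

import (
	"context"

	"google.golang.org/grpc"
)

// Stand-ins for the generated protobuf message types.
type HelloRequest struct{ Name string }
type HelloReply struct{ Message string }

// GreeterClient is the client API for the Greeter service.
type GreeterClient interface {
	SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error)
}

type greeterClient struct {
	cc grpc.ClientConnInterface
}

func NewGreeterClient(cc grpc.ClientConnInterface) GreeterClient {
	return &greeterClient{cc}
}

func (c *greeterClient) SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error) {
	out := new(HelloReply)
	if err := c.cc.Invoke(ctx, "/helloworld.Greeter/SayHello", in, out, opts...); err != nil {
		return nil, err
	}
	return out, nil
}
```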

View file

@ -0,0 +1,34 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2015 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import _ "github.com/golang/protobuf/protoc-gen-go/grpc"

View file

@ -0,0 +1,98 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// protoc-gen-go is a plugin for the Google protocol buffer compiler to generate
// Go code. Run it by building this program and putting it in your path with
// the name
// protoc-gen-go
// That word 'go' at the end becomes part of the option string set for the
// protocol compiler, so once the protocol compiler (protoc) is installed
// you can run
// protoc --go_out=output_directory input_directory/file.proto
// to generate Go bindings for the protocol defined by file.proto.
// With that input, the output will be written to
// output_directory/file.pb.go
//
// The generated code is documented in the package comment for
// the library.
//
// See the README and documentation for protocol buffers to learn more:
// https://developers.google.com/protocol-buffers/
package main
import (
"io/ioutil"
"os"
"github.com/golang/protobuf/proto"
"github.com/golang/protobuf/protoc-gen-go/generator"
)
func main() {
// Begin by allocating a generator. The request and response structures are stored there
// so we can do error handling easily - the response structure contains the field to
// report failure.
g := generator.New()
data, err := ioutil.ReadAll(os.Stdin)
if err != nil {
g.Error(err, "reading input")
}
if err := proto.Unmarshal(data, g.Request); err != nil {
g.Error(err, "parsing input proto")
}
if len(g.Request.FileToGenerate) == 0 {
g.Fail("no files to generate")
}
g.CommandLineParameters(g.Request.GetParameter())
// Create a wrapped version of the Descriptors and EnumDescriptors that
// point to the file that defines them.
g.WrapTypes()
g.SetPackageNames()
g.BuildTypeNameMap()
g.GenerateAllFiles()
// Send back the results.
data, err = proto.Marshal(g.Response)
if err != nil {
g.Error(err, "failed to marshal output proto")
}
_, err = os.Stdout.Write(data)
if err != nil {
g.Error(err, "failed to write output proto")
}
}

View file

@ -0,0 +1,369 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: google/protobuf/compiler/plugin.proto
/*
Package plugin_go is a generated protocol buffer package.
It is generated from these files:
google/protobuf/compiler/plugin.proto
It has these top-level messages:
Version
CodeGeneratorRequest
CodeGeneratorResponse
*/
package plugin_go
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import google_protobuf "github.com/golang/protobuf/protoc-gen-go/descriptor"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// The version number of protocol compiler.
type Version struct {
Major *int32 `protobuf:"varint,1,opt,name=major" json:"major,omitempty"`
Minor *int32 `protobuf:"varint,2,opt,name=minor" json:"minor,omitempty"`
Patch *int32 `protobuf:"varint,3,opt,name=patch" json:"patch,omitempty"`
// A suffix for alpha, beta or rc release, e.g., "alpha-1", "rc2". It should
// be empty for mainline stable releases.
Suffix *string `protobuf:"bytes,4,opt,name=suffix" json:"suffix,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *Version) Reset() { *m = Version{} }
func (m *Version) String() string { return proto.CompactTextString(m) }
func (*Version) ProtoMessage() {}
func (*Version) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Version) Unmarshal(b []byte) error {
return xxx_messageInfo_Version.Unmarshal(m, b)
}
func (m *Version) Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_Version.Marshal(b, m, deterministic)
}
func (dst *Version) XXX_Merge(src proto.Message) {
xxx_messageInfo_Version.Merge(dst, src)
}
func (m *Version) XXX_Size() int {
return xxx_messageInfo_Version.Size(m)
}
func (m *Version) XXX_DiscardUnknown() {
xxx_messageInfo_Version.DiscardUnknown(m)
}
var xxx_messageInfo_Version proto.InternalMessageInfo
func (m *Version) GetMajor() int32 {
if m != nil && m.Major != nil {
return *m.Major
}
return 0
}
func (m *Version) GetMinor() int32 {
if m != nil && m.Minor != nil {
return *m.Minor
}
return 0
}
func (m *Version) GetPatch() int32 {
if m != nil && m.Patch != nil {
return *m.Patch
}
return 0
}
func (m *Version) GetSuffix() string {
if m != nil && m.Suffix != nil {
return *m.Suffix
}
return ""
}
// An encoded CodeGeneratorRequest is written to the plugin's stdin.
type CodeGeneratorRequest struct {
// The .proto files that were explicitly listed on the command-line. The
// code generator should generate code only for these files. Each file's
// descriptor will be included in proto_file, below.
FileToGenerate []string `protobuf:"bytes,1,rep,name=file_to_generate,json=fileToGenerate" json:"file_to_generate,omitempty"`
// The generator parameter passed on the command-line.
Parameter *string `protobuf:"bytes,2,opt,name=parameter" json:"parameter,omitempty"`
// FileDescriptorProtos for all files in files_to_generate and everything
// they import. The files will appear in topological order, so each file
// appears before any file that imports it.
//
// protoc guarantees that all proto_files will be written after
// the fields above, even though this is not technically guaranteed by the
// protobuf wire format. This theoretically could allow a plugin to stream
// in the FileDescriptorProtos and handle them one by one rather than read
// the entire set into memory at once. However, as of this writing, this
// is not similarly optimized on protoc's end -- it will store all fields in
// memory at once before sending them to the plugin.
//
// Type names of fields and extensions in the FileDescriptorProto are always
// fully qualified.
ProtoFile []*google_protobuf.FileDescriptorProto `protobuf:"bytes,15,rep,name=proto_file,json=protoFile" json:"proto_file,omitempty"`
// The version number of protocol compiler.
CompilerVersion *Version `protobuf:"bytes,3,opt,name=compiler_version,json=compilerVersion" json:"compiler_version,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *CodeGeneratorRequest) Reset() { *m = CodeGeneratorRequest{} }
func (m *CodeGeneratorRequest) String() string { return proto.CompactTextString(m) }
func (*CodeGeneratorRequest) ProtoMessage() {}
func (*CodeGeneratorRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *CodeGeneratorRequest) Unmarshal(b []byte) error {
return xxx_messageInfo_CodeGeneratorRequest.Unmarshal(m, b)
}
func (m *CodeGeneratorRequest) Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_CodeGeneratorRequest.Marshal(b, m, deterministic)
}
func (dst *CodeGeneratorRequest) XXX_Merge(src proto.Message) {
xxx_messageInfo_CodeGeneratorRequest.Merge(dst, src)
}
func (m *CodeGeneratorRequest) XXX_Size() int {
return xxx_messageInfo_CodeGeneratorRequest.Size(m)
}
func (m *CodeGeneratorRequest) XXX_DiscardUnknown() {
xxx_messageInfo_CodeGeneratorRequest.DiscardUnknown(m)
}
var xxx_messageInfo_CodeGeneratorRequest proto.InternalMessageInfo
func (m *CodeGeneratorRequest) GetFileToGenerate() []string {
if m != nil {
return m.FileToGenerate
}
return nil
}
func (m *CodeGeneratorRequest) GetParameter() string {
if m != nil && m.Parameter != nil {
return *m.Parameter
}
return ""
}
func (m *CodeGeneratorRequest) GetProtoFile() []*google_protobuf.FileDescriptorProto {
if m != nil {
return m.ProtoFile
}
return nil
}
func (m *CodeGeneratorRequest) GetCompilerVersion() *Version {
if m != nil {
return m.CompilerVersion
}
return nil
}
// The plugin writes an encoded CodeGeneratorResponse to stdout.
type CodeGeneratorResponse struct {
// Error message. If non-empty, code generation failed. The plugin process
// should exit with status code zero even if it reports an error in this way.
//
// This should be used to indicate errors in .proto files which prevent the
// code generator from generating correct code. Errors which indicate a
// problem in protoc itself -- such as the input CodeGeneratorRequest being
// unparseable -- should be reported by writing a message to stderr and
// exiting with a non-zero status code.
Error *string `protobuf:"bytes,1,opt,name=error" json:"error,omitempty"`
File []*CodeGeneratorResponse_File `protobuf:"bytes,15,rep,name=file" json:"file,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *CodeGeneratorResponse) Reset() { *m = CodeGeneratorResponse{} }
func (m *CodeGeneratorResponse) String() string { return proto.CompactTextString(m) }
func (*CodeGeneratorResponse) ProtoMessage() {}
func (*CodeGeneratorResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *CodeGeneratorResponse) Unmarshal(b []byte) error {
return xxx_messageInfo_CodeGeneratorResponse.Unmarshal(m, b)
}
func (m *CodeGeneratorResponse) Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_CodeGeneratorResponse.Marshal(b, m, deterministic)
}
func (dst *CodeGeneratorResponse) XXX_Merge(src proto.Message) {
xxx_messageInfo_CodeGeneratorResponse.Merge(dst, src)
}
func (m *CodeGeneratorResponse) XXX_Size() int {
return xxx_messageInfo_CodeGeneratorResponse.Size(m)
}
func (m *CodeGeneratorResponse) XXX_DiscardUnknown() {
xxx_messageInfo_CodeGeneratorResponse.DiscardUnknown(m)
}
var xxx_messageInfo_CodeGeneratorResponse proto.InternalMessageInfo
func (m *CodeGeneratorResponse) GetError() string {
if m != nil && m.Error != nil {
return *m.Error
}
return ""
}
func (m *CodeGeneratorResponse) GetFile() []*CodeGeneratorResponse_File {
if m != nil {
return m.File
}
return nil
}
// Represents a single generated file.
type CodeGeneratorResponse_File struct {
// The file name, relative to the output directory. The name must not
// contain "." or ".." components and must be relative, not be absolute (so,
// the file cannot lie outside the output directory). "/" must be used as
// the path separator, not "\".
//
// If the name is omitted, the content will be appended to the previous
// file. This allows the generator to break large files into small chunks,
// and allows the generated text to be streamed back to protoc so that large
// files need not reside completely in memory at one time. Note that as of
// this writing protoc does not optimize for this -- it will read the entire
// CodeGeneratorResponse before writing files to disk.
Name *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
// If non-empty, indicates that the named file should already exist, and the
// content here is to be inserted into that file at a defined insertion
// point. This feature allows a code generator to extend the output
// produced by another code generator. The original generator may provide
// insertion points by placing special annotations in the file that look
// like:
// @@protoc_insertion_point(NAME)
// The annotation can have arbitrary text before and after it on the line,
// which allows it to be placed in a comment. NAME should be replaced with
// an identifier naming the point -- this is what other generators will use
// as the insertion_point. Code inserted at this point will be placed
// immediately above the line containing the insertion point (thus multiple
// insertions to the same point will come out in the order they were added).
// The double-@ is intended to make it unlikely that the generated code
// could contain things that look like insertion points by accident.
//
// For example, the C++ code generator places the following line in the
// .pb.h files that it generates:
// // @@protoc_insertion_point(namespace_scope)
// This line appears within the scope of the file's package namespace, but
// outside of any particular class. Another plugin can then specify the
// insertion_point "namespace_scope" to generate additional classes or
// other declarations that should be placed in this scope.
//
// Note that if the line containing the insertion point begins with
// whitespace, the same whitespace will be added to every line of the
// inserted text. This is useful for languages like Python, where
// indentation matters. In these languages, the insertion point comment
// should be indented the same amount as any inserted code will need to be
// in order to work correctly in that context.
//
// The code generator that generates the initial file and the one which
// inserts into it must both run as part of a single invocation of protoc.
// Code generators are executed in the order in which they appear on the
// command line.
//
// If |insertion_point| is present, |name| must also be present.
InsertionPoint *string `protobuf:"bytes,2,opt,name=insertion_point,json=insertionPoint" json:"insertion_point,omitempty"`
// The file contents.
Content *string `protobuf:"bytes,15,opt,name=content" json:"content,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *CodeGeneratorResponse_File) Reset() { *m = CodeGeneratorResponse_File{} }
func (m *CodeGeneratorResponse_File) String() string { return proto.CompactTextString(m) }
func (*CodeGeneratorResponse_File) ProtoMessage() {}
func (*CodeGeneratorResponse_File) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2, 0} }
func (m *CodeGeneratorResponse_File) Unmarshal(b []byte) error {
return xxx_messageInfo_CodeGeneratorResponse_File.Unmarshal(m, b)
}
func (m *CodeGeneratorResponse_File) Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_CodeGeneratorResponse_File.Marshal(b, m, deterministic)
}
func (dst *CodeGeneratorResponse_File) XXX_Merge(src proto.Message) {
xxx_messageInfo_CodeGeneratorResponse_File.Merge(dst, src)
}
func (m *CodeGeneratorResponse_File) XXX_Size() int {
return xxx_messageInfo_CodeGeneratorResponse_File.Size(m)
}
func (m *CodeGeneratorResponse_File) XXX_DiscardUnknown() {
xxx_messageInfo_CodeGeneratorResponse_File.DiscardUnknown(m)
}
var xxx_messageInfo_CodeGeneratorResponse_File proto.InternalMessageInfo
func (m *CodeGeneratorResponse_File) GetName() string {
if m != nil && m.Name != nil {
return *m.Name
}
return ""
}
func (m *CodeGeneratorResponse_File) GetInsertionPoint() string {
if m != nil && m.InsertionPoint != nil {
return *m.InsertionPoint
}
return ""
}
func (m *CodeGeneratorResponse_File) GetContent() string {
if m != nil && m.Content != nil {
return *m.Content
}
return ""
}
func init() {
proto.RegisterType((*Version)(nil), "google.protobuf.compiler.Version")
proto.RegisterType((*CodeGeneratorRequest)(nil), "google.protobuf.compiler.CodeGeneratorRequest")
proto.RegisterType((*CodeGeneratorResponse)(nil), "google.protobuf.compiler.CodeGeneratorResponse")
proto.RegisterType((*CodeGeneratorResponse_File)(nil), "google.protobuf.compiler.CodeGeneratorResponse.File")
}
func init() { proto.RegisterFile("google/protobuf/compiler/plugin.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 417 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x74, 0x92, 0xcf, 0x6a, 0x14, 0x41,
0x10, 0xc6, 0x19, 0x77, 0x63, 0x98, 0x8a, 0x64, 0x43, 0x13, 0xa5, 0x09, 0x39, 0x8c, 0x8b, 0xe2,
0x5c, 0x32, 0x0b, 0xc1, 0x8b, 0x78, 0x4b, 0x44, 0x3d, 0x78, 0x58, 0x1a, 0xf1, 0x20, 0xc8, 0x30,
0x99, 0xd4, 0x74, 0x5a, 0x66, 0xba, 0xc6, 0xee, 0x1e, 0xf1, 0x49, 0x7d, 0x0f, 0xdf, 0x40, 0xfa,
0xcf, 0x24, 0xb2, 0xb8, 0xa7, 0xee, 0xef, 0x57, 0xd5, 0xd5, 0x55, 0x1f, 0x05, 0x2f, 0x25, 0x91,
0xec, 0x71, 0x33, 0x1a, 0x72, 0x74, 0x33, 0x75, 0x9b, 0x96, 0x86, 0x51, 0xf5, 0x68, 0x36, 0x63,
0x3f, 0x49, 0xa5, 0xab, 0x10, 0x60, 0x3c, 0xa6, 0x55, 0x73, 0x5a, 0x35, 0xa7, 0x9d, 0x15, 0xbb,
0x05, 0x6e, 0xd1, 0xb6, 0x46, 0x8d, 0x8e, 0x4c, 0xcc, 0x5e, 0xb7, 0x70, 0xf8, 0x05, 0x8d, 0x55,
0xa4, 0xd9, 0x29, 0x1c, 0x0c, 0xcd, 0x77, 0x32, 0x3c, 0x2b, 0xb2, 0xf2, 0x40, 0x44, 0x11, 0xa8,
0xd2, 0x64, 0xf8, 0xa3, 0x44, 0xbd, 0xf0, 0x74, 0x6c, 0x5c, 0x7b, 0xc7, 0x17, 0x91, 0x06, 0xc1,
0x9e, 0xc1, 0x63, 0x3b, 0x75, 0x9d, 0xfa, 0xc5, 0x97, 0x45, 0x56, 0xe6, 0x22, 0xa9, 0xf5, 0x9f,
0x0c, 0x4e, 0xaf, 0xe9, 0x16, 0x3f, 0xa0, 0x46, 0xd3, 0x38, 0x32, 0x02, 0x7f, 0x4c, 0x68, 0x1d,
0x2b, 0xe1, 0xa4, 0x53, 0x3d, 0xd6, 0x8e, 0x6a, 0x19, 0x63, 0xc8, 0xb3, 0x62, 0x51, 0xe6, 0xe2,
0xd8, 0xf3, 0xcf, 0x94, 0x5e, 0x20, 0x3b, 0x87, 0x7c, 0x6c, 0x4c, 0x33, 0xa0, 0xc3, 0xd8, 0x4a,
0x2e, 0x1e, 0x00, 0xbb, 0x06, 0x08, 0xe3, 0xd4, 0xfe, 0x15, 0x5f, 0x15, 0x8b, 0xf2, 0xe8, 0xf2,
0x45, 0xb5, 0x6b, 0xcb, 0x7b, 0xd5, 0xe3, 0xbb, 0x7b, 0x03, 0xb6, 0x1e, 0x8b, 0x3c, 0x44, 0x7d,
0x84, 0x7d, 0x82, 0x93, 0xd9, 0xb8, 0xfa, 0x67, 0xf4, 0x24, 0x8c, 0x77, 0x74, 0xf9, 0xbc, 0xda,
0xe7, 0x70, 0x95, 0xcc, 0x13, 0xab, 0x99, 0x24, 0xb0, 0xfe, 0x9d, 0xc1, 0xd3, 0x9d, 0x99, 0xed,
0x48, 0xda, 0xa2, 0xf7, 0x0e, 0x8d, 0x49, 0x3e, 0xe7, 0x22, 0x0a, 0xf6, 0x11, 0x96, 0xff, 0x34,
0xff, 0x7a, 0xff, 0x8f, 0xff, 0x2d, 0x1a, 0x66, 0x13, 0xa1, 0xc2, 0xd9, 0x37, 0x58, 0x86, 0x79,
0x18, 0x2c, 0x75, 0x33, 0x60, 0xfa, 0x26, 0xdc, 0xd9, 0x2b, 0x58, 0x29, 0x6d, 0xd1, 0x38, 0x45,
0xba, 0x1e, 0x49, 0x69, 0x97, 0xcc, 0x3c, 0xbe, 0xc7, 0x5b, 0x4f, 0x19, 0x87, 0xc3, 0x96, 0xb4,
0x43, 0xed, 0xf8, 0x2a, 0x24, 0xcc, 0xf2, 0x4a, 0xc2, 0x79, 0x4b, 0xc3, 0xde, 0xfe, 0xae, 0x9e,
0x6c, 0xc3, 0x6e, 0x06, 0x7b, 0xed, 0xd7, 0x37, 0x52, 0xb9, 0xbb, 0xe9, 0xc6, 0x87, 0x37, 0x92,
0xfa, 0x46, 0xcb, 0x87, 0x65, 0x0c, 0x97, 0xf6, 0x42, 0xa2, 0xbe, 0x90, 0x94, 0x56, 0xfa, 0x6d,
0x3c, 0x6a, 0x49, 0x7f, 0x03, 0x00, 0x00, 0xff, 0xff, 0xf7, 0x15, 0x40, 0xc5, 0xfe, 0x02, 0x00,
0x00,
}


@ -0,0 +1,167 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc. All rights reserved.
// https://developers.google.com/protocol-buffers/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Author: kenton@google.com (Kenton Varda)
//
// WARNING: The plugin interface is currently EXPERIMENTAL and is subject to
// change.
//
// protoc (aka the Protocol Compiler) can be extended via plugins. A plugin is
// just a program that reads a CodeGeneratorRequest from stdin and writes a
// CodeGeneratorResponse to stdout.
//
// Plugins written using C++ can use google/protobuf/compiler/plugin.h instead
// of dealing with the raw protocol defined here.
//
// A plugin executable needs only to be placed somewhere in the path. The
// plugin should be named "protoc-gen-$NAME", and will then be used when the
// flag "--${NAME}_out" is passed to protoc.
syntax = "proto2";
package google.protobuf.compiler;
option java_package = "com.google.protobuf.compiler";
option java_outer_classname = "PluginProtos";
option go_package = "github.com/golang/protobuf/protoc-gen-go/plugin;plugin_go";
import "google/protobuf/descriptor.proto";
// The version number of protocol compiler.
message Version {
optional int32 major = 1;
optional int32 minor = 2;
optional int32 patch = 3;
// A suffix for alpha, beta or rc release, e.g., "alpha-1", "rc2". It should
// be empty for mainline stable releases.
optional string suffix = 4;
}
// An encoded CodeGeneratorRequest is written to the plugin's stdin.
message CodeGeneratorRequest {
// The .proto files that were explicitly listed on the command-line. The
// code generator should generate code only for these files. Each file's
// descriptor will be included in proto_file, below.
repeated string file_to_generate = 1;
// The generator parameter passed on the command-line.
optional string parameter = 2;
// FileDescriptorProtos for all files in files_to_generate and everything
// they import. The files will appear in topological order, so each file
// appears before any file that imports it.
//
// protoc guarantees that all proto_files will be written after
// the fields above, even though this is not technically guaranteed by the
// protobuf wire format. This theoretically could allow a plugin to stream
// in the FileDescriptorProtos and handle them one by one rather than read
// the entire set into memory at once. However, as of this writing, this
// is not similarly optimized on protoc's end -- it will store all fields in
// memory at once before sending them to the plugin.
//
// Type names of fields and extensions in the FileDescriptorProto are always
// fully qualified.
repeated FileDescriptorProto proto_file = 15;
// The version number of protocol compiler.
optional Version compiler_version = 3;
}
// The plugin writes an encoded CodeGeneratorResponse to stdout.
message CodeGeneratorResponse {
// Error message. If non-empty, code generation failed. The plugin process
// should exit with status code zero even if it reports an error in this way.
//
// This should be used to indicate errors in .proto files which prevent the
// code generator from generating correct code. Errors which indicate a
// problem in protoc itself -- such as the input CodeGeneratorRequest being
// unparseable -- should be reported by writing a message to stderr and
// exiting with a non-zero status code.
optional string error = 1;
// Represents a single generated file.
message File {
// The file name, relative to the output directory. The name must not
// contain "." or ".." components and must be relative, not be absolute (so,
// the file cannot lie outside the output directory). "/" must be used as
// the path separator, not "\".
//
// If the name is omitted, the content will be appended to the previous
// file. This allows the generator to break large files into small chunks,
// and allows the generated text to be streamed back to protoc so that large
// files need not reside completely in memory at one time. Note that as of
// this writing protoc does not optimize for this -- it will read the entire
// CodeGeneratorResponse before writing files to disk.
optional string name = 1;
// If non-empty, indicates that the named file should already exist, and the
// content here is to be inserted into that file at a defined insertion
// point. This feature allows a code generator to extend the output
// produced by another code generator. The original generator may provide
// insertion points by placing special annotations in the file that look
// like:
// @@protoc_insertion_point(NAME)
// The annotation can have arbitrary text before and after it on the line,
// which allows it to be placed in a comment. NAME should be replaced with
// an identifier naming the point -- this is what other generators will use
// as the insertion_point. Code inserted at this point will be placed
// immediately above the line containing the insertion point (thus multiple
// insertions to the same point will come out in the order they were added).
// The double-@ is intended to make it unlikely that the generated code
// could contain things that look like insertion points by accident.
//
// For example, the C++ code generator places the following line in the
// .pb.h files that it generates:
// // @@protoc_insertion_point(namespace_scope)
// This line appears within the scope of the file's package namespace, but
// outside of any particular class. Another plugin can then specify the
// insertion_point "namespace_scope" to generate additional classes or
// other declarations that should be placed in this scope.
//
// Note that if the line containing the insertion point begins with
// whitespace, the same whitespace will be added to every line of the
// inserted text. This is useful for languages like Python, where
// indentation matters. In these languages, the insertion point comment
// should be indented the same amount as any inserted code will need to be
// in order to work correctly in that context.
//
// The code generator that generates the initial file and the one which
// inserts into it must both run as part of a single invocation of protoc.
// Code generators are executed in the order in which they appear on the
// command line.
//
// If |insertion_point| is present, |name| must also be present.
optional string insertion_point = 2;
// The file contents.
optional string content = 15;
}
repeated File file = 15;
}
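Since the protocol above is just one encoded message read from stdin and one written to stdout, a plugin is easy to sketch. The following illustrative Go program (not part of this vendoring change) assumes the generated bindings at github.com/golang/protobuf/protoc-gen-go/plugin; installed on the PATH as, say, protoc-gen-echo, it would be invoked by protoc via a hypothetical --echo_out flag:
```go
package main

import (
	"io/ioutil"
	"os"

	"github.com/golang/protobuf/proto"
	plugin "github.com/golang/protobuf/protoc-gen-go/plugin"
)

func main() {
	// Read the encoded CodeGeneratorRequest from stdin.
	in, err := ioutil.ReadAll(os.Stdin)
	if err != nil {
		os.Exit(1)
	}
	req := &plugin.CodeGeneratorRequest{}
	if err := proto.Unmarshal(in, req); err != nil {
		os.Exit(1)
	}

	// Emit one generated file per .proto file listed on the command line.
	resp := &plugin.CodeGeneratorResponse{}
	for _, name := range req.GetFileToGenerate() {
		resp.File = append(resp.File, &plugin.CodeGeneratorResponse_File{
			Name:    proto.String(name + ".echo.txt"),
			Content: proto.String("saw file: " + name),
		})
	}

	// Write the encoded CodeGeneratorResponse to stdout.
	out, err := proto.Marshal(resp)
	if err != nil {
		os.Exit(1)
	}
	os.Stdout.Write(out)
}
```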


@ -1,19 +1,22 @@
Google API Extensions for Go
============================
[![Build Status](https://travis-ci.org/googleapis/gax-go.svg?branch=master)](https://travis-ci.org/googleapis/gax-go)
[![Code Coverage](https://img.shields.io/codecov/c/github/googleapis/gax-go.svg)](https://codecov.io/github/googleapis/gax-go)
[![GoDoc](https://godoc.org/github.com/googleapis/gax-go?status.svg)](https://godoc.org/github.com/googleapis/gax-go)
Google API Extensions for Go (gax-go) is a set of modules which aids the
development of APIs for clients and servers based on `gRPC` and Google API
conventions.
Application code will rarely need to use this library directly,
To install the API extensions, use:
```
go get -u github.com/googleapis/gax-go
```
**Note:** Application code will rarely need to use this library directly,
but the code generated automatically from API definition files can use it
to simplify code generation and to provide more convenient and idiomatic API surface.
**This project is currently experimental and not supported.**
Go Versions
===========
This library requires Go 1.6 or above.

11 vendor/github.com/googleapis/gax-go/go.mod generated vendored Normal file

@ -0,0 +1,11 @@
module github.com/googleapis/gax-go
require (
github.com/golang/protobuf v1.3.1
github.com/googleapis/gax-go/v2 v2.0.2
golang.org/x/exp v0.0.0-20190221220918-438050ddec5e
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b
google.golang.org/grpc v1.19.0
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099
)


@ -1,24 +0,0 @@
package gax
import "bytes"
// XGoogHeader is for use by the Google Cloud Libraries only.
//
// XGoogHeader formats key-value pairs.
// The resulting string is suitable for x-goog-api-client header.
func XGoogHeader(keyval ...string) string {
if len(keyval) == 0 {
return ""
}
if len(keyval)%2 != 0 {
panic("gax.Header: odd argument count")
}
var buf bytes.Buffer
for i := 0; i < len(keyval); i += 2 {
buf.WriteByte(' ')
buf.WriteString(keyval[i])
buf.WriteByte('/')
buf.WriteString(keyval[i+1])
}
return buf.String()[1:]
}


@ -113,6 +113,7 @@ type Backoff struct {
cur time.Duration
}
// Pause returns the next time.Duration that the caller should use to backoff.
func (bo *Backoff) Pause() time.Duration {
if bo.Initial == 0 {
bo.Initial = time.Second
@ -126,10 +127,11 @@ func (bo *Backoff) Pause() time.Duration {
if bo.Multiplier < 1 {
bo.Multiplier = 2
}
// Select a duration between zero and the current max. It might seem counterintuitive to
// have so much jitter, but https://www.awsarchitectureblog.com/2015/03/backoff.html
// argues that that is the best strategy.
d := time.Duration(rand.Int63n(int64(bo.cur)))
// Select a duration between 1ns and the current max. It might seem
// counterintuitive to have so much jitter, but
// https://www.awsarchitectureblog.com/2015/03/backoff.html argues that
// that is the best strategy.
d := time.Duration(1 + rand.Int63n(int64(bo.cur)))
bo.cur = time.Duration(float64(bo.cur) * bo.Multiplier)
if bo.cur > bo.Max {
bo.cur = bo.Max
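As a usage sketch (not part of this diff), a caller drives the jittered backoff like so; doSomething is a hypothetical fallible operation:
```go
bo := gax.Backoff{
	Initial:    100 * time.Millisecond, // first pause is at most this long
	Max:        5 * time.Second,        // pauses never exceed this
	Multiplier: 2,                      // the cap doubles after each pause
}
for {
	if err := doSomething(); err == nil {
		break
	}
	time.Sleep(bo.Pause()) // sleep a jittered duration, then retry
}
```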
@ -143,10 +145,12 @@ func (o grpcOpt) Resolve(s *CallSettings) {
s.GRPC = o
}
// WithGRPCOptions allows passing gRPC call options during client creation.
func WithGRPCOptions(opt ...grpc.CallOption) CallOption {
return grpcOpt(append([]grpc.CallOption(nil), opt...))
}
// CallSettings allow fine-grained control over how calls are made.
type CallSettings struct {
// Retry returns a Retryer to be used to control retry logic of a method call.
// If Retry is nil or the returned Retryer is nil, the call will not be retried.


@ -33,8 +33,7 @@
// Application code will rarely need to use this library directly.
// However, code generated automatically from API definition files can use it
// to simplify code generation and to provide more convenient and idiomatic API surfaces.
//
// This project is currently experimental and not supported.
package gax
const Version = "0.1.0"
// Version specifies the gax-go version being used.
const Version = "2.0.4"

3 vendor/github.com/googleapis/gax-go/v2/go.mod generated vendored Normal file

@ -0,0 +1,3 @@
module github.com/googleapis/gax-go/v2
require google.golang.org/grpc v1.19.0

53 vendor/github.com/googleapis/gax-go/v2/header.go generated vendored Normal file

@ -0,0 +1,53 @@
// Copyright 2018, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package gax
import "bytes"
// XGoogHeader is for use by the Google Cloud Libraries only.
//
// XGoogHeader formats key-value pairs.
// The resulting string is suitable for x-goog-api-client header.
func XGoogHeader(keyval ...string) string {
if len(keyval) == 0 {
return ""
}
if len(keyval)%2 != 0 {
panic("gax.Header: odd argument count")
}
var buf bytes.Buffer
for i := 0; i < len(keyval); i += 2 {
buf.WriteByte(' ')
buf.WriteString(keyval[i])
buf.WriteByte('/')
buf.WriteString(keyval[i+1])
}
return buf.String()[1:]
}
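For illustration, with made-up version strings:
```go
h := gax.XGoogHeader("gl-go", "1.12.9", "gax", "2.0.5")
// h == "gl-go/1.12.9 gax/2.0.5", ready to be set as the value of the
// x-goog-api-client header.
```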


@ -30,12 +30,12 @@
package gax
import (
"context"
"strings"
"time"
"golang.org/x/net/context"
)
// A user defined call stub.
// APICall is a user defined call stub.
type APICall func(context.Context, CallSettings) error
// Invoke calls the given APICall,
@ -74,6 +74,15 @@ func invoke(ctx context.Context, call APICall, settings CallSettings, sp sleeper
if settings.Retry == nil {
return err
}
// Never retry permanent certificate errors. (e.x. if ca-certificates
// are not installed). We should only make very few, targeted
// exceptions: many (other) status=Unavailable should be retried, such
// as if there's a network hiccup, or the internet goes out for a
// minute. This is also why here we are doing string parsing instead of
// simply making Unavailable a non-retried code elsewhere.
if strings.Contains(err.Error(), "x509: certificate signed by unknown authority") {
return err
}
if retryer == nil {
if r := settings.Retry(); r != nil {
retryer = r
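A sketch of wiring this retry path up through gax.Invoke; stub.Ping is a hypothetical RPC, and the retried codes and backoff values are arbitrary choices:
```go
err := gax.Invoke(ctx,
	func(ctx context.Context, _ gax.CallSettings) error {
		return stub.Ping(ctx) // hypothetical RPC call
	},
	gax.WithRetry(func() gax.Retryer {
		// Retry only on Unavailable, with jittered exponential backoff.
		return gax.OnCodes([]codes.Code{codes.Unavailable}, gax.Backoff{
			Initial:    100 * time.Millisecond,
			Max:        30 * time.Second,
			Multiplier: 1.3,
		})
	}))
```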

127 vendor/go.opencensus.io/README.md generated vendored

@ -7,7 +7,9 @@
OpenCensus Go is a Go implementation of OpenCensus, a toolkit for
collecting application performance and behavior monitoring data.
Currently it consists of three major components: tags, stats, and tracing.
Currently it consists of three major components: tags, stats and tracing.
#### OpenCensus and OpenTracing have merged to form OpenTelemetry, which serves as the next major version of OpenCensus and OpenTracing. OpenTelemetry will offer backwards compatibility with existing OpenCensus integrations, and we will continue to make security patches to existing OpenCensus libraries for two years. Read more about the merger [here](https://medium.com/opentracing/a-roadmap-to-convergence-b074e5815289).
## Installation
@ -22,17 +24,42 @@ The use of vendoring or a dependency management tool is recommended.
OpenCensus Go libraries require Go 1.8 or later.
## Getting Started
The easiest way to get started using OpenCensus in your application is to use an existing
integration with your RPC framework:
* [net/http](https://godoc.org/go.opencensus.io/plugin/ochttp)
* [gRPC](https://godoc.org/go.opencensus.io/plugin/ocgrpc)
* [database/sql](https://godoc.org/github.com/opencensus-integrations/ocsql)
* [Go kit](https://godoc.org/github.com/go-kit/kit/tracing/opencensus)
* [Groupcache](https://godoc.org/github.com/orijtech/groupcache)
* [Caddy webserver](https://godoc.org/github.com/orijtech/caddy)
* [MongoDB](https://godoc.org/github.com/orijtech/mongo-go-driver)
* [Redis gomodule/redigo](https://godoc.org/github.com/orijtech/redigo)
* [Redis goredis/redis](https://godoc.org/github.com/orijtech/redis)
* [Memcache](https://godoc.org/github.com/orijtech/gomemcache)
If you're using a framework not listed here, you could either implement your own middleware for your
framework or use [custom stats](#stats) and [spans](#spans) directly in your application.
## Exporters
OpenCensus can export instrumentation data to various backends.
Currently, OpenCensus supports:
OpenCensus can export instrumentation data to various backends.
OpenCensus has exporter implementations for the following; users
can also implement their own exporters via the exporter interfaces
([stats](https://godoc.org/go.opencensus.io/stats/view#Exporter),
[trace](https://godoc.org/go.opencensus.io/trace#Exporter)), as sketched
after this list:
* [Prometheus][exporter-prom] for stats
* [OpenZipkin][exporter-zipkin] for traces
* Stackdriver [Monitoring][exporter-stackdriver] and [Trace][exporter-stackdriver]
* [Stackdriver][exporter-stackdriver] Monitoring for stats and Trace for traces
* [Jaeger][exporter-jaeger] for traces
* [AWS X-Ray][exporter-xray] for traces
* [Datadog][exporter-datadog] for stats and traces
* [Graphite][exporter-graphite] for stats
* [Honeycomb][exporter-honeycomb] for traces
* [New Relic][exporter-newrelic] for stats and traces
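Below is a minimal custom trace exporter sketch against the trace.Exporter interface linked above; logging is an arbitrary choice of destination:
```go
package main

import (
	"log"

	"go.opencensus.io/trace"
)

// printExporter implements trace.Exporter by logging finished spans.
type printExporter struct{}

func (printExporter) ExportSpan(sd *trace.SpanData) {
	log.Printf("span %q took %v", sd.Name, sd.EndTime.Sub(sd.StartTime))
}

func main() {
	trace.RegisterExporter(printExporter{})
	// ... instrumented code runs here; finished spans are logged.
}
```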
## Overview
@ -43,13 +70,6 @@ multiple services until there is a response. OpenCensus allows
you to instrument your services and collect diagnostics data all
through your services end-to-end.
Start with instrumenting HTTP and gRPC clients and servers,
then add additional custom instrumentation if needed.
* [HTTP guide](https://github.com/census-instrumentation/opencensus-go/tree/master/examples/http)
* [gRPC guide](https://github.com/census-instrumentation/opencensus-go/tree/master/examples/grpc)
## Tags
Tags represent propagated key-value pairs. They are propagated using `context.Context`
@ -57,11 +77,11 @@ in the same process or can be encoded to be transmitted on the wire. Usually, th
be handled by an integration plugin, e.g. `ocgrpc.ServerHandler` and `ocgrpc.ClientHandler`
for gRPC.
Package tag allows adding or modifying tags in the current context.
Package `tag` allows adding or modifying tags in the current context.
[embedmd]:# (internal/readme/tags.go new)
```go
ctx, err = tag.New(ctx,
ctx, err := tag.New(ctx,
tag.Insert(osKey, "macOS-10.12.5"),
tag.Upsert(userIDKey, "cde36753ed"),
)
@ -106,7 +126,7 @@ Currently three types of aggregations are supported:
[embedmd]:# (internal/readme/stats.go aggs)
```go
distAgg := view.Distribution(0, 1<<32, 2<<32, 3<<32)
distAgg := view.Distribution(1<<32, 2<<32, 3<<32)
countAgg := view.Count()
sumAgg := view.Sum()
```
@ -116,26 +136,79 @@ Here we create a view with the DistributionAggregation over our measure.
[embedmd]:# (internal/readme/stats.go view)
```go
if err := view.Register(&view.View{
Name: "my.org/video_size_distribution",
Name: "example.com/video_size_distribution",
Description: "distribution of processed video size over time",
Measure: videoSize,
Aggregation: view.Distribution(0, 1<<32, 2<<32, 3<<32),
Aggregation: view.Distribution(1<<32, 2<<32, 3<<32),
}); err != nil {
log.Fatalf("Failed to subscribe to view: %v", err)
log.Fatalf("Failed to register view: %v", err)
}
```
Subscribe begins collecting data for the view. Subscribed views' data will be
Register begins collecting data for the view. Registered views' data will be
exported via the registered exporters.
## Traces
A distributed trace tracks the progression of a single user request as
it is handled by the services and processes that make up an application.
Each step is called a span in the trace. Spans include metadata about the step,
including, in particular, the time spent in the step, called the span's latency.
Below you see a trace and several spans underneath it.
![Traces and spans](https://i.imgur.com/7hZwRVj.png)
### Spans
Span is the unit step in a trace. Each span has a name, latency, status and
additional metadata.
Below we are starting a span for a cache read and ending it
when we are done:
[embedmd]:# (internal/readme/trace.go startend)
```go
ctx, span := trace.StartSpan(ctx, "your choice of name")
ctx, span := trace.StartSpan(ctx, "cache.Get")
defer span.End()
// Do work to get from cache.
```
### Propagation
Spans can have parents or can be root spans if they don't have any parents.
The current span is propagated in-process and across the network to allow associating
new child spans with the parent.
In the same process, `context.Context` is used to propagate spans.
`trace.StartSpan` creates a new span as a root if the current context
doesn't contain a span. Or, it creates a child of the span that is
already in current context. The returned context can be used to keep
propagating the newly created span in the current context.
[embedmd]:# (internal/readme/trace.go startend)
```go
ctx, span := trace.StartSpan(ctx, "cache.Get")
defer span.End()
// Do work to get from cache.
```
Across the network, OpenCensus provides different propagation
methods for different protocols.
* gRPC integrations use the OpenCensus' [binary propagation format](https://godoc.org/go.opencensus.io/trace/propagation).
* HTTP integrations use Zipkin's [B3](https://github.com/openzipkin/b3-propagation)
by default but can be configured to use a custom propagation method by setting another
[propagation.HTTPFormat](https://godoc.org/go.opencensus.io/trace/propagation#HTTPFormat); see the sketch below.
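As a sketch, an HTTP client can select a propagation format explicitly via the go.opencensus.io/plugin/ochttp and .../ochttp/propagation/b3 packages (B3 is already the HTTP default, so this is purely illustrative):
```go
client := &http.Client{
	Transport: &ochttp.Transport{
		// Explicitly select B3 propagation (the default for HTTP).
		Propagation: &b3.HTTPFormat{},
	},
}
```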
## Execution Tracer
With Go 1.11, OpenCensus Go will support integration with the Go execution tracer.
See [Debugging Latency in Go](https://medium.com/observability/debugging-latency-in-go-1-11-9f97a7910d68)
for an example of their mutual use.
## Profiles
OpenCensus tags can be applied as profiler labels
@ -167,7 +240,7 @@ Before version 1.0.0, the following deprecation policy will be observed:
No backwards-incompatible changes will be made except for the removal of symbols that have
been marked as *Deprecated* for at least one minor release (e.g. 0.9.0 to 0.10.0). A release
removing the *Deprecated* functionality will be made no sooner than 28 days after the first
release in which the functionality was marked *Deprecated*.
[travis-image]: https://travis-ci.org/census-instrumentation/opencensus-go.svg?branch=master
@ -183,8 +256,12 @@ release in which the functionality was marked *Deprecated*.
[new-ex]: https://godoc.org/go.opencensus.io/tag#example-NewMap
[new-replace-ex]: https://godoc.org/go.opencensus.io/tag#example-NewMap--Replace
[exporter-prom]: https://godoc.org/go.opencensus.io/exporter/prometheus
[exporter-prom]: https://godoc.org/contrib.go.opencensus.io/exporter/prometheus
[exporter-stackdriver]: https://godoc.org/contrib.go.opencensus.io/exporter/stackdriver
[exporter-zipkin]: https://godoc.org/go.opencensus.io/exporter/zipkin
[exporter-jaeger]: https://godoc.org/go.opencensus.io/exporter/jaeger
[exporter-xray]: https://github.com/census-instrumentation/opencensus-go-exporter-aws
[exporter-zipkin]: https://godoc.org/contrib.go.opencensus.io/exporter/zipkin
[exporter-jaeger]: https://godoc.org/contrib.go.opencensus.io/exporter/jaeger
[exporter-xray]: https://github.com/census-ecosystem/opencensus-go-exporter-aws
[exporter-datadog]: https://github.com/DataDog/opencensus-go-exporter-datadog
[exporter-graphite]: https://github.com/census-ecosystem/opencensus-go-exporter-graphite
[exporter-honeycomb]: https://github.com/honeycombio/opencensus-exporter
[exporter-newrelic]: https://github.com/newrelic/newrelic-opencensus-exporter-go

15 vendor/go.opencensus.io/go.mod generated vendored Normal file

@ -0,0 +1,15 @@
module go.opencensus.io
require (
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6
github.com/golang/protobuf v1.3.1
github.com/google/go-cmp v0.3.0
github.com/stretchr/testify v1.4.0
golang.org/x/net v0.0.0-20190620200207-3b0461eec859
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd // indirect
golang.org/x/text v0.3.2 // indirect
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb // indirect
google.golang.org/grpc v1.20.1
)
go 1.13


@ -14,11 +14,16 @@
package internal // import "go.opencensus.io/internal"
import "time"
import (
"fmt"
"time"
opencensus "go.opencensus.io"
)
// UserAgent is the user agent to be added to the outgoing
// requests from the exporters.
const UserAgent = "opencensus-go [0.11.0]"
var UserAgent = fmt.Sprintf("opencensus-go/%s", opencensus.Version())
// MonotonicEndTime returns the end time at present
// but offset from start, monotonically.
@ -28,5 +33,5 @@ const UserAgent = "opencensus-go [0.11.0]"
// end as a monotonic time.
// See https://golang.org/pkg/time/#hdr-Monotonic_Clocks
func MonotonicEndTime(start time.Time) time.Time {
return start.Add(time.Now().Sub(start))
return start.Add(time.Since(start))
}


@ -17,6 +17,7 @@
// used internally by the stats collector.
package tagencoding // import "go.opencensus.io/internal/tagencoding"
// Values represent the encoded buffer for the values.
type Values struct {
Buffer []byte
WriteIndex int
@ -31,6 +32,7 @@ func (vb *Values) growIfRequired(expected int) {
}
}
// WriteValue is the helper method to encode Values from map[Key][]byte.
func (vb *Values) WriteValue(v []byte) {
length := len(v) & 0xff
vb.growIfRequired(1 + length)
@ -49,7 +51,7 @@ func (vb *Values) WriteValue(v []byte) {
vb.WriteIndex += length
}
// ReadValue is the helper method to read the values when decoding valuesBytes to a map[Key][]byte.
// ReadValue is the helper method to decode Values to a map[Key][]byte.
func (vb *Values) ReadValue() []byte {
// read length of v
length := int(vb.Buffer[vb.ReadIndex])
@ -67,6 +69,7 @@ func (vb *Values) ReadValue() []byte {
return v
}
// Bytes returns a reference to already written bytes in the Buffer.
func (vb *Values) Bytes() []byte {
return vb.Buffer[:vb.WriteIndex]
}


@ -22,6 +22,7 @@ import (
// TODO(#412): remove this
var Trace interface{}
// LocalSpanStoreEnabled true if the local span store is enabled.
var LocalSpanStoreEnabled bool
// BucketConfiguration stores the number of samples to store for span buckets


@ -12,17 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package internal // import "go.opencensus.io/stats/internal"
const (
MaxNameLength = 255
)
func IsPrintable(str string) bool {
for _, r := range str {
if !(r >= ' ' && r <= '~') {
return false
}
}
return true
}
// Package metricdata contains the metrics data model.
//
// This is an EXPERIMENTAL package, and may change in arbitrary ways without
// notice.
package metricdata // import "go.opencensus.io/metric/metricdata"

38 vendor/go.opencensus.io/metric/metricdata/exemplar.go generated vendored Normal file

@ -0,0 +1,38 @@
// Copyright 2018, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package metricdata
import (
"time"
)
// Exemplars keys.
const (
AttachmentKeySpanContext = "SpanContext"
)
// Exemplar is an example data point associated with each bucket of a
// distribution type aggregation.
//
// Their purpose is to provide an example of the kind of thing
// (request, RPC, trace span, etc.) that resulted in that measurement.
type Exemplar struct {
Value float64 // the value that was recorded
Timestamp time.Time // the time the value was recorded
Attachments Attachments // attachments (if any)
}
// Attachments is a map of extra values associated with a recorded data point.
type Attachments map[string]interface{}

35 vendor/go.opencensus.io/metric/metricdata/label.go generated vendored Normal file

@ -0,0 +1,35 @@
// Copyright 2018, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package metricdata
// LabelKey represents key of a label. It has optional
// description attribute.
type LabelKey struct {
Key string
Description string
}
// LabelValue represents the value of a label.
// The zero value represents a missing label value, which may be treated
// differently to an empty string value by some back ends.
type LabelValue struct {
Value string // string value of the label
Present bool // flag indicating whether a value is present or not
}
// NewLabelValue creates a new non-nil LabelValue that represents the given string.
func NewLabelValue(val string) LabelValue {
return LabelValue{Value: val, Present: true}
}

46 vendor/go.opencensus.io/metric/metricdata/metric.go generated vendored Normal file

@ -0,0 +1,46 @@
// Copyright 2018, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package metricdata
import (
"time"
"go.opencensus.io/resource"
)
// Descriptor holds metadata about a metric.
type Descriptor struct {
Name string // full name of the metric
Description string // human-readable description
Unit Unit // units for the measure
Type Type // type of measure
LabelKeys []LabelKey // label keys
}
// Metric represents a quantity measured against a resource with different
// label value combinations.
type Metric struct {
Descriptor Descriptor // metric descriptor
Resource *resource.Resource // resource against which this was measured
TimeSeries []*TimeSeries // one time series for each combination of label values
}
// TimeSeries is a sequence of points associated with a combination of label
// values.
type TimeSeries struct {
LabelValues []LabelValue // label values, same order as keys in the metric descriptor
Points []Point // points sequence
StartTime time.Time // time we started recording this time series
}
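Putting these types together, a hand-assembled gauge metric might look like the following sketch; the metric name and label are illustrative, and the constructors and constants used (NewLabelValue, NewInt64Point, UnitDimensionless, TypeGaugeInt64) are all defined in this package:
```go
m := &metricdata.Metric{
	Descriptor: metricdata.Descriptor{
		Name:        "example.com/queue_length",
		Description: "current queue length",
		Unit:        metricdata.UnitDimensionless,
		Type:        metricdata.TypeGaugeInt64,
		LabelKeys:   []metricdata.LabelKey{{Key: "queue"}},
	},
	TimeSeries: []*metricdata.TimeSeries{{
		LabelValues: []metricdata.LabelValue{metricdata.NewLabelValue("default")},
		Points:      []metricdata.Point{metricdata.NewInt64Point(time.Now(), 42)},
		StartTime:   time.Now(),
	}},
}
```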

193 vendor/go.opencensus.io/metric/metricdata/point.go generated vendored Normal file

@ -0,0 +1,193 @@
// Copyright 2018, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package metricdata
import (
"time"
)
// Point is a single data point of a time series.
type Point struct {
// Time is the point in time that this point represents in a time series.
Time time.Time
// Value is the value of this point. Prefer using ReadValue to switching on
// the value type, since new value types might be added.
Value interface{}
}
//go:generate stringer -type ValueType
// NewFloat64Point creates a new Point holding a float64 value.
func NewFloat64Point(t time.Time, val float64) Point {
return Point{
Value: val,
Time: t,
}
}
// NewInt64Point creates a new Point holding an int64 value.
func NewInt64Point(t time.Time, val int64) Point {
return Point{
Value: val,
Time: t,
}
}
// NewDistributionPoint creates a new Point holding a Distribution value.
func NewDistributionPoint(t time.Time, val *Distribution) Point {
return Point{
Value: val,
Time: t,
}
}
// NewSummaryPoint creates a new Point holding a Summary value.
func NewSummaryPoint(t time.Time, val *Summary) Point {
return Point{
Value: val,
Time: t,
}
}
// ValueVisitor allows reading the value of a point.
type ValueVisitor interface {
VisitFloat64Value(float64)
VisitInt64Value(int64)
VisitDistributionValue(*Distribution)
VisitSummaryValue(*Summary)
}
// ReadValue accepts a ValueVisitor and calls the appropriate method with the
// value of this point.
// Consumers of Point should use this in preference to switching on the type
// of the value directly, since new value types may be added.
func (p Point) ReadValue(vv ValueVisitor) {
switch v := p.Value.(type) {
case int64:
vv.VisitInt64Value(v)
case float64:
vv.VisitFloat64Value(v)
case *Distribution:
vv.VisitDistributionValue(v)
case *Summary:
vv.VisitSummaryValue(v)
default:
panic("unexpected value type")
}
}
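A sketch of the visitor in use from a consuming package; printVisitor is illustrative and assumes fmt is imported:
```go
// printVisitor implements ValueVisitor; only scalars are printed here.
type printVisitor struct{}

func (printVisitor) VisitFloat64Value(v float64) { fmt.Println("float64:", v) }
func (printVisitor) VisitInt64Value(v int64)     { fmt.Println("int64:", v) }
func (printVisitor) VisitDistributionValue(*metricdata.Distribution) {}
func (printVisitor) VisitSummaryValue(*metricdata.Summary)           {}

// Usage:
//   p := metricdata.NewInt64Point(time.Now(), 7)
//   p.ReadValue(printVisitor{}) // prints "int64: 7"
```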
// Distribution contains summary statistics for a population of values. It
// optionally contains a histogram representing the distribution of those
// values across a set of buckets.
type Distribution struct {
// Count is the number of values in the population. Must be non-negative. This value
// must equal the sum of the values in bucket_counts if a histogram is
// provided.
Count int64
// Sum is the sum of the values in the population. If count is zero then this field
// must be zero.
Sum float64
// SumOfSquaredDeviation is the sum of squared deviations from the mean of the values in the
// population. For values x_i this is:
//
// Sum[i=1..n]((x_i - mean)^2)
//
// Knuth, "The Art of Computer Programming", Vol. 2, page 323, 3rd edition
// describes Welford's method for accumulating this sum in one pass.
//
// If count is zero then this field must be zero.
SumOfSquaredDeviation float64
// BucketOptions describes the bounds of the histogram buckets in this
// distribution.
//
// A Distribution may optionally contain a histogram of the values in the
// population.
//
// If nil, there is no associated histogram.
BucketOptions *BucketOptions
// Buckets: if the distribution does not have a histogram, omit this field.
// If there is a histogram, then the sum of the values in the Bucket counts
// must equal the value in the count field of the distribution.
Buckets []Bucket
}
// BucketOptions describes the bounds of the histogram buckets in this
// distribution.
type BucketOptions struct {
// Bounds specifies a set of bucket upper bounds.
// This defines len(bounds) + 1 (= N) buckets. The boundaries for bucket
// index i are:
//
// [0, Bounds[i]) for i == 0
// [Bounds[i-1], Bounds[i]) for 0 < i < N-1
// [Bounds[i-1], +infinity) for i == N-1
Bounds []float64
}
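An illustrative helper, not part of the package, that maps a value to its bucket index under the boundary scheme described above:
```go
// bucketIndex returns the index of the bucket that v falls into,
// given the upper bounds from BucketOptions.Bounds.
func bucketIndex(bounds []float64, v float64) int {
	for i, b := range bounds {
		if v < b {
			return i // v lies in [previous bound, b)
		}
	}
	return len(bounds) // overflow bucket: [last bound, +infinity)
}
```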
// Bucket represents a single bucket (value range) in a distribution.
type Bucket struct {
// Count is the number of values in each bucket of the histogram, as described in
// bucket_bounds.
Count int64
// Exemplar associated with this bucket (if any).
Exemplar *Exemplar
}
// Summary is a representation of percentiles.
type Summary struct {
// Count is the cumulative count (if available).
Count int64
// Sum is the cumulative sum of values (if available).
Sum float64
// HasCountAndSum is true if Count and Sum are available.
HasCountAndSum bool
// Snapshot represents percentiles calculated over an arbitrary time window.
// The values in this struct can be reset at arbitrary unknown times, with
// the requirement that all of them are reset at the same time.
Snapshot Snapshot
}
// Snapshot represents percentiles over an arbitrary time.
// The values in this struct can be reset at arbitrary unknown times, with
// the requirement that all of them are reset at the same time.
type Snapshot struct {
// Count is the number of values in the snapshot. Optional since some systems don't
// expose this. Set to 0 if not available.
Count int64
// Sum is the sum of values in the snapshot. Optional since some systems don't
// expose this. If count is 0 then this field must be zero.
Sum float64
// Percentiles is a map from percentile (range (0-100.0]) to the value of
// the percentile.
Percentiles map[float64]float64
}
//go:generate stringer -type Type
// Type is the overall type of metric, including its value type and whether it
// represents a cumulative total (since the start time) or if it represents a
// gauge value.
type Type int
// Metric types.
const (
TypeGaugeInt64 Type = iota
TypeGaugeFloat64
TypeGaugeDistribution
TypeCumulativeInt64
TypeCumulativeFloat64
TypeCumulativeDistribution
TypeSummary
)


@ -0,0 +1,16 @@
// Code generated by "stringer -type Type"; DO NOT EDIT.
package metricdata
import "strconv"
const _Type_name = "TypeGaugeInt64TypeGaugeFloat64TypeGaugeDistributionTypeCumulativeInt64TypeCumulativeFloat64TypeCumulativeDistributionTypeSummary"
var _Type_index = [...]uint8{0, 14, 30, 51, 70, 91, 117, 128}
func (i Type) String() string {
if i < 0 || i >= Type(len(_Type_index)-1) {
return "Type(" + strconv.FormatInt(int64(i), 10) + ")"
}
return _Type_name[_Type_index[i]:_Type_index[i+1]]
}

27 vendor/go.opencensus.io/metric/metricdata/unit.go generated vendored Normal file

@ -0,0 +1,27 @@
// Copyright 2018, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package metricdata
// Unit is a string encoded according to the case-sensitive abbreviations from the
// Unified Code for Units of Measure: http://unitsofmeasure.org/ucum.html
type Unit string
// Predefined units. To record against a unit not represented here, create your
// own Unit type constant from a string.
const (
UnitDimensionless Unit = "1"
UnitBytes Unit = "By"
UnitMilliseconds Unit = "ms"
)


@ -0,0 +1,78 @@
// Copyright 2019, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package metricproducer
import (
"sync"
)
// Manager maintains a list of active producers. Producers can register
// with the manager to allow readers to read all metrics provided by them.
// Readers can retrieve all producers registered with the manager,
// read metrics from the producers and export them.
type Manager struct {
mu sync.RWMutex
producers map[Producer]struct{}
}
var prodMgr *Manager
var once sync.Once
// GlobalManager is a single instance of producer manager
// that is used by all producers and all readers.
func GlobalManager() *Manager {
once.Do(func() {
prodMgr = &Manager{}
prodMgr.producers = make(map[Producer]struct{})
})
return prodMgr
}
// AddProducer adds the producer to the Manager if it is not already present.
func (pm *Manager) AddProducer(producer Producer) {
if producer == nil {
return
}
pm.mu.Lock()
defer pm.mu.Unlock()
pm.producers[producer] = struct{}{}
}
// DeleteProducer deletes the producer from the Manager if it is present.
func (pm *Manager) DeleteProducer(producer Producer) {
if producer == nil {
return
}
pm.mu.Lock()
defer pm.mu.Unlock()
delete(pm.producers, producer)
}
// GetAll returns a slice of all producers currently registered with
// the Manager. For each call it generates a new slice. The slice
// should not be cached as registration may change at any time. It is
// typically called periodically by exporters to read metrics from
// the producers.
func (pm *Manager) GetAll() []Producer {
pm.mu.Lock()
defer pm.mu.Unlock()
producers := make([]Producer, len(pm.producers))
i := 0
for producer := range pm.producers {
producers[i] = producer
i++
}
return producers
}
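A registration sketch: constProducer is hypothetical and satisfies the Producer interface shown further below:
```go
// constProducer returns a fixed set of metrics on every Read.
type constProducer struct{ metrics []*metricdata.Metric }

func (p constProducer) Read() []*metricdata.Metric { return p.metrics }

func register(ms []*metricdata.Metric) {
	metricproducer.GlobalManager().AddProducer(constProducer{metrics: ms})
}
```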


@ -1,10 +1,10 @@
// Copyright 2018 Google Inc. All Rights Reserved.
// Copyright 2019, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@ -12,10 +12,17 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// +build !go1.8
package metricproducer
package grpc
import (
"go.opencensus.io/metric/metricdata"
)
import "google.golang.org/grpc"
func addOCStatsHandler(opts []grpc.DialOption) []grpc.DialOption { return opts }
// Producer is a source of metrics.
type Producer interface {
// Read should return the current values of all metrics supported by this
// metric provider.
// The returned metrics should be unique for each combination of name and
// resource.
Read() []*metricdata.Metric
}


@ -1,10 +1,10 @@
// Copyright 2018 Google Inc. All Rights Reserved.
// Copyright 2017, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@ -12,10 +12,10 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// +build !go1.8
// Package opencensus contains Go support for OpenCensus.
package opencensus // import "go.opencensus.io"
package http
import "net/http"
func addOCTransport(trans http.RoundTripper) http.RoundTripper { return trans }
// Version is the current release version of OpenCensus in use.
func Version() string {
return "0.23.0"
}


@ -15,8 +15,8 @@
package ocgrpc
import (
"context"
"go.opencensus.io/trace"
"golang.org/x/net/context"
"google.golang.org/grpc/stats"
)
@ -31,6 +31,7 @@ type ClientHandler struct {
StartOptions trace.StartOptions
}
// HandleConn exists to satisfy gRPC stats.Handler.
func (c *ClientHandler) HandleConn(ctx context.Context, cs stats.ConnStats) {
// no-op
}


@ -31,9 +31,9 @@ var (
ClientServerLatency = stats.Float64("grpc.io/client/server_latency", `Propagated from the server and should have the same value as "grpc.io/server/latency".`, stats.UnitMilliseconds)
)
// Predefined views may be subscribed to collect data for the above measures.
// Predefined views may be registered to collect data for the above measures.
// As always, you may also define your own custom views over measures collected by this
// package. These are declared as a convenience only; none are subscribed by
// package. These are declared as a convenience only; none are registered by
// default.
var (
ClientSentBytesPerRPCView = &view.View{
@ -91,15 +91,6 @@ var (
TagKeys: []tag.Key{KeyClientMethod},
Aggregation: DefaultMillisecondsDistribution,
}
// Deprecated: This view is going to be removed, if you need it please define it
// yourself.
ClientRequestCountView = &view.View{
Name: "Count of request messages per client RPC",
TagKeys: []tag.Key{KeyClientMethod},
Measure: ClientRoundtripLatency,
Aggregation: view.Count(),
}
)
// DefaultClientViews are the default client views provided by this package.


@ -16,10 +16,10 @@
package ocgrpc
import (
"context"
"time"
"go.opencensus.io/tag"
"golang.org/x/net/context"
"google.golang.org/grpc/grpclog"
"google.golang.org/grpc/stats"
)
@ -30,7 +30,7 @@ func (h *ClientHandler) statsTagRPC(ctx context.Context, info *stats.RPCTagInfo)
startTime := time.Now()
if info == nil {
if grpclog.V(2) {
grpclog.Infof("clientHandler.TagRPC called with nil info.", info.FullMethodName)
grpclog.Info("clientHandler.TagRPC called with nil info.")
}
return ctx
}


@ -15,8 +15,8 @@
package ocgrpc
import (
"context"
"go.opencensus.io/trace"
"golang.org/x/net/context"
"google.golang.org/grpc/stats"
)


@ -34,9 +34,9 @@ var (
// mechanism to load these defaults from a common repository/config shared by
// all supported languages. Likely a serialized protobuf of these defaults.
// Predefined views may be subscribed to collect data for the above measures.
// Predefined views may be registered to collect data for the above measures.
// As always, you may also define your own custom views over measures collected by this
// package. These are declared as a convenience only; none are subscribed by
// package. These are declared as a convenience only; none are registered by
// default.
var (
ServerReceivedBytesPerRPCView = &view.View{

View file

@ -18,7 +18,7 @@ package ocgrpc
import (
"time"
"golang.org/x/net/context"
"context"
"go.opencensus.io/tag"
"google.golang.org/grpc/grpclog"

View file

@ -22,9 +22,11 @@ import (
"sync/atomic"
"time"
"go.opencensus.io/metric/metricdata"
ocstats "go.opencensus.io/stats"
"go.opencensus.io/stats/view"
"go.opencensus.io/tag"
"go.opencensus.io/trace"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/grpclog"
"google.golang.org/grpc/stats"
@ -51,16 +53,22 @@ type rpcData struct {
// The following variables define the default hard-coded auxiliary data used by
// both the default GRPC client and GRPC server metrics.
var (
DefaultBytesDistribution = view.Distribution(0, 1024, 2048, 4096, 16384, 65536, 262144, 1048576, 4194304, 16777216, 67108864, 268435456, 1073741824, 4294967296)
DefaultMillisecondsDistribution = view.Distribution(0, 0.01, 0.05, 0.1, 0.3, 0.6, 0.8, 1, 2, 3, 4, 5, 6, 8, 10, 13, 16, 20, 25, 30, 40, 50, 65, 80, 100, 130, 160, 200, 250, 300, 400, 500, 650, 800, 1000, 2000, 5000, 10000, 20000, 50000, 100000)
DefaultMessageCountDistribution = view.Distribution(0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536)
DefaultBytesDistribution = view.Distribution(1024, 2048, 4096, 16384, 65536, 262144, 1048576, 4194304, 16777216, 67108864, 268435456, 1073741824, 4294967296)
DefaultMillisecondsDistribution = view.Distribution(0.01, 0.05, 0.1, 0.3, 0.6, 0.8, 1, 2, 3, 4, 5, 6, 8, 10, 13, 16, 20, 25, 30, 40, 50, 65, 80, 100, 130, 160, 200, 250, 300, 400, 500, 650, 800, 1000, 2000, 5000, 10000, 20000, 50000, 100000)
DefaultMessageCountDistribution = view.Distribution(1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536)
)
// Server tags are applied to the context used to process each RPC, as well as
// the measures at the end of each RPC.
var (
KeyServerMethod, _ = tag.NewKey("grpc_server_method")
KeyClientMethod, _ = tag.NewKey("grpc_client_method")
KeyServerStatus, _ = tag.NewKey("grpc_server_status")
KeyClientStatus, _ = tag.NewKey("grpc_client_status")
KeyServerMethod = tag.MustNewKey("grpc_server_method")
KeyServerStatus = tag.MustNewKey("grpc_server_status")
)
// Client tags are applied to measures at the end of each RPC.
var (
KeyClientMethod = tag.MustNewKey("grpc_client_method")
KeyClientStatus = tag.MustNewKey("grpc_client_status")
)
var (
@ -135,24 +143,31 @@ func handleRPCEnd(ctx context.Context, s *stats.End) {
}
latencyMillis := float64(elapsedTime) / float64(time.Millisecond)
attachments := getSpanCtxAttachment(ctx)
if s.Client {
ctx, _ = tag.New(ctx,
tag.Upsert(KeyClientMethod, methodName(d.method)),
tag.Upsert(KeyClientStatus, st))
ocstats.Record(ctx,
ClientSentBytesPerRPC.M(atomic.LoadInt64(&d.sentBytes)),
ClientSentMessagesPerRPC.M(atomic.LoadInt64(&d.sentCount)),
ClientReceivedMessagesPerRPC.M(atomic.LoadInt64(&d.recvCount)),
ClientReceivedBytesPerRPC.M(atomic.LoadInt64(&d.recvBytes)),
ClientRoundtripLatency.M(latencyMillis))
ocstats.RecordWithOptions(ctx,
ocstats.WithTags(
tag.Upsert(KeyClientMethod, methodName(d.method)),
tag.Upsert(KeyClientStatus, st)),
ocstats.WithAttachments(attachments),
ocstats.WithMeasurements(
ClientSentBytesPerRPC.M(atomic.LoadInt64(&d.sentBytes)),
ClientSentMessagesPerRPC.M(atomic.LoadInt64(&d.sentCount)),
ClientReceivedMessagesPerRPC.M(atomic.LoadInt64(&d.recvCount)),
ClientReceivedBytesPerRPC.M(atomic.LoadInt64(&d.recvBytes)),
ClientRoundtripLatency.M(latencyMillis)))
} else {
ctx, _ = tag.New(ctx, tag.Upsert(KeyServerStatus, st))
ocstats.Record(ctx,
ServerSentBytesPerRPC.M(atomic.LoadInt64(&d.sentBytes)),
ServerSentMessagesPerRPC.M(atomic.LoadInt64(&d.sentCount)),
ServerReceivedMessagesPerRPC.M(atomic.LoadInt64(&d.recvCount)),
ServerReceivedBytesPerRPC.M(atomic.LoadInt64(&d.recvBytes)),
ServerLatency.M(latencyMillis))
ocstats.RecordWithOptions(ctx,
ocstats.WithTags(
tag.Upsert(KeyServerStatus, st),
),
ocstats.WithAttachments(attachments),
ocstats.WithMeasurements(
ServerSentBytesPerRPC.M(atomic.LoadInt64(&d.sentBytes)),
ServerSentMessagesPerRPC.M(atomic.LoadInt64(&d.sentCount)),
ServerReceivedMessagesPerRPC.M(atomic.LoadInt64(&d.recvCount)),
ServerReceivedBytesPerRPC.M(atomic.LoadInt64(&d.recvBytes)),
ServerLatency.M(latencyMillis)))
}
}
@ -197,3 +212,16 @@ func statusCodeToString(s *status.Status) string {
return "CODE_" + strconv.FormatInt(int64(c), 10)
}
}
func getSpanCtxAttachment(ctx context.Context) metricdata.Attachments {
attachments := map[string]interface{}{}
span := trace.FromContext(ctx)
if span == nil {
return attachments
}
spanCtx := span.SpanContext()
if spanCtx.IsSampled() {
attachments[metricdata.AttachmentKeySpanContext] = spanCtx
}
return attachments
}
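The move from ocstats.Record to ocstats.RecordWithOptions above bundles tags, exemplar attachments, and measurements into a single call. A standalone sketch of that API, with an invented measure and tag key:

package main

import (
    "context"
    "log"

    "go.opencensus.io/stats"
    "go.opencensus.io/tag"
)

var (
    sentBytes = stats.Int64("example.com/sent_bytes", "Bytes sent", stats.UnitBytes)
    keyMethod = tag.MustNewKey("example_method")
)

func record(ctx context.Context) {
    // Tags, attachments, and measurements are applied atomically
    // by one call, as in the handleRPCEnd change above.
    err := stats.RecordWithOptions(ctx,
        stats.WithTags(tag.Upsert(keyMethod, "Get")),
        stats.WithMeasurements(sentBytes.M(1024)))
    if err != nil {
        log.Print(err)
    }
}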

View file

@ -19,9 +19,9 @@ import (
"google.golang.org/grpc/codes"
"context"
"go.opencensus.io/trace"
"go.opencensus.io/trace/propagation"
"golang.org/x/net/context"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/stats"
"google.golang.org/grpc/status"

View file

@ -16,14 +16,18 @@ package ochttp
import (
"net/http"
"net/http/httptrace"
"go.opencensus.io/trace"
"go.opencensus.io/trace/propagation"
)
// Transport is an http.RoundTripper that instruments all outgoing requests with
// stats and tracing. The zero value is intended to be a useful default, but for
// now it's recommended that you explicitly set Propagation.
// OpenCensus stats and tracing.
//
// The zero value is intended to be a useful default, but for
// now it's recommended that you explicitly set Propagation, since the default
// for this may change.
type Transport struct {
// Base may be set to wrap another http.RoundTripper that does the actual
// requests. By default http.DefaultTransport is used.
@ -43,24 +47,53 @@ type Transport struct {
// for spans started by this transport.
StartOptions trace.StartOptions
// GetStartOptions allows setting start options per request. If set,
// StartOptions is ignored.
GetStartOptions func(*http.Request) trace.StartOptions
// FormatSpanName holds the function to use for generating the span name
// from the information found in the outgoing HTTP Request. By default the
// name equals the URL Path.
FormatSpanName func(*http.Request) string
// NewClientTrace may be set to a function allowing the current *trace.Span
// to be annotated with HTTP request event information emitted by the
// httptrace package.
NewClientTrace func(*http.Request, *trace.Span) *httptrace.ClientTrace
// TODO: Implement tag propagation for HTTP.
}
// RoundTrip implements http.RoundTripper, delegating to Base and recording stats and traces for the request.
func (t *Transport) RoundTrip(req *http.Request) (*http.Response, error) {
rt := t.base()
if isHealthEndpoint(req.URL.Path) {
return rt.RoundTrip(req)
}
// TODO: remove excessive nesting of http.RoundTrippers here.
format := t.Propagation
if format == nil {
format = defaultFormat
}
spanNameFormatter := t.FormatSpanName
if spanNameFormatter == nil {
spanNameFormatter = spanNameFromURL
}
startOpts := t.StartOptions
if t.GetStartOptions != nil {
startOpts = t.GetStartOptions(req)
}
rt = &traceTransport{
base: rt,
format: format,
startOptions: trace.StartOptions{
Sampler: t.StartOptions.Sampler,
Sampler: startOpts.Sampler,
SpanKind: trace.SpanKindClient,
},
formatSpanName: spanNameFormatter,
newClientTrace: t.NewClientTrace,
}
rt = statsTransport{base: rt}
return rt.RoundTrip(req)
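A client-side usage sketch for this Transport (not part of the diff); the span-name format is an arbitrary choice:

package main

import (
    "net/http"

    "go.opencensus.io/plugin/ochttp"
    "go.opencensus.io/plugin/ochttp/propagation/b3"
)

func newClient() *http.Client {
    return &http.Client{
        Transport: &ochttp.Transport{
            // Set Propagation explicitly, as the doc comment advises.
            Propagation: &b3.HTTPFormat{},
            // Name spans "METHOD /path" instead of the default URL path.
            FormatSpanName: func(r *http.Request) string {
                return r.Method + " " + r.URL.Path
            },
        },
    }
}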

View file

@ -34,8 +34,11 @@ type statsTransport struct {
// RoundTrip implements http.RoundTripper, delegating to Base and recording stats for the request.
func (t statsTransport) RoundTrip(req *http.Request) (*http.Response, error) {
ctx, _ := tag.New(req.Context(),
tag.Upsert(Host, req.URL.Host),
tag.Upsert(KeyClientHost, req.Host),
tag.Upsert(Host, req.Host),
tag.Upsert(KeyClientPath, req.URL.Path),
tag.Upsert(Path, req.URL.Path),
tag.Upsert(KeyClientMethod, req.Method),
tag.Upsert(Method, req.Method))
req = req.WithContext(ctx)
track := &tracker{
@ -58,11 +61,14 @@ func (t statsTransport) RoundTrip(req *http.Request) (*http.Response, error) {
track.end()
} else {
track.statusCode = resp.StatusCode
if req.Method != "HEAD" {
track.respContentLength = resp.ContentLength
}
if resp.Body == nil {
track.end()
} else {
track.body = resp.Body
resp.Body = track
resp.Body = wrappedBody(track, resp.Body)
}
}
return resp, err
@ -79,36 +85,48 @@ func (t statsTransport) CancelRequest(req *http.Request) {
}
type tracker struct {
ctx context.Context
respSize int64
reqSize int64
start time.Time
body io.ReadCloser
statusCode int
endOnce sync.Once
ctx context.Context
respSize int64
respContentLength int64
reqSize int64
start time.Time
body io.ReadCloser
statusCode int
endOnce sync.Once
}
var _ io.ReadCloser = (*tracker)(nil)
func (t *tracker) end() {
t.endOnce.Do(func() {
latencyMs := float64(time.Since(t.start)) / float64(time.Millisecond)
respSize := t.respSize
if t.respSize == 0 && t.respContentLength > 0 {
respSize = t.respContentLength
}
m := []stats.Measurement{
ClientLatency.M(float64(time.Since(t.start)) / float64(time.Millisecond)),
ClientSentBytes.M(t.reqSize),
ClientReceivedBytes.M(respSize),
ClientRoundtripLatency.M(latencyMs),
ClientLatency.M(latencyMs),
ClientResponseBytes.M(t.respSize),
}
if t.reqSize >= 0 {
m = append(m, ClientRequestBytes.M(t.reqSize))
}
ctx, _ := tag.New(t.ctx, tag.Upsert(StatusCode, strconv.Itoa(t.statusCode)))
stats.Record(ctx, m...)
stats.RecordWithTags(t.ctx, []tag.Mutator{
tag.Upsert(StatusCode, strconv.Itoa(t.statusCode)),
tag.Upsert(KeyClientStatus, strconv.Itoa(t.statusCode)),
}, m...)
})
}
func (t *tracker) Read(b []byte) (int, error) {
n, err := t.body.Read(b)
t.respSize += int64(n)
switch err {
case nil:
t.respSize += int64(n)
return n, nil
case io.EOF:
t.end()

View file

@ -38,7 +38,7 @@ const (
// because there are additional fields not represented in the
// OpenCensus span context. Spans created from the incoming
// header will be the direct children of the client-side span.
// Similarly, reciever of the outgoing spans should use client-side
// Similarly, receiver of the outgoing spans should use client-side
// span created by OpenCensus as the parent.
type HTTPFormat struct{}

61
vendor/go.opencensus.io/plugin/ochttp/route.go generated vendored Normal file
View file

@ -0,0 +1,61 @@
// Copyright 2018, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package ochttp
import (
"context"
"net/http"
"go.opencensus.io/tag"
)
// SetRoute sets the http_server_route tag to the given value.
// It's useful when an HTTP framework does not support the http.Handler interface
// (so WithRouteTag is not an option) but does provide a way to hook into the request flow.
func SetRoute(ctx context.Context, route string) {
if a, ok := ctx.Value(addedTagsKey{}).(*addedTags); ok {
a.t = append(a.t, tag.Upsert(KeyServerRoute, route))
}
}
// WithRouteTag returns an http.Handler that records stats with the
// http_server_route tag set to the given value.
func WithRouteTag(handler http.Handler, route string) http.Handler {
return taggedHandlerFunc(func(w http.ResponseWriter, r *http.Request) []tag.Mutator {
addRoute := []tag.Mutator{tag.Upsert(KeyServerRoute, route)}
ctx, _ := tag.New(r.Context(), addRoute...)
r = r.WithContext(ctx)
handler.ServeHTTP(w, r)
return addRoute
})
}
// taggedHandlerFunc is a http.Handler that returns tags describing the
// processing of the request. These tags will be recorded along with the
// measures in this package at the end of the request.
type taggedHandlerFunc func(w http.ResponseWriter, r *http.Request) []tag.Mutator
func (h taggedHandlerFunc) ServeHTTP(w http.ResponseWriter, r *http.Request) {
tags := h(w, r)
if a, ok := r.Context().Value(addedTagsKey{}).(*addedTags); ok {
a.t = append(a.t, tags...)
}
}
type addedTagsKey struct{}
type addedTags struct {
t []tag.Mutator
}
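A sketch of WithRouteTag in use with a plain ServeMux (the handler body is illustrative):

package main

import (
    "io"
    "log"
    "net/http"

    "go.opencensus.io/plugin/ochttp"
)

func main() {
    mux := http.NewServeMux()
    // Every request matched by this pattern is tagged with
    // http_server_route="/users/", keeping tag cardinality low
    // even when paths embed user IDs.
    mux.Handle("/users/", ochttp.WithRouteTag(
        http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            io.WriteString(w, "ok")
        }), "/users/"))
    log.Fatal(http.ListenAndServe(":8080", &ochttp.Handler{Handler: mux}))
}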

View file

@ -15,10 +15,8 @@
package ochttp
import (
"bufio"
"context"
"errors"
"net"
"io"
"net/http"
"strconv"
"sync"
@ -30,16 +28,19 @@ import (
"go.opencensus.io/trace/propagation"
)
// Handler is a http.Handler that is aware of the incoming request's span.
// Handler is an http.Handler wrapper to instrument your HTTP server with
// OpenCensus. It supports both stats and tracing.
//
// Tracing
//
// This handler is aware of the incoming request's span, reading it from request
// headers as configured using the Propagation field.
// The extracted span can be accessed from the incoming request's
// context.
//
// span := trace.FromContext(r.Context())
//
// The server span will be automatically ended at the end of ServeHTTP.
//
// Incoming propagation mechanism is determined by the given HTTP propagators.
type Handler struct {
// Propagation defines how traces are propagated. If unspecified,
// B3 propagation will be used.
@ -55,50 +56,86 @@ type Handler struct {
// for spans started by this transport.
StartOptions trace.StartOptions
// GetStartOptions allows setting start options per request. If set,
// StartOptions is ignored.
GetStartOptions func(*http.Request) trace.StartOptions
// IsPublicEndpoint should be set to true for publicly accessible HTTP(S)
// servers. If true, any trace metadata set on the incoming request will
// be added as a linked trace instead of being added as a parent of the
// current trace.
IsPublicEndpoint bool
// FormatSpanName holds the function to use for generating the span name
// from the information found in the incoming HTTP Request. By default the
// name equals the URL Path.
FormatSpanName func(*http.Request) string
// IsHealthEndpoint holds the function to use for determining if the
// incoming HTTP request should be considered a health check. This is in
// addition to the private isHealthEndpoint func which may also indicate
// tracing should be skipped.
IsHealthEndpoint func(*http.Request) bool
}
func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
var traceEnd, statsEnd func()
r, traceEnd = h.startTrace(w, r)
var tags addedTags
r, traceEnd := h.startTrace(w, r)
defer traceEnd()
w, statsEnd = h.startStats(w, r)
defer statsEnd()
w, statsEnd := h.startStats(w, r)
defer statsEnd(&tags)
handler := h.Handler
if handler == nil {
handler = http.DefaultServeMux
}
r = r.WithContext(context.WithValue(r.Context(), addedTagsKey{}, &tags))
handler.ServeHTTP(w, r)
}
func (h *Handler) startTrace(w http.ResponseWriter, r *http.Request) (*http.Request, func()) {
name := spanNameFromURL(r.URL)
if h.IsHealthEndpoint != nil && h.IsHealthEndpoint(r) || isHealthEndpoint(r.URL.Path) {
return r, func() {}
}
var name string
if h.FormatSpanName == nil {
name = spanNameFromURL(r)
} else {
name = h.FormatSpanName(r)
}
ctx := r.Context()
startOpts := h.StartOptions
if h.GetStartOptions != nil {
startOpts = h.GetStartOptions(r)
}
var span *trace.Span
sc, ok := h.extractSpanContext(r)
if ok && !h.IsPublicEndpoint {
ctx, span = trace.StartSpanWithRemoteParent(ctx, name, sc,
trace.WithSampler(h.StartOptions.Sampler),
trace.WithSampler(startOpts.Sampler),
trace.WithSpanKind(trace.SpanKindServer))
} else {
ctx, span = trace.StartSpan(ctx, name,
trace.WithSampler(h.StartOptions.Sampler),
trace.WithSampler(startOpts.Sampler),
trace.WithSpanKind(trace.SpanKindServer),
)
if ok {
span.AddLink(trace.Link{
TraceID: sc.TraceID,
SpanID: sc.SpanID,
Type: trace.LinkTypeChild,
Type: trace.LinkTypeParent,
Attributes: nil,
})
}
}
span.AddAttributes(requestAttrs(r)...)
if r.Body == nil {
// TODO: Handle cases where ContentLength is not set.
} else if r.ContentLength > 0 {
span.AddMessageReceiveEvent(0, /* TODO: messageID */
r.ContentLength, -1)
}
return r.WithContext(ctx), span.End
}
@ -109,9 +146,9 @@ func (h *Handler) extractSpanContext(r *http.Request) (trace.SpanContext, bool)
return h.Propagation.SpanContextFromRequest(r)
}
func (h *Handler) startStats(w http.ResponseWriter, r *http.Request) (http.ResponseWriter, func()) {
func (h *Handler) startStats(w http.ResponseWriter, r *http.Request) (http.ResponseWriter, func(tags *addedTags)) {
ctx, _ := tag.New(r.Context(),
tag.Upsert(Host, r.URL.Host),
tag.Upsert(Host, r.Host),
tag.Upsert(Path, r.URL.Path),
tag.Upsert(Method, r.Method))
track := &trackingResponseWriter{
@ -126,7 +163,7 @@ func (h *Handler) startStats(w http.ResponseWriter, r *http.Request) (http.Respo
track.reqSize = r.ContentLength
}
stats.Record(ctx, ServerRequestCount.M(1))
return track, track.end
return track.wrappedResponseWriter(), track.end
}
type trackingResponseWriter struct {
@ -140,40 +177,10 @@ type trackingResponseWriter struct {
writer http.ResponseWriter
}
// Compile time assertions for widely used net/http interfaces
var _ http.CloseNotifier = (*trackingResponseWriter)(nil)
var _ http.Flusher = (*trackingResponseWriter)(nil)
var _ http.Hijacker = (*trackingResponseWriter)(nil)
var _ http.Pusher = (*trackingResponseWriter)(nil)
// Compile time assertion for ResponseWriter interface
var _ http.ResponseWriter = (*trackingResponseWriter)(nil)
var errHijackerUnimplemented = errors.New("ResponseWriter does not implement http.Hijacker")
func (t *trackingResponseWriter) Hijack() (net.Conn, *bufio.ReadWriter, error) {
hj, ok := t.writer.(http.Hijacker)
if !ok {
return nil, nil, errHijackerUnimplemented
}
return hj.Hijack()
}
func (t *trackingResponseWriter) CloseNotify() <-chan bool {
cn, ok := t.writer.(http.CloseNotifier)
if !ok {
return nil
}
return cn.CloseNotify()
}
func (t *trackingResponseWriter) Push(target string, opts *http.PushOptions) error {
pusher, ok := t.writer.(http.Pusher)
if !ok {
return http.ErrNotSupported
}
return pusher.Push(target, opts)
}
func (t *trackingResponseWriter) end() {
func (t *trackingResponseWriter) end(tags *addedTags) {
t.endOnce.Do(func() {
if t.statusCode == 0 {
t.statusCode = 200
@ -181,6 +188,7 @@ func (t *trackingResponseWriter) end() {
span := trace.FromContext(t.ctx)
span.SetStatus(TraceStatus(t.statusCode, t.statusLine))
span.AddAttributes(trace.Int64Attribute(StatusCodeAttribute, int64(t.statusCode)))
m := []stats.Measurement{
ServerLatency.M(float64(time.Since(t.start)) / float64(time.Millisecond)),
@ -189,8 +197,10 @@ func (t *trackingResponseWriter) end() {
if t.reqSize >= 0 {
m = append(m, ServerRequestBytes.M(t.reqSize))
}
ctx, _ := tag.New(t.ctx, tag.Upsert(StatusCode, strconv.Itoa(t.statusCode)))
stats.Record(ctx, m...)
allTags := make([]tag.Mutator, len(tags.t)+1)
allTags[0] = tag.Upsert(StatusCode, strconv.Itoa(t.statusCode))
copy(allTags[1:], tags.t)
stats.RecordWithTags(t.ctx, allTags, m...)
})
}
@ -201,6 +211,9 @@ func (t *trackingResponseWriter) Header() http.Header {
func (t *trackingResponseWriter) Write(data []byte) (int, error) {
n, err := t.writer.Write(data)
t.respSize += int64(n)
// Add message event for request bytes sent.
span := trace.FromContext(t.ctx)
span.AddMessageSendEvent(0 /* TODO: messageID */, int64(n), -1)
return n, err
}
@ -210,8 +223,231 @@ func (t *trackingResponseWriter) WriteHeader(statusCode int) {
t.statusLine = http.StatusText(t.statusCode)
}
func (t *trackingResponseWriter) Flush() {
if flusher, ok := t.writer.(http.Flusher); ok {
flusher.Flush()
// wrappedResponseWriter returns a wrapped version of the original
// ResponseWriter and only implements the same combination of additional
// interfaces as the original.
// This implementation is based on https://github.com/felixge/httpsnoop.
func (t *trackingResponseWriter) wrappedResponseWriter() http.ResponseWriter {
var (
hj, i0 = t.writer.(http.Hijacker)
cn, i1 = t.writer.(http.CloseNotifier)
pu, i2 = t.writer.(http.Pusher)
fl, i3 = t.writer.(http.Flusher)
rf, i4 = t.writer.(io.ReaderFrom)
)
switch {
case !i0 && !i1 && !i2 && !i3 && !i4:
return struct {
http.ResponseWriter
}{t}
case !i0 && !i1 && !i2 && !i3 && i4:
return struct {
http.ResponseWriter
io.ReaderFrom
}{t, rf}
case !i0 && !i1 && !i2 && i3 && !i4:
return struct {
http.ResponseWriter
http.Flusher
}{t, fl}
case !i0 && !i1 && !i2 && i3 && i4:
return struct {
http.ResponseWriter
http.Flusher
io.ReaderFrom
}{t, fl, rf}
case !i0 && !i1 && i2 && !i3 && !i4:
return struct {
http.ResponseWriter
http.Pusher
}{t, pu}
case !i0 && !i1 && i2 && !i3 && i4:
return struct {
http.ResponseWriter
http.Pusher
io.ReaderFrom
}{t, pu, rf}
case !i0 && !i1 && i2 && i3 && !i4:
return struct {
http.ResponseWriter
http.Pusher
http.Flusher
}{t, pu, fl}
case !i0 && !i1 && i2 && i3 && i4:
return struct {
http.ResponseWriter
http.Pusher
http.Flusher
io.ReaderFrom
}{t, pu, fl, rf}
case !i0 && i1 && !i2 && !i3 && !i4:
return struct {
http.ResponseWriter
http.CloseNotifier
}{t, cn}
case !i0 && i1 && !i2 && !i3 && i4:
return struct {
http.ResponseWriter
http.CloseNotifier
io.ReaderFrom
}{t, cn, rf}
case !i0 && i1 && !i2 && i3 && !i4:
return struct {
http.ResponseWriter
http.CloseNotifier
http.Flusher
}{t, cn, fl}
case !i0 && i1 && !i2 && i3 && i4:
return struct {
http.ResponseWriter
http.CloseNotifier
http.Flusher
io.ReaderFrom
}{t, cn, fl, rf}
case !i0 && i1 && i2 && !i3 && !i4:
return struct {
http.ResponseWriter
http.CloseNotifier
http.Pusher
}{t, cn, pu}
case !i0 && i1 && i2 && !i3 && i4:
return struct {
http.ResponseWriter
http.CloseNotifier
http.Pusher
io.ReaderFrom
}{t, cn, pu, rf}
case !i0 && i1 && i2 && i3 && !i4:
return struct {
http.ResponseWriter
http.CloseNotifier
http.Pusher
http.Flusher
}{t, cn, pu, fl}
case !i0 && i1 && i2 && i3 && i4:
return struct {
http.ResponseWriter
http.CloseNotifier
http.Pusher
http.Flusher
io.ReaderFrom
}{t, cn, pu, fl, rf}
case i0 && !i1 && !i2 && !i3 && !i4:
return struct {
http.ResponseWriter
http.Hijacker
}{t, hj}
case i0 && !i1 && !i2 && !i3 && i4:
return struct {
http.ResponseWriter
http.Hijacker
io.ReaderFrom
}{t, hj, rf}
case i0 && !i1 && !i2 && i3 && !i4:
return struct {
http.ResponseWriter
http.Hijacker
http.Flusher
}{t, hj, fl}
case i0 && !i1 && !i2 && i3 && i4:
return struct {
http.ResponseWriter
http.Hijacker
http.Flusher
io.ReaderFrom
}{t, hj, fl, rf}
case i0 && !i1 && i2 && !i3 && !i4:
return struct {
http.ResponseWriter
http.Hijacker
http.Pusher
}{t, hj, pu}
case i0 && !i1 && i2 && !i3 && i4:
return struct {
http.ResponseWriter
http.Hijacker
http.Pusher
io.ReaderFrom
}{t, hj, pu, rf}
case i0 && !i1 && i2 && i3 && !i4:
return struct {
http.ResponseWriter
http.Hijacker
http.Pusher
http.Flusher
}{t, hj, pu, fl}
case i0 && !i1 && i2 && i3 && i4:
return struct {
http.ResponseWriter
http.Hijacker
http.Pusher
http.Flusher
io.ReaderFrom
}{t, hj, pu, fl, rf}
case i0 && i1 && !i2 && !i3 && !i4:
return struct {
http.ResponseWriter
http.Hijacker
http.CloseNotifier
}{t, hj, cn}
case i0 && i1 && !i2 && !i3 && i4:
return struct {
http.ResponseWriter
http.Hijacker
http.CloseNotifier
io.ReaderFrom
}{t, hj, cn, rf}
case i0 && i1 && !i2 && i3 && !i4:
return struct {
http.ResponseWriter
http.Hijacker
http.CloseNotifier
http.Flusher
}{t, hj, cn, fl}
case i0 && i1 && !i2 && i3 && i4:
return struct {
http.ResponseWriter
http.Hijacker
http.CloseNotifier
http.Flusher
io.ReaderFrom
}{t, hj, cn, fl, rf}
case i0 && i1 && i2 && !i3 && !i4:
return struct {
http.ResponseWriter
http.Hijacker
http.CloseNotifier
http.Pusher
}{t, hj, cn, pu}
case i0 && i1 && i2 && !i3 && i4:
return struct {
http.ResponseWriter
http.Hijacker
http.CloseNotifier
http.Pusher
io.ReaderFrom
}{t, hj, cn, pu, rf}
case i0 && i1 && i2 && i3 && !i4:
return struct {
http.ResponseWriter
http.Hijacker
http.CloseNotifier
http.Pusher
http.Flusher
}{t, hj, cn, pu, fl}
case i0 && i1 && i2 && i3 && i4:
return struct {
http.ResponseWriter
http.Hijacker
http.CloseNotifier
http.Pusher
http.Flusher
io.ReaderFrom
}{t, hj, cn, pu, fl, rf}
default:
return struct {
http.ResponseWriter
}{t}
}
}
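Tying the Handler doc above together, a wrapped handler can read the server span from the request context; a minimal sketch:

package main

import (
    "log"
    "net/http"

    "go.opencensus.io/plugin/ochttp"
    "go.opencensus.io/trace"
)

func main() {
    inner := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // The server span started by ochttp.Handler is available here;
        // it is ended automatically when ServeHTTP returns.
        span := trace.FromContext(r.Context())
        span.Annotate(nil, "handling request")
        w.WriteHeader(http.StatusNoContent)
    })
    log.Fatal(http.ListenAndServe(":8080", &ochttp.Handler{Handler: inner}))
}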

View file

@ -0,0 +1,169 @@
// Copyright 2018, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package ochttp
import (
"crypto/tls"
"net/http"
"net/http/httptrace"
"strings"
"go.opencensus.io/trace"
)
type spanAnnotator struct {
sp *trace.Span
}
// TODO: Remove NewSpanAnnotator at the next release.
// NewSpanAnnotator returns a httptrace.ClientTrace which annotates
// all emitted httptrace events on the provided Span.
// Deprecated: Use NewSpanAnnotatingClientTrace instead
func NewSpanAnnotator(r *http.Request, s *trace.Span) *httptrace.ClientTrace {
return NewSpanAnnotatingClientTrace(r, s)
}
// NewSpanAnnotatingClientTrace returns a httptrace.ClientTrace which annotates
// all emitted httptrace events on the provided Span.
func NewSpanAnnotatingClientTrace(_ *http.Request, s *trace.Span) *httptrace.ClientTrace {
sa := spanAnnotator{sp: s}
return &httptrace.ClientTrace{
GetConn: sa.getConn,
GotConn: sa.gotConn,
PutIdleConn: sa.putIdleConn,
GotFirstResponseByte: sa.gotFirstResponseByte,
Got100Continue: sa.got100Continue,
DNSStart: sa.dnsStart,
DNSDone: sa.dnsDone,
ConnectStart: sa.connectStart,
ConnectDone: sa.connectDone,
TLSHandshakeStart: sa.tlsHandshakeStart,
TLSHandshakeDone: sa.tlsHandshakeDone,
WroteHeaders: sa.wroteHeaders,
Wait100Continue: sa.wait100Continue,
WroteRequest: sa.wroteRequest,
}
}
func (s spanAnnotator) getConn(hostPort string) {
attrs := []trace.Attribute{
trace.StringAttribute("httptrace.get_connection.host_port", hostPort),
}
s.sp.Annotate(attrs, "GetConn")
}
func (s spanAnnotator) gotConn(info httptrace.GotConnInfo) {
attrs := []trace.Attribute{
trace.BoolAttribute("httptrace.got_connection.reused", info.Reused),
trace.BoolAttribute("httptrace.got_connection.was_idle", info.WasIdle),
}
if info.WasIdle {
attrs = append(attrs,
trace.StringAttribute("httptrace.got_connection.idle_time", info.IdleTime.String()))
}
s.sp.Annotate(attrs, "GotConn")
}
// PutIdleConn implements a httptrace.ClientTrace hook
func (s spanAnnotator) putIdleConn(err error) {
var attrs []trace.Attribute
if err != nil {
attrs = append(attrs,
trace.StringAttribute("httptrace.put_idle_connection.error", err.Error()))
}
s.sp.Annotate(attrs, "PutIdleConn")
}
func (s spanAnnotator) gotFirstResponseByte() {
s.sp.Annotate(nil, "GotFirstResponseByte")
}
func (s spanAnnotator) got100Continue() {
s.sp.Annotate(nil, "Got100Continue")
}
func (s spanAnnotator) dnsStart(info httptrace.DNSStartInfo) {
attrs := []trace.Attribute{
trace.StringAttribute("httptrace.dns_start.host", info.Host),
}
s.sp.Annotate(attrs, "DNSStart")
}
func (s spanAnnotator) dnsDone(info httptrace.DNSDoneInfo) {
var addrs []string
for _, addr := range info.Addrs {
addrs = append(addrs, addr.String())
}
attrs := []trace.Attribute{
trace.StringAttribute("httptrace.dns_done.addrs", strings.Join(addrs, " , ")),
}
if info.Err != nil {
attrs = append(attrs,
trace.StringAttribute("httptrace.dns_done.error", info.Err.Error()))
}
s.sp.Annotate(attrs, "DNSDone")
}
func (s spanAnnotator) connectStart(network, addr string) {
attrs := []trace.Attribute{
trace.StringAttribute("httptrace.connect_start.network", network),
trace.StringAttribute("httptrace.connect_start.addr", addr),
}
s.sp.Annotate(attrs, "ConnectStart")
}
func (s spanAnnotator) connectDone(network, addr string, err error) {
attrs := []trace.Attribute{
trace.StringAttribute("httptrace.connect_done.network", network),
trace.StringAttribute("httptrace.connect_done.addr", addr),
}
if err != nil {
attrs = append(attrs,
trace.StringAttribute("httptrace.connect_done.error", err.Error()))
}
s.sp.Annotate(attrs, "ConnectDone")
}
func (s spanAnnotator) tlsHandshakeStart() {
s.sp.Annotate(nil, "TLSHandshakeStart")
}
func (s spanAnnotator) tlsHandshakeDone(_ tls.ConnectionState, err error) {
var attrs []trace.Attribute
if err != nil {
attrs = append(attrs,
trace.StringAttribute("httptrace.tls_handshake_done.error", err.Error()))
}
s.sp.Annotate(attrs, "TLSHandshakeDone")
}
func (s spanAnnotator) wroteHeaders() {
s.sp.Annotate(nil, "WroteHeaders")
}
func (s spanAnnotator) wait100Continue() {
s.sp.Annotate(nil, "Wait100Continue")
}
func (s spanAnnotator) wroteRequest(info httptrace.WroteRequestInfo) {
var attrs []trace.Attribute
if info.Err != nil {
attrs = append(attrs,
trace.StringAttribute("httptrace.wrote_request.error", info.Err.Error()))
}
s.sp.Annotate(attrs, "WroteRequest")
}
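These annotations are enabled by wiring the constructor into the Transport's NewClientTrace field; a sketch:

package main

import (
    "net/http"

    "go.opencensus.io/plugin/ochttp"
)

func newTracingClient() *http.Client {
    return &http.Client{
        Transport: &ochttp.Transport{
            // Annotate each client span with httptrace events
            // (DNS, connect, TLS handshake, first response byte, ...).
            NewClientTrace: ochttp.NewSpanAnnotatingClientTrace,
        },
    }
}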

View file

@ -20,20 +20,67 @@ import (
"go.opencensus.io/tag"
)
// Deprecated: client HTTP measures.
var (
// Deprecated: Use a Count aggregation over one of the other client measures to achieve the same effect.
ClientRequestCount = stats.Int64(
"opencensus.io/http/client/request_count",
"Number of HTTP requests started",
stats.UnitDimensionless)
// Deprecated: Use ClientSentBytes.
ClientRequestBytes = stats.Int64(
"opencensus.io/http/client/request_bytes",
"HTTP request body size if set as ContentLength (uncompressed)",
stats.UnitBytes)
// Deprecated: Use ClientReceivedBytes.
ClientResponseBytes = stats.Int64(
"opencensus.io/http/client/response_bytes",
"HTTP response body size (uncompressed)",
stats.UnitBytes)
// Deprecated: Use ClientRoundtripLatency.
ClientLatency = stats.Float64(
"opencensus.io/http/client/latency",
"End-to-end latency",
stats.UnitMilliseconds)
)
// The following client HTTP measures are supported for use in custom views.
var (
ClientRequestCount = stats.Int64("opencensus.io/http/client/request_count", "Number of HTTP requests started", stats.UnitDimensionless)
ClientRequestBytes = stats.Int64("opencensus.io/http/client/request_bytes", "HTTP request body size if set as ContentLength (uncompressed)", stats.UnitBytes)
ClientResponseBytes = stats.Int64("opencensus.io/http/client/response_bytes", "HTTP response body size (uncompressed)", stats.UnitBytes)
ClientLatency = stats.Float64("opencensus.io/http/client/latency", "End-to-end latency", stats.UnitMilliseconds)
ClientSentBytes = stats.Int64(
"opencensus.io/http/client/sent_bytes",
"Total bytes sent in request body (not including headers)",
stats.UnitBytes,
)
ClientReceivedBytes = stats.Int64(
"opencensus.io/http/client/received_bytes",
"Total bytes received in response bodies (not including headers but including error responses with bodies)",
stats.UnitBytes,
)
ClientRoundtripLatency = stats.Float64(
"opencensus.io/http/client/roundtrip_latency",
"Time between first byte of request headers sent to last byte of response received, or terminal error",
stats.UnitMilliseconds,
)
)
// The following server HTTP measures are supported for use in custom views:
var (
ServerRequestCount = stats.Int64("opencensus.io/http/server/request_count", "Number of HTTP requests started", stats.UnitDimensionless)
ServerRequestBytes = stats.Int64("opencensus.io/http/server/request_bytes", "HTTP request body size if set as ContentLength (uncompressed)", stats.UnitBytes)
ServerResponseBytes = stats.Int64("opencensus.io/http/server/response_bytes", "HTTP response body size (uncompressed)", stats.UnitBytes)
ServerLatency = stats.Float64("opencensus.io/http/server/latency", "End-to-end latency", stats.UnitMilliseconds)
ServerRequestCount = stats.Int64(
"opencensus.io/http/server/request_count",
"Number of HTTP requests started",
stats.UnitDimensionless)
ServerRequestBytes = stats.Int64(
"opencensus.io/http/server/request_bytes",
"HTTP request body size if set as ContentLength (uncompressed)",
stats.UnitBytes)
ServerResponseBytes = stats.Int64(
"opencensus.io/http/server/response_bytes",
"HTTP response body size (uncompressed)",
stats.UnitBytes)
ServerLatency = stats.Float64(
"opencensus.io/http/server/latency",
"End-to-end latency",
stats.UnitMilliseconds)
)
// The following tags are applied to stats recorded by this package. Host, Path
@ -41,28 +88,89 @@ var (
// ClientRequestCount or ServerRequestCount, since it is recorded before the status is known.
var (
// Host is the value of the HTTP Host header.
Host, _ = tag.NewKey("http.host")
//
// The value of this tag can be controlled by the HTTP client, so you need
// to watch out for potentially generating high-cardinality labels in your
// metrics backend if you use this tag in views.
Host = tag.MustNewKey("http.host")
// StatusCode is the numeric HTTP response status code,
// or "error" if a transport error occurred and no status code was read.
StatusCode, _ = tag.NewKey("http.status")
StatusCode = tag.MustNewKey("http.status")
// Path is the URL path (not including query string) in the request.
Path, _ = tag.NewKey("http.path")
//
// The value of this tag can be controlled by the HTTP client, so you need
// to watch out for potentially generating high-cardinality labels in your
// metrics backend if you use this tag in views.
Path = tag.MustNewKey("http.path")
// Method is the HTTP method of the request, capitalized (GET, POST, etc.).
Method, _ = tag.NewKey("http.method")
Method = tag.MustNewKey("http.method")
// KeyServerRoute is a low cardinality string representing the logical
// handler of the request. This is usually the pattern registered on a
// ServeMux (or a similar string).
KeyServerRoute = tag.MustNewKey("http_server_route")
)
// Client tag keys.
var (
// KeyClientMethod is the HTTP method, capitalized (i.e. GET, POST, PUT, DELETE, etc.).
KeyClientMethod = tag.MustNewKey("http_client_method")
// KeyClientPath is the URL path (not including query string).
KeyClientPath = tag.MustNewKey("http_client_path")
// KeyClientStatus is the HTTP status code as an integer (e.g. 200, 404, 500), or "error" if no response status line was received.
KeyClientStatus = tag.MustNewKey("http_client_status")
// KeyClientHost is the value of the request Host header.
KeyClientHost = tag.MustNewKey("http_client_host")
)
// Default distributions used by views in this package.
var (
DefaultSizeDistribution = view.Distribution(0, 1024, 2048, 4096, 16384, 65536, 262144, 1048576, 4194304, 16777216, 67108864, 268435456, 1073741824, 4294967296)
DefaultLatencyDistribution = view.Distribution(0, 1, 2, 3, 4, 5, 6, 8, 10, 13, 16, 20, 25, 30, 40, 50, 65, 80, 100, 130, 160, 200, 250, 300, 400, 500, 650, 800, 1000, 2000, 5000, 10000, 20000, 50000, 100000)
DefaultSizeDistribution = view.Distribution(1024, 2048, 4096, 16384, 65536, 262144, 1048576, 4194304, 16777216, 67108864, 268435456, 1073741824, 4294967296)
DefaultLatencyDistribution = view.Distribution(1, 2, 3, 4, 5, 6, 8, 10, 13, 16, 20, 25, 30, 40, 50, 65, 80, 100, 130, 160, 200, 250, 300, 400, 500, 650, 800, 1000, 2000, 5000, 10000, 20000, 50000, 100000)
)
// Package ochttp provides some convenience views.
// You need to subscribe to the views for data to actually be collected.
// Package ochttp provides some convenience views for client measures.
// You still need to register these views for data to actually be collected.
var (
ClientSentBytesDistribution = &view.View{
Name: "opencensus.io/http/client/sent_bytes",
Measure: ClientSentBytes,
Aggregation: DefaultSizeDistribution,
Description: "Total bytes sent in request body (not including headers), by HTTP method and response status",
TagKeys: []tag.Key{KeyClientMethod, KeyClientStatus},
}
ClientReceivedBytesDistribution = &view.View{
Name: "opencensus.io/http/client/received_bytes",
Measure: ClientReceivedBytes,
Aggregation: DefaultSizeDistribution,
Description: "Total bytes received in response bodies (not including headers but including error responses with bodies), by HTTP method and response status",
TagKeys: []tag.Key{KeyClientMethod, KeyClientStatus},
}
ClientRoundtripLatencyDistribution = &view.View{
Name: "opencensus.io/http/client/roundtrip_latency",
Measure: ClientRoundtripLatency,
Aggregation: DefaultLatencyDistribution,
Description: "End-to-end latency, by HTTP method and response status",
TagKeys: []tag.Key{KeyClientMethod, KeyClientStatus},
}
ClientCompletedCount = &view.View{
Name: "opencensus.io/http/client/completed_count",
Measure: ClientRoundtripLatency,
Aggregation: view.Count(),
Description: "Count of completed requests, by HTTP method and response status",
TagKeys: []tag.Key{KeyClientMethod, KeyClientStatus},
}
)
// Deprecated: Old client Views.
var (
// Deprecated: No direct replacement, but see ClientCompletedCount.
ClientRequestCountView = &view.View{
Name: "opencensus.io/http/client/request_count",
Description: "Count of HTTP requests started",
@ -70,43 +178,52 @@ var (
Aggregation: view.Count(),
}
// Deprecated: Use ClientSentBytesDistribution.
ClientRequestBytesView = &view.View{
Name: "opencensus.io/http/client/request_bytes",
Description: "Size distribution of HTTP request body",
Measure: ClientRequestBytes,
Measure: ClientSentBytes,
Aggregation: DefaultSizeDistribution,
}
// Deprecated: Use ClientReceivedBytesDistribution instead.
ClientResponseBytesView = &view.View{
Name: "opencensus.io/http/client/response_bytes",
Description: "Size distribution of HTTP response body",
Measure: ClientResponseBytes,
Measure: ClientReceivedBytes,
Aggregation: DefaultSizeDistribution,
}
// Deprecated: Use ClientRoundtripLatencyDistribution instead.
ClientLatencyView = &view.View{
Name: "opencensus.io/http/client/latency",
Description: "Latency distribution of HTTP requests",
Measure: ClientLatency,
Measure: ClientRoundtripLatency,
Aggregation: DefaultLatencyDistribution,
}
// Deprecated: Use ClientCompletedCount instead.
ClientRequestCountByMethod = &view.View{
Name: "opencensus.io/http/client/request_count_by_method",
Description: "Client request count by HTTP method",
TagKeys: []tag.Key{Method},
Measure: ClientRequestCount,
Measure: ClientSentBytes,
Aggregation: view.Count(),
}
// Deprecated: Use ClientCompletedCount instead.
ClientResponseCountByStatusCode = &view.View{
Name: "opencensus.io/http/client/response_count_by_status_code",
Description: "Client response count by status code",
TagKeys: []tag.Key{StatusCode},
Measure: ClientLatency,
Measure: ClientRoundtripLatency,
Aggregation: view.Count(),
}
)
// Package ochttp provides some convenience views for server measures.
// You still need to register these views for data to actually be collected.
var (
ServerRequestCountView = &view.View{
Name: "opencensus.io/http/server/request_count",
Description: "Count of HTTP requests started",
@ -153,6 +270,7 @@ var (
)
// DefaultClientViews are the default client views provided by this package.
// Deprecated: No replacement. Register the views you would like individually.
var DefaultClientViews = []*view.View{
ClientRequestCountView,
ClientRequestBytesView,
@ -163,6 +281,7 @@ var DefaultClientViews = []*view.View{
}
// DefaultServerViews are the default server views provided by this package.
// Deprecated: No replacement. Register the views you would like individually.
var DefaultServerViews = []*view.View{
ServerRequestCountView,
ServerRequestBytesView,

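As the comments above stress twice, these views collect nothing until registered; a minimal sketch that registers a few individually, per the deprecation advice:

package main

import (
    "log"

    "go.opencensus.io/plugin/ochttp"
    "go.opencensus.io/stats/view"
)

func init() {
    // Register chosen views instead of the deprecated Default*Views bundles.
    err := view.Register(
        ochttp.ClientCompletedCount,
        ochttp.ClientRoundtripLatencyDistribution,
        ochttp.ServerLatencyView,
    )
    if err != nil {
        log.Fatal(err)
    }
}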
View file

@ -17,7 +17,7 @@ package ochttp
import (
"io"
"net/http"
"net/url"
"net/http/httptrace"
"go.opencensus.io/plugin/ochttp/propagation/b3"
"go.opencensus.io/trace"
@ -34,14 +34,17 @@ const (
HostAttribute = "http.host"
MethodAttribute = "http.method"
PathAttribute = "http.path"
URLAttribute = "http.url"
UserAgentAttribute = "http.user_agent"
StatusCodeAttribute = "http.status_code"
)
type traceTransport struct {
base http.RoundTripper
startOptions trace.StartOptions
format propagation.HTTPFormat
base http.RoundTripper
startOptions trace.StartOptions
format propagation.HTTPFormat
formatSpanName func(*http.Request) string
newClientTrace func(*http.Request, *trace.Span) *httptrace.ClientTrace
}
// TODO(jbd): Add message events for request and response size.
@ -50,15 +53,30 @@ type traceTransport struct {
// The created span can follow a parent span, if a parent is presented in
// the request's context.
func (t *traceTransport) RoundTrip(req *http.Request) (*http.Response, error) {
name := spanNameFromURL(req.URL)
name := t.formatSpanName(req)
// TODO(jbd): Discuss whether we want to prefix
// outgoing requests with Sent.
_, span := trace.StartSpan(req.Context(), name,
ctx, span := trace.StartSpan(req.Context(), name,
trace.WithSampler(t.startOptions.Sampler),
trace.WithSpanKind(trace.SpanKindClient))
req = req.WithContext(trace.WithSpan(req.Context(), span))
if t.newClientTrace != nil {
req = req.WithContext(httptrace.WithClientTrace(ctx, t.newClientTrace(req, span)))
} else {
req = req.WithContext(ctx)
}
if t.format != nil {
// SpanContextToRequest will modify its Request argument, which is
// contrary to the contract for http.RoundTripper, so we need to
// pass it a copy of the Request.
// However, the Request struct itself was already copied by
// the WithContext calls above and so we just need to copy the header.
header := make(http.Header)
for k, v := range req.Header {
header[k] = v
}
req.Header = header
t.format.SpanContextToRequest(span.SpanContext(), req)
}
@ -76,7 +94,8 @@ func (t *traceTransport) RoundTrip(req *http.Request) (*http.Response, error) {
// span.End() will be invoked after
// a read from resp.Body returns io.EOF or when
// resp.Body.Close() is invoked.
resp.Body = &bodyTracker{rc: resp.Body, span: span}
bt := &bodyTracker{rc: resp.Body, span: span}
resp.Body = wrappedBody(bt, resp.Body)
return resp, err
}
@ -127,17 +146,26 @@ func (t *traceTransport) CancelRequest(req *http.Request) {
}
}
func spanNameFromURL(u *url.URL) string {
return u.Path
func spanNameFromURL(req *http.Request) string {
return req.URL.Path
}
func requestAttrs(r *http.Request) []trace.Attribute {
return []trace.Attribute{
userAgent := r.UserAgent()
attrs := make([]trace.Attribute, 0, 5)
attrs = append(attrs,
trace.StringAttribute(PathAttribute, r.URL.Path),
trace.StringAttribute(HostAttribute, r.URL.Host),
trace.StringAttribute(URLAttribute, r.URL.String()),
trace.StringAttribute(HostAttribute, r.Host),
trace.StringAttribute(MethodAttribute, r.Method),
trace.StringAttribute(UserAgentAttribute, r.UserAgent()),
)
if userAgent != "" {
attrs = append(attrs, trace.StringAttribute(UserAgentAttribute, userAgent))
}
return attrs
}
func responseAttrs(resp *http.Response) []trace.Attribute {
@ -146,7 +174,7 @@ func responseAttrs(resp *http.Response) []trace.Attribute {
}
}
// HTTPStatusToTraceStatus converts the HTTP status code to a trace.Status that
// TraceStatus is a utility to convert the HTTP status code to a trace.Status that
// represents the outcome as closely as possible.
func TraceStatus(httpStatusCode int, statusLine string) trace.Status {
var code int32
@ -158,6 +186,8 @@ func TraceStatus(httpStatusCode int, statusLine string) trace.Status {
code = trace.StatusCodeCancelled
case http.StatusBadRequest:
code = trace.StatusCodeInvalidArgument
case http.StatusUnprocessableEntity:
code = trace.StatusCodeInvalidArgument
case http.StatusGatewayTimeout:
code = trace.StatusCodeDeadlineExceeded
case http.StatusNotFound:
@ -174,26 +204,41 @@ func TraceStatus(httpStatusCode int, statusLine string) trace.Status {
code = trace.StatusCodeUnavailable
case http.StatusOK:
code = trace.StatusCodeOK
case http.StatusConflict:
code = trace.StatusCodeAlreadyExists
}
return trace.Status{Code: code, Message: codeToStr[code]}
}
var codeToStr = map[int32]string{
trace.StatusCodeOK: `"OK"`,
trace.StatusCodeCancelled: `"CANCELLED"`,
trace.StatusCodeUnknown: `"UNKNOWN"`,
trace.StatusCodeInvalidArgument: `"INVALID_ARGUMENT"`,
trace.StatusCodeDeadlineExceeded: `"DEADLINE_EXCEEDED"`,
trace.StatusCodeNotFound: `"NOT_FOUND"`,
trace.StatusCodeAlreadyExists: `"ALREADY_EXISTS"`,
trace.StatusCodePermissionDenied: `"PERMISSION_DENIED"`,
trace.StatusCodeResourceExhausted: `"RESOURCE_EXHAUSTED"`,
trace.StatusCodeFailedPrecondition: `"FAILED_PRECONDITION"`,
trace.StatusCodeAborted: `"ABORTED"`,
trace.StatusCodeOutOfRange: `"OUT_OF_RANGE"`,
trace.StatusCodeUnimplemented: `"UNIMPLEMENTED"`,
trace.StatusCodeInternal: `"INTERNAL"`,
trace.StatusCodeUnavailable: `"UNAVAILABLE"`,
trace.StatusCodeDataLoss: `"DATA_LOSS"`,
trace.StatusCodeUnauthenticated: `"UNAUTHENTICATED"`,
trace.StatusCodeOK: `OK`,
trace.StatusCodeCancelled: `CANCELLED`,
trace.StatusCodeUnknown: `UNKNOWN`,
trace.StatusCodeInvalidArgument: `INVALID_ARGUMENT`,
trace.StatusCodeDeadlineExceeded: `DEADLINE_EXCEEDED`,
trace.StatusCodeNotFound: `NOT_FOUND`,
trace.StatusCodeAlreadyExists: `ALREADY_EXISTS`,
trace.StatusCodePermissionDenied: `PERMISSION_DENIED`,
trace.StatusCodeResourceExhausted: `RESOURCE_EXHAUSTED`,
trace.StatusCodeFailedPrecondition: `FAILED_PRECONDITION`,
trace.StatusCodeAborted: `ABORTED`,
trace.StatusCodeOutOfRange: `OUT_OF_RANGE`,
trace.StatusCodeUnimplemented: `UNIMPLEMENTED`,
trace.StatusCodeInternal: `INTERNAL`,
trace.StatusCodeUnavailable: `UNAVAILABLE`,
trace.StatusCodeDataLoss: `DATA_LOSS`,
trace.StatusCodeUnauthenticated: `UNAUTHENTICATED`,
}
func isHealthEndpoint(path string) bool {
// Health checking is pretty frequent and
// traces collected for health endpoints
// can be extremely noisy and expensive.
// Disable canonical health checking endpoints
// like /healthz and /_ah/health for now.
if path == "/healthz" || path == "/_ah/health" {
return true
}
return false
}
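As a quick check of the mapping above (not part of the diff; the expected output follows from the table):

package main

import (
    "fmt"

    "go.opencensus.io/plugin/ochttp"
)

func main() {
    st := ochttp.TraceStatus(404, "Not Found")
    // Per the mapping above: code 5, message "NOT_FOUND".
    fmt.Println(st.Code, st.Message)
}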

44
vendor/go.opencensus.io/plugin/ochttp/wrapped_body.go generated vendored Normal file
View file

@ -0,0 +1,44 @@
// Copyright 2019, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package ochttp
import (
"io"
)
// wrappedBody returns a wrapped version of the original
// Body and only implements the same combination of additional
// interfaces as the original.
func wrappedBody(wrapper io.ReadCloser, body io.ReadCloser) io.ReadCloser {
var (
wr, i0 = body.(io.Writer)
)
switch {
case !i0:
return struct {
io.ReadCloser
}{wrapper}
case i0:
return struct {
io.ReadCloser
io.Writer
}{wrapper, wr}
default:
return struct {
io.ReadCloser
}{wrapper}
}
}

164
vendor/go.opencensus.io/resource/resource.go generated vendored Normal file
View file

@ -0,0 +1,164 @@
// Copyright 2018, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package resource provides functionality for resources, which capture
// identifying information about the entities for which signals are exported.
package resource
import (
"context"
"fmt"
"os"
"regexp"
"sort"
"strconv"
"strings"
)
// Environment variables used by FromEnv to decode a resource.
const (
EnvVarType = "OC_RESOURCE_TYPE"
EnvVarLabels = "OC_RESOURCE_LABELS"
)
// Resource describes an entity about which identifying information and metadata is exposed.
// For example, a type "k8s.io/container" may hold labels describing the pod name and namespace.
type Resource struct {
Type string
Labels map[string]string
}
// EncodeLabels encodes a labels map to a string as provided via the OC_RESOURCE_LABELS environment variable.
func EncodeLabels(labels map[string]string) string {
sortedKeys := make([]string, 0, len(labels))
for k := range labels {
sortedKeys = append(sortedKeys, k)
}
sort.Strings(sortedKeys)
s := ""
for i, k := range sortedKeys {
if i > 0 {
s += ","
}
s += k + "=" + strconv.Quote(labels[k])
}
return s
}
var labelRegex = regexp.MustCompile(`^\s*([[:ascii:]]{1,256}?)=("[[:ascii:]]{0,256}?")\s*,`)
// DecodeLabels decodes a serialized label map as used in the OC_RESOURCE_LABELS variable.
// A list of labels of the form `<key1>="<value1>",<key2>="<value2>",...` is accepted.
// Domain names and paths are accepted as label keys.
// Most users will want to use FromEnv instead.
func DecodeLabels(s string) (map[string]string, error) {
m := map[string]string{}
// Ensure a trailing comma, which allows us to keep the regex simpler
s = strings.TrimRight(strings.TrimSpace(s), ",") + ","
for len(s) > 0 {
match := labelRegex.FindStringSubmatch(s)
if len(match) == 0 {
return nil, fmt.Errorf("invalid label formatting, remainder: %s", s)
}
v := match[2]
if v == "" {
v = match[3]
} else {
var err error
if v, err = strconv.Unquote(v); err != nil {
return nil, fmt.Errorf("invalid label formatting, remainder: %s, err: %s", s, err)
}
}
m[match[1]] = v
s = s[len(match[0]):]
}
return m, nil
}
// FromEnv is a detector that loads resource information from the OC_RESOURCE_TYPE
// and OC_RESOURCE_LABELS environment variables.
func FromEnv(context.Context) (*Resource, error) {
res := &Resource{
Type: strings.TrimSpace(os.Getenv(EnvVarType)),
}
labels := strings.TrimSpace(os.Getenv(EnvVarLabels))
if labels == "" {
return res, nil
}
var err error
if res.Labels, err = DecodeLabels(labels); err != nil {
return nil, err
}
return res, nil
}
var _ Detector = FromEnv
// merge resource information from b into a. In case of a collision, a takes precedence.
func merge(a, b *Resource) *Resource {
if a == nil {
return b
}
if b == nil {
return a
}
res := &Resource{
Type: a.Type,
Labels: map[string]string{},
}
if res.Type == "" {
res.Type = b.Type
}
for k, v := range b.Labels {
res.Labels[k] = v
}
// Labels from resource a overwrite labels from resource b.
for k, v := range a.Labels {
res.Labels[k] = v
}
return res
}
// Detector attempts to detect resource information.
// If the detector cannot find resource information, the returned resource is nil but no
// error is returned.
// An error is only returned on unexpected failures.
type Detector func(context.Context) (*Resource, error)
// MultiDetector returns a Detector that calls all input detectors in order and
// merges each result with the previous one. In case a type or label key is
// already set, the first value set takes precedence.
// It returns on the first error that a sub-detector encounters.
func MultiDetector(detectors ...Detector) Detector {
return func(ctx context.Context) (*Resource, error) {
return detectAll(ctx, detectors...)
}
}
// detectAll calls all input detectors sequentially and merges each result with the previous one.
// It returns on the first error that a sub-detector encounters.
func detectAll(ctx context.Context, detectors ...Detector) (*Resource, error) {
var res *Resource
for _, d := range detectors {
r, err := d(ctx)
if err != nil {
return nil, err
}
res = merge(res, r)
}
return res, nil
}
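A round-trip sketch of the label encoding consumed by FromEnv via OC_RESOURCE_LABELS:

package main

import (
    "fmt"

    "go.opencensus.io/resource"
)

func main() {
    s := resource.EncodeLabels(map[string]string{
        "pod":       "web-0",
        "namespace": "prod",
    })
    fmt.Println(s) // namespace="prod",pod="web-0" (keys are sorted)

    labels, err := resource.DecodeLabels(s)
    if err != nil {
        panic(err)
    }
    fmt.Println(labels["pod"]) // web-0
}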

38
vendor/go.opencensus.io/stats/doc.go generated vendored
View file

@ -21,35 +21,49 @@ aggregate the collected data, and export the aggregated data.
Measures
A measure represents a type of metric to be tracked and recorded.
A measure represents a type of data point to be tracked and recorded.
For example, latency, request Mb/s, and response Mb/s are measures
to collect from a server.
Each measure needs to be registered before being used. Measure
constructors such as Int64 and Float64 automatically
Measure constructors such as Int64 and Float64 automatically
register the measure by the given name. Each registered measure needs
to be unique by name. Measures also have a description and a unit.
Libraries can define and export measures for their end users to
create views and collect instrumentation data.
Libraries can define and export measures. Application authors can then
create views and collect and break down measures by the tags they are
interested in.
Recording measurements
Measurement is a data point to be collected for a measure. For example,
for a latency (ms) measure, 100 is a measurement that represents a 100ms
latency event. Users collect data points on the existing measures with
latency event. Measurements are created from measures with
the current context. Tags from the current context are recorded with the
measurements if there are any.
Recorded measurements are dropped immediately if user is not aggregating
them via views. Users don't necessarily need to conditionally enable/disable
Recorded measurements are dropped immediately if no views are registered for them.
There is usually no need to conditionally enable and disable
recording to reduce cost. Recording of measurements is cheap.
Libraries can always record measurements, and end-users can later decide
Libraries can always record measurements, and applications can later decide
on which measurements they want to collect by registering views. This allows
libraries to turn on the instrumentation by default.
Exemplars
For a given recorded measurement, the associated exemplar is a diagnostic map
that gives more information about the measurement.
When aggregated using a Distribution aggregation, an exemplar is kept for each
bucket in the Distribution. This allows you to easily find an example of a
measurement that fell into each bucket.
For example, if you also use the OpenCensus trace package and you
record a measurement with a context that contains a sampled trace span,
then the trace span will be added to the exemplar associated with the measurement.
When exported to a supporting back end, you should be able to easily navigate
to example traces that fell into each bucket in the Distribution.
*/
package stats // import "go.opencensus.io/stats"
// TODO(acetechnologist): Add a link to the language independent OpenCensus
// spec when it is available.
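Putting the pieces of this doc together in one sketch: define a measure, register a Distribution view (which also retains exemplars per bucket), and record; all names and bucket bounds are illustrative.

package main

import (
    "context"
    "log"

    "go.opencensus.io/stats"
    "go.opencensus.io/stats/view"
)

var latencyMs = stats.Float64("example.com/latency", "Request latency", stats.UnitMilliseconds)

func main() {
    // Without a registered view, recorded measurements are dropped.
    err := view.Register(&view.View{
        Name:        "example.com/latency_distribution",
        Description: "Latency distribution",
        Measure:     latencyMs,
        Aggregation: view.Distribution(1, 10, 100, 1000),
    })
    if err != nil {
        log.Fatal(err)
    }
    stats.Record(context.Background(), latencyMs.M(12.3))
}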

View file

@ -19,7 +19,7 @@ import (
)
// DefaultRecorder will be called for each Record call.
var DefaultRecorder func(*tag.Map, interface{})
var DefaultRecorder func(tags *tag.Map, measurement interface{}, attachments map[string]interface{})
// SubscriptionReporter reports when a view is subscribed to a measure.
var SubscriptionReporter func(measure string)

View file

@ -20,19 +20,31 @@ import (
"sync/atomic"
)
// Measure represents a type of metric to be tracked and recorded.
// For example, latency, request Mb/s, and response Mb/s are measures
// Measure represents a single numeric value to be tracked and recorded.
// For example, latency, request bytes, and response bytes could be measures
// to collect from a server.
//
// Each measure needs to be registered before being used.
// Measure constructors such as Int64 and
// Float64 automatically registers the measure
// by the given name.
// Each registered measure needs to be unique by name.
// Measures also have a description and a unit.
// Measures by themselves have no outside effects. In order to be exported,
// the measure needs to be used in a View. If no Views are defined over a
// measure, there is very little cost in recording it.
type Measure interface {
// Name returns the name of this measure.
//
// Measure names are globally unique (among all libraries linked into your program).
// We recommend prefixing the measure name with a domain name relevant to your
// project or application.
//
// Measure names are never sent over the wire or exported to backends.
// They are only used to create Views.
Name() string
// Description returns the human-readable description of this measure.
Description() string
// Unit returns the units for the values this measure takes on.
//
// Units are encoded according to the case-sensitive abbreviations from the
// Unified Code for Units of Measure: http://unitsofmeasure.org/ucum.html
Unit() string
}
@ -81,8 +93,9 @@ func registerMeasureHandle(name, desc, unit string) *measureDescriptor {
// provides methods to create measurements of their kind. For example, Int64Measure
// provides M to convert an int64 into a measurement.
type Measurement struct {
v float64
m Measure
v float64
m Measure
desc *measureDescriptor
}
// Value returns the value of the Measurement as a float64.

View file

@ -15,38 +15,41 @@
package stats
// Float64Measure is a measure of type float64.
// Float64Measure is a measure for float64 values.
type Float64Measure struct {
md *measureDescriptor
}
// Name returns the name of the measure.
func (m *Float64Measure) Name() string {
return m.md.name
}
// Description returns the description of the measure.
func (m *Float64Measure) Description() string {
return m.md.description
}
// Unit returns the unit of the measure.
func (m *Float64Measure) Unit() string {
return m.md.unit
desc *measureDescriptor
}
// M creates a new float64 measurement.
// Use Record to record measurements.
func (m *Float64Measure) M(v float64) Measurement {
if !m.md.subscribed() {
return Measurement{}
return Measurement{
m: m,
desc: m.desc,
v: v,
}
return Measurement{m: m, v: v}
}
// Float64 creates a new measure of type Float64Measure.
// It never returns an error.
// Float64 creates a new measure for float64 values.
//
// See the documentation for interface Measure for more guidance on the
// parameters of this function.
func Float64(name, description, unit string) *Float64Measure {
mi := registerMeasureHandle(name, description, unit)
return &Float64Measure{mi}
}
// Name returns the name of the measure.
func (m *Float64Measure) Name() string {
return m.desc.name
}
// Description returns the description of the measure.
func (m *Float64Measure) Description() string {
return m.desc.description
}
// Unit returns the unit of the measure.
func (m *Float64Measure) Unit() string {
return m.desc.unit
}
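After this change M always returns a fully populated Measurement carrying the measure descriptor; filtering moves to record time. A short sketch under that assumption (latencyMs is a hypothetical measure, usual imports assumed):

    var latencyMs = stats.Float64("example.com/measures/latency", "RPC latency", stats.UnitMilliseconds)

    func observe(ctx context.Context, elapsedMs float64) {
    	// M no longer returns an empty Measurement when no view is subscribed;
    	// whether anything is collected is decided inside Record.
    	stats.Record(ctx, latencyMs.M(elapsedMs))
    }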

View file

@ -15,38 +15,41 @@
package stats
// Int64Measure is a measure of type int64.
// Int64Measure is a measure for int64 values.
type Int64Measure struct {
md *measureDescriptor
}
// Name returns the name of the measure.
func (m *Int64Measure) Name() string {
return m.md.name
}
// Description returns the description of the measure.
func (m *Int64Measure) Description() string {
return m.md.description
}
// Unit returns the unit of the measure.
func (m *Int64Measure) Unit() string {
return m.md.unit
desc *measureDescriptor
}
// M creates a new int64 measurement.
// Use Record to record measurements.
func (m *Int64Measure) M(v int64) Measurement {
if !m.md.subscribed() {
return Measurement{}
return Measurement{
m: m,
desc: m.desc,
v: float64(v),
}
return Measurement{m: m, v: float64(v)}
}
// Int64 creates a new measure of type Int64Measure.
// It never returns an error.
// Int64 creates a new measure for int64 values.
//
// See the documentation for interface Measure for more guidance on the
// parameters of this function.
func Int64(name, description, unit string) *Int64Measure {
mi := registerMeasureHandle(name, description, unit)
return &Int64Measure{mi}
}
// Name returns the name of the measure.
func (m *Int64Measure) Name() string {
return m.desc.name
}
// Description returns the description of the measure.
func (m *Int64Measure) Description() string {
return m.desc.description
}
// Unit returns the unit of the measure.
func (m *Int64Measure) Unit() string {
return m.desc.unit
}

View file

@ -18,6 +18,7 @@ package stats
import (
"context"
"go.opencensus.io/metric/metricdata"
"go.opencensus.io/stats/internal"
"go.opencensus.io/tag"
)
@ -30,23 +31,87 @@ func init() {
}
}
// Record records one or multiple measurements with the same tags at once.
type recordOptions struct {
attachments metricdata.Attachments
mutators []tag.Mutator
measurements []Measurement
}
// WithAttachments applies provided exemplar attachments.
func WithAttachments(attachments metricdata.Attachments) Options {
return func(ro *recordOptions) {
ro.attachments = attachments
}
}
// WithTags applies provided tag mutators.
func WithTags(mutators ...tag.Mutator) Options {
return func(ro *recordOptions) {
ro.mutators = mutators
}
}
// WithMeasurements applies provided measurements.
func WithMeasurements(measurements ...Measurement) Options {
return func(ro *recordOptions) {
ro.measurements = measurements
}
}
// Options apply changes to recordOptions.
type Options func(*recordOptions)
func createRecordOption(ros ...Options) *recordOptions {
o := &recordOptions{}
for _, ro := range ros {
ro(o)
}
return o
}
// Record records one or multiple measurements with the same context at once.
// If there are any tags in the context, measurements will be tagged with them.
func Record(ctx context.Context, ms ...Measurement) {
if len(ms) == 0 {
return
RecordWithOptions(ctx, WithMeasurements(ms...))
}
// RecordWithTags records one or multiple measurements at once.
//
// Measurements will be tagged with the tags in the context mutated by the mutators.
// RecordWithTags is useful if you want to record with tag mutations but don't want
// to propagate the mutations in the context.
func RecordWithTags(ctx context.Context, mutators []tag.Mutator, ms ...Measurement) error {
return RecordWithOptions(ctx, WithTags(mutators...), WithMeasurements(ms...))
}
// RecordWithOptions records measurements from the given options (if any) against context
// and tags and attachments in the options (if any).
// If there are any tags in the context, measurements will be tagged with them.
func RecordWithOptions(ctx context.Context, ros ...Options) error {
o := createRecordOption(ros...)
if len(o.measurements) == 0 {
return nil
}
var record bool
for _, m := range ms {
if (m != Measurement{}) {
recorder := internal.DefaultRecorder
if recorder == nil {
return nil
}
record := false
for _, m := range o.measurements {
if m.desc.subscribed() {
record = true
break
}
}
if !record {
return
return nil
}
if internal.DefaultRecorder != nil {
internal.DefaultRecorder(tag.FromContext(ctx), ms)
if len(o.mutators) > 0 {
var err error
if ctx, err = tag.New(ctx, o.mutators...); err != nil {
return err
}
}
recorder(tag.FromContext(ctx), o.measurements, o.attachments)
return nil
}
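A hedged sketch of the new options-based recording API (keyMethod and latencyMs are hypothetical names, the attachment value is arbitrary, and the usual stats/tag/metricdata imports are assumed):

    var (
    	keyMethod = tag.MustNewKey("method")
    	latencyMs = stats.Float64("example.com/measures/latency", "Latency", stats.UnitMilliseconds)
    )

    func recordLatency(ctx context.Context, method string, ms float64) error {
    	// WithTags mutates a copy of the context's tag map for this record
    	// only; the caller's ctx is not modified.
    	return stats.RecordWithOptions(ctx,
    		stats.WithTags(tag.Upsert(keyMethod, method)),
    		stats.WithAttachments(metricdata.Attachments{"example": "attachment"}),
    		stats.WithMeasurements(latencyMs.M(ms)),
    	)
    }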

View file

@ -22,4 +22,5 @@ const (
UnitDimensionless = "1"
UnitBytes = "By"
UnitMilliseconds = "ms"
UnitSeconds = "s"
)

View file

@ -82,7 +82,7 @@ func Sum() *Aggregation {
// Distribution indicates that the desired aggregation is
// a histogram distribution.
//
// An distribution aggregation may contain a histogram of the values in the
// A distribution aggregation may contain a histogram of the values in the
// population. The bucket boundaries for that histogram are described
// by the bounds. This defines len(bounds)+1 buckets.
//
@ -99,13 +99,14 @@ func Sum() *Aggregation {
// If len(bounds) is 1 then there are no finite buckets, and that single
// element is the common boundary of the overflow and underflow buckets.
func Distribution(bounds ...float64) *Aggregation {
return &Aggregation{
agg := &Aggregation{
Type: AggTypeDistribution,
Buckets: bounds,
newData: func() AggregationData {
return newDistributionData(bounds)
},
}
agg.newData = func() AggregationData {
return newDistributionData(agg)
}
return agg
}
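To make the bucket arithmetic concrete, a sketch (the view name is illustrative, and latencyMs is the hypothetical measure from the earlier sketch): five bounds yield six buckets.

    // Buckets: [0,25) [25,50) [50,100) [100,200) [200,400) [400,+Inf)
    var latencyView = &view.View{
    	Name:        "example.com/views/latency",
    	Measure:     latencyMs,
    	Aggregation: view.Distribution(25, 50, 100, 200, 400),
    }

Note that canonicalize (in view.go, later in this diff) returns ErrNegativeBucketBounds for negative bounds and silently drops a zero bound.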
// LastValue only reports the last value recorded using this

View file

@ -17,6 +17,9 @@ package view
import (
"math"
"time"
"go.opencensus.io/metric/metricdata"
)
// AggregationData represents an aggregated value from a collection.
@ -24,9 +27,10 @@ import (
// Most users won't directly access aggregation data.
type AggregationData interface {
isAggregationData() bool
addSample(v float64)
addSample(v float64, attachments map[string]interface{}, t time.Time)
clone() AggregationData
equal(other AggregationData) bool
toPoint(t metricdata.Type, time time.Time) metricdata.Point
}
const epsilon = 1e-9
@ -41,7 +45,7 @@ type CountData struct {
func (a *CountData) isAggregationData() bool { return true }
func (a *CountData) addSample(v float64) {
func (a *CountData) addSample(_ float64, _ map[string]interface{}, _ time.Time) {
a.Value = a.Value + 1
}
@ -58,6 +62,15 @@ func (a *CountData) equal(other AggregationData) bool {
return a.Value == a2.Value
}
func (a *CountData) toPoint(metricType metricdata.Type, t time.Time) metricdata.Point {
switch metricType {
case metricdata.TypeCumulativeInt64:
return metricdata.NewInt64Point(t, a.Value)
default:
panic("unsupported metricdata.Type")
}
}
// SumData is the aggregated data for the Sum aggregation.
// A sum aggregation processes data and sums up the recordings.
//
@ -68,8 +81,8 @@ type SumData struct {
func (a *SumData) isAggregationData() bool { return true }
func (a *SumData) addSample(f float64) {
a.Value += f
func (a *SumData) addSample(v float64, _ map[string]interface{}, _ time.Time) {
a.Value += v
}
func (a *SumData) clone() AggregationData {
@ -84,26 +97,45 @@ func (a *SumData) equal(other AggregationData) bool {
return math.Pow(a.Value-a2.Value, 2) < epsilon
}
func (a *SumData) toPoint(metricType metricdata.Type, t time.Time) metricdata.Point {
switch metricType {
case metricdata.TypeCumulativeInt64:
return metricdata.NewInt64Point(t, int64(a.Value))
case metricdata.TypeCumulativeFloat64:
return metricdata.NewFloat64Point(t, a.Value)
default:
panic("unsupported metricdata.Type")
}
}
// DistributionData is the aggregated data for the
// Distribution aggregation.
//
// Most users won't directly access distribution data.
//
// For a distribution with N bounds, the associated DistributionData will have
// N+1 buckets.
type DistributionData struct {
Count int64 // number of data points aggregated
Min float64 // minimum value in the distribution
Max float64 // max value in the distribution
Mean float64 // mean of the distribution
SumOfSquaredDev float64 // sum of the squared deviation from the mean
CountPerBucket []int64 // number of occurrences per bucket
bounds []float64 // histogram distribution of the values
Count int64 // number of data points aggregated
Min float64 // minimum value in the distribution
Max float64 // max value in the distribution
Mean float64 // mean of the distribution
SumOfSquaredDev float64 // sum of the squared deviation from the mean
CountPerBucket []int64 // number of occurrences per bucket
// ExemplarsPerBucket is slice the same length as CountPerBucket containing
// an exemplar for the associated bucket, or nil.
ExemplarsPerBucket []*metricdata.Exemplar
bounds []float64 // histogram distribution of the values
}
func newDistributionData(bounds []float64) *DistributionData {
func newDistributionData(agg *Aggregation) *DistributionData {
bucketCount := len(agg.Buckets) + 1
return &DistributionData{
CountPerBucket: make([]int64, len(bounds)+1),
bounds: bounds,
Min: math.MaxFloat64,
Max: math.SmallestNonzeroFloat64,
CountPerBucket: make([]int64, bucketCount),
ExemplarsPerBucket: make([]*metricdata.Exemplar, bucketCount),
bounds: agg.Buckets,
Min: math.MaxFloat64,
Max: math.SmallestNonzeroFloat64,
}
}
@ -119,46 +151,62 @@ func (a *DistributionData) variance() float64 {
func (a *DistributionData) isAggregationData() bool { return true }
func (a *DistributionData) addSample(f float64) {
if f < a.Min {
a.Min = f
// TODO(songy23): support exemplar attachments.
func (a *DistributionData) addSample(v float64, attachments map[string]interface{}, t time.Time) {
if v < a.Min {
a.Min = v
}
if f > a.Max {
a.Max = f
if v > a.Max {
a.Max = v
}
a.Count++
a.incrementBucketCount(f)
a.addToBucket(v, attachments, t)
if a.Count == 1 {
a.Mean = f
a.Mean = v
return
}
oldMean := a.Mean
a.Mean = a.Mean + (f-a.Mean)/float64(a.Count)
a.SumOfSquaredDev = a.SumOfSquaredDev + (f-oldMean)*(f-a.Mean)
a.Mean = a.Mean + (v-a.Mean)/float64(a.Count)
a.SumOfSquaredDev = a.SumOfSquaredDev + (v-oldMean)*(v-a.Mean)
}
func (a *DistributionData) incrementBucketCount(f float64) {
if len(a.bounds) == 0 {
a.CountPerBucket[0]++
return
}
for i, b := range a.bounds {
if f < b {
a.CountPerBucket[i]++
return
func (a *DistributionData) addToBucket(v float64, attachments map[string]interface{}, t time.Time) {
var count *int64
var i int
var b float64
for i, b = range a.bounds {
if v < b {
count = &a.CountPerBucket[i]
break
}
}
a.CountPerBucket[len(a.bounds)]++
if count == nil { // Last bucket.
i = len(a.bounds)
count = &a.CountPerBucket[i]
}
*count++
if exemplar := getExemplar(v, attachments, t); exemplar != nil {
a.ExemplarsPerBucket[i] = exemplar
}
}
func getExemplar(v float64, attachments map[string]interface{}, t time.Time) *metricdata.Exemplar {
if len(attachments) == 0 {
return nil
}
return &metricdata.Exemplar{
Value: v,
Timestamp: t,
Attachments: attachments,
}
}
func (a *DistributionData) clone() AggregationData {
counts := make([]int64, len(a.CountPerBucket))
copy(counts, a.CountPerBucket)
c := *a
c.CountPerBucket = counts
c.CountPerBucket = append([]int64(nil), a.CountPerBucket...)
c.ExemplarsPerBucket = append([]*metricdata.Exemplar(nil), a.ExemplarsPerBucket...)
return &c
}
@ -181,6 +229,33 @@ func (a *DistributionData) equal(other AggregationData) bool {
return a.Count == a2.Count && a.Min == a2.Min && a.Max == a2.Max && math.Pow(a.Mean-a2.Mean, 2) < epsilon && math.Pow(a.variance()-a2.variance(), 2) < epsilon
}
func (a *DistributionData) toPoint(metricType metricdata.Type, t time.Time) metricdata.Point {
switch metricType {
case metricdata.TypeCumulativeDistribution:
buckets := []metricdata.Bucket{}
for i := 0; i < len(a.CountPerBucket); i++ {
buckets = append(buckets, metricdata.Bucket{
Count: a.CountPerBucket[i],
Exemplar: a.ExemplarsPerBucket[i],
})
}
bucketOptions := &metricdata.BucketOptions{Bounds: a.bounds}
val := &metricdata.Distribution{
Count: a.Count,
Sum: a.Sum(),
SumOfSquaredDeviation: a.SumOfSquaredDev,
BucketOptions: bucketOptions,
Buckets: buckets,
}
return metricdata.NewDistributionPoint(t, val)
default:
// TODO: [rghetia] when we have a use case for TypeGaugeDistribution.
panic("unsupported metricdata.Type")
}
}
// LastValueData returns the last value recorded for LastValue aggregation.
type LastValueData struct {
Value float64
@ -190,7 +265,7 @@ func (l *LastValueData) isAggregationData() bool {
return true
}
func (l *LastValueData) addSample(v float64) {
func (l *LastValueData) addSample(v float64, _ map[string]interface{}, _ time.Time) {
l.Value = v
}
@ -205,3 +280,14 @@ func (l *LastValueData) equal(other AggregationData) bool {
}
return l.Value == a2.Value
}
func (l *LastValueData) toPoint(metricType metricdata.Type, t time.Time) metricdata.Point {
switch metricType {
case metricdata.TypeGaugeInt64:
return metricdata.NewInt64Point(t, int64(l.Value))
case metricdata.TypeGaugeFloat64:
return metricdata.NewFloat64Point(t, l.Value)
default:
panic("unsupported metricdata.Type")
}
}

View file

@ -17,6 +17,7 @@ package view
import (
"sort"
"time"
"go.opencensus.io/internal/tagencoding"
"go.opencensus.io/tag"
@ -31,20 +32,21 @@ type collector struct {
a *Aggregation
}
func (c *collector) addSample(s string, v float64) {
func (c *collector) addSample(s string, v float64, attachments map[string]interface{}, t time.Time) {
aggregator, ok := c.signatures[s]
if !ok {
aggregator = c.a.newData()
c.signatures[s] = aggregator
}
aggregator.addSample(v)
aggregator.addSample(v, attachments, t)
}
// collectRows returns a snapshot of the collected Row values.
func (c *collector) collectedRows(keys []tag.Key) []*Row {
var rows []*Row
rows := make([]*Row, 0, len(c.signatures))
for sig, aggregator := range c.signatures {
tags := decodeTags([]byte(sig), keys)
row := &Row{tags, aggregator}
row := &Row{Tags: tags, Data: aggregator.clone()}
rows = append(rows, row)
}
return rows

View file

@ -13,33 +13,34 @@
// limitations under the License.
//
/*
Package view contains support for collecting and exposing aggregates over stats.
In order to collect measurements, views need to be defined and registered.
A view allows recorded measurements to be filtered and aggregated over a time window.
All recorded measurements can be filtered by a list of tags.
OpenCensus provides several aggregation methods: count, distribution and sum.
Count aggregation only counts the number of measurement points. Distribution
aggregation provides a statistical summary of the aggregated data. Sum aggregation
sums up the measurement points. Aggregations are cumulative.
Users can dynamically create and delete views.
Libraries can export their own views and claim the view names
by registering them themselves.
Exporting
Collected and aggregated data can be exported to a metric collection
backend by registering its exporter.
Multiple exporters can be registered to upload the data to various
different backends. Users need to unregister the exporters once they
no longer are needed.
*/
// Package view contains support for collecting and exposing aggregates over stats.
//
// In order to collect measurements, views need to be defined and registered.
// A view allows recorded measurements to be filtered and aggregated.
//
// All recorded measurements can be grouped by a list of tags.
//
// OpenCensus provides several aggregation methods: Count, Distribution and Sum.
//
// Count only counts the number of measurement points recorded.
// Distribution provides a statistical summary of the aggregated data by counting
// how many recorded measurements fall into each bucket.
// Sum adds up the measurement values.
// LastValue just keeps track of the most recently recorded measurement value.
// All aggregations are cumulative.
//
// Views can be registered and unregistered at any time during program execution.
//
// Libraries can define views but it is recommended that in most cases registering
// views be left up to applications.
//
// Exporting
//
// Collected and aggregated data can be exported to a metric collection
// backend by registering its exporter.
//
// Multiple exporters can be registered to upload the data to various
// different back ends.
package view // import "go.opencensus.io/stats/view"
// TODO(acetechnologist): Add a link to the language independent OpenCensus
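Putting the pieces of this doc comment together, a minimal sketch (names illustrative, usual imports assumed) that counts recorded measurements grouped by one tag; it uses tag.MustNewKey, which is added elsewhere in this bump:

    var (
    	keyMethod = tag.MustNewKey("method")
    	reqCount  = stats.Int64("example.com/measures/requests", "Request count", stats.UnitDimensionless)
    )

    func init() {
    	// Count recorded measurements, grouped by the "method" tag.
    	if err := view.Register(&view.View{
    		Name:        "example.com/views/request_count",
    		Measure:     reqCount,
    		TagKeys:     []tag.Key{keyMethod},
    		Aggregation: view.Count(),
    	}); err != nil {
    		log.Fatal(err)
    	}
    }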

View file

@ -27,6 +27,9 @@ var (
// Exporter takes a significant amount of time to
// process a Data, that work should be done on another goroutine.
//
// It is safe to assume that ExportView will not be called concurrently from
// multiple goroutines.
//
// The Data should not be modified.
type Exporter interface {
ExportView(viewData *Data)
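A sketch of an exporter relying on the serialization guarantee documented here (logExporter is a hypothetical type; usual imports assumed):

    type logExporter struct{}

    // ExportView is never called concurrently, so no locking is needed
    // around the exporter's own state.
    func (logExporter) ExportView(d *view.Data) {
    	for _, row := range d.Rows {
    		log.Printf("%s %v = %v", d.View.Name, row.Tags, row.Data)
    	}
    }

    func init() {
    	view.RegisterExporter(logExporter{})
    	view.SetReportingPeriod(time.Minute)
    }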

View file

@ -17,19 +17,20 @@ package view
import (
"bytes"
"errors"
"fmt"
"reflect"
"sort"
"sync/atomic"
"time"
"go.opencensus.io/metric/metricdata"
"go.opencensus.io/stats"
"go.opencensus.io/stats/internal"
"go.opencensus.io/tag"
)
// View allows users to aggregate the recorded stats.Measurements.
// Views need to be passed to the Subscribe function to be before data will be
// Views need to be passed to the Register function before data will be
// collected and sent to Exporters.
type View struct {
Name string // Name of View. Must be unique. If unset, will default to the name of the Measure.
@ -42,7 +43,7 @@ type View struct {
// Measure is a stats.Measure to aggregate in this view.
Measure stats.Measure
// Aggregation is the aggregation function tp apply to the set of Measurements.
// Aggregation is the aggregation function to apply to the set of Measurements.
Aggregation *Aggregation
}
@ -67,14 +68,19 @@ func (v *View) same(other *View) bool {
v.Measure.Name() == other.Measure.Name()
}
// ErrNegativeBucketBounds error returned if histogram contains negative bounds.
//
// Deprecated: this should not be public.
var ErrNegativeBucketBounds = errors.New("negative bucket bounds not supported")
// canonicalize canonicalizes v by setting explicit
// defaults for Name and Description and sorting the TagKeys
func (v *View) canonicalize() error {
if v.Measure == nil {
return fmt.Errorf("cannot subscribe view %q: measure not set", v.Name)
return fmt.Errorf("cannot register view %q: measure not set", v.Name)
}
if v.Aggregation == nil {
return fmt.Errorf("cannot subscribe view %q: aggregation not set", v.Name)
return fmt.Errorf("cannot register view %q: aggregation not set", v.Name)
}
if v.Name == "" {
v.Name = v.Measure.Name()
@ -88,20 +94,40 @@ func (v *View) canonicalize() error {
sort.Slice(v.TagKeys, func(i, j int) bool {
return v.TagKeys[i].Name() < v.TagKeys[j].Name()
})
sort.Float64s(v.Aggregation.Buckets)
for _, b := range v.Aggregation.Buckets {
if b < 0 {
return ErrNegativeBucketBounds
}
}
// drop 0 bucket silently.
v.Aggregation.Buckets = dropZeroBounds(v.Aggregation.Buckets...)
return nil
}
func dropZeroBounds(bounds ...float64) []float64 {
for i, bound := range bounds {
if bound > 0 {
return bounds[i:]
}
}
return []float64{}
}
// viewInternal is the internal representation of a View.
type viewInternal struct {
view *View // view is the canonicalized View definition associated with this view.
subscribed uint32 // 1 if someone is subscribed and data need to be exported, use atomic to access
collector *collector
view *View // view is the canonicalized View definition associated with this view.
subscribed uint32 // 1 if someone is subscribed and data need to be exported, use atomic to access
collector *collector
metricDescriptor *metricdata.Descriptor
}
func newViewInternal(v *View) (*viewInternal, error) {
return &viewInternal{
view: v,
collector: &collector{make(map[string]AggregationData), v.Aggregation},
view: v,
collector: &collector{make(map[string]AggregationData), v.Aggregation},
metricDescriptor: viewToMetricDescriptor(v),
}, nil
}
@ -127,12 +153,12 @@ func (v *viewInternal) collectedRows() []*Row {
return v.collector.collectedRows(v.view.TagKeys)
}
func (v *viewInternal) addSample(m *tag.Map, val float64) {
func (v *viewInternal) addSample(m *tag.Map, val float64, attachments map[string]interface{}, t time.Time) {
if !v.isSubscribed() {
return
}
sig := string(encodeWithKeys(m, v.view.TagKeys))
v.collector.addSample(sig, val)
v.collector.addSample(sig, val, attachments, t)
}
// A Data is a set of rows about usage of the single measure associated
@ -163,7 +189,7 @@ func (r *Row) String() string {
}
// Equal returns true if both rows are equal. Tags are expected to be ordered
// by the key name. Even both rows have the same tags but the tags appear in
// by the key name. Even if both rows have the same tags but the tags appear in
// different orders it will return false.
func (r *Row) Equal(other *Row) bool {
if r == other {
@ -172,11 +198,23 @@ func (r *Row) Equal(other *Row) bool {
return reflect.DeepEqual(r.Tags, other.Tags) && r.Data.equal(other.Data)
}
func checkViewName(name string) error {
if len(name) > internal.MaxNameLength {
return fmt.Errorf("view name cannot be larger than %v", internal.MaxNameLength)
const maxNameLength = 255
// Returns true if the given string contains only printable characters.
func isPrintable(str string) bool {
for _, r := range str {
if !(r >= ' ' && r <= '~') {
return false
}
}
if !internal.IsPrintable(name) {
return true
}
func checkViewName(name string) error {
if len(name) > maxNameLength {
return fmt.Errorf("view name cannot be larger than %v", maxNameLength)
}
if !isPrintable(name) {
return fmt.Errorf("view name needs to be an ASCII string")
}
return nil

149 vendor/go.opencensus.io/stats/view/view_to_metric.go generated vendored Normal file
View file

@ -0,0 +1,149 @@
// Copyright 2019, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
package view
import (
"time"
"go.opencensus.io/metric/metricdata"
"go.opencensus.io/stats"
)
func getUnit(unit string) metricdata.Unit {
switch unit {
case "1":
return metricdata.UnitDimensionless
case "ms":
return metricdata.UnitMilliseconds
case "By":
return metricdata.UnitBytes
}
return metricdata.UnitDimensionless
}
func getType(v *View) metricdata.Type {
m := v.Measure
agg := v.Aggregation
switch agg.Type {
case AggTypeSum:
switch m.(type) {
case *stats.Int64Measure:
return metricdata.TypeCumulativeInt64
case *stats.Float64Measure:
return metricdata.TypeCumulativeFloat64
default:
panic("unexpected measure type")
}
case AggTypeDistribution:
return metricdata.TypeCumulativeDistribution
case AggTypeLastValue:
switch m.(type) {
case *stats.Int64Measure:
return metricdata.TypeGaugeInt64
case *stats.Float64Measure:
return metricdata.TypeGaugeFloat64
default:
panic("unexpected measure type")
}
case AggTypeCount:
switch m.(type) {
case *stats.Int64Measure:
return metricdata.TypeCumulativeInt64
case *stats.Float64Measure:
return metricdata.TypeCumulativeInt64
default:
panic("unexpected measure type")
}
default:
panic("unexpected aggregation type")
}
}
func getLabelKeys(v *View) []metricdata.LabelKey {
labelKeys := []metricdata.LabelKey{}
for _, k := range v.TagKeys {
labelKeys = append(labelKeys, metricdata.LabelKey{Key: k.Name()})
}
return labelKeys
}
func viewToMetricDescriptor(v *View) *metricdata.Descriptor {
return &metricdata.Descriptor{
Name: v.Name,
Description: v.Description,
Unit: convertUnit(v),
Type: getType(v),
LabelKeys: getLabelKeys(v),
}
}
func convertUnit(v *View) metricdata.Unit {
switch v.Aggregation.Type {
case AggTypeCount:
return metricdata.UnitDimensionless
default:
return getUnit(v.Measure.Unit())
}
}
func toLabelValues(row *Row, expectedKeys []metricdata.LabelKey) []metricdata.LabelValue {
labelValues := []metricdata.LabelValue{}
tagMap := make(map[string]string)
for _, tag := range row.Tags {
tagMap[tag.Key.Name()] = tag.Value
}
for _, key := range expectedKeys {
if val, ok := tagMap[key.Key]; ok {
labelValues = append(labelValues, metricdata.NewLabelValue(val))
} else {
labelValues = append(labelValues, metricdata.LabelValue{})
}
}
return labelValues
}
func rowToTimeseries(v *viewInternal, row *Row, now time.Time, startTime time.Time) *metricdata.TimeSeries {
return &metricdata.TimeSeries{
Points: []metricdata.Point{row.Data.toPoint(v.metricDescriptor.Type, now)},
LabelValues: toLabelValues(row, v.metricDescriptor.LabelKeys),
StartTime: startTime,
}
}
func viewToMetric(v *viewInternal, now time.Time, startTime time.Time) *metricdata.Metric {
if v.metricDescriptor.Type == metricdata.TypeGaugeInt64 ||
v.metricDescriptor.Type == metricdata.TypeGaugeFloat64 {
startTime = time.Time{}
}
rows := v.collectedRows()
if len(rows) == 0 {
return nil
}
ts := []*metricdata.TimeSeries{}
for _, row := range rows {
ts = append(ts, rowToTimeseries(v, row, now, startTime))
}
m := &metricdata.Metric{
Descriptor: *v.metricDescriptor,
TimeSeries: ts,
}
return m
}

View file

@ -17,8 +17,11 @@ package view
import (
"fmt"
"sync"
"time"
"go.opencensus.io/metric/metricdata"
"go.opencensus.io/metric/metricproducer"
"go.opencensus.io/stats"
"go.opencensus.io/stats/internal"
"go.opencensus.io/tag"
@ -43,14 +46,15 @@ type worker struct {
timer *time.Ticker
c chan command
quit, done chan bool
mu sync.RWMutex
}
var defaultWorker *worker
var defaultReportingDuration = 10 * time.Second
// Find returns a subscribed view associated with this name.
// If no subscribed view is found, nil is returned.
// Find returns a registered view associated with this name.
// If no registered view is found, nil is returned.
func Find(name string) (v *View) {
req := &getViewByNameReq{
name: name,
@ -62,13 +66,8 @@ func Find(name string) (v *View) {
}
// Register begins collecting data for the given views.
// Once a view is subscribed, it reports data to the registered exporters.
// Once a view is registered, it reports data to the registered exporters.
func Register(views ...*View) error {
for _, v := range views {
if err := v.canonicalize(); err != nil {
return err
}
}
req := &registerViewReq{
views: views,
err: make(chan error),
@ -94,6 +93,8 @@ func Unregister(views ...*View) {
<-req.done
}
// RetrieveData gets a snapshot of the data collected for the view registered
// with the given name. It is intended for testing only.
func RetrieveData(viewName string) ([]*Row, error) {
req := &retrieveDataReq{
now: time.Now(),
@ -105,17 +106,23 @@ func RetrieveData(viewName string) ([]*Row, error) {
return resp.rows, resp.err
}
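A test-style sketch of this snapshot API (the view name is hypothetical; RetrieveData is documented above as testing-only):

    func TestLatencyView(t *testing.T) {
    	stats.Record(context.Background(), latencyMs.M(12))
    	// Worker commands are handled in order, so the Record above
    	// should be visible to the snapshot below.
    	rows, err := view.RetrieveData("example.com/views/latency")
    	if err != nil {
    		t.Fatal(err)
    	}
    	for _, r := range rows {
    		t.Logf("tags=%v data=%v", r.Tags, r.Data)
    	}
    }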
func record(tags *tag.Map, ms interface{}) {
func record(tags *tag.Map, ms interface{}, attachments map[string]interface{}) {
req := &recordReq{
tm: tags,
ms: ms.([]stats.Measurement),
tm: tags,
ms: ms.([]stats.Measurement),
attachments: attachments,
t: time.Now(),
}
defaultWorker.c <- req
}
// SetReportingPeriod sets the interval between reporting aggregated views in
// the program. If duration is less than or
// equal to zero, it enables the default behavior.
// the program. If duration is less than or equal to zero, it enables the
// default behavior.
//
// Note: each exporter makes different promises about what the lowest supported
// duration is. For example, the Stackdriver exporter recommends a value no
// lower than 1 minute. Consult the documentation of each exporter you use.
func SetReportingPeriod(d time.Duration) {
// TODO(acetechnologist): ensure that the duration d is more than a certain
// value. e.g. 1s
@ -140,6 +147,9 @@ func newWorker() *worker {
}
func (w *worker) start() {
prodMgr := metricproducer.GlobalManager()
prodMgr.AddProducer(w)
for {
select {
case cmd := <-w.c:
@ -156,6 +166,9 @@ func (w *worker) start() {
}
func (w *worker) stop() {
prodMgr := metricproducer.GlobalManager()
prodMgr.DeleteProducer(w)
w.quit <- true
<-w.done
}
@ -173,13 +186,15 @@ func (w *worker) getMeasureRef(name string) *measureRef {
}
func (w *worker) tryRegisterView(v *View) (*viewInternal, error) {
w.mu.Lock()
defer w.mu.Unlock()
vi, err := newViewInternal(v)
if err != nil {
return nil, err
}
if x, ok := w.views[vi.view.Name]; ok {
if !x.view.same(vi.view) {
return nil, fmt.Errorf("cannot subscribe view %q; a different view with the same name is already subscribed", v.Name)
return nil, fmt.Errorf("cannot register view %q; a different view with the same name is already registered", v.Name)
}
// the view is already registered so there is nothing to do and the
@ -192,40 +207,75 @@ func (w *worker) tryRegisterView(v *View) (*viewInternal, error) {
return vi, nil
}
func (w *worker) unregisterView(viewName string) {
w.mu.Lock()
defer w.mu.Unlock()
delete(w.views, viewName)
}
func (w *worker) reportView(v *viewInternal, now time.Time) {
if !v.isSubscribed() {
return
}
rows := v.collectedRows()
_, ok := w.startTimes[v]
if !ok {
w.startTimes[v] = now
}
viewData := &Data{
View: v.view,
Start: w.startTimes[v],
End: time.Now(),
Rows: rows,
}
exportersMu.Lock()
for e := range exporters {
e.ExportView(viewData)
}
exportersMu.Unlock()
}
func (w *worker) reportUsage(now time.Time) {
w.mu.Lock()
defer w.mu.Unlock()
for _, v := range w.views {
if !v.isSubscribed() {
continue
}
rows := v.collectedRows()
_, ok := w.startTimes[v]
if !ok {
w.startTimes[v] = now
}
// Make sure collector is never going
// to mutate the exported data.
rows = deepCopyRowData(rows)
viewData := &Data{
View: v.view,
Start: w.startTimes[v],
End: time.Now(),
Rows: rows,
}
exportersMu.Lock()
for e := range exporters {
e.ExportView(viewData)
}
exportersMu.Unlock()
w.reportView(v, now)
}
}
func deepCopyRowData(rows []*Row) []*Row {
newRows := make([]*Row, 0, len(rows))
for _, r := range rows {
newRows = append(newRows, &Row{
Data: r.Data.clone(),
Tags: r.Tags,
})
func (w *worker) toMetric(v *viewInternal, now time.Time) *metricdata.Metric {
if !v.isSubscribed() {
return nil
}
return newRows
_, ok := w.startTimes[v]
if !ok {
w.startTimes[v] = now
}
var startTime time.Time
if v.metricDescriptor.Type == metricdata.TypeGaugeInt64 ||
v.metricDescriptor.Type == metricdata.TypeGaugeFloat64 {
startTime = time.Time{}
} else {
startTime = w.startTimes[v]
}
return viewToMetric(v, now, startTime)
}
// Read reads all view data and returns them as metrics.
// It is typically invoked by metric reader to export stats in metric format.
func (w *worker) Read() []*metricdata.Metric {
w.mu.Lock()
defer w.mu.Unlock()
now := time.Now()
metrics := make([]*metricdata.Metric, 0, len(w.views))
for _, v := range w.views {
metric := w.toMetric(v, now)
if metric != nil {
metrics = append(metrics, metric)
}
}
return metrics
}

View file

@ -56,6 +56,12 @@ type registerViewReq struct {
}
func (cmd *registerViewReq) handleCommand(w *worker) {
for _, v := range cmd.views {
if err := v.canonicalize(); err != nil {
cmd.err <- err
return
}
}
var errstr []string
for _, view := range cmd.views {
vi, err := w.tryRegisterView(view)
@ -73,7 +79,7 @@ func (cmd *registerViewReq) handleCommand(w *worker) {
}
}
// unregisterFromViewReq is the command to unsubscribe to a view. Has no
// unregisterFromViewReq is the command to unregister a view. It has no
// impact on the data collection for clients that are pulling data from the
// library.
type unregisterFromViewReq struct {
@ -88,13 +94,16 @@ func (cmd *unregisterFromViewReq) handleCommand(w *worker) {
continue
}
// Report pending data for this view before removing it.
w.reportView(vi, time.Now())
vi.unsubscribe()
if !vi.isSubscribed() {
// this was the last subscription and view is not collecting anymore.
// The collected data can be cleared.
vi.clearRows()
}
delete(w.views, name)
w.unregisterView(name)
}
cmd.done <- struct{}{}
}
@ -112,6 +121,8 @@ type retrieveDataResp struct {
}
func (cmd *retrieveDataReq) handleCommand(w *worker) {
w.mu.Lock()
defer w.mu.Unlock()
vi, ok := w.views[cmd.v]
if !ok {
cmd.c <- &retrieveDataResp{
@ -137,24 +148,28 @@ func (cmd *retrieveDataReq) handleCommand(w *worker) {
// recordReq is the command to record data related to multiple measures
// at once.
type recordReq struct {
tm *tag.Map
ms []stats.Measurement
tm *tag.Map
ms []stats.Measurement
attachments map[string]interface{}
t time.Time
}
func (cmd *recordReq) handleCommand(w *worker) {
w.mu.Lock()
defer w.mu.Unlock()
for _, m := range cmd.ms {
if (m == stats.Measurement{}) { // not subscribed
if (m == stats.Measurement{}) { // not registered
continue
}
ref := w.getMeasureRef(m.Measure().Name())
for v := range ref.views {
v.addSample(cmd.tm, m.Value())
v.addSample(cmd.tm, m.Value(), cmd.attachments, time.Now())
}
}
}
// setReportingPeriodReq is the command to modify the duration between
// reporting the collected data to the subscribed clients.
// reporting the collected data to the registered clients.
type setReportingPeriodReq struct {
d time.Duration
c chan bool

View file

@ -15,7 +15,9 @@
package tag
import "context"
import (
"context"
)
// FromContext returns the tag map stored in the context.
func FromContext(ctx context.Context) *Map {

11 vendor/go.opencensus.io/tag/key.go generated vendored
View file

@ -21,7 +21,7 @@ type Key struct {
}
// NewKey creates or retrieves a string key identified by name.
// Calling NewKey consequently with the same name returns the same key.
// Calling NewKey more than once with the same name returns the same key.
func NewKey(name string) (Key, error) {
if !checkKeyName(name) {
return Key{}, errInvalidKeyName
@ -29,6 +29,15 @@ func NewKey(name string) (Key, error) {
return Key{name: name}, nil
}
// MustNewKey returns a key with the given name, and panics if name is an invalid key name.
func MustNewKey(name string) Key {
k, err := NewKey(name)
if err != nil {
panic(err)
}
return k
}
// Name returns the name of the key.
func (k Key) Name() string {
return k.name

66 vendor/go.opencensus.io/tag/map.go generated vendored
View file

@ -28,10 +28,15 @@ type Tag struct {
Value string
}
type tagContent struct {
value string
m metadatas
}
// Map is a map of tags. Use New to create a context containing
// a new Map.
type Map struct {
m map[Key]string
m map[Key]tagContent
}
// Value returns the value for the key if a value for the key exists.
@ -40,7 +45,7 @@ func (m *Map) Value(k Key) (string, bool) {
return "", false
}
v, ok := m.m[k]
return v, ok
return v.value, ok
}
func (m *Map) String() string {
@ -62,21 +67,21 @@ func (m *Map) String() string {
return buffer.String()
}
func (m *Map) insert(k Key, v string) {
func (m *Map) insert(k Key, v string, md metadatas) {
if _, ok := m.m[k]; ok {
return
}
m.m[k] = v
m.m[k] = tagContent{value: v, m: md}
}
func (m *Map) update(k Key, v string) {
func (m *Map) update(k Key, v string, md metadatas) {
if _, ok := m.m[k]; ok {
m.m[k] = v
m.m[k] = tagContent{value: v, m: md}
}
}
func (m *Map) upsert(k Key, v string) {
m.m[k] = v
func (m *Map) upsert(k Key, v string, md metadatas) {
m.m[k] = tagContent{value: v, m: md}
}
func (m *Map) delete(k Key) {
@ -84,7 +89,7 @@ func (m *Map) delete(k Key) {
}
func newMap() *Map {
return &Map{m: make(map[Key]string)}
return &Map{m: make(map[Key]tagContent)}
}
// Mutator modifies a tag map.
@ -95,13 +100,17 @@ type Mutator interface {
// Insert returns a mutator that inserts a
// value associated with k. If k already exists in the tag map,
// mutator doesn't update the value.
func Insert(k Key, v string) Mutator {
// Metadata applies metadata to the tag. It is optional.
// Metadatas are applied in the order in which they are provided.
// If more than one metadata updates the same attribute then
// the update from the last metadata prevails.
func Insert(k Key, v string, mds ...Metadata) Mutator {
return &mutator{
fn: func(m *Map) (*Map, error) {
if !checkValue(v) {
return nil, errInvalidValue
}
m.insert(k, v)
m.insert(k, v, createMetadatas(mds...))
return m, nil
},
}
@ -110,13 +119,17 @@ func Insert(k Key, v string) Mutator {
// Update returns a mutator that updates the
// value of the tag associated with k with v. If k doesn't
// exist in the tag map, the mutator doesn't insert the value.
func Update(k Key, v string) Mutator {
// Metadata applies metadata to the tag. It is optional.
// Metadatas are applied in the order in which they are provided.
// If more than one metadata updates the same attribute then
// the update from the last metadata prevails.
func Update(k Key, v string, mds ...Metadata) Mutator {
return &mutator{
fn: func(m *Map) (*Map, error) {
if !checkValue(v) {
return nil, errInvalidValue
}
m.update(k, v)
m.update(k, v, createMetadatas(mds...))
return m, nil
},
}
@ -126,18 +139,37 @@ func Update(k Key, v string) Mutator {
// value of the tag associated with k with v. It inserts the
// value if k doesn't exist already. It mutates the value
// if k already exists.
func Upsert(k Key, v string) Mutator {
// Metadata applies metadata to the tag. It is optional.
// Metadatas are applied in the order in which they are provided.
// If more than one metadata updates the same attribute then
// the update from the last metadata prevails.
func Upsert(k Key, v string, mds ...Metadata) Mutator {
return &mutator{
fn: func(m *Map) (*Map, error) {
if !checkValue(v) {
return nil, errInvalidValue
}
m.upsert(k, v)
m.upsert(k, v, createMetadatas(mds...))
return m, nil
},
}
}
func createMetadatas(mds ...Metadata) metadatas {
var metas metadatas
if len(mds) > 0 {
for _, md := range mds {
if md != nil {
md(&metas)
}
}
} else {
WithTTL(TTLUnlimitedPropagation)(&metas)
}
return metas
}
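The three mutators differ only in how they treat an existing key; a sketch (keys hypothetical, usual imports assumed):

    var (
    	keyMethod = tag.MustNewKey("method")
    	keyStatus = tag.MustNewKey("status")
    	keyUser   = tag.MustNewKey("user")
    )

    func tagged(ctx context.Context) (context.Context, error) {
    	return tag.New(ctx,
    		tag.Insert(keyMethod, "GET"), // no-op if "method" already has a value
    		tag.Upsert(keyStatus, "200"), // insert or overwrite
    		tag.Update(keyUser, "alice"), // no-op unless "user" already exists
    	)
    }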
// Delete returns a mutator that deletes
// the value associated with k.
func Delete(k Key) Mutator {
@ -160,10 +192,10 @@ func New(ctx context.Context, mutator ...Mutator) (context.Context, error) {
if !checkKeyName(k.Name()) {
return ctx, fmt.Errorf("key:%q: %v", k, errInvalidKeyName)
}
if !checkValue(v) {
if !checkValue(v.value) {
return ctx, fmt.Errorf("key:%q value:%q: %v", k.Name(), v, errInvalidValue)
}
m.insert(k, v)
m.insert(k, v.value, v.m)
}
}
var err error

View file

@ -162,14 +162,19 @@ func (eg *encoderGRPC) bytes() []byte {
// Encode encodes the tag map into a []byte. It is useful to propagate
// the tag maps on the wire in binary format.
func Encode(m *Map) []byte {
if m == nil {
return nil
}
eg := &encoderGRPC{
buf: make([]byte, len(m.m)),
}
eg.writeByte(byte(tagsVersionID))
eg.writeByte(tagsVersionID)
for k, v := range m.m {
eg.writeByte(byte(keyTypeString))
eg.writeStringWithVarintLen(k.name)
eg.writeBytesWithVarintLen([]byte(v))
if v.m.ttl.ttl == valueTTLUnlimitedPropagation {
eg.writeByte(byte(keyTypeString))
eg.writeStringWithVarintLen(k.name)
eg.writeBytesWithVarintLen([]byte(v.value))
}
}
return eg.bytes()
}
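A round-trip sketch of the binary codec (the nil guard added here makes Encode of a nil map return nil instead of panicking):

    func roundTrip(ctx context.Context) (*tag.Map, error) {
    	// Safe even if ctx carries no tag map: FromContext returns nil
    	// and Encode(nil) now returns nil.
    	buf := tag.Encode(tag.FromContext(ctx))
    	return tag.Decode(buf)
    }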
@ -177,45 +182,58 @@ func Encode(m *Map) []byte {
// Decode decodes the given []byte into a tag map.
func Decode(bytes []byte) (*Map, error) {
ts := newMap()
err := DecodeEach(bytes, ts.upsert)
if err != nil {
// no partial failures
return nil, err
}
return ts, nil
}
// DecodeEach decodes the given serialized tag map, calling handler for each
// tag key and value decoded.
func DecodeEach(bytes []byte, fn func(key Key, val string, md metadatas)) error {
eg := &encoderGRPC{
buf: bytes,
}
if len(eg.buf) == 0 {
return ts, nil
return nil
}
version := eg.readByte()
if version > tagsVersionID {
return nil, fmt.Errorf("cannot decode: unsupported version: %q; supports only up to: %q", version, tagsVersionID)
return fmt.Errorf("cannot decode: unsupported version: %q; supports only up to: %q", version, tagsVersionID)
}
for !eg.readEnded() {
typ := keyType(eg.readByte())
if typ != keyTypeString {
return nil, fmt.Errorf("cannot decode: invalid key type: %q", typ)
return fmt.Errorf("cannot decode: invalid key type: %q", typ)
}
k, err := eg.readBytesWithVarintLen()
if err != nil {
return nil, err
return err
}
v, err := eg.readBytesWithVarintLen()
if err != nil {
return nil, err
return err
}
key, err := NewKey(string(k))
if err != nil {
return nil, err // no partial failures
return err
}
val := string(v)
if !checkValue(val) {
return nil, errInvalidValue // no partial failures
return errInvalidValue
}
fn(key, val, createMetadatas(WithTTL(TTLUnlimitedPropagation)))
if err != nil {
return err
}
ts.upsert(key, val)
}
return ts, nil
return nil
}

52 vendor/go.opencensus.io/tag/metadata.go generated vendored Normal file
View file

@ -0,0 +1,52 @@
// Copyright 2019, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
package tag
const (
// valueTTLNoPropagation prevents tag from propagating.
valueTTLNoPropagation = 0
// valueTTLUnlimitedPropagation allows tag to propagate without any limits on number of hops.
valueTTLUnlimitedPropagation = -1
)
// TTL is metadata that specifies number of hops a tag can propagate.
// Details about TTL metadata are specified at https://github.com/census-instrumentation/opencensus-specs/blob/master/tags/TagMap.md#tagmetadata
type TTL struct {
ttl int
}
var (
// TTLUnlimitedPropagation is TTL metadata that allows tag to propagate without any limits on number of hops.
TTLUnlimitedPropagation = TTL{ttl: valueTTLUnlimitedPropagation}
// TTLNoPropagation is TTL metadata that prevents tag from propagating.
TTLNoPropagation = TTL{ttl: valueTTLNoPropagation}
)
type metadatas struct {
ttl TTL
}
// Metadata applies metadatas specified by the function.
type Metadata func(*metadatas)
// WithTTL applies metadata with provided ttl.
func WithTTL(ttl TTL) Metadata {
return func(m *metadatas) {
m.ttl = ttl
}
}
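TTL interacts with binary propagation: tag.Encode (in map_codec.go above) only serializes tags whose TTL is unlimited, which is also the default createMetadatas applies when no metadata is given. A sketch keeping one tag process-local (key hypothetical):

    var keyRequestID = tag.MustNewKey("request_id")

    func local(ctx context.Context) (context.Context, error) {
    	// This tag is skipped by tag.Encode and so never crosses the wire.
    	return tag.New(ctx,
    		tag.Upsert(keyRequestID, "42", tag.WithTTL(tag.TTLNoPropagation)),
    	)
    }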

View file

@ -25,7 +25,7 @@ func do(ctx context.Context, f func(ctx context.Context)) {
m := FromContext(ctx)
keyvals := make([]string, 0, 2*len(m.m))
for k, v := range m.m {
keyvals = append(keyvals, k.Name(), v)
keyvals = append(keyvals, k.Name(), v.value)
}
pprof.Do(ctx, pprof.Labels(keyvals...), f)
}

View file

@ -59,6 +59,11 @@ func Int64Attribute(key string, value int64) Attribute {
return Attribute{key: key, value: value}
}
// Float64Attribute returns a float64-valued attribute.
func Float64Attribute(key string, value float64) Attribute {
return Attribute{key: key, value: value}
}
// StringAttribute returns a string-valued attribute.
func StringAttribute(key string, value string) Attribute {
return Attribute{key: key, value: value}
@ -71,8 +76,8 @@ type LinkType int32
// LinkType values.
const (
LinkTypeUnspecified LinkType = iota // The relationship of the two spans is unknown.
LinkTypeChild // The current span is a child of the linked span.
LinkTypeParent // The current span is the parent of the linked span.
LinkTypeChild // The linked span is a child of the current span.
LinkTypeParent // The linked span is the parent of the current span.
)
// Link represents a reference from one span to another span.

View file

@ -14,7 +14,11 @@
package trace
import "go.opencensus.io/trace/internal"
import (
"sync"
"go.opencensus.io/trace/internal"
)
// Config represents the global tracing configuration.
type Config struct {
@ -23,12 +27,42 @@ type Config struct {
// IDGenerator is for internal use only.
IDGenerator internal.IDGenerator
// MaxAnnotationEventsPerSpan is max number of annotation events per span
MaxAnnotationEventsPerSpan int
// MaxMessageEventsPerSpan is max number of message events per span
MaxMessageEventsPerSpan int
// MaxAttributesPerSpan is max number of attributes per span
MaxAttributesPerSpan int
// MaxLinksPerSpan is max number of links per span
MaxLinksPerSpan int
}
var configWriteMu sync.Mutex
const (
// DefaultMaxAnnotationEventsPerSpan is default max number of annotation events per span
DefaultMaxAnnotationEventsPerSpan = 32
// DefaultMaxMessageEventsPerSpan is default max number of message events per span
DefaultMaxMessageEventsPerSpan = 128
// DefaultMaxAttributesPerSpan is default max number of attributes per span
DefaultMaxAttributesPerSpan = 32
// DefaultMaxLinksPerSpan is default max number of links per span
DefaultMaxLinksPerSpan = 32
)
// ApplyConfig applies changes to the global tracing configuration.
//
// Fields not provided in the given config are going to be preserved.
func ApplyConfig(cfg Config) {
configWriteMu.Lock()
defer configWriteMu.Unlock()
c := *config.Load().(*Config)
if cfg.DefaultSampler != nil {
c.DefaultSampler = cfg.DefaultSampler
@ -36,5 +70,17 @@ func ApplyConfig(cfg Config) {
if cfg.IDGenerator != nil {
c.IDGenerator = cfg.IDGenerator
}
if cfg.MaxAnnotationEventsPerSpan > 0 {
c.MaxAnnotationEventsPerSpan = cfg.MaxAnnotationEventsPerSpan
}
if cfg.MaxMessageEventsPerSpan > 0 {
c.MaxMessageEventsPerSpan = cfg.MaxMessageEventsPerSpan
}
if cfg.MaxAttributesPerSpan > 0 {
c.MaxAttributesPerSpan = cfg.MaxAttributesPerSpan
}
if cfg.MaxLinksPerSpan > 0 {
c.MaxLinksPerSpan = cfg.MaxLinksPerSpan
}
config.Store(&c)
}
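Since zero-valued fields are preserved, partial updates are safe; a sketch that raises two limits and samples 1% of traces:

    // Unset fields keep their previous values, so this only changes
    // the sampler and the two limits named here.
    trace.ApplyConfig(trace.Config{
    	DefaultSampler:       trace.ProbabilitySampler(0.01),
    	MaxAttributesPerSpan: 64,
    	MaxLinksPerSpan:      64,
    })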

View file

@ -32,6 +32,8 @@ to sample a subset of traces, or use AlwaysSample to collect a trace on every ru
trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})
Be careful about using trace.AlwaysSample in a production application with
significant traffic: a new trace will be started and exported for every request.
Adding Spans to a Trace
@ -42,7 +44,7 @@ It is common to want to capture all the activity of a function call in a span. F
this to work, the function must take a context.Context as a parameter. Add these two
lines to the top of the function:
ctx, span := trace.StartSpan(ctx, "my.org/Run")
ctx, span := trace.StartSpan(ctx, "example.com/Run")
defer span.End()
StartSpan will create a new top-level span if the context

38 vendor/go.opencensus.io/trace/evictedqueue.go generated vendored Normal file
View file

@ -0,0 +1,38 @@
// Copyright 2019, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package trace
type evictedQueue struct {
queue []interface{}
capacity int
droppedCount int
}
func newEvictedQueue(capacity int) *evictedQueue {
eq := &evictedQueue{
capacity: capacity,
queue: make([]interface{}, 0),
}
return eq
}
func (eq *evictedQueue) add(value interface{}) {
if len(eq.queue) == eq.capacity {
eq.queue = eq.queue[1:]
eq.droppedCount++
}
eq.queue = append(eq.queue, value)
}

View file

@ -16,6 +16,7 @@ package trace
import (
"sync"
"sync/atomic"
"time"
)
@ -30,9 +31,11 @@ type Exporter interface {
ExportSpan(s *SpanData)
}
type exportersMap map[Exporter]struct{}
var (
exportersMu sync.Mutex
exporters map[Exporter]struct{}
exporterMu sync.Mutex
exporters atomic.Value
)
// RegisterExporter adds to the list of Exporters that will receive sampled
@ -40,20 +43,31 @@ var (
//
// Binaries can register exporters, libraries shouldn't register exporters.
func RegisterExporter(e Exporter) {
exportersMu.Lock()
if exporters == nil {
exporters = make(map[Exporter]struct{})
exporterMu.Lock()
new := make(exportersMap)
if old, ok := exporters.Load().(exportersMap); ok {
for k, v := range old {
new[k] = v
}
}
exporters[e] = struct{}{}
exportersMu.Unlock()
new[e] = struct{}{}
exporters.Store(new)
exporterMu.Unlock()
}
// UnregisterExporter removes from the list of Exporters the Exporter that was
// registered with the given name.
func UnregisterExporter(e Exporter) {
exportersMu.Lock()
delete(exporters, e)
exportersMu.Unlock()
exporterMu.Lock()
new := make(exportersMap)
if old, ok := exporters.Load().(exportersMap); ok {
for k, v := range old {
new[k] = v
}
}
delete(new, e)
exporters.Store(new)
exporterMu.Unlock()
}
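A sketch of a trace exporter that surfaces the Dropped* counters added to SpanData below (printExporter is a hypothetical type; usual imports assumed):

    type printExporter struct{}

    func (printExporter) ExportSpan(sd *trace.SpanData) {
    	log.Printf("span %q dropped: attrs=%d annotations=%d events=%d links=%d",
    		sd.Name, sd.DroppedAttributeCount, sd.DroppedAnnotationCount,
    		sd.DroppedMessageEventCount, sd.DroppedLinkCount)
    }

    func init() {
    	trace.RegisterExporter(printExporter{})
    }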
// SpanData contains all the information collected by a Span.
@ -71,6 +85,13 @@ type SpanData struct {
Annotations []Annotation
MessageEvents []MessageEvent
Status
Links []Link
HasRemoteParent bool
Links []Link
HasRemoteParent bool
DroppedAttributeCount int
DroppedAnnotationCount int
DroppedMessageEventCount int
DroppedLinkCount int
// ChildSpanCount holds the number of child span created for this span.
ChildSpanCount int
}

View file

@ -15,6 +15,7 @@
// Package internal provides trace internals.
package internal
// IDGenerator allows custom generators for TraceId and SpanId.
type IDGenerator interface {
NewTraceID() [16]byte
NewSpanID() [8]byte

61 vendor/go.opencensus.io/trace/lrumap.go generated vendored Normal file
View file

@ -0,0 +1,61 @@
// Copyright 2019, OpenCensus Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package trace
import (
"github.com/golang/groupcache/lru"
)
// A simple lru.Cache wrapper that tracks the keys of the current contents and
// the cumulative number of evicted items.
type lruMap struct {
cacheKeys map[lru.Key]bool
cache *lru.Cache
droppedCount int
}
func newLruMap(size int) *lruMap {
lm := &lruMap{
cacheKeys: make(map[lru.Key]bool),
cache: lru.New(size),
droppedCount: 0,
}
lm.cache.OnEvicted = func(key lru.Key, value interface{}) {
delete(lm.cacheKeys, key)
lm.droppedCount++
}
return lm
}
func (lm lruMap) len() int {
return lm.cache.Len()
}
func (lm lruMap) keys() []interface{} {
keys := []interface{}{}
for k := range lm.cacheKeys {
keys = append(keys, k)
}
return keys
}
func (lm *lruMap) add(key, value interface{}) {
lm.cacheKeys[lru.Key(key)] = true
lm.cache.Add(lru.Key(key), value)
}
func (lm *lruMap) get(key interface{}) (interface{}, bool) {
return lm.cache.Get(key)
}

Some files were not shown because too many files have changed in this diff.