vendor: regenerate
- all changes here are attributed to differences in behaviour, namely:
  - resolution of secondary test dependencies
  - pruning of non-Go files

Signed-off-by: Ilya Dmitrichenko <errordeveloper@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
parent a46f968229
commit e5d28115ee

771 changed files with 120071 additions and 18747 deletions
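For context, a vendored tree like this is normally refreshed by re-running the module tooling rather than by hand-editing files under `vendor/`. A minimal sketch of such a regeneration, assuming Go modules (the exact tool and flags behind this commit are not stated here):

``` sh
# Re-resolve the module graph; differences in how tools resolve secondary
# (transitive) test dependencies show up at this step.
go mod tidy

# Rebuild vendor/ from go.mod; `go mod vendor` copies only the files needed
# to build the listed packages, which accounts for pruning of non-Go files.
go mod vendor

# Commit the regenerated tree wholesale, with sign-off.
git add go.mod go.sum vendor/
git commit -s -m "vendor: regenerate"
```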
1319 vendor/cloud.google.com/go/CHANGES.md generated vendored Normal file

File diff suppressed because it is too large
44 vendor/cloud.google.com/go/CODE_OF_CONDUCT.md generated vendored Normal file
@@ -0,0 +1,44 @@
# Contributor Code of Conduct

As contributors and maintainers of this project,
and in the interest of fostering an open and welcoming community,
we pledge to respect all people who contribute through reporting issues,
posting feature requests, updating documentation,
submitting pull requests or patches, and other activities.

We are committed to making participation in this project
a harassment-free experience for everyone,
regardless of level of experience, gender, gender identity and expression,
sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, or nationality.

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing other's private information,
such as physical or electronic
addresses, without explicit permission
* Other unethical or unprofessional conduct.

Project maintainers have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct.
By adopting this Code of Conduct,
project maintainers commit themselves to fairly and consistently
applying these principles to every aspect of managing this project.
Project maintainers who do not follow or enforce the Code of Conduct
may be permanently removed from the project team.

This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.

Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported by opening an issue
or contacting one or more of the project maintainers.

This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.2.0,
available at [http://contributor-covenant.org/version/1/2/0/](http://contributor-covenant.org/version/1/2/0/)
238 vendor/cloud.google.com/go/CONTRIBUTING.md generated vendored Normal file
@@ -0,0 +1,238 @@
# Contributing

1. Sign one of the contributor license agreements below.
1. `go get golang.org/x/review/git-codereview` to install the code reviewing
tool.
1. You will need to ensure that your `GOBIN` directory (by default
`$GOPATH/bin`) is in your `PATH` so that git can find the command.
1. If you would like, you may want to set up aliases for git-codereview,
such that `git codereview change` becomes `git change`. See the
[godoc](https://godoc.org/golang.org/x/review/git-codereview) for details.
1. Should you run into issues with the git-codereview tool, please note
that all error messages will assume that you have set up these aliases.
1. Get the cloud package by running `go get -d cloud.google.com/go`.
1. If you have already checked out the source, make sure that the remote
git origin is https://code.googlesource.com/gocloud:

```
git remote set-url origin https://code.googlesource.com/gocloud
```

1. Make sure your auth is configured correctly by visiting
https://code.googlesource.com, clicking "Generate Password", and following the
directions.
1. Make changes and create a change by running `git codereview change <name>`,
provide a commit message, and use `git codereview mail` to create a Gerrit CL.
1. Keep amending to the change with `git codereview change` and mail as you
receive feedback. Each new mailed amendment will create a new patch set for
your change in Gerrit.
    - Note: if your change includes a breaking change, our breaking change
    detector will cause CI/CD to fail. If your breaking change is acceptable
    in some way, add BREAKING_CHANGE_ACCEPTABLE=<reason> to cause the
    detector not to be run and to make it clear why that is acceptable.

## Integration Tests

In addition to the unit tests, you may run the integration test suite. These
directions describe setting up your environment to run integration tests for
_all_ packages: note that many of these instructions may be redundant if you
intend only to run integration tests on a single package.

#### GCP Setup

To run the integration tests, creation and configuration of two projects in
the Google Developers Console is required: one specifically for Firestore
integration tests, and another for all other integration tests. We'll refer to
these projects as "general project" and "Firestore project".

After creating each project, you must [create a service account](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount)
for each project. Ensure the project-level **Owner**
[IAM role](console.cloud.google.com/iam-admin/iam/project) is added to
each service account. During the creation of the service account, you should
download the JSON credential file for use later.

Next, ensure the following APIs are enabled in the general project:

- BigQuery API
- BigQuery Data Transfer API
- Cloud Dataproc API
- Cloud Dataproc Control API Private
- Cloud Datastore API
- Cloud Firestore API
- Cloud Key Management Service (KMS) API
- Cloud Natural Language API
- Cloud OS Login API
- Cloud Pub/Sub API
- Cloud Resource Manager API
- Cloud Spanner API
- Cloud Speech API
- Cloud Translation API
- Cloud Video Intelligence API
- Cloud Vision API
- Compute Engine API
- Compute Engine Instance Group Manager API
- Container Registry API
- Firebase Rules API
- Google Cloud APIs
- Google Cloud Deployment Manager V2 API
- Google Cloud SQL
- Google Cloud Storage
- Google Cloud Storage JSON API
- Google Compute Engine Instance Group Updater API
- Google Compute Engine Instance Groups API
- Kubernetes Engine API
- Stackdriver Error Reporting API

Next, create a Datastore database in the general project, and a Firestore
database in the Firestore project.

Finally, in the general project, create an API key for the translate API:

- Go to GCP Developer Console.
- Navigate to APIs & Services > Credentials.
- Click Create Credentials > API Key.
- Save this key for use in `GCLOUD_TESTS_API_KEY` as described below.

#### Local Setup

Once the two projects are created and configured, set the following environment
variables:

- `GCLOUD_TESTS_GOLANG_PROJECT_ID`: Developers Console project's ID (e.g.
bamboo-shift-455) for the general project.
- `GCLOUD_TESTS_GOLANG_KEY`: The path to the JSON key file of the general
project's service account.
- `GCLOUD_TESTS_GOLANG_FIRESTORE_PROJECT_ID`: Developers Console project's ID
(e.g. doorway-cliff-677) for the Firestore project.
- `GCLOUD_TESTS_GOLANG_FIRESTORE_KEY`: The path to the JSON key file of the
Firestore project's service account.
- `GCLOUD_TESTS_GOLANG_KEYRING`: The full name of the keyring for the tests,
in the form
"projects/P/locations/L/keyRings/R". The creation of this is described below.
- `GCLOUD_TESTS_API_KEY`: API key for using the Translate API.
- `GCLOUD_TESTS_GOLANG_ZONE`: Compute Engine zone.

Install the [gcloud command-line tool][gcloudcli] to your machine and use it to
create some resources used in integration tests.

From the project's root directory:

``` sh
# Sets the default project in your env.
$ gcloud config set project $GCLOUD_TESTS_GOLANG_PROJECT_ID

# Authenticates the gcloud tool with your account.
$ gcloud auth login

# Create the indexes used in the datastore integration tests.
$ gcloud datastore indexes create datastore/testdata/index.yaml

# Creates a Google Cloud storage bucket with the same name as your test project,
# and with the Stackdriver Logging service account as owner, for the sink
# integration tests in logging.
$ gsutil mb gs://$GCLOUD_TESTS_GOLANG_PROJECT_ID
$ gsutil acl ch -g cloud-logs@google.com:O gs://$GCLOUD_TESTS_GOLANG_PROJECT_ID

# Creates a PubSub topic for integration tests of storage notifications.
$ gcloud beta pubsub topics create go-storage-notification-test
# Next, go to the Pub/Sub dashboard in GCP console. Authorize the user
# "service-<numeric project id>@gs-project-accounts.iam.gserviceaccount.com"
# as a publisher to that topic.

# Creates a Spanner instance for the spanner integration tests.
$ gcloud beta spanner instances create go-integration-test --config regional-us-central1 --nodes 10 --description 'Instance for go client test'
# NOTE: Spanner instances are priced by the node-hour, so you may want to
# delete the instance after testing with 'gcloud beta spanner instances delete'.

$ export MY_KEYRING=some-keyring-name
$ export MY_LOCATION=global
# Creates a KMS keyring, in the same location as the default location for your
# project's buckets.
$ gcloud kms keyrings create $MY_KEYRING --location $MY_LOCATION
# Creates two keys in the keyring, named key1 and key2.
$ gcloud kms keys create key1 --keyring $MY_KEYRING --location $MY_LOCATION --purpose encryption
$ gcloud kms keys create key2 --keyring $MY_KEYRING --location $MY_LOCATION --purpose encryption
# Sets the GCLOUD_TESTS_GOLANG_KEYRING environment variable.
$ export GCLOUD_TESTS_GOLANG_KEYRING=projects/$GCLOUD_TESTS_GOLANG_PROJECT_ID/locations/$MY_LOCATION/keyRings/$MY_KEYRING
# Authorizes Google Cloud Storage to encrypt and decrypt using key1.
gsutil kms authorize -p $GCLOUD_TESTS_GOLANG_PROJECT_ID -k $GCLOUD_TESTS_GOLANG_KEYRING/cryptoKeys/key1
```

#### Running

Once you've done the necessary setup, you can run the integration tests by
running:

``` sh
$ go test -v cloud.google.com/go/...
```

#### Replay

Some packages can record the RPCs during integration tests to a file for
subsequent replay. To record, pass the `-record` flag to `go test`. The
recording will be saved to the _package_`.replay` file. To replay integration
tests from a saved recording, the replay file must be present, the `-short`
flag must be passed to `go test`, and the `GCLOUD_TESTS_GOLANG_ENABLE_REPLAY`
environment variable must have a non-empty value.

## Contributor License Agreements

Before we can accept your pull requests you'll need to sign a Contributor
License Agreement (CLA):

- **If you are an individual writing original source code** and **you own the
intellectual property**, then you'll need to sign an [individual CLA][indvcla].
- **If you work for a company that wants to allow you to contribute your
work**, then you'll need to sign a [corporate CLA][corpcla].

You can sign these electronically (just scroll to the bottom). After that,
we'll be able to accept your pull requests.

## Contributor Code of Conduct

As contributors and maintainers of this project,
and in the interest of fostering an open and welcoming community,
we pledge to respect all people who contribute through reporting issues,
posting feature requests, updating documentation,
submitting pull requests or patches, and other activities.

We are committed to making participation in this project
a harassment-free experience for everyone,
regardless of level of experience, gender, gender identity and expression,
sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, or nationality.

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing other's private information,
such as physical or electronic
addresses, without explicit permission
* Other unethical or unprofessional conduct.

Project maintainers have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct.
By adopting this Code of Conduct,
project maintainers commit themselves to fairly and consistently
applying these principles to every aspect of managing this project.
Project maintainers who do not follow or enforce the Code of Conduct
may be permanently removed from the project team.

This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.

Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported by opening an issue
or contacting one or more of the project maintainers.

This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.2.0,
available at [http://contributor-covenant.org/version/1/2/0/](http://contributor-covenant.org/version/1/2/0/)

[gcloudcli]: https://developers.google.com/cloud/sdk/gcloud/
[indvcla]: https://developers.google.com/open-source/cla/individual
[corpcla]: https://developers.google.com/open-source/cla/corporate
18 vendor/cloud.google.com/go/RELEASING.md generated vendored Normal file
@@ -0,0 +1,18 @@
# How to Release this Repo

1. Determine the current release version with `git tag -l`. It should look
   something like `vX.Y.Z`. We'll call the current version `$CV` and the new
   version `$NV`.
1. On master, run `git log $CV..` to list all the changes since the last
   release.
1. Edit `CHANGES.md` to include a summary of the changes.
1. `cd internal/version && go generate && cd -`
1. Mail the CL containing the `CHANGES.md` changes. When the CL is approved,
   submit it.
1. Without submitting any other CLs:
   a. Switch to master.
   b. `git pull`
   c. Tag the repo with the next version: `git tag $NV`.
   d. Push the tag: `git push origin $NV`.
1. Update [the releases page](https://github.com/googleapis/google-cloud-go/releases)
   with the new release, copying the contents of the CHANGES.md.
@@ -1,9 +0,0 @@
// +build ignore

// Empty include file to generate z symbols





// EOF
472 vendor/cloud.google.com/go/cmd/go-cloud-debug-agent/internal/debug/gosym/pclntab.go generated vendored
@@ -1,472 +0,0 @@
// Copyright 2018 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

/*
 * Line tables
 */

package gosym

import (
	"encoding/binary"
	"sync"
)

// A LineTable is a data structure mapping program counters to line numbers.
//
// In Go 1.1 and earlier, each function (represented by a Func) had its own LineTable,
// and the line number corresponded to a numbering of all source lines in the
// program, across all files. That absolute line number would then have to be
// converted separately to a file name and line number within the file.
//
// In Go 1.2, the format of the data changed so that there is a single LineTable
// for the entire program, shared by all Funcs, and there are no absolute line
// numbers, just line numbers within specific files.
//
// For the most part, LineTable's methods should be treated as an internal
// detail of the package; callers should use the methods on Table instead.
type LineTable struct {
	Data []byte
	PC   uint64
	Line int

	// Go 1.2 state
	mu       sync.Mutex
	go12     int // is this in Go 1.2 format? -1 no, 0 unknown, 1 yes
	binary   binary.ByteOrder
	quantum  uint32
	ptrsize  uint32
	functab  []byte
	nfunctab uint32
	filetab  []byte
	nfiletab uint32
	fileMap  map[string]uint32
}

// NOTE(rsc): This is wrong for GOARCH=arm, which uses a quantum of 4,
// but we have no idea whether we're using arm or not. This only
// matters in the old (pre-Go 1.2) symbol table format, so it's not worth
// fixing.
const oldQuantum = 1

func (t *LineTable) parse(targetPC uint64, targetLine int) (b []byte, pc uint64, line int) {
	// The PC/line table can be thought of as a sequence of
	//  <pc update>* <line update>
	// batches. Each update batch results in a (pc, line) pair,
	// where line applies to every PC from pc up to but not
	// including the pc of the next pair.
	//
	// Here we process each update individually, which simplifies
	// the code, but makes the corner cases more confusing.
	b, pc, line = t.Data, t.PC, t.Line
	for pc <= targetPC && line != targetLine && len(b) > 0 {
		code := b[0]
		b = b[1:]
		switch {
		case code == 0:
			if len(b) < 4 {
				b = b[0:0]
				break
			}
			val := binary.BigEndian.Uint32(b)
			b = b[4:]
			line += int(val)
		case code <= 64:
			line += int(code)
		case code <= 128:
			line -= int(code - 64)
		default:
			pc += oldQuantum * uint64(code-128)
			continue
		}
		pc += oldQuantum
	}
	return b, pc, line
}

func (t *LineTable) slice(pc uint64) *LineTable {
	data, pc, line := t.parse(pc, -1)
	return &LineTable{Data: data, PC: pc, Line: line}
}

// PCToLine returns the line number for the given program counter.
// Callers should use Table's PCToLine method instead.
func (t *LineTable) PCToLine(pc uint64) int {
	if t.isGo12() {
		return t.go12PCToLine(pc)
	}
	_, _, line := t.parse(pc, -1)
	return line
}

// LineToPC returns the program counter for the given line number,
// considering only program counters before maxpc.
// Callers should use Table's LineToPC method instead.
func (t *LineTable) LineToPC(line int, maxpc uint64) uint64 {
	if t.isGo12() {
		return 0
	}
	_, pc, line1 := t.parse(maxpc, line)
	if line1 != line {
		return 0
	}
	// Subtract quantum from PC to account for post-line increment
	return pc - oldQuantum
}

// NewLineTable returns a new PC/line table
// corresponding to the encoded data.
// Text must be the start address of the
// corresponding text segment.
func NewLineTable(data []byte, text uint64) *LineTable {
	return &LineTable{Data: data, PC: text, Line: 0}
}

// Go 1.2 symbol table format.
// See golang.org/s/go12symtab.
//
// A general note about the methods here: rather than try to avoid
// index out of bounds errors, we trust Go to detect them, and then
// we recover from the panics and treat them as indicative of a malformed
// or incomplete table.
//
// The methods called by symtab.go, which begin with "go12" prefixes,
// are expected to have that recovery logic.

// isGo12 reports whether this is a Go 1.2 (or later) symbol table.
func (t *LineTable) isGo12() bool {
	t.go12Init()
	return t.go12 == 1
}

const go12magic = 0xfffffffb

// uintptr returns the pointer-sized value encoded at b.
// The pointer size is dictated by the table being read.
func (t *LineTable) uintptr(b []byte) uint64 {
	if t.ptrsize == 4 {
		return uint64(t.binary.Uint32(b))
	}
	return t.binary.Uint64(b)
}

// go12init initializes the Go 1.2 metadata if t is a Go 1.2 symbol table.
func (t *LineTable) go12Init() {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.go12 != 0 {
		return
	}

	defer func() {
		// If we panic parsing, assume it's not a Go 1.2 symbol table.
		recover()
	}()

	// Check header: 4-byte magic, two zeros, pc quantum, pointer size.
	t.go12 = -1 // not Go 1.2 until proven otherwise
	if len(t.Data) < 16 || t.Data[4] != 0 || t.Data[5] != 0 ||
		(t.Data[6] != 1 && t.Data[6] != 4) || // pc quantum
		(t.Data[7] != 4 && t.Data[7] != 8) { // pointer size
		return
	}

	switch uint32(go12magic) {
	case binary.LittleEndian.Uint32(t.Data):
		t.binary = binary.LittleEndian
	case binary.BigEndian.Uint32(t.Data):
		t.binary = binary.BigEndian
	default:
		return
	}

	t.quantum = uint32(t.Data[6])
	t.ptrsize = uint32(t.Data[7])

	t.nfunctab = uint32(t.uintptr(t.Data[8:]))
	t.functab = t.Data[8+t.ptrsize:]
	functabsize := t.nfunctab*2*t.ptrsize + t.ptrsize
	fileoff := t.binary.Uint32(t.functab[functabsize:])
	t.functab = t.functab[:functabsize]
	t.filetab = t.Data[fileoff:]
	t.nfiletab = t.binary.Uint32(t.filetab)
	t.filetab = t.filetab[:t.nfiletab*4]

	t.go12 = 1 // so far so good
}

// go12Funcs returns a slice of Funcs derived from the Go 1.2 pcln table.
func (t *LineTable) go12Funcs() []Func {
	// Assume it is malformed and return nil on error.
	defer func() {
		recover()
	}()

	n := len(t.functab) / int(t.ptrsize) / 2
	funcs := make([]Func, n)
	for i := range funcs {
		f := &funcs[i]
		f.Entry = uint64(t.uintptr(t.functab[2*i*int(t.ptrsize):]))
		f.End = uint64(t.uintptr(t.functab[(2*i+2)*int(t.ptrsize):]))
		info := t.Data[t.uintptr(t.functab[(2*i+1)*int(t.ptrsize):]):]
		f.LineTable = t
		f.FrameSize = int(t.binary.Uint32(info[t.ptrsize+2*4:]))
		f.Sym = &Sym{
			Value:  f.Entry,
			Type:   'T',
			Name:   t.string(t.binary.Uint32(info[t.ptrsize:])),
			GoType: 0,
			Func:   f,
		}
	}
	return funcs
}

// findFunc returns the func corresponding to the given program counter.
func (t *LineTable) findFunc(pc uint64) []byte {
	if pc < t.uintptr(t.functab) || pc >= t.uintptr(t.functab[len(t.functab)-int(t.ptrsize):]) {
		return nil
	}

	// The function table is a list of 2*nfunctab+1 uintptrs,
	// alternating program counters and offsets to func structures.
	f := t.functab
	nf := t.nfunctab
	for nf > 0 {
		m := nf / 2
		fm := f[2*t.ptrsize*m:]
		if t.uintptr(fm) <= pc && pc < t.uintptr(fm[2*t.ptrsize:]) {
			return t.Data[t.uintptr(fm[t.ptrsize:]):]
		} else if pc < t.uintptr(fm) {
			nf = m
		} else {
			f = f[(m+1)*2*t.ptrsize:]
			nf -= m + 1
		}
	}
	return nil
}

// readvarint reads, removes, and returns a varint from *pp.
func (t *LineTable) readvarint(pp *[]byte) uint32 {
	var v, shift uint32
	p := *pp
	for shift = 0; ; shift += 7 {
		b := p[0]
		p = p[1:]
		v |= (uint32(b) & 0x7F) << shift
		if b&0x80 == 0 {
			break
		}
	}
	*pp = p
	return v
}

// string returns a Go string found at off.
func (t *LineTable) string(off uint32) string {
	for i := off; ; i++ {
		if t.Data[i] == 0 {
			return string(t.Data[off:i])
		}
	}
}

// step advances to the next pc, value pair in the encoded table.
func (t *LineTable) step(p *[]byte, pc *uint64, val *int32, first bool) bool {
	uvdelta := t.readvarint(p)
	if uvdelta == 0 && !first {
		return false
	}
	if uvdelta&1 != 0 {
		uvdelta = ^(uvdelta >> 1)
	} else {
		uvdelta >>= 1
	}
	vdelta := int32(uvdelta)
	pcdelta := t.readvarint(p) * t.quantum
	*pc += uint64(pcdelta)
	*val += vdelta
	return true
}

// pcvalue reports the value associated with the target pc.
// off is the offset to the beginning of the pc-value table,
// and entry is the start PC for the corresponding function.
func (t *LineTable) pcvalue(off uint32, entry, targetpc uint64) int32 {
	if off == 0 {
		return -1
	}
	p := t.Data[off:]

	val := int32(-1)
	pc := entry
	for t.step(&p, &pc, &val, pc == entry) {
		if targetpc < pc {
			return val
		}
	}
	return -1
}

// findFileLine scans one function in the binary looking for a
// program counter in the given file on the given line.
// It does so by running the pc-value tables mapping program counter
// to file number. Since most functions come from a single file, these
// are usually short and quick to scan. If a file match is found, then the
// code goes to the expense of looking for a simultaneous line number match.
func (t *LineTable) findFileLine(entry uint64, filetab, linetab uint32, filenum, line int32) uint64 {
	if filetab == 0 || linetab == 0 {
		return 0
	}

	fp := t.Data[filetab:]
	fl := t.Data[linetab:]
	fileVal := int32(-1)
	filePC := entry
	lineVal := int32(-1)
	linePC := entry
	fileStartPC := filePC
	for t.step(&fp, &filePC, &fileVal, filePC == entry) {
		if fileVal == filenum && fileStartPC < filePC {
			// fileVal is in effect starting at fileStartPC up to
			// but not including filePC, and it's the file we want.
			// Run the PC table looking for a matching line number
			// or until we reach filePC.
			lineStartPC := linePC
			for linePC < filePC && t.step(&fl, &linePC, &lineVal, linePC == entry) {
				// lineVal is in effect until linePC, and lineStartPC < filePC.
				if lineVal == line {
					if fileStartPC <= lineStartPC {
						return lineStartPC
					}
					if fileStartPC < linePC {
						return fileStartPC
					}
				}
				lineStartPC = linePC
			}
		}
		fileStartPC = filePC
	}
	return 0
}

// go12PCToLine maps program counter to line number for the Go 1.2 pcln table.
func (t *LineTable) go12PCToLine(pc uint64) (line int) {
	return t.go12PCToVal(pc, t.ptrsize+5*4)
}

// go12PCToSPAdj maps program counter to Stack Pointer adjustment for the Go 1.2 pcln table.
func (t *LineTable) go12PCToSPAdj(pc uint64) (spadj int) {
	return t.go12PCToVal(pc, t.ptrsize+3*4)
}

func (t *LineTable) go12PCToVal(pc uint64, fOffset uint32) (val int) {
	defer func() {
		if recover() != nil {
			val = -1
		}
	}()

	f := t.findFunc(pc)
	if f == nil {
		return -1
	}
	entry := t.uintptr(f)
	linetab := t.binary.Uint32(f[fOffset:])
	return int(t.pcvalue(linetab, entry, pc))
}

// go12PCToFile maps program counter to file name for the Go 1.2 pcln table.
func (t *LineTable) go12PCToFile(pc uint64) (file string) {
	defer func() {
		if recover() != nil {
			file = ""
		}
	}()

	f := t.findFunc(pc)
	if f == nil {
		return ""
	}
	entry := t.uintptr(f)
	filetab := t.binary.Uint32(f[t.ptrsize+4*4:])
	fno := t.pcvalue(filetab, entry, pc)
	if fno <= 0 {
		return ""
	}
	return t.string(t.binary.Uint32(t.filetab[4*fno:]))
}

// go12LineToPC maps a (file, line) pair to a program counter for the Go 1.2 pcln table.
func (t *LineTable) go12LineToPC(file string, line int) (pc uint64) {
	defer func() {
		if recover() != nil {
			pc = 0
		}
	}()

	t.initFileMap()
	filenum := t.fileMap[file]
	if filenum == 0 {
		return 0
	}

	// Scan all functions.
	// If this turns out to be a bottleneck, we could build a map[int32][]int32
	// mapping file number to a list of functions with code from that file.
	for i := uint32(0); i < t.nfunctab; i++ {
		f := t.Data[t.uintptr(t.functab[2*t.ptrsize*i+t.ptrsize:]):]
		entry := t.uintptr(f)
		filetab := t.binary.Uint32(f[t.ptrsize+4*4:])
		linetab := t.binary.Uint32(f[t.ptrsize+5*4:])
		pc := t.findFileLine(entry, filetab, linetab, int32(filenum), int32(line))
		if pc != 0 {
			return pc
		}
	}
	return 0
}

// initFileMap initializes the map from file name to file number.
func (t *LineTable) initFileMap() {
	t.mu.Lock()
	defer t.mu.Unlock()

	if t.fileMap != nil {
		return
	}
	m := make(map[string]uint32)

	for i := uint32(1); i < t.nfiletab; i++ {
		s := t.string(t.binary.Uint32(t.filetab[4*i:]))
		m[s] = i
	}
	t.fileMap = m
}

// go12MapFiles adds to m a key for every file in the Go 1.2 LineTable.
// Every key maps to obj. That's not a very interesting map, but it provides
// a way for callers to obtain the list of files in the program.
func (t *LineTable) go12MapFiles(m map[string]*Obj, obj *Obj) {
	defer func() {
		recover()
	}()

	t.initFileMap()
	for file := range t.fileMap {
		m[file] = obj
	}
}
731 vendor/cloud.google.com/go/cmd/go-cloud-debug-agent/internal/debug/gosym/symtab.go generated vendored
@@ -1,731 +0,0 @@
// Copyright 2018 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Package gosym implements access to the Go symbol
// and line number tables embedded in Go binaries generated
// by the gc compilers.
package gosym

// The table format is a variant of the format used in Plan 9's a.out
// format, documented at http://plan9.bell-labs.com/magic/man2html/6/a.out.
// The best reference for the differences between the Plan 9 format
// and the Go format is the runtime source, specifically ../../runtime/symtab.c.

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"strconv"
	"strings"
)

/*
 * Symbols
 */

// A Sym represents a single symbol table entry.
type Sym struct {
	Value  uint64
	Type   byte
	Name   string
	GoType uint64
	// If this symbol is a function symbol, the corresponding Func
	Func *Func
}

// Static reports whether this symbol is static (not visible outside its file).
func (s *Sym) Static() bool { return s.Type >= 'a' }

// PackageName returns the package part of the symbol name,
// or the empty string if there is none.
func (s *Sym) PackageName() string {
	if i := strings.Index(s.Name, "."); i != -1 {
		return s.Name[0:i]
	}
	return ""
}

// ReceiverName returns the receiver type name of this symbol,
// or the empty string if there is none.
func (s *Sym) ReceiverName() string {
	l := strings.Index(s.Name, ".")
	r := strings.LastIndex(s.Name, ".")
	if l == -1 || r == -1 || l == r {
		return ""
	}
	return s.Name[l+1 : r]
}

// BaseName returns the symbol name without the package or receiver name.
func (s *Sym) BaseName() string {
	if i := strings.LastIndex(s.Name, "."); i != -1 {
		return s.Name[i+1:]
	}
	return s.Name
}

// A Func collects information about a single function.
type Func struct {
	Entry uint64
	*Sym
	End       uint64
	Params    []*Sym
	Locals    []*Sym
	FrameSize int
	LineTable *LineTable
	Obj       *Obj
}

// An Obj represents a collection of functions in a symbol table.
//
// The exact method of division of a binary into separate Objs is an internal detail
// of the symbol table format.
//
// In early versions of Go each source file became a different Obj.
//
// In Go 1 and Go 1.1, each package produced one Obj for all Go sources
// and one Obj per C source file.
//
// In Go 1.2, there is a single Obj for the entire program.
type Obj struct {
	// Funcs is a list of functions in the Obj.
	Funcs []Func

	// In Go 1.1 and earlier, Paths is a list of symbols corresponding
	// to the source file names that produced the Obj.
	// In Go 1.2, Paths is nil.
	// Use the keys of Table.Files to obtain a list of source files.
	Paths []Sym // meta
}

/*
 * Symbol tables
 */

// Table represents a Go symbol table. It stores all of the
// symbols decoded from the program and provides methods to translate
// between symbols, names, and addresses.
type Table struct {
	Syms  []Sym
	Funcs []Func
	Files map[string]*Obj // nil for Go 1.2 and later binaries
	Objs  []Obj           // nil for Go 1.2 and later binaries

	go12line *LineTable // Go 1.2 line number table
}

type sym struct {
	value  uint64
	gotype uint64
	typ    byte
	name   []byte
}

var (
	littleEndianSymtab    = []byte{0xFD, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00}
	bigEndianSymtab       = []byte{0xFF, 0xFF, 0xFF, 0xFD, 0x00, 0x00, 0x00}
	oldLittleEndianSymtab = []byte{0xFE, 0xFF, 0xFF, 0xFF, 0x00, 0x00}
)

func walksymtab(data []byte, fn func(sym) error) error {
	if len(data) == 0 { // missing symtab is okay
		return nil
	}
	var order binary.ByteOrder = binary.BigEndian
	newTable := false
	switch {
	case bytes.HasPrefix(data, oldLittleEndianSymtab):
		// Same as Go 1.0, but little endian.
		// Format was used during interim development between Go 1.0 and Go 1.1.
		// Should not be widespread, but easy to support.
		data = data[6:]
		order = binary.LittleEndian
	case bytes.HasPrefix(data, bigEndianSymtab):
		newTable = true
	case bytes.HasPrefix(data, littleEndianSymtab):
		newTable = true
		order = binary.LittleEndian
	}
	var ptrsz int
	if newTable {
		if len(data) < 8 {
			return &DecodingError{len(data), "unexpected EOF", nil}
		}
		ptrsz = int(data[7])
		if ptrsz != 4 && ptrsz != 8 {
			return &DecodingError{7, "invalid pointer size", ptrsz}
		}
		data = data[8:]
	}
	var s sym
	p := data
	for len(p) >= 4 {
		var typ byte
		if newTable {
			// Symbol type, value, Go type.
			typ = p[0] & 0x3F
			wideValue := p[0]&0x40 != 0
			goType := p[0]&0x80 != 0
			if typ < 26 {
				typ += 'A'
			} else {
				typ += 'a' - 26
			}
			s.typ = typ
			p = p[1:]
			if wideValue {
				if len(p) < ptrsz {
					return &DecodingError{len(data), "unexpected EOF", nil}
				}
				// fixed-width value
				if ptrsz == 8 {
					s.value = order.Uint64(p[0:8])
					p = p[8:]
				} else {
					s.value = uint64(order.Uint32(p[0:4]))
					p = p[4:]
				}
			} else {
				// varint value
				s.value = 0
				shift := uint(0)
				for len(p) > 0 && p[0]&0x80 != 0 {
					s.value |= uint64(p[0]&0x7F) << shift
					shift += 7
					p = p[1:]
				}
				if len(p) == 0 {
					return &DecodingError{len(data), "unexpected EOF", nil}
				}
				s.value |= uint64(p[0]) << shift
				p = p[1:]
			}
			if goType {
				if len(p) < ptrsz {
					return &DecodingError{len(data), "unexpected EOF", nil}
				}
				// fixed-width go type
				if ptrsz == 8 {
					s.gotype = order.Uint64(p[0:8])
					p = p[8:]
				} else {
					s.gotype = uint64(order.Uint32(p[0:4]))
					p = p[4:]
				}
			}
		} else {
			// Value, symbol type.
			s.value = uint64(order.Uint32(p[0:4]))
			if len(p) < 5 {
				return &DecodingError{len(data), "unexpected EOF", nil}
			}
			typ = p[4]
			if typ&0x80 == 0 {
				return &DecodingError{len(data) - len(p) + 4, "bad symbol type", typ}
			}
			typ &^= 0x80
			s.typ = typ
			p = p[5:]
		}

		// Name.
		var i int
		var nnul int
		for i = 0; i < len(p); i++ {
			if p[i] == 0 {
				nnul = 1
				break
			}
		}
		switch typ {
		case 'z', 'Z':
			p = p[i+nnul:]
			for i = 0; i+2 <= len(p); i += 2 {
				if p[i] == 0 && p[i+1] == 0 {
					nnul = 2
					break
				}
			}
		}
		if len(p) < i+nnul {
			return &DecodingError{len(data), "unexpected EOF", nil}
		}
		s.name = p[0:i]
		i += nnul
		p = p[i:]

		if !newTable {
			if len(p) < 4 {
				return &DecodingError{len(data), "unexpected EOF", nil}
			}
			// Go type.
			s.gotype = uint64(order.Uint32(p[:4]))
			p = p[4:]
		}
		fn(s)
	}
	return nil
}

// NewTable decodes the Go symbol table in data,
// returning an in-memory representation.
func NewTable(symtab []byte, pcln *LineTable) (*Table, error) {
	var n int
	err := walksymtab(symtab, func(s sym) error {
		n++
		return nil
	})
	if err != nil {
		return nil, err
	}

	var t Table
	if pcln.isGo12() {
		t.go12line = pcln
	}
	fname := make(map[uint16]string)
	t.Syms = make([]Sym, 0, n)
	nf := 0
	nz := 0
	lasttyp := uint8(0)
	err = walksymtab(symtab, func(s sym) error {
		n := len(t.Syms)
		t.Syms = t.Syms[0 : n+1]
		ts := &t.Syms[n]
		ts.Type = s.typ
		ts.Value = uint64(s.value)
		ts.GoType = uint64(s.gotype)
		switch s.typ {
		default:
			// rewrite name to use . instead of · (c2 b7)
			w := 0
			b := s.name
			for i := 0; i < len(b); i++ {
				if b[i] == 0xc2 && i+1 < len(b) && b[i+1] == 0xb7 {
					i++
					b[i] = '.'
				}
				b[w] = b[i]
				w++
			}
			ts.Name = string(s.name[0:w])
		case 'z', 'Z':
			if lasttyp != 'z' && lasttyp != 'Z' {
				nz++
			}
			for i := 0; i < len(s.name); i += 2 {
				eltIdx := binary.BigEndian.Uint16(s.name[i : i+2])
				elt, ok := fname[eltIdx]
				if !ok {
					return &DecodingError{-1, "bad filename code", eltIdx}
				}
				if n := len(ts.Name); n > 0 && ts.Name[n-1] != '/' {
					ts.Name += "/"
				}
				ts.Name += elt
			}
		}
		switch s.typ {
		case 'T', 't', 'L', 'l':
			nf++
		case 'f':
			fname[uint16(s.value)] = ts.Name
		}
		lasttyp = s.typ
		return nil
	})
	if err != nil {
		return nil, err
	}

	t.Funcs = make([]Func, 0, nf)
	t.Files = make(map[string]*Obj)

	var obj *Obj
	if t.go12line != nil {
		// Put all functions into one Obj.
		t.Objs = make([]Obj, 1)
		obj = &t.Objs[0]
		t.go12line.go12MapFiles(t.Files, obj)
	} else {
		t.Objs = make([]Obj, 0, nz)
	}

	// Count text symbols and attach frame sizes, parameters, and
	// locals to them. Also, find object file boundaries.
	lastf := 0
	for i := 0; i < len(t.Syms); i++ {
		sym := &t.Syms[i]
		switch sym.Type {
		case 'Z', 'z': // path symbol
			if t.go12line != nil {
				// Go 1.2 binaries have the file information elsewhere. Ignore.
				break
			}
			// Finish the current object
			if obj != nil {
				obj.Funcs = t.Funcs[lastf:]
			}
			lastf = len(t.Funcs)

			// Start new object
			n := len(t.Objs)
			t.Objs = t.Objs[0 : n+1]
			obj = &t.Objs[n]

			// Count & copy path symbols
			var end int
			for end = i + 1; end < len(t.Syms); end++ {
				if c := t.Syms[end].Type; c != 'Z' && c != 'z' {
					break
				}
			}
			obj.Paths = t.Syms[i:end]
			i = end - 1 // loop will i++

			// Record file names
			depth := 0
			for j := range obj.Paths {
				s := &obj.Paths[j]
				if s.Name == "" {
					depth--
				} else {
					if depth == 0 {
						t.Files[s.Name] = obj
					}
					depth++
				}
			}

		case 'T', 't', 'L', 'l': // text symbol
			if n := len(t.Funcs); n > 0 {
				t.Funcs[n-1].End = sym.Value
			}
			if sym.Name == "etext" {
				continue
			}

			// Count parameter and local (auto) syms
			var np, na int
			var end int
		countloop:
			for end = i + 1; end < len(t.Syms); end++ {
				switch t.Syms[end].Type {
				case 'T', 't', 'L', 'l', 'Z', 'z':
					break countloop
				case 'p':
					np++
				case 'a':
					na++
				}
			}

			// Fill in the function symbol
			n := len(t.Funcs)
			t.Funcs = t.Funcs[0 : n+1]
			fn := &t.Funcs[n]
			sym.Func = fn
			fn.Params = make([]*Sym, 0, np)
			fn.Locals = make([]*Sym, 0, na)
			fn.Sym = sym
			fn.Entry = sym.Value
			fn.Obj = obj
			if t.go12line != nil {
				// All functions share the same line table.
				// It knows how to narrow down to a specific
				// function quickly.
				fn.LineTable = t.go12line
			} else if pcln != nil {
				fn.LineTable = pcln.slice(fn.Entry)
				pcln = fn.LineTable
			}
			for j := i; j < end; j++ {
				s := &t.Syms[j]
				switch s.Type {
				case 'm':
					fn.FrameSize = int(s.Value)
				case 'p':
					n := len(fn.Params)
					fn.Params = fn.Params[0 : n+1]
					fn.Params[n] = s
				case 'a':
					n := len(fn.Locals)
					fn.Locals = fn.Locals[0 : n+1]
					fn.Locals[n] = s
				}
			}
			i = end - 1 // loop will i++
		}
	}

	if t.go12line != nil && nf == 0 {
		t.Funcs = t.go12line.go12Funcs()
	}
	if obj != nil {
		obj.Funcs = t.Funcs[lastf:]
	}
	return &t, nil
}

// PCToFunc returns the function containing the program counter pc,
// or nil if there is no such function.
func (t *Table) PCToFunc(pc uint64) *Func {
	funcs := t.Funcs
	for len(funcs) > 0 {
		m := len(funcs) / 2
		fn := &funcs[m]
		switch {
		case pc < fn.Entry:
			funcs = funcs[0:m]
		case fn.Entry <= pc && pc < fn.End:
			return fn
		default:
			funcs = funcs[m+1:]
		}
	}
	return nil
}

// PCToLine looks up line number information for a program counter.
// If there is no information, it returns fn == nil.
func (t *Table) PCToLine(pc uint64) (file string, line int, fn *Func) {
	if fn = t.PCToFunc(pc); fn == nil {
		return
	}
	if t.go12line != nil {
		file = t.go12line.go12PCToFile(pc)
		line = t.go12line.go12PCToLine(pc)
	} else {
		file, line = fn.Obj.lineFromAline(fn.LineTable.PCToLine(pc))
	}
	return
}

// PCToSPAdj returns the stack pointer adjustment for a program counter.
func (t *Table) PCToSPAdj(pc uint64) (spadj int) {
	if fn := t.PCToFunc(pc); fn == nil {
		return 0
	}
	if t.go12line != nil {
		return t.go12line.go12PCToSPAdj(pc)
	}
	return 0
}

// LineToPC looks up the first program counter on the given line in
// the named file. It returns UnknownPathError or UnknownLineError if
// there is an error looking up this line.
func (t *Table) LineToPC(file string, line int) (pc uint64, fn *Func, err error) {
	obj, ok := t.Files[file]
	if !ok {
		return 0, nil, UnknownFileError(file)
	}

	if t.go12line != nil {
		pc := t.go12line.go12LineToPC(file, line)
		if pc == 0 {
			return 0, nil, &UnknownLineError{file, line}
		}
		return pc, t.PCToFunc(pc), nil
	}

	abs, err := obj.alineFromLine(file, line)
	if err != nil {
		return
	}
	for i := range obj.Funcs {
		f := &obj.Funcs[i]
		pc := f.LineTable.LineToPC(abs, f.End)
		if pc != 0 {
			return pc, f, nil
		}
	}
	return 0, nil, &UnknownLineError{file, line}
}

// LookupSym returns the text, data, or bss symbol with the given name,
// or nil if no such symbol is found.
func (t *Table) LookupSym(name string) *Sym {
	// TODO(austin) Maybe make a map
	for i := range t.Syms {
		s := &t.Syms[i]
		switch s.Type {
		case 'T', 't', 'L', 'l', 'D', 'd', 'B', 'b':
			if s.Name == name {
				return s
			}
		}
	}
	return nil
}

// LookupFunc returns the text, data, or bss symbol with the given name,
// or nil if no such symbol is found.
func (t *Table) LookupFunc(name string) *Func {
	for i := range t.Funcs {
		f := &t.Funcs[i]
		if f.Sym.Name == name {
			return f
		}
	}
	return nil
}

// SymByAddr returns the text, data, or bss symbol starting at the given address.
func (t *Table) SymByAddr(addr uint64) *Sym {
	for i := range t.Syms {
		s := &t.Syms[i]
		switch s.Type {
		case 'T', 't', 'L', 'l', 'D', 'd', 'B', 'b':
			if s.Value == addr {
				return s
			}
		}
	}
	return nil
}

/*
 * Object files
 */

// This is legacy code for Go 1.1 and earlier, which used the
// Plan 9 format for pc-line tables. This code was never quite
// correct. It's probably very close, and it's usually correct, but
// we never quite found all the corner cases.
//
// Go 1.2 and later use a simpler format, documented at golang.org/s/go12symtab.

func (o *Obj) lineFromAline(aline int) (string, int) {
	type stackEnt struct {
		path   string
		start  int
		offset int
		prev   *stackEnt
	}

	noPath := &stackEnt{"", 0, 0, nil}
	tos := noPath

pathloop:
	for _, s := range o.Paths {
		val := int(s.Value)
		switch {
		case val > aline:
			break pathloop

		case val == 1:
			// Start a new stack
			tos = &stackEnt{s.Name, val, 0, noPath}

		case s.Name == "":
			// Pop
			if tos == noPath {
				return "<malformed symbol table>", 0
			}
			tos.prev.offset += val - tos.start
			tos = tos.prev

		default:
			// Push
			tos = &stackEnt{s.Name, val, 0, tos}
		}
	}

	if tos == noPath {
		return "", 0
	}
	return tos.path, aline - tos.start - tos.offset + 1
}

func (o *Obj) alineFromLine(path string, line int) (int, error) {
	if line < 1 {
		return 0, &UnknownLineError{path, line}
	}

	for i, s := range o.Paths {
		// Find this path
		if s.Name != path {
			continue
		}

		// Find this line at this stack level
		depth := 0
		var incstart int
		line += int(s.Value)
	pathloop:
		for _, s := range o.Paths[i:] {
			val := int(s.Value)
			switch {
			case depth == 1 && val >= line:
				return line - 1, nil

			case s.Name == "":
				depth--
				if depth == 0 {
					break pathloop
				} else if depth == 1 {
					line += val - incstart
				}

			default:
				if depth == 1 {
					incstart = val
				}
				depth++
			}
		}
		return 0, &UnknownLineError{path, line}
	}
	return 0, UnknownFileError(path)
}

/*
 * Errors
 */

// UnknownFileError represents a failure to find the specific file in
// the symbol table.
type UnknownFileError string

func (e UnknownFileError) Error() string { return "unknown file: " + string(e) }

// UnknownLineError represents a failure to map a line to a program
// counter, either because the line is beyond the bounds of the file
// or because there is no code on the given line.
type UnknownLineError struct {
	File string
	Line int
}

func (e *UnknownLineError) Error() string {
	return "no code at " + e.File + ":" + strconv.Itoa(e.Line)
}

// DecodingError represents an error during the decoding of
// the symbol table.
type DecodingError struct {
	off int
	msg string
	val interface{}
}

func (e *DecodingError) Error() string {
	msg := e.msg
	if e.val != nil {
		msg += fmt.Sprintf(" '%v'", e.val)
	}
	msg += fmt.Sprintf(" at byte %#x", e.off)
	return msg
}
29 vendor/cloud.google.com/go/go.mod generated vendored
@@ -1,29 +0,0 @@
module cloud.google.com/go

go 1.9

require (
	cloud.google.com/go/datastore v1.0.0
	github.com/golang/mock v1.3.1
	github.com/golang/protobuf v1.3.2
	github.com/google/btree v1.0.0
	github.com/google/go-cmp v0.3.0
	github.com/google/martian v2.1.0+incompatible
	github.com/google/pprof v0.0.0-20190515194954-54271f7e092f
	github.com/googleapis/gax-go/v2 v2.0.5
	github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024
	go.opencensus.io v0.22.0
	golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522
	golang.org/x/lint v0.0.0-20190409202823-959b441ac422
	golang.org/x/net v0.0.0-20190620200207-3b0461eec859
	golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45
	golang.org/x/sync v0.0.0-20190423024810-112230192c58
	golang.org/x/text v0.3.2
	golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
	golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0
	google.golang.org/api v0.8.0
	google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64
	google.golang.org/grpc v1.21.1
	honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a
	rsc.io/binaryregexp v0.2.0
)
6 vendor/cloud.google.com/go/internal/version/update_version.sh generated vendored Normal file
@@ -0,0 +1,6 @@
#!/bin/bash

today=$(date +%Y%m%d)

sed -i -r -e 's/const Repo = "([0-9]{8})"/const Repo = "'$today'"/' $GOFILE
17 vendor/cloud.google.com/go/issue_template.md generated vendored Normal file
@@ -0,0 +1,17 @@
(delete this for feature requests)

## Client

e.g. PubSub

## Describe Your Environment

e.g. Alpine Docker on GKE

## Expected Behavior

e.g. Messages arrive really fast.

## Actual Behavior

e.g. Messages arrive really slowly.
6 vendor/cloud.google.com/go/logging/CHANGES.md generated vendored Normal file
@@ -0,0 +1,6 @@
# Changes

## v1.0.0

This is the first tag to carve out logging as its own module. See:
https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
0 vendor/github.com/moby/sys/LICENSE → vendor/cloud.google.com/go/logging/LICENSE generated vendored
15 vendor/cloud.google.com/go/logging/go.mod generated vendored
@@ -1,15 +0,0 @@
module cloud.google.com/go/logging

go 1.9

require (
	cloud.google.com/go v0.43.0
	github.com/golang/protobuf v1.3.1
	github.com/google/go-cmp v0.3.0
	github.com/googleapis/gax-go/v2 v2.0.5
	go.opencensus.io v0.22.0
	golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45
	google.golang.org/api v0.7.0
	google.golang.org/genproto v0.0.0-20190708153700-3bdd9d9f5532
	google.golang.org/grpc v1.21.1
)
150 vendor/cloud.google.com/go/regen-gapic.sh generated vendored Normal file
@@ -0,0 +1,150 @@
#!/bin/bash

# This script generates all GAPIC clients in this repo.
# One-time setup:
# cd path/to/googleapis # https://github.com/googleapis/googleapis
# virtualenv env
# . env/bin/activate
# pip install googleapis-artman
# deactivate
#
# Regenerate:
# cd path/to/googleapis
# . env/bin/activate
# $GOPATH/src/cloud.google.com/go/regen-gapic.sh
# deactivate
#
# Being in googleapis directory is important;
# that's where we find YAML files and where artman puts the "artman-genfiles" directory.
#
# NOTE: This script does not generate the "raw" gRPC client found in google.golang.org/genproto.
# To do that, use the regen.sh script in the genproto repo instead.

set -ex

APIS=(
google/api/expr/artman_cel.yaml
google/iam/artman_iam_admin.yaml
google/cloud/asset/artman_cloudasset_v1beta1.yaml
google/cloud/asset/artman_cloudasset_v1p2beta1.yaml
google/cloud/asset/artman_cloudasset_v1.yaml
google/iam/credentials/artman_iamcredentials_v1.yaml
google/cloud/automl/artman_automl_v1beta1.yaml
google/cloud/bigquery/datatransfer/artman_bigquerydatatransfer.yaml
google/cloud/bigquery/storage/artman_bigquerystorage_v1beta1.yaml
google/cloud/dataproc/artman_dataproc_v1.yaml
google/cloud/dataproc/artman_dataproc_v1beta2.yaml
google/cloud/dialogflow/artman_dialogflow_v2.yaml
google/cloud/iot/artman_cloudiot.yaml
google/cloud/irm/artman_irm_v1alpha2.yaml
google/cloud/kms/artman_cloudkms.yaml
google/cloud/language/artman_language_v1.yaml
google/cloud/language/artman_language_v1beta2.yaml
google/cloud/oslogin/artman_oslogin_v1.yaml
google/cloud/oslogin/artman_oslogin_v1beta.yaml
google/cloud/phishingprotection/artman_phishingprotection_v1beta1.yaml
google/cloud/recaptchaenterprise/artman_recaptchaenterprise_v1beta1.yaml
google/cloud/redis/artman_redis_v1beta1.yaml
google/cloud/redis/artman_redis_v1.yaml
google/cloud/scheduler/artman_cloudscheduler_v1beta1.yaml
google/cloud/scheduler/artman_cloudscheduler_v1.yaml
google/cloud/securitycenter/artman_securitycenter_v1beta1.yaml
google/cloud/securitycenter/artman_securitycenter_v1.yaml
google/cloud/speech/artman_speech_v1.yaml
google/cloud/speech/artman_speech_v1p1beta1.yaml
google/cloud/talent/artman_talent_v4beta1.yaml
google/cloud/tasks/artman_cloudtasks_v2beta2.yaml
google/cloud/tasks/artman_cloudtasks_v2beta3.yaml
google/cloud/tasks/artman_cloudtasks_v2.yaml
google/cloud/texttospeech/artman_texttospeech_v1.yaml
google/cloud/videointelligence/artman_videointelligence_v1.yaml
google/cloud/videointelligence/artman_videointelligence_v1beta1.yaml
google/cloud/videointelligence/artman_videointelligence_v1beta2.yaml
google/cloud/vision/artman_vision_v1.yaml
google/cloud/vision/artman_vision_v1p1beta1.yaml
google/cloud/webrisk/artman_webrisk_v1beta1.yaml
google/devtools/artman_clouddebugger.yaml
google/devtools/clouderrorreporting/artman_errorreporting.yaml
google/devtools/cloudtrace/artman_cloudtrace_v1.yaml
google/devtools/cloudtrace/artman_cloudtrace_v2.yaml

# The containeranalysis team wants manual changes in the auto-generated gapic.
# So, let's remove it from the autogen list until we're ready to spend energy
# generating and manually updating it.
# google/devtools/containeranalysis/artman_containeranalysis_v1.yaml

google/devtools/containeranalysis/artman_containeranalysis_v1beta1.yaml
google/firestore/artman_firestore.yaml
google/firestore/admin/artman_firestore_v1.yaml

# See containeranalysis note above.
|
||||
# grafeas/artman_grafeas_v1.yaml
|
||||
|
||||
google/logging/artman_logging.yaml
|
||||
google/longrunning/artman_longrunning.yaml
|
||||
google/monitoring/artman_monitoring.yaml
|
||||
google/privacy/dlp/artman_dlp_v2.yaml
|
||||
google/pubsub/artman_pubsub.yaml
|
||||
google/spanner/admin/database/artman_spanner_admin_database.yaml
|
||||
google/spanner/admin/instance/artman_spanner_admin_instance.yaml
|
||||
google/spanner/artman_spanner.yaml
|
||||
)
|
||||
|
||||
for api in "${APIS[@]}"; do
|
||||
rm -rf artman-genfiles/*
|
||||
artman --config "$api" generate go_gapic
|
||||
cp -r artman-genfiles/gapi-*/cloud.google.com/go/* $GOPATH/src/cloud.google.com/go/
|
||||
done
|
||||
|
||||
microgen() {
|
||||
input=$1
|
||||
options="${@:2}"
|
||||
|
||||
# see https://github.com/googleapis/gapic-generator-go/blob/master/README.md#docker-wrapper for details
|
||||
docker run \
|
||||
--mount type=bind,source=$(pwd),destination=/conf,readonly \
|
||||
--mount type=bind,source=$(pwd)/$input,destination=/in/$input,readonly \
|
||||
--mount type=bind,source=$GOPATH/src,destination=/out \
|
||||
--rm \
|
||||
gcr.io/gapic-images/gapic-generator-go:latest \
|
||||
$options
|
||||
}
|
||||
|
||||
MICROAPIS=(
|
||||
# input proto directory | gapic-generator-go flag | gapic-service-config flag
|
||||
# "google/cloud/language/v1 --go-gapic-package cloud.google.com/go/language/apiv1;language --gapic-service-config google/cloud/language/language_v1.yaml"
|
||||
)
|
||||
|
||||
for api in "${MICROAPIS[@]}"; do
|
||||
microgen $api
|
||||
done
|
||||
|
||||
pushd $GOPATH/src/cloud.google.com/go/
|
||||
gofmt -s -d -l -w . && goimports -w .
|
||||
|
||||
# NOTE(pongad): `sed -i` doesn't work on Macs, because -i option needs an argument.
|
||||
# `-i ''` doesn't work on GNU, since the empty string is treated as a file name.
|
||||
# So we just create the backup and delete it after.
|
||||
ver=$(date +%Y%m%d)
|
||||
git ls-files -mo | while read modified; do
|
||||
dir=${modified%/*.*}
|
||||
find . -path "*/$dir/doc.go" -exec sed -i.backup -e "s/^const versionClient.*/const versionClient = \"$ver\"/" '{}' +
|
||||
done
|
||||
popd
|
||||
|
||||
|
||||
HASMANUAL=(
|
||||
errorreporting/apiv1beta1
|
||||
firestore/apiv1beta1
|
||||
firestore/apiv1
|
||||
logging/apiv2
|
||||
longrunning/autogen
|
||||
pubsub/apiv1
|
||||
spanner/apiv1
|
||||
trace/apiv1
|
||||
)
|
||||
for dir in "${HASMANUAL[@]}"; do
|
||||
find "$GOPATH/src/cloud.google.com/go/$dir" -name '*.go' -exec sed -i.backup -e 's/setGoogleClientInfo/SetGoogleClientInfo/g' '{}' '+'
|
||||
done
|
||||
|
||||
find $GOPATH/src/cloud.google.com/go/ -name '*.backup' -delete
|
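The `sed -i.backup` loop above rewrites a `versionClient` constant in each regenerated client's doc.go; a hypothetical excerpt of what such a file contains, inferred only from the pattern the loop matches:

```go
// Hypothetical excerpt of a generated doc.go. After gofmt runs, the regen
// script resets this constant to the regeneration date ($(date +%Y%m%d)).
package language // e.g. cloud.google.com/go/language/apiv1

const versionClient = "20190801"
```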
5
vendor/github.com/Azure/go-ansiterm/go.mod
generated
vendored

@@ -1,5 +0,0 @@
module github.com/Azure/go-ansiterm

go 1.16

require golang.org/x/sys v0.0.0-20210616094352-59db8d763f22
5
vendor/github.com/BurntSushi/toml/.gitignore
generated
vendored
Normal file

@@ -0,0 +1,5 @@
TAGS
tags
.*.swp
tomlcheck/tomlcheck
toml.test
15
vendor/github.com/BurntSushi/toml/.travis.yml
generated
vendored
Normal file

@@ -0,0 +1,15 @@
language: go
go:
  - 1.1
  - 1.2
  - 1.3
  - 1.4
  - 1.5
  - 1.6
  - tip
install:
  - go install ./...
  - go get github.com/BurntSushi/toml-test
script:
  - export PATH="$PATH:$HOME/gopath/bin"
  - make test
3
vendor/github.com/BurntSushi/toml/COMPATIBLE
generated
vendored
Normal file

@@ -0,0 +1,3 @@
Compatible with TOML version
[v0.4.0](https://github.com/toml-lang/toml/blob/v0.4.0/versions/en/toml-v0.4.0.md)
21
vendor/github.com/BurntSushi/toml/COPYING
generated
vendored
Normal file

@@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2013 TOML authors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
19
vendor/github.com/BurntSushi/toml/Makefile
generated
vendored
Normal file

@@ -0,0 +1,19 @@
install:
	go install ./...

test: install
	go test -v
	toml-test toml-test-decoder
	toml-test -encoder toml-test-encoder

fmt:
	gofmt -w *.go */*.go
	colcheck *.go */*.go

tags:
	find ./ -name '*.go' -print0 | xargs -0 gotags > TAGS

push:
	git push origin master
	git push github master
218
vendor/github.com/BurntSushi/toml/README.md
generated
vendored
Normal file

@@ -0,0 +1,218 @@
## TOML parser and encoder for Go with reflection

TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml`
packages. This package also supports the `encoding.TextUnmarshaler` and
`encoding.TextMarshaler` interfaces so that you can define custom data
representations. (There is an example of this below.)

Spec: https://github.com/toml-lang/toml

Compatible with TOML version
[v0.4.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md)

Documentation: https://godoc.org/github.com/BurntSushi/toml

Installation:

```bash
go get github.com/BurntSushi/toml
```

Try the toml validator:

```bash
go get github.com/BurntSushi/toml/cmd/tomlv
tomlv some-toml-file.toml
```

[Build Status](https://travis-ci.org/BurntSushi/toml) [GoDoc](https://godoc.org/github.com/BurntSushi/toml)

### Testing

This package passes all tests in
[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
and the encoder.

### Examples

This package works similarly to how the Go standard library handles `XML`
and `JSON`. Namely, data is loaded into Go values via reflection.

For the simplest example, consider some TOML file as just a list of keys
and values:

```toml
Age = 25
Cats = [ "Cauchy", "Plato" ]
Pi = 3.14
Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```

Which could be defined in Go as:

```go
type Config struct {
	Age        int
	Cats       []string
	Pi         float64
	Perfection []int
	DOB        time.Time // requires `import time`
}
```

And then decoded with:

```go
var conf Config
if _, err := toml.Decode(tomlData, &conf); err != nil {
	// handle error
}
```

You can also use struct tags if your struct field name doesn't map to a TOML
key value directly:

```toml
some_key_NAME = "wat"
```

```go
type TOML struct {
	ObscureKey string `toml:"some_key_NAME"`
}
```

### Using the `encoding.TextUnmarshaler` interface

Here's an example that automatically parses duration strings into
`time.Duration` values:

```toml
[[song]]
name = "Thunder Road"
duration = "4m49s"

[[song]]
name = "Stairway to Heaven"
duration = "8m03s"
```

Which can be decoded with:

```go
type song struct {
	Name     string
	Duration duration
}
type songs struct {
	Song []song
}
var favorites songs
if _, err := toml.Decode(blob, &favorites); err != nil {
	log.Fatal(err)
}

for _, s := range favorites.Song {
	fmt.Printf("%s (%s)\n", s.Name, s.Duration)
}
```

And you'll also need a `duration` type that satisfies the
`encoding.TextUnmarshaler` interface:

```go
type duration struct {
	time.Duration
}

func (d *duration) UnmarshalText(text []byte) error {
	var err error
	d.Duration, err = time.ParseDuration(string(text))
	return err
}
```

### More complex usage

Here's an example of how to load the example from the official spec page:

```toml
# This is a TOML document. Boom.

title = "TOML Example"

[owner]
name = "Tom Preston-Werner"
organization = "GitHub"
bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
dob = 1979-05-27T07:32:00Z # First class dates? Why not?

[database]
server = "192.168.1.1"
ports = [ 8001, 8001, 8002 ]
connection_max = 5000
enabled = true

[servers]

# You can indent as you please. Tabs or spaces. TOML don't care.
[servers.alpha]
ip = "10.0.0.1"
dc = "eqdc10"

[servers.beta]
ip = "10.0.0.2"
dc = "eqdc10"

[clients]
data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it

# Line breaks are OK when inside arrays
hosts = [
  "alpha",
  "omega"
]
```

And the corresponding Go types are:

```go
type tomlConfig struct {
	Title   string
	Owner   ownerInfo
	DB      database `toml:"database"`
	Servers map[string]server
	Clients clients
}

type ownerInfo struct {
	Name string
	Org  string `toml:"organization"`
	Bio  string
	DOB  time.Time
}

type database struct {
	Server  string
	Ports   []int
	ConnMax int `toml:"connection_max"`
	Enabled bool
}

type server struct {
	IP string
	DC string
}

type clients struct {
	Data  [][]interface{}
	Hosts []string
}
```

Note that a case insensitive match will be tried if an exact match can't be
found.

A working example of the above can be found in `_examples/example.{go,toml}`.
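The README above focuses on decoding; for completeness, here is a small sketch of the reverse direction with this package's `Encoder`, reusing the `Config` type from the first example (the field values are made up for illustration):

```go
package main

import (
	"log"
	"os"
	"time"

	"github.com/BurntSushi/toml"
)

type Config struct {
	Age        int
	Cats       []string
	Pi         float64
	Perfection []int
	DOB        time.Time
}

func main() {
	conf := Config{
		Age:        25,
		Cats:       []string{"Cauchy", "Plato"},
		Pi:         3.14,
		Perfection: []int{6, 28, 496, 8128},
		DOB:        time.Date(1987, 7, 5, 5, 45, 0, 0, time.UTC),
	}
	// NewEncoder writes TOML to any io.Writer; plain key/value pairs are
	// emitted before sub-tables, as described in encode.go below.
	if err := toml.NewEncoder(os.Stdout).Encode(conf); err != nil {
		log.Fatal(err)
	}
}
```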
509
vendor/github.com/BurntSushi/toml/decode.go
generated
vendored
Normal file

@@ -0,0 +1,509 @@
package toml

import (
	"fmt"
	"io"
	"io/ioutil"
	"math"
	"reflect"
	"strings"
	"time"
)

func e(format string, args ...interface{}) error {
	return fmt.Errorf("toml: "+format, args...)
}

// Unmarshaler is the interface implemented by objects that can unmarshal a
// TOML description of themselves.
type Unmarshaler interface {
	UnmarshalTOML(interface{}) error
}

// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
func Unmarshal(p []byte, v interface{}) error {
	_, err := Decode(string(p), v)
	return err
}

// Primitive is a TOML value that hasn't been decoded into a Go value.
// When using the various `Decode*` functions, the type `Primitive` may
// be given to any value, and its decoding will be delayed.
//
// A `Primitive` value can be decoded using the `PrimitiveDecode` function.
//
// The underlying representation of a `Primitive` value is subject to change.
// Do not rely on it.
//
// N.B. Primitive values are still parsed, so using them will only avoid
// the overhead of reflection. They can be useful when you don't know the
// exact type of TOML data until run time.
type Primitive struct {
	undecoded interface{}
	context   Key
}

// DEPRECATED!
//
// Use MetaData.PrimitiveDecode instead.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
	md := MetaData{decoded: make(map[string]bool)}
	return md.unify(primValue.undecoded, rvalue(v))
}

// PrimitiveDecode is just like the other `Decode*` functions, except it
// decodes a TOML value that has already been parsed. Valid primitive values
// can *only* be obtained from values filled by the decoder functions,
// including this method. (i.e., `v` may contain more `Primitive`
// values.)
//
// Meta data for primitive values is included in the meta data returned by
// the `Decode*` functions with one exception: keys returned by the Undecoded
// method will only reflect keys that were decoded. Namely, any keys hidden
// behind a Primitive will be considered undecoded. Executing this method will
// update the undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
	md.context = primValue.context
	defer func() { md.context = nil }()
	return md.unify(primValue.undecoded, rvalue(v))
}

// Decode will decode the contents of `data` in TOML format into a pointer
// `v`.
//
// TOML hashes correspond to Go structs or maps. (Dealer's choice. They can be
// used interchangeably.)
//
// TOML arrays of tables correspond to either a slice of structs or a slice
// of maps.
//
// TOML datetimes correspond to Go `time.Time` values.
//
// All other TOML types (float, string, int, bool and array) correspond
// to the obvious Go types.
//
// An exception to the above rules is if a type implements the
// encoding.TextUnmarshaler interface. In this case, any primitive TOML value
// (floats, strings, integers, booleans and datetimes) will be converted to
// a byte string and given to the value's UnmarshalText method. See the
// Unmarshaler example for a demonstration with time duration strings.
//
// Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go
// struct. The special `toml` struct tag may be used to map TOML keys to
// struct fields that don't match the key name exactly. (See the example.)
// A case insensitive match to struct names will be tried if an exact match
// can't be found.
//
// The mapping between TOML values and Go values is loose. That is, there
// may exist TOML values that cannot be placed into your representation, and
// there may be parts of your representation that do not correspond to
// TOML values. This loose mapping can be made stricter by using the IsDefined
// and/or Undecoded methods on the MetaData returned.
//
// This decoder will not handle cyclic types. If a cyclic type is passed,
// `Decode` will not terminate.
func Decode(data string, v interface{}) (MetaData, error) {
	rv := reflect.ValueOf(v)
	if rv.Kind() != reflect.Ptr {
		return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
	}
	if rv.IsNil() {
		return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
	}
	p, err := parse(data)
	if err != nil {
		return MetaData{}, err
	}
	md := MetaData{
		p.mapping, p.types, p.ordered,
		make(map[string]bool, len(p.ordered)), nil,
	}
	return md, md.unify(p.mapping, indirect(rv))
}

// DecodeFile is just like Decode, except it will automatically read the
// contents of the file at `fpath` and decode it for you.
func DecodeFile(fpath string, v interface{}) (MetaData, error) {
	bs, err := ioutil.ReadFile(fpath)
	if err != nil {
		return MetaData{}, err
	}
	return Decode(string(bs), v)
}

// DecodeReader is just like Decode, except it will consume all bytes
// from the reader and decode it for you.
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
	bs, err := ioutil.ReadAll(r)
	if err != nil {
		return MetaData{}, err
	}
	return Decode(string(bs), v)
}

// unify performs a sort of type unification based on the structure of `rv`,
// which is the client representation.
//
// Any type mismatch produces an error. Finding a type that we don't know
// how to handle produces an unsupported type error.
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {

	// Special case. Look for a `Primitive` value.
	if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
		// Save the undecoded data and the key context into the primitive
		// value.
		context := make(Key, len(md.context))
		copy(context, md.context)
		rv.Set(reflect.ValueOf(Primitive{
			undecoded: data,
			context:   context,
		}))
		return nil
	}

	// Special case. Unmarshaler Interface support.
	if rv.CanAddr() {
		if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
			return v.UnmarshalTOML(data)
		}
	}

	// Special case. Handle time.Time values specifically.
	// TODO: Remove this code when we decide to drop support for Go 1.1.
	// This isn't necessary in Go 1.2 because time.Time satisfies the encoding
	// interfaces.
	if rv.Type().AssignableTo(rvalue(time.Time{}).Type()) {
		return md.unifyDatetime(data, rv)
	}

	// Special case. Look for a value satisfying the TextUnmarshaler interface.
	if v, ok := rv.Interface().(TextUnmarshaler); ok {
		return md.unifyText(data, v)
	}
	// BUG(burntsushi)
	// The behavior here is incorrect whenever a Go type satisfies the
	// encoding.TextUnmarshaler interface but also corresponds to a TOML
	// hash or array. In particular, the unmarshaler should only be applied
	// to primitive TOML values. But at this point, it will be applied to
	// all kinds of values and produce an incorrect error whenever those values
	// are hashes or arrays (including arrays of tables).

	k := rv.Kind()

	// laziness
	if k >= reflect.Int && k <= reflect.Uint64 {
		return md.unifyInt(data, rv)
	}
	switch k {
	case reflect.Ptr:
		elem := reflect.New(rv.Type().Elem())
		err := md.unify(data, reflect.Indirect(elem))
		if err != nil {
			return err
		}
		rv.Set(elem)
		return nil
	case reflect.Struct:
		return md.unifyStruct(data, rv)
	case reflect.Map:
		return md.unifyMap(data, rv)
	case reflect.Array:
		return md.unifyArray(data, rv)
	case reflect.Slice:
		return md.unifySlice(data, rv)
	case reflect.String:
		return md.unifyString(data, rv)
	case reflect.Bool:
		return md.unifyBool(data, rv)
	case reflect.Interface:
		// we only support empty interfaces.
		if rv.NumMethod() > 0 {
			return e("unsupported type %s", rv.Type())
		}
		return md.unifyAnything(data, rv)
	case reflect.Float32:
		fallthrough
	case reflect.Float64:
		return md.unifyFloat64(data, rv)
	}
	return e("unsupported type %s", rv.Kind())
}

func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
	tmap, ok := mapping.(map[string]interface{})
	if !ok {
		if mapping == nil {
			return nil
		}
		return e("type mismatch for %s: expected table but found %T",
			rv.Type().String(), mapping)
	}

	for key, datum := range tmap {
		var f *field
		fields := cachedTypeFields(rv.Type())
		for i := range fields {
			ff := &fields[i]
			if ff.name == key {
				f = ff
				break
			}
			if f == nil && strings.EqualFold(ff.name, key) {
				f = ff
			}
		}
		if f != nil {
			subv := rv
			for _, i := range f.index {
				subv = indirect(subv.Field(i))
			}
			if isUnifiable(subv) {
				md.decoded[md.context.add(key).String()] = true
				md.context = append(md.context, key)
				if err := md.unify(datum, subv); err != nil {
					return err
				}
				md.context = md.context[0 : len(md.context)-1]
			} else if f.name != "" {
				// Bad user! No soup for you!
				return e("cannot write unexported field %s.%s",
					rv.Type().String(), f.name)
			}
		}
	}
	return nil
}

func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
	tmap, ok := mapping.(map[string]interface{})
	if !ok {
		if tmap == nil {
			return nil
		}
		return badtype("map", mapping)
	}
	if rv.IsNil() {
		rv.Set(reflect.MakeMap(rv.Type()))
	}
	for k, v := range tmap {
		md.decoded[md.context.add(k).String()] = true
		md.context = append(md.context, k)

		rvkey := indirect(reflect.New(rv.Type().Key()))
		rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
		if err := md.unify(v, rvval); err != nil {
			return err
		}
		md.context = md.context[0 : len(md.context)-1]

		rvkey.SetString(k)
		rv.SetMapIndex(rvkey, rvval)
	}
	return nil
}

func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
	datav := reflect.ValueOf(data)
	if datav.Kind() != reflect.Slice {
		if !datav.IsValid() {
			return nil
		}
		return badtype("slice", data)
	}
	sliceLen := datav.Len()
	if sliceLen != rv.Len() {
		return e("expected array length %d; got TOML array of length %d",
			rv.Len(), sliceLen)
	}
	return md.unifySliceArray(datav, rv)
}

func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
	datav := reflect.ValueOf(data)
	if datav.Kind() != reflect.Slice {
		if !datav.IsValid() {
			return nil
		}
		return badtype("slice", data)
	}
	n := datav.Len()
	if rv.IsNil() || rv.Cap() < n {
		rv.Set(reflect.MakeSlice(rv.Type(), n, n))
	}
	rv.SetLen(n)
	return md.unifySliceArray(datav, rv)
}

func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
	sliceLen := data.Len()
	for i := 0; i < sliceLen; i++ {
		v := data.Index(i).Interface()
		sliceval := indirect(rv.Index(i))
		if err := md.unify(v, sliceval); err != nil {
			return err
		}
	}
	return nil
}

func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
	if _, ok := data.(time.Time); ok {
		rv.Set(reflect.ValueOf(data))
		return nil
	}
	return badtype("time.Time", data)
}

func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
	if s, ok := data.(string); ok {
		rv.SetString(s)
		return nil
	}
	return badtype("string", data)
}

func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
	if num, ok := data.(float64); ok {
		switch rv.Kind() {
		case reflect.Float32:
			fallthrough
		case reflect.Float64:
			rv.SetFloat(num)
		default:
			panic("bug")
		}
		return nil
	}
	return badtype("float", data)
}

func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
	if num, ok := data.(int64); ok {
		if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
			switch rv.Kind() {
			case reflect.Int, reflect.Int64:
				// No bounds checking necessary.
			case reflect.Int8:
				if num < math.MinInt8 || num > math.MaxInt8 {
					return e("value %d is out of range for int8", num)
				}
			case reflect.Int16:
				if num < math.MinInt16 || num > math.MaxInt16 {
					return e("value %d is out of range for int16", num)
				}
			case reflect.Int32:
				if num < math.MinInt32 || num > math.MaxInt32 {
					return e("value %d is out of range for int32", num)
				}
			}
			rv.SetInt(num)
		} else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
			unum := uint64(num)
			switch rv.Kind() {
			case reflect.Uint, reflect.Uint64:
				// No bounds checking necessary.
			case reflect.Uint8:
				if num < 0 || unum > math.MaxUint8 {
					return e("value %d is out of range for uint8", num)
				}
			case reflect.Uint16:
				if num < 0 || unum > math.MaxUint16 {
					return e("value %d is out of range for uint16", num)
				}
			case reflect.Uint32:
				if num < 0 || unum > math.MaxUint32 {
					return e("value %d is out of range for uint32", num)
				}
			}
			rv.SetUint(unum)
		} else {
			panic("unreachable")
		}
		return nil
	}
	return badtype("integer", data)
}

func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
	if b, ok := data.(bool); ok {
		rv.SetBool(b)
		return nil
	}
	return badtype("boolean", data)
}

func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
	rv.Set(reflect.ValueOf(data))
	return nil
}

func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
	var s string
	switch sdata := data.(type) {
	case TextMarshaler:
		text, err := sdata.MarshalText()
		if err != nil {
			return err
		}
		s = string(text)
	case fmt.Stringer:
		s = sdata.String()
	case string:
		s = sdata
	case bool:
		s = fmt.Sprintf("%v", sdata)
	case int64:
		s = fmt.Sprintf("%d", sdata)
	case float64:
		s = fmt.Sprintf("%f", sdata)
	default:
		return badtype("primitive (string-like)", data)
	}
	if err := v.UnmarshalText([]byte(s)); err != nil {
		return err
	}
	return nil
}

// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v interface{}) reflect.Value {
	return indirect(reflect.ValueOf(v))
}

// indirect returns the value pointed to by a pointer.
// Pointers are followed until the value is not a pointer.
// New values are allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of
// interest to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
	if v.Kind() != reflect.Ptr {
		if v.CanSet() {
			pv := v.Addr()
			if _, ok := pv.Interface().(TextUnmarshaler); ok {
				return pv
			}
		}
		return v
	}
	if v.IsNil() {
		v.Set(reflect.New(v.Type().Elem()))
	}
	return indirect(reflect.Indirect(v))
}

func isUnifiable(rv reflect.Value) bool {
	if rv.CanSet() {
		return true
	}
	if _, ok := rv.Interface().(TextUnmarshaler); ok {
		return true
	}
	return false
}

func badtype(expected string, data interface{}) error {
	return e("cannot load TOML value of type %T into a Go %s", data, expected)
}
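To make the Primitive/MetaData machinery above concrete, here is a short sketch of delayed decoding; the TOML document and type names are made up for illustration:

```go
// Sketch: decode a document whose "config" table's type isn't known until
// runtime, using Primitive to delay decoding of that sub-tree.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

const blob = `
kind = "server"

[config]
addr = "127.0.0.1:8080"
`

func main() {
	var doc struct {
		Kind   string
		Config toml.Primitive // parsed, but not yet decoded
	}
	md, err := toml.Decode(blob, &doc)
	if err != nil {
		log.Fatal(err)
	}
	if doc.Kind == "server" {
		var cfg struct{ Addr string }
		// PrimitiveDecode finishes decoding once the target type is known.
		if err := md.PrimitiveDecode(doc.Config, &cfg); err != nil {
			log.Fatal(err)
		}
		fmt.Println(cfg.Addr)
	}
}
```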
121
vendor/github.com/BurntSushi/toml/decode_meta.go
generated
vendored
Normal file

@@ -0,0 +1,121 @@
package toml

import "strings"

// MetaData allows access to meta information about TOML data that may not
// be inferrable via reflection. In particular, whether a key has been defined
// and the TOML type of a key.
type MetaData struct {
	mapping map[string]interface{}
	types   map[string]tomlType
	keys    []Key
	decoded map[string]bool
	context Key // Used only during decoding.
}

// IsDefined returns true if the key given exists in the TOML data. The key
// should be specified hierarchially. e.g.,
//
//	// access the TOML key 'a.b.c'
//	IsDefined("a", "b", "c")
//
// IsDefined will return false if an empty key given. Keys are case sensitive.
func (md *MetaData) IsDefined(key ...string) bool {
	if len(key) == 0 {
		return false
	}

	var hash map[string]interface{}
	var ok bool
	var hashOrVal interface{} = md.mapping
	for _, k := range key {
		if hash, ok = hashOrVal.(map[string]interface{}); !ok {
			return false
		}
		if hashOrVal, ok = hash[k]; !ok {
			return false
		}
	}
	return true
}

// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that
// does not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
	fullkey := strings.Join(key, ".")
	if typ, ok := md.types[fullkey]; ok {
		return typ.typeString()
	}
	return ""
}

// Key is the type of any TOML key, including key groups. Use (MetaData).Keys
// to get values of this type.
type Key []string

func (k Key) String() string {
	return strings.Join(k, ".")
}

func (k Key) maybeQuotedAll() string {
	var ss []string
	for i := range k {
		ss = append(ss, k.maybeQuoted(i))
	}
	return strings.Join(ss, ".")
}

func (k Key) maybeQuoted(i int) string {
	quote := false
	for _, c := range k[i] {
		if !isBareKeyChar(c) {
			quote = true
			break
		}
	}
	if quote {
		return "\"" + strings.Replace(k[i], "\"", "\\\"", -1) + "\""
	}
	return k[i]
}

func (k Key) add(piece string) Key {
	newKey := make(Key, len(k)+1)
	copy(newKey, k)
	newKey[len(k)] = piece
	return newKey
}

// Keys returns a slice of every key in the TOML data, including key groups.
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific.
//
// The list will have the same order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
	return md.keys
}

// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a Primitive value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
	undecoded := make([]Key, 0, len(md.keys))
	for _, key := range md.keys {
		if !md.decoded[key.String()] {
			undecoded = append(undecoded, key)
		}
	}
	return undecoded
}
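A small usage sketch of the MetaData accessors defined above (IsDefined, Type, Keys, Undecoded), with a made-up two-key document:

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

func main() {
	var v struct{ Name string }
	md, err := toml.Decode("name = \"toml\"\nextra = 1\n", &v)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(md.IsDefined("name")) // true
	fmt.Println(md.Type("extra"))     // "Integer"
	fmt.Println(md.Keys())            // [name extra]
	fmt.Println(md.Undecoded())       // [extra]: no matching struct field
}
```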
27
vendor/github.com/BurntSushi/toml/doc.go
generated
vendored
Normal file

@@ -0,0 +1,27 @@
/*
Package toml provides facilities for decoding and encoding TOML configuration
files via reflection. There is also support for delaying decoding with
the Primitive type, and querying the set of keys in a TOML document with the
MetaData type.

The specification implemented: https://github.com/toml-lang/toml

The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
whether a file is a valid TOML document. It can also be used to print the
type of each key in a TOML document.

Testing

There are two important types of tests used for this package. The first is
contained inside '*_test.go' files and uses the standard Go unit testing
framework. These tests are primarily devoted to holistically testing the
decoder and encoder.

The second type of testing is used to verify the implementation's adherence
to the TOML specification. These tests have been factored into their own
project: https://github.com/BurntSushi/toml-test

The reason the tests are in a separate project is so that they can be used by
any implementation of TOML. Namely, it is language agnostic.
*/
package toml
568
vendor/github.com/BurntSushi/toml/encode.go
generated
vendored
Normal file

@@ -0,0 +1,568 @@
package toml

import (
	"bufio"
	"errors"
	"fmt"
	"io"
	"reflect"
	"sort"
	"strconv"
	"strings"
	"time"
)

type tomlEncodeError struct{ error }

var (
	errArrayMixedElementTypes = errors.New(
		"toml: cannot encode array with mixed element types")
	errArrayNilElement = errors.New(
		"toml: cannot encode array with nil element")
	errNonString = errors.New(
		"toml: cannot encode a map with non-string key type")
	errAnonNonStruct = errors.New(
		"toml: cannot encode an anonymous field that is not a struct")
	errArrayNoTable = errors.New(
		"toml: TOML array element cannot contain a table")
	errNoKey = errors.New(
		"toml: top-level values must be Go maps or structs")
	errAnything = errors.New("") // used in testing
)

var quotedReplacer = strings.NewReplacer(
	"\t", "\\t",
	"\n", "\\n",
	"\r", "\\r",
	"\"", "\\\"",
	"\\", "\\\\",
)

// Encoder controls the encoding of Go values to a TOML document to some
// io.Writer.
//
// The indentation level can be controlled with the Indent field.
type Encoder struct {
	// A single indentation level. By default it is two spaces.
	Indent string

	// hasWritten is whether we have written any output to w yet.
	hasWritten bool
	w          *bufio.Writer
}

// NewEncoder returns a TOML encoder that encodes Go values to the io.Writer
// given. By default, a single indentation level is 2 spaces.
func NewEncoder(w io.Writer) *Encoder {
	return &Encoder{
		w:      bufio.NewWriter(w),
		Indent: "  ",
	}
}

// Encode writes a TOML representation of the Go value to the underlying
// io.Writer. If the value given cannot be encoded to a valid TOML document,
// then an error is returned.
//
// The mapping between Go values and TOML values should be precisely the same
// as for the Decode* functions. Similarly, the TextMarshaler interface is
// supported by encoding the resulting bytes as strings. (If you want to write
// arbitrary binary data then you will need to use something like base64 since
// TOML does not have any binary types.)
//
// When encoding TOML hashes (i.e., Go maps or structs), keys without any
// sub-hashes are encoded first.
//
// If a Go map is encoded, then its keys are sorted alphabetically for
// deterministic output. More control over this behavior may be provided if
// there is demand for it.
//
// Encoding Go values without a corresponding TOML representation---like map
// types with non-string keys---will cause an error to be returned. Similarly
// for mixed arrays/slices, arrays/slices with nil elements, embedded
// non-struct types and nested slices containing maps or structs.
// (e.g., [][]map[string]string is not allowed but []map[string]string is OK
// and so is []map[string][]string.)
func (enc *Encoder) Encode(v interface{}) error {
	rv := eindirect(reflect.ValueOf(v))
	if err := enc.safeEncode(Key([]string{}), rv); err != nil {
		return err
	}
	return enc.w.Flush()
}

func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
	defer func() {
		if r := recover(); r != nil {
			if terr, ok := r.(tomlEncodeError); ok {
				err = terr.error
				return
			}
			panic(r)
		}
	}()
	enc.encode(key, rv)
	return nil
}

func (enc *Encoder) encode(key Key, rv reflect.Value) {
	// Special case. Time needs to be in ISO8601 format.
	// Special case. If we can marshal the type to text, then we used that.
	// Basically, this prevents the encoder for handling these types as
	// generic structs (or whatever the underlying type of a TextMarshaler is).
	switch rv.Interface().(type) {
	case time.Time, TextMarshaler:
		enc.keyEqElement(key, rv)
		return
	}

	k := rv.Kind()
	switch k {
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
		reflect.Int64,
		reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
		reflect.Uint64,
		reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
		enc.keyEqElement(key, rv)
	case reflect.Array, reflect.Slice:
		if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
			enc.eArrayOfTables(key, rv)
		} else {
			enc.keyEqElement(key, rv)
		}
	case reflect.Interface:
		if rv.IsNil() {
			return
		}
		enc.encode(key, rv.Elem())
	case reflect.Map:
		if rv.IsNil() {
			return
		}
		enc.eTable(key, rv)
	case reflect.Ptr:
		if rv.IsNil() {
			return
		}
		enc.encode(key, rv.Elem())
	case reflect.Struct:
		enc.eTable(key, rv)
	default:
		panic(e("unsupported type for key '%s': %s", key, k))
	}
}

// eElement encodes any value that can be an array element (primitives and
// arrays).
func (enc *Encoder) eElement(rv reflect.Value) {
	switch v := rv.Interface().(type) {
	case time.Time:
		// Special case time.Time as a primitive. Has to come before
		// TextMarshaler below because time.Time implements
		// encoding.TextMarshaler, but we need to always use UTC.
		enc.wf(v.UTC().Format("2006-01-02T15:04:05Z"))
		return
	case TextMarshaler:
		// Special case. Use text marshaler if it's available for this value.
		if s, err := v.MarshalText(); err != nil {
			encPanic(err)
		} else {
			enc.writeQuoted(string(s))
		}
		return
	}
	switch rv.Kind() {
	case reflect.Bool:
		enc.wf(strconv.FormatBool(rv.Bool()))
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
		reflect.Int64:
		enc.wf(strconv.FormatInt(rv.Int(), 10))
	case reflect.Uint, reflect.Uint8, reflect.Uint16,
		reflect.Uint32, reflect.Uint64:
		enc.wf(strconv.FormatUint(rv.Uint(), 10))
	case reflect.Float32:
		enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 32)))
	case reflect.Float64:
		enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 64)))
	case reflect.Array, reflect.Slice:
		enc.eArrayOrSliceElement(rv)
	case reflect.Interface:
		enc.eElement(rv.Elem())
	case reflect.String:
		enc.writeQuoted(rv.String())
	default:
		panic(e("unexpected primitive type: %s", rv.Kind()))
	}
}

// By the TOML spec, all floats must have a decimal with at least one
// number on either side.
func floatAddDecimal(fstr string) string {
	if !strings.Contains(fstr, ".") {
		return fstr + ".0"
	}
	return fstr
}

func (enc *Encoder) writeQuoted(s string) {
	enc.wf("\"%s\"", quotedReplacer.Replace(s))
}

func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
	length := rv.Len()
	enc.wf("[")
	for i := 0; i < length; i++ {
		elem := rv.Index(i)
		enc.eElement(elem)
		if i != length-1 {
			enc.wf(", ")
		}
	}
	enc.wf("]")
}

func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
	if len(key) == 0 {
		encPanic(errNoKey)
	}
	for i := 0; i < rv.Len(); i++ {
		trv := rv.Index(i)
		if isNil(trv) {
			continue
		}
		panicIfInvalidKey(key)
		enc.newline()
		enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
		enc.newline()
		enc.eMapOrStruct(key, trv)
	}
}

func (enc *Encoder) eTable(key Key, rv reflect.Value) {
	panicIfInvalidKey(key)
	if len(key) == 1 {
		// Output an extra newline between top-level tables.
		// (The newline isn't written if nothing else has been written though.)
		enc.newline()
	}
	if len(key) > 0 {
		enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
		enc.newline()
	}
	enc.eMapOrStruct(key, rv)
}

func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value) {
	switch rv := eindirect(rv); rv.Kind() {
	case reflect.Map:
		enc.eMap(key, rv)
	case reflect.Struct:
		enc.eStruct(key, rv)
	default:
		panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
	}
}

func (enc *Encoder) eMap(key Key, rv reflect.Value) {
	rt := rv.Type()
	if rt.Key().Kind() != reflect.String {
		encPanic(errNonString)
	}

	// Sort keys so that we have deterministic output. And write keys directly
	// underneath this key first, before writing sub-structs or sub-maps.
	var mapKeysDirect, mapKeysSub []string
	for _, mapKey := range rv.MapKeys() {
		k := mapKey.String()
		if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
			mapKeysSub = append(mapKeysSub, k)
		} else {
			mapKeysDirect = append(mapKeysDirect, k)
		}
	}

	var writeMapKeys = func(mapKeys []string) {
		sort.Strings(mapKeys)
		for _, mapKey := range mapKeys {
			mrv := rv.MapIndex(reflect.ValueOf(mapKey))
			if isNil(mrv) {
				// Don't write anything for nil fields.
				continue
			}
			enc.encode(key.add(mapKey), mrv)
		}
	}
	writeMapKeys(mapKeysDirect)
	writeMapKeys(mapKeysSub)
}

func (enc *Encoder) eStruct(key Key, rv reflect.Value) {
	// Write keys for fields directly under this key first, because if we write
	// a field that creates a new table, then all keys under it will be in that
	// table (not the one we're writing here).
	rt := rv.Type()
	var fieldsDirect, fieldsSub [][]int
	var addFields func(rt reflect.Type, rv reflect.Value, start []int)
	addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
		for i := 0; i < rt.NumField(); i++ {
			f := rt.Field(i)
			// skip unexported fields
			if f.PkgPath != "" && !f.Anonymous {
				continue
			}
			frv := rv.Field(i)
			if f.Anonymous {
				t := f.Type
				switch t.Kind() {
				case reflect.Struct:
					// Treat anonymous struct fields with
					// tag names as though they are not
					// anonymous, like encoding/json does.
					if getOptions(f.Tag).name == "" {
						addFields(t, frv, f.Index)
						continue
					}
				case reflect.Ptr:
					if t.Elem().Kind() == reflect.Struct &&
						getOptions(f.Tag).name == "" {
						if !frv.IsNil() {
							addFields(t.Elem(), frv.Elem(), f.Index)
						}
						continue
					}
					// Fall through to the normal field encoding logic below
					// for non-struct anonymous fields.
				}
			}

			if typeIsHash(tomlTypeOfGo(frv)) {
				fieldsSub = append(fieldsSub, append(start, f.Index...))
			} else {
				fieldsDirect = append(fieldsDirect, append(start, f.Index...))
			}
		}
	}
	addFields(rt, rv, nil)

	var writeFields = func(fields [][]int) {
		for _, fieldIndex := range fields {
			sft := rt.FieldByIndex(fieldIndex)
			sf := rv.FieldByIndex(fieldIndex)
			if isNil(sf) {
				// Don't write anything for nil fields.
				continue
			}

			opts := getOptions(sft.Tag)
			if opts.skip {
				continue
			}
			keyName := sft.Name
			if opts.name != "" {
				keyName = opts.name
			}
			if opts.omitempty && isEmpty(sf) {
				continue
			}
			if opts.omitzero && isZero(sf) {
				continue
			}

			enc.encode(key.add(keyName), sf)
		}
	}
	writeFields(fieldsDirect)
	writeFields(fieldsSub)
}

// tomlTypeName returns the TOML type name of the Go value's type. It is
// used to determine whether the types of array elements are mixed (which is
// forbidden). If the Go value is nil, then it is illegal for it to be an array
// element, and valueIsNil is returned as true.

// Returns the TOML type of a Go value. The type may be `nil`, which means
// no concrete TOML type could be found.
func tomlTypeOfGo(rv reflect.Value) tomlType {
	if isNil(rv) || !rv.IsValid() {
		return nil
	}
	switch rv.Kind() {
	case reflect.Bool:
		return tomlBool
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
		reflect.Int64,
		reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
		reflect.Uint64:
		return tomlInteger
	case reflect.Float32, reflect.Float64:
		return tomlFloat
	case reflect.Array, reflect.Slice:
		if typeEqual(tomlHash, tomlArrayType(rv)) {
			return tomlArrayHash
		}
		return tomlArray
	case reflect.Ptr, reflect.Interface:
		return tomlTypeOfGo(rv.Elem())
	case reflect.String:
		return tomlString
	case reflect.Map:
		return tomlHash
	case reflect.Struct:
		switch rv.Interface().(type) {
		case time.Time:
			return tomlDatetime
		case TextMarshaler:
			return tomlString
		default:
			return tomlHash
		}
	default:
		panic("unexpected reflect.Kind: " + rv.Kind().String())
	}
}

// tomlArrayType returns the element type of a TOML array. The type returned
// may be nil if it cannot be determined (e.g., a nil slice or a zero length
// slize). This function may also panic if it finds a type that cannot be
// expressed in TOML (such as nil elements, heterogeneous arrays or directly
// nested arrays of tables).
func tomlArrayType(rv reflect.Value) tomlType {
	if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
		return nil
	}
	firstType := tomlTypeOfGo(rv.Index(0))
	if firstType == nil {
		encPanic(errArrayNilElement)
	}

	rvlen := rv.Len()
	for i := 1; i < rvlen; i++ {
		elem := rv.Index(i)
		switch elemType := tomlTypeOfGo(elem); {
		case elemType == nil:
			encPanic(errArrayNilElement)
		case !typeEqual(firstType, elemType):
			encPanic(errArrayMixedElementTypes)
		}
	}
	// If we have a nested array, then we must make sure that the nested
	// array contains ONLY primitives.
	// This checks arbitrarily nested arrays.
	if typeEqual(firstType, tomlArray) || typeEqual(firstType, tomlArrayHash) {
		nest := tomlArrayType(eindirect(rv.Index(0)))
		if typeEqual(nest, tomlHash) || typeEqual(nest, tomlArrayHash) {
			encPanic(errArrayNoTable)
		}
	}
	return firstType
}

type tagOptions struct {
	skip      bool // "-"
	name      string
	omitempty bool
	omitzero  bool
}

func getOptions(tag reflect.StructTag) tagOptions {
	t := tag.Get("toml")
	if t == "-" {
		return tagOptions{skip: true}
	}
	var opts tagOptions
	parts := strings.Split(t, ",")
	opts.name = parts[0]
	for _, s := range parts[1:] {
		switch s {
		case "omitempty":
			opts.omitempty = true
		case "omitzero":
			opts.omitzero = true
		}
	}
	return opts
}

func isZero(rv reflect.Value) bool {
	switch rv.Kind() {
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return rv.Int() == 0
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
		return rv.Uint() == 0
	case reflect.Float32, reflect.Float64:
		return rv.Float() == 0.0
	}
	return false
}

func isEmpty(rv reflect.Value) bool {
	switch rv.Kind() {
	case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
		return rv.Len() == 0
	case reflect.Bool:
		return !rv.Bool()
	}
	return false
}

func (enc *Encoder) newline() {
	if enc.hasWritten {
		enc.wf("\n")
	}
}

func (enc *Encoder) keyEqElement(key Key, val reflect.Value) {
	if len(key) == 0 {
		encPanic(errNoKey)
	}
	panicIfInvalidKey(key)
	enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
	enc.eElement(val)
	enc.newline()
}

func (enc *Encoder) wf(format string, v ...interface{}) {
	if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
		encPanic(err)
	}
	enc.hasWritten = true
}

func (enc *Encoder) indentStr(key Key) string {
	return strings.Repeat(enc.Indent, len(key)-1)
}

func encPanic(err error) {
	panic(tomlEncodeError{err})
}

func eindirect(v reflect.Value) reflect.Value {
	switch v.Kind() {
	case reflect.Ptr, reflect.Interface:
		return eindirect(v.Elem())
	default:
		return v
	}
}

func isNil(rv reflect.Value) bool {
	switch rv.Kind() {
	case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
		return rv.IsNil()
	default:
		return false
	}
}

func panicIfInvalidKey(key Key) {
	for _, k := range key {
		if len(k) == 0 {
			encPanic(e("Key '%s' is not a valid table name. Key names "+
				"cannot be empty.", key.maybeQuotedAll()))
		}
	}
}

func isValidKeyName(s string) bool {
	return len(s) != 0
}
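The tag options parsed by getOptions above (`-`, a rename, `omitempty`, `omitzero`) behave as in this sketch; the `Server` type and its values are made up for illustration:

```go
package main

import (
	"log"
	"os"

	"github.com/BurntSushi/toml"
)

type Server struct {
	Addr    string   `toml:"address"`          // rename the emitted key
	Retries int      `toml:"retries,omitzero"` // dropped when 0
	Tags    []string `toml:",omitempty"`       // dropped when empty
	secret  string   // unexported: skipped by eStruct
	Debug   bool     `toml:"-"` // always skipped
}

func main() {
	s := Server{Addr: "10.0.0.1:80"}
	// Only `address = "10.0.0.1:80"` is emitted: Retries is zero, Tags is
	// empty, secret is unexported, and Debug is tagged "-".
	if err := toml.NewEncoder(os.Stdout).Encode(s); err != nil {
		log.Fatal(err)
	}
}
```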
19
vendor/github.com/BurntSushi/toml/encoding_types.go
generated
vendored
Normal file

@@ -0,0 +1,19 @@
// +build go1.2

package toml

// In order to support Go 1.1, we define our own TextMarshaler and
// TextUnmarshaler types. For Go 1.2+, we just alias them with the
// standard library interfaces.

import (
	"encoding"
)

// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler encoding.TextMarshaler

// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler encoding.TextUnmarshaler
18 vendor/github.com/BurntSushi/toml/encoding_types_1.1.go generated vendored Normal file
@@ -0,0 +1,18 @@
// +build !go1.2

package toml

// These interfaces were introduced in Go 1.2, so we add them manually when
// compiling for Go 1.1.

// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler interface {
	MarshalText() (text []byte, err error)
}

// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler interface {
	UnmarshalText(text []byte) error
}
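
Either build of the pair above lets a user-defined type control its own textual form when encoded to or decoded from TOML. A minimal sketch of a type satisfying both interfaces (celsius is a made-up example; assumes fmt is imported):

// celsius renders as a string like "21.5C" instead of a bare float.
type celsius float64

func (c celsius) MarshalText() ([]byte, error) {
	return []byte(fmt.Sprintf("%.1fC", float64(c))), nil
}

func (c *celsius) UnmarshalText(text []byte) error {
	var f float64
	if _, err := fmt.Sscanf(string(text), "%fC", &f); err != nil {
		return err
	}
	*c = celsius(f)
	return nil
}
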
953 vendor/github.com/BurntSushi/toml/lex.go generated vendored Normal file
@@ -0,0 +1,953 @@
package toml

import (
	"fmt"
	"strings"
	"unicode"
	"unicode/utf8"
)

type itemType int

const (
	itemError itemType = iota
	itemNIL   // used in the parser to indicate no type
	itemEOF
	itemText
	itemString
	itemRawString
	itemMultilineString
	itemRawMultilineString
	itemBool
	itemInteger
	itemFloat
	itemDatetime
	itemArray // the start of an array
	itemArrayEnd
	itemTableStart
	itemTableEnd
	itemArrayTableStart
	itemArrayTableEnd
	itemKeyStart
	itemCommentStart
	itemInlineTableStart
	itemInlineTableEnd
)

const (
	eof              = 0
	comma            = ','
	tableStart       = '['
	tableEnd         = ']'
	arrayTableStart  = '['
	arrayTableEnd    = ']'
	tableSep         = '.'
	keySep           = '='
	arrayStart       = '['
	arrayEnd         = ']'
	commentStart     = '#'
	stringStart      = '"'
	stringEnd        = '"'
	rawStringStart   = '\''
	rawStringEnd     = '\''
	inlineTableStart = '{'
	inlineTableEnd   = '}'
)

type stateFn func(lx *lexer) stateFn

type lexer struct {
	input string
	start int
	pos   int
	line  int
	state stateFn
	items chan item

	// Allow for backing up up to three runes.
	// This is necessary because TOML contains 3-rune tokens (""" and ''').
	prevWidths [3]int
	nprev      int // how many of prevWidths are in use
	// If we emit an eof, we can still back up, but it is not OK to call
	// next again.
	atEOF bool

	// A stack of state functions used to maintain context.
	// The idea is to reuse parts of the state machine in various places.
	// For example, values can appear at the top level or within arbitrarily
	// nested arrays. The last state on the stack is used after a value has
	// been lexed. Similarly for comments.
	stack []stateFn
}

type item struct {
	typ  itemType
	val  string
	line int
}

func (lx *lexer) nextItem() item {
	for {
		select {
		case item := <-lx.items:
			return item
		default:
			lx.state = lx.state(lx)
		}
	}
}

func lex(input string) *lexer {
	lx := &lexer{
		input: input,
		state: lexTop,
		line:  1,
		items: make(chan item, 10),
		stack: make([]stateFn, 0, 10),
	}
	return lx
}
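
Because items is a buffered channel drained by nextItem in the same goroutine, the state machine only advances as far as needed to produce the next token; lexer and parser run in lockstep without extra concurrency. A package-internal sketch of the drive loop (input is illustrative; assumes fmt is imported):

lx := lex("answer = 42\n")
for {
	it := lx.nextItem()
	fmt.Printf("line %d: %s\n", it.line, it)
	if it.typ == itemEOF || it.typ == itemError {
		break
	}
}
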
func (lx *lexer) push(state stateFn) {
	lx.stack = append(lx.stack, state)
}

func (lx *lexer) pop() stateFn {
	if len(lx.stack) == 0 {
		return lx.errorf("BUG in lexer: no states to pop")
	}
	last := lx.stack[len(lx.stack)-1]
	lx.stack = lx.stack[0 : len(lx.stack)-1]
	return last
}

func (lx *lexer) current() string {
	return lx.input[lx.start:lx.pos]
}

func (lx *lexer) emit(typ itemType) {
	lx.items <- item{typ, lx.current(), lx.line}
	lx.start = lx.pos
}

func (lx *lexer) emitTrim(typ itemType) {
	lx.items <- item{typ, strings.TrimSpace(lx.current()), lx.line}
	lx.start = lx.pos
}

func (lx *lexer) next() (r rune) {
	if lx.atEOF {
		panic("next called after EOF")
	}
	if lx.pos >= len(lx.input) {
		lx.atEOF = true
		return eof
	}

	if lx.input[lx.pos] == '\n' {
		lx.line++
	}
	lx.prevWidths[2] = lx.prevWidths[1]
	lx.prevWidths[1] = lx.prevWidths[0]
	if lx.nprev < 3 {
		lx.nprev++
	}
	r, w := utf8.DecodeRuneInString(lx.input[lx.pos:])
	lx.prevWidths[0] = w
	lx.pos += w
	return r
}

// ignore skips over the pending input before this point.
func (lx *lexer) ignore() {
	lx.start = lx.pos
}

// backup steps back one rune. Can be called only twice between calls to next.
func (lx *lexer) backup() {
	if lx.atEOF {
		lx.atEOF = false
		return
	}
	if lx.nprev < 1 {
		panic("backed up too far")
	}
	w := lx.prevWidths[0]
	lx.prevWidths[0] = lx.prevWidths[1]
	lx.prevWidths[1] = lx.prevWidths[2]
	lx.nprev--
	lx.pos -= w
	if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
		lx.line--
	}
}

// accept consumes the next rune if it's equal to `valid`.
func (lx *lexer) accept(valid rune) bool {
	if lx.next() == valid {
		return true
	}
	lx.backup()
	return false
}

// peek returns but does not consume the next rune in the input.
func (lx *lexer) peek() rune {
	r := lx.next()
	lx.backup()
	return r
}

// skip ignores all input that matches the given predicate.
func (lx *lexer) skip(pred func(rune) bool) {
	for {
		r := lx.next()
		if pred(r) {
			continue
		}
		lx.backup()
		lx.ignore()
		return
	}
}

// errorf stops all lexing by emitting an error and returning `nil`.
// Note that any value that is a character is escaped if it's a special
// character (newlines, tabs, etc.).
func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
	lx.items <- item{
		itemError,
		fmt.Sprintf(format, values...),
		lx.line,
	}
	return nil
}

// lexTop consumes elements at the top level of TOML data.
func lexTop(lx *lexer) stateFn {
	r := lx.next()
	if isWhitespace(r) || isNL(r) {
		return lexSkip(lx, lexTop)
	}
	switch r {
	case commentStart:
		lx.push(lexTop)
		return lexCommentStart
	case tableStart:
		return lexTableStart
	case eof:
		if lx.pos > lx.start {
			return lx.errorf("unexpected EOF")
		}
		lx.emit(itemEOF)
		return nil
	}

	// At this point, the only valid item can be a key, so we back up
	// and let the key lexer do the rest.
	lx.backup()
	lx.push(lexTopEnd)
	return lexKeyStart
}

// lexTopEnd is entered whenever a top-level item has been consumed. (A value
// or a table.) It must see only whitespace, and will turn back to lexTop
// upon a newline. If it sees EOF, it will quit the lexer successfully.
func lexTopEnd(lx *lexer) stateFn {
	r := lx.next()
	switch {
	case r == commentStart:
		// a comment will read to a newline for us.
		lx.push(lexTop)
		return lexCommentStart
	case isWhitespace(r):
		return lexTopEnd
	case isNL(r):
		lx.ignore()
		return lexTop
	case r == eof:
		lx.emit(itemEOF)
		return nil
	}
	return lx.errorf("expected a top-level item to end with a newline, "+
		"comment, or EOF, but got %q instead", r)
}
// lexTableStart lexes the beginning of a table. Namely, it makes sure that
// it starts with a character other than '.' and ']'.
// It assumes that '[' has already been consumed.
// It also handles the case that this is an item in an array of tables.
// e.g., '[[name]]'.
func lexTableStart(lx *lexer) stateFn {
	if lx.peek() == arrayTableStart {
		lx.next()
		lx.emit(itemArrayTableStart)
		lx.push(lexArrayTableEnd)
	} else {
		lx.emit(itemTableStart)
		lx.push(lexTableEnd)
	}
	return lexTableNameStart
}
func lexTableEnd(lx *lexer) stateFn {
	lx.emit(itemTableEnd)
	return lexTopEnd
}

func lexArrayTableEnd(lx *lexer) stateFn {
	if r := lx.next(); r != arrayTableEnd {
		return lx.errorf("expected end of table array name delimiter %q, "+
			"but got %q instead", arrayTableEnd, r)
	}
	lx.emit(itemArrayTableEnd)
	return lexTopEnd
}

func lexTableNameStart(lx *lexer) stateFn {
	lx.skip(isWhitespace)
	switch r := lx.peek(); {
	case r == tableEnd || r == eof:
		return lx.errorf("unexpected end of table name " +
			"(table names cannot be empty)")
	case r == tableSep:
		return lx.errorf("unexpected table separator " +
			"(table names cannot be empty)")
	case r == stringStart || r == rawStringStart:
		lx.ignore()
		lx.push(lexTableNameEnd)
		return lexValue // reuse string lexing
	default:
		return lexBareTableName
	}
}

// lexBareTableName lexes the name of a table. It assumes that at least one
// valid character for the table has already been read.
func lexBareTableName(lx *lexer) stateFn {
	r := lx.next()
	if isBareKeyChar(r) {
		return lexBareTableName
	}
	lx.backup()
	lx.emit(itemText)
	return lexTableNameEnd
}

// lexTableNameEnd reads the end of a piece of a table name, optionally
// consuming whitespace.
func lexTableNameEnd(lx *lexer) stateFn {
	lx.skip(isWhitespace)
	switch r := lx.next(); {
	case isWhitespace(r):
		return lexTableNameEnd
	case r == tableSep:
		lx.ignore()
		return lexTableNameStart
	case r == tableEnd:
		return lx.pop()
	default:
		return lx.errorf("expected '.' or ']' to end table name, "+
			"but got %q instead", r)
	}
}

// lexKeyStart consumes a key name up until the first non-whitespace character.
// lexKeyStart will ignore whitespace.
func lexKeyStart(lx *lexer) stateFn {
	r := lx.peek()
	switch {
	case r == keySep:
		return lx.errorf("unexpected key separator %q", keySep)
	case isWhitespace(r) || isNL(r):
		lx.next()
		return lexSkip(lx, lexKeyStart)
	case r == stringStart || r == rawStringStart:
		lx.ignore()
		lx.emit(itemKeyStart)
		lx.push(lexKeyEnd)
		return lexValue // reuse string lexing
	default:
		lx.ignore()
		lx.emit(itemKeyStart)
		return lexBareKey
	}
}

// lexBareKey consumes the text of a bare key. Assumes that the first character
// (which is not whitespace) has not yet been consumed.
func lexBareKey(lx *lexer) stateFn {
	switch r := lx.next(); {
	case isBareKeyChar(r):
		return lexBareKey
	case isWhitespace(r):
		lx.backup()
		lx.emit(itemText)
		return lexKeyEnd
	case r == keySep:
		lx.backup()
		lx.emit(itemText)
		return lexKeyEnd
	default:
		return lx.errorf("bare keys cannot contain %q", r)
	}
}

// lexKeyEnd consumes the end of a key and trims whitespace (up to the key
// separator).
func lexKeyEnd(lx *lexer) stateFn {
	switch r := lx.next(); {
	case r == keySep:
		return lexSkip(lx, lexValue)
	case isWhitespace(r):
		return lexSkip(lx, lexKeyEnd)
	default:
		return lx.errorf("expected key separator %q, but got %q instead",
			keySep, r)
	}
}
// lexValue starts the consumption of a value anywhere a value is expected.
// lexValue will ignore whitespace.
// After a value is lexed, the last state on the stack is popped and returned.
func lexValue(lx *lexer) stateFn {
	// We allow whitespace to precede a value, but NOT newlines.
	// In array syntax, the array states are responsible for ignoring newlines.
	r := lx.next()
	switch {
	case isWhitespace(r):
		return lexSkip(lx, lexValue)
	case isDigit(r):
		lx.backup() // avoid an extra state and use the same as above
		return lexNumberOrDateStart
	}
	switch r {
	case arrayStart:
		lx.ignore()
		lx.emit(itemArray)
		return lexArrayValue
	case inlineTableStart:
		lx.ignore()
		lx.emit(itemInlineTableStart)
		return lexInlineTableValue
	case stringStart:
		if lx.accept(stringStart) {
			if lx.accept(stringStart) {
				lx.ignore() // Ignore """
				return lexMultilineString
			}
			lx.backup()
		}
		lx.ignore() // ignore the '"'
		return lexString
	case rawStringStart:
		if lx.accept(rawStringStart) {
			if lx.accept(rawStringStart) {
				lx.ignore() // Ignore '''
				return lexMultilineRawString
			}
			lx.backup()
		}
		lx.ignore() // ignore the "'"
		return lexRawString
	case '+', '-':
		return lexNumberStart
	case '.': // special error case, be kind to users
		return lx.errorf("floats must start with a digit, not '.'")
	}
	if unicode.IsLetter(r) {
		// Be permissive here; lexBool will give a nice error if the
		// user wrote something like
		//   x = foo
		// (i.e. not 'true' or 'false' but is something else word-like.)
		lx.backup()
		return lexBool
	}
	return lx.errorf("expected value but found %q instead", r)
}
// lexArrayValue consumes one value in an array. It assumes that '[' or ','
// have already been consumed. All whitespace and newlines are ignored.
func lexArrayValue(lx *lexer) stateFn {
	r := lx.next()
	switch {
	case isWhitespace(r) || isNL(r):
		return lexSkip(lx, lexArrayValue)
	case r == commentStart:
		lx.push(lexArrayValue)
		return lexCommentStart
	case r == comma:
		return lx.errorf("unexpected comma")
	case r == arrayEnd:
		// NOTE(caleb): The spec isn't clear about whether you can have
		// a trailing comma or not, so we'll allow it.
		return lexArrayEnd
	}

	lx.backup()
	lx.push(lexArrayValueEnd)
	return lexValue
}

// lexArrayValueEnd consumes everything between the end of an array value and
// the next value (or the end of the array): it ignores whitespace and newlines
// and expects either a ',' or a ']'.
func lexArrayValueEnd(lx *lexer) stateFn {
	r := lx.next()
	switch {
	case isWhitespace(r) || isNL(r):
		return lexSkip(lx, lexArrayValueEnd)
	case r == commentStart:
		lx.push(lexArrayValueEnd)
		return lexCommentStart
	case r == comma:
		lx.ignore()
		return lexArrayValue // move on to the next value
	case r == arrayEnd:
		return lexArrayEnd
	}
	return lx.errorf(
		"expected a comma or array terminator %q, but got %q instead",
		arrayEnd, r,
	)
}

// lexArrayEnd finishes the lexing of an array.
// It assumes that a ']' has just been consumed.
func lexArrayEnd(lx *lexer) stateFn {
	lx.ignore()
	lx.emit(itemArrayEnd)
	return lx.pop()
}

// lexInlineTableValue consumes one key/value pair in an inline table.
// It assumes that '{' or ',' have already been consumed. Whitespace is ignored.
func lexInlineTableValue(lx *lexer) stateFn {
	r := lx.next()
	switch {
	case isWhitespace(r):
		return lexSkip(lx, lexInlineTableValue)
	case isNL(r):
		return lx.errorf("newlines not allowed within inline tables")
	case r == commentStart:
		lx.push(lexInlineTableValue)
		return lexCommentStart
	case r == comma:
		return lx.errorf("unexpected comma")
	case r == inlineTableEnd:
		return lexInlineTableEnd
	}
	lx.backup()
	lx.push(lexInlineTableValueEnd)
	return lexKeyStart
}

// lexInlineTableValueEnd consumes everything between the end of an inline table
// key/value pair and the next pair (or the end of the table):
// it ignores whitespace and expects either a ',' or a '}'.
func lexInlineTableValueEnd(lx *lexer) stateFn {
	r := lx.next()
	switch {
	case isWhitespace(r):
		return lexSkip(lx, lexInlineTableValueEnd)
	case isNL(r):
		return lx.errorf("newlines not allowed within inline tables")
	case r == commentStart:
		lx.push(lexInlineTableValueEnd)
		return lexCommentStart
	case r == comma:
		lx.ignore()
		return lexInlineTableValue
	case r == inlineTableEnd:
		return lexInlineTableEnd
	}
	return lx.errorf("expected a comma or an inline table terminator %q, "+
		"but got %q instead", inlineTableEnd, r)
}

// lexInlineTableEnd finishes the lexing of an inline table.
// It assumes that a '}' has just been consumed.
func lexInlineTableEnd(lx *lexer) stateFn {
	lx.ignore()
	lx.emit(itemInlineTableEnd)
	return lx.pop()
}

// lexString consumes the inner contents of a string. It assumes that the
// beginning '"' has already been consumed and ignored.
func lexString(lx *lexer) stateFn {
	r := lx.next()
	switch {
	case r == eof:
		return lx.errorf("unexpected EOF")
	case isNL(r):
		return lx.errorf("strings cannot contain newlines")
	case r == '\\':
		lx.push(lexString)
		return lexStringEscape
	case r == stringEnd:
		lx.backup()
		lx.emit(itemString)
		lx.next()
		lx.ignore()
		return lx.pop()
	}
	return lexString
}

// lexMultilineString consumes the inner contents of a string. It assumes that
// the beginning '"""' has already been consumed and ignored.
func lexMultilineString(lx *lexer) stateFn {
	switch lx.next() {
	case eof:
		return lx.errorf("unexpected EOF")
	case '\\':
		return lexMultilineStringEscape
	case stringEnd:
		if lx.accept(stringEnd) {
			if lx.accept(stringEnd) {
				lx.backup()
				lx.backup()
				lx.backup()
				lx.emit(itemMultilineString)
				lx.next()
				lx.next()
				lx.next()
				lx.ignore()
				return lx.pop()
			}
			lx.backup()
		}
	}
	return lexMultilineString
}

// lexRawString consumes a raw string. Nothing can be escaped in such a string.
// It assumes that the beginning "'" has already been consumed and ignored.
func lexRawString(lx *lexer) stateFn {
	r := lx.next()
	switch {
	case r == eof:
		return lx.errorf("unexpected EOF")
	case isNL(r):
		return lx.errorf("strings cannot contain newlines")
	case r == rawStringEnd:
		lx.backup()
		lx.emit(itemRawString)
		lx.next()
		lx.ignore()
		return lx.pop()
	}
	return lexRawString
}

// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
// a string. It assumes that the beginning "'''" has already been consumed and
// ignored.
func lexMultilineRawString(lx *lexer) stateFn {
	switch lx.next() {
	case eof:
		return lx.errorf("unexpected EOF")
	case rawStringEnd:
		if lx.accept(rawStringEnd) {
			if lx.accept(rawStringEnd) {
				lx.backup()
				lx.backup()
				lx.backup()
				lx.emit(itemRawMultilineString)
				lx.next()
				lx.next()
				lx.next()
				lx.ignore()
				return lx.pop()
			}
			lx.backup()
		}
	}
	return lexMultilineRawString
}

// lexMultilineStringEscape consumes an escaped character. It assumes that the
// preceding '\\' has already been consumed.
func lexMultilineStringEscape(lx *lexer) stateFn {
	// Handle the special case first:
	if isNL(lx.next()) {
		return lexMultilineString
	}
	lx.backup()
	lx.push(lexMultilineString)
	return lexStringEscape(lx)
}

func lexStringEscape(lx *lexer) stateFn {
	r := lx.next()
	switch r {
	case 'b':
		fallthrough
	case 't':
		fallthrough
	case 'n':
		fallthrough
	case 'f':
		fallthrough
	case 'r':
		fallthrough
	case '"':
		fallthrough
	case '\\':
		return lx.pop()
	case 'u':
		return lexShortUnicodeEscape
	case 'U':
		return lexLongUnicodeEscape
	}
	return lx.errorf("invalid escape character %q; only the following "+
		"escape characters are allowed: "+
		`\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX`, r)
}

func lexShortUnicodeEscape(lx *lexer) stateFn {
	var r rune
	for i := 0; i < 4; i++ {
		r = lx.next()
		if !isHexadecimal(r) {
			return lx.errorf(`expected four hexadecimal digits after '\u', `+
				"but got %q instead", lx.current())
		}
	}
	return lx.pop()
}

func lexLongUnicodeEscape(lx *lexer) stateFn {
	var r rune
	for i := 0; i < 8; i++ {
		r = lx.next()
		if !isHexadecimal(r) {
			return lx.errorf(`expected eight hexadecimal digits after '\U', `+
				"but got %q instead", lx.current())
		}
	}
	return lx.pop()
}

// lexNumberOrDateStart consumes either an integer, a float, or datetime.
func lexNumberOrDateStart(lx *lexer) stateFn {
	r := lx.next()
	if isDigit(r) {
		return lexNumberOrDate
	}
	switch r {
	case '_':
		return lexNumber
	case 'e', 'E':
		return lexFloat
	case '.':
		return lx.errorf("floats must start with a digit, not '.'")
	}
	return lx.errorf("expected a digit but got %q", r)
}

// lexNumberOrDate consumes either an integer, float or datetime.
func lexNumberOrDate(lx *lexer) stateFn {
	r := lx.next()
	if isDigit(r) {
		return lexNumberOrDate
	}
	switch r {
	case '-':
		return lexDatetime
	case '_':
		return lexNumber
	case '.', 'e', 'E':
		return lexFloat
	}

	lx.backup()
	lx.emit(itemInteger)
	return lx.pop()
}

// lexDatetime consumes a Datetime, to a first approximation.
// The parser validates that it matches one of the accepted formats.
func lexDatetime(lx *lexer) stateFn {
	r := lx.next()
	if isDigit(r) {
		return lexDatetime
	}
	switch r {
	case '-', 'T', ':', '.', 'Z', '+':
		return lexDatetime
	}

	lx.backup()
	lx.emit(itemDatetime)
	return lx.pop()
}

// lexNumberStart consumes either an integer or a float. It assumes that a sign
// has already been read, but that *no* digits have been consumed.
// lexNumberStart will move to the appropriate integer or float states.
func lexNumberStart(lx *lexer) stateFn {
	// We MUST see a digit. Even floats have to start with a digit.
	r := lx.next()
	if !isDigit(r) {
		if r == '.' {
			return lx.errorf("floats must start with a digit, not '.'")
		}
		return lx.errorf("expected a digit but got %q", r)
	}
	return lexNumber
}

// lexNumber consumes an integer or a float after seeing the first digit.
func lexNumber(lx *lexer) stateFn {
	r := lx.next()
	if isDigit(r) {
		return lexNumber
	}
	switch r {
	case '_':
		return lexNumber
	case '.', 'e', 'E':
		return lexFloat
	}

	lx.backup()
	lx.emit(itemInteger)
	return lx.pop()
}

// lexFloat consumes the elements of a float. It allows any sequence of
// float-like characters, so floats emitted by the lexer are only a first
// approximation and must be validated by the parser.
func lexFloat(lx *lexer) stateFn {
	r := lx.next()
	if isDigit(r) {
		return lexFloat
	}
	switch r {
	case '_', '.', '-', '+', 'e', 'E':
		return lexFloat
	}

	lx.backup()
	lx.emit(itemFloat)
	return lx.pop()
}
// lexBool consumes a bool string: 'true' or 'false'.
func lexBool(lx *lexer) stateFn {
	var rs []rune
	for {
		r := lx.next()
		if !unicode.IsLetter(r) {
			lx.backup()
			break
		}
		rs = append(rs, r)
	}
	s := string(rs)
	switch s {
	case "true", "false":
		lx.emit(itemBool)
		return lx.pop()
	}
	return lx.errorf("expected value but found %q instead", s)
}
// lexCommentStart begins the lexing of a comment. It will emit
// itemCommentStart and consume no characters, passing control to lexComment.
func lexCommentStart(lx *lexer) stateFn {
	lx.ignore()
	lx.emit(itemCommentStart)
	return lexComment
}

// lexComment lexes an entire comment. It assumes that '#' has been consumed.
// It will consume *up to* the first newline character, and pass control
// back to the last state on the stack.
func lexComment(lx *lexer) stateFn {
	r := lx.peek()
	if isNL(r) || r == eof {
		lx.emit(itemText)
		return lx.pop()
	}
	lx.next()
	return lexComment
}

// lexSkip ignores all slurped input and moves on to the next state.
func lexSkip(lx *lexer, nextState stateFn) stateFn {
	return func(lx *lexer) stateFn {
		lx.ignore()
		return nextState
	}
}

// isWhitespace returns true if `r` is a whitespace character according
// to the spec.
func isWhitespace(r rune) bool {
	return r == '\t' || r == ' '
}

func isNL(r rune) bool {
	return r == '\n' || r == '\r'
}

func isDigit(r rune) bool {
	return r >= '0' && r <= '9'
}

func isHexadecimal(r rune) bool {
	return (r >= '0' && r <= '9') ||
		(r >= 'a' && r <= 'f') ||
		(r >= 'A' && r <= 'F')
}

func isBareKeyChar(r rune) bool {
	return (r >= 'A' && r <= 'Z') ||
		(r >= 'a' && r <= 'z') ||
		(r >= '0' && r <= '9') ||
		r == '_' ||
		r == '-'
}

func (itype itemType) String() string {
	switch itype {
	case itemError:
		return "Error"
	case itemNIL:
		return "NIL"
	case itemEOF:
		return "EOF"
	case itemText:
		return "Text"
	case itemString, itemRawString, itemMultilineString, itemRawMultilineString:
		return "String"
	case itemBool:
		return "Bool"
	case itemInteger:
		return "Integer"
	case itemFloat:
		return "Float"
	case itemDatetime:
		return "DateTime"
	case itemTableStart:
		return "TableStart"
	case itemTableEnd:
		return "TableEnd"
	case itemKeyStart:
		return "KeyStart"
	case itemArray:
		return "Array"
	case itemArrayEnd:
		return "ArrayEnd"
	case itemCommentStart:
		return "CommentStart"
	}
	panic(fmt.Sprintf("BUG: Unknown type '%d'.", int(itype)))
}

func (item item) String() string {
	return fmt.Sprintf("(%s, %s)", item.typ.String(), item.val)
}
592 vendor/github.com/BurntSushi/toml/parse.go generated vendored Normal file
@@ -0,0 +1,592 @@
package toml

import (
	"fmt"
	"strconv"
	"strings"
	"time"
	"unicode"
	"unicode/utf8"
)

type parser struct {
	mapping map[string]interface{}
	types   map[string]tomlType
	lx      *lexer

	// A list of keys in the order that they appear in the TOML data.
	ordered []Key

	// the full key for the current hash in scope
	context Key

	// the base key name for everything except hashes
	currentKey string

	// rough approximation of line number
	approxLine int

	// A map of 'key.group.names' to whether they were created implicitly.
	implicits map[string]bool
}

type parseError string

func (pe parseError) Error() string {
	return string(pe)
}

func parse(data string) (p *parser, err error) {
	defer func() {
		if r := recover(); r != nil {
			var ok bool
			if err, ok = r.(parseError); ok {
				return
			}
			panic(r)
		}
	}()

	p = &parser{
		mapping:   make(map[string]interface{}),
		types:     make(map[string]tomlType),
		lx:        lex(data),
		ordered:   make([]Key, 0),
		implicits: make(map[string]bool),
	}
	for {
		item := p.next()
		if item.typ == itemEOF {
			break
		}
		p.topLevel(item)
	}

	return p, nil
}
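
parse is the package-internal entry point that the exported decoding functions ultimately funnel into; the deferred recover converts panics raised via panicf back into ordinary errors. A sketch of what it produces for a small document (illustrative input; assumes fmt is imported):

p, err := parse("[server]\nhost = \"localhost\"\n")
if err != nil {
	// Syntax and semantic problems surface here as a parseError.
	fmt.Println(err)
	return
}
// p.mapping now holds {"server": {"host": "localhost"}} as nested
// map[string]interface{} values, and p.ordered records key order.
fmt.Println(p.mapping)
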
func (p *parser) panicf(format string, v ...interface{}) {
	msg := fmt.Sprintf("Near line %d (last key parsed '%s'): %s",
		p.approxLine, p.current(), fmt.Sprintf(format, v...))
	panic(parseError(msg))
}

func (p *parser) next() item {
	it := p.lx.nextItem()
	if it.typ == itemError {
		p.panicf("%s", it.val)
	}
	return it
}

func (p *parser) bug(format string, v ...interface{}) {
	panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
}

func (p *parser) expect(typ itemType) item {
	it := p.next()
	p.assertEqual(typ, it.typ)
	return it
}

func (p *parser) assertEqual(expected, got itemType) {
	if expected != got {
		p.bug("Expected '%s' but got '%s'.", expected, got)
	}
}

func (p *parser) topLevel(item item) {
	switch item.typ {
	case itemCommentStart:
		p.approxLine = item.line
		p.expect(itemText)
	case itemTableStart:
		kg := p.next()
		p.approxLine = kg.line

		var key Key
		for ; kg.typ != itemTableEnd && kg.typ != itemEOF; kg = p.next() {
			key = append(key, p.keyString(kg))
		}
		p.assertEqual(itemTableEnd, kg.typ)

		p.establishContext(key, false)
		p.setType("", tomlHash)
		p.ordered = append(p.ordered, key)
	case itemArrayTableStart:
		kg := p.next()
		p.approxLine = kg.line

		var key Key
		for ; kg.typ != itemArrayTableEnd && kg.typ != itemEOF; kg = p.next() {
			key = append(key, p.keyString(kg))
		}
		p.assertEqual(itemArrayTableEnd, kg.typ)

		p.establishContext(key, true)
		p.setType("", tomlArrayHash)
		p.ordered = append(p.ordered, key)
	case itemKeyStart:
		kname := p.next()
		p.approxLine = kname.line
		p.currentKey = p.keyString(kname)

		val, typ := p.value(p.next())
		p.setValue(p.currentKey, val)
		p.setType(p.currentKey, typ)
		p.ordered = append(p.ordered, p.context.add(p.currentKey))
		p.currentKey = ""
	default:
		p.bug("Unexpected type at top level: %s", item.typ)
	}
}

// Gets a string for a key (or part of a key in a table name).
func (p *parser) keyString(it item) string {
	switch it.typ {
	case itemText:
		return it.val
	case itemString, itemMultilineString,
		itemRawString, itemRawMultilineString:
		s, _ := p.value(it)
		return s.(string)
	default:
		p.bug("Unexpected key type: %s", it.typ)
		panic("unreachable")
	}
}

// value translates an expected value from the lexer into a Go value wrapped
// as an empty interface.
func (p *parser) value(it item) (interface{}, tomlType) {
	switch it.typ {
	case itemString:
		return p.replaceEscapes(it.val), p.typeOfPrimitive(it)
	case itemMultilineString:
		trimmed := stripFirstNewline(stripEscapedWhitespace(it.val))
		return p.replaceEscapes(trimmed), p.typeOfPrimitive(it)
	case itemRawString:
		return it.val, p.typeOfPrimitive(it)
	case itemRawMultilineString:
		return stripFirstNewline(it.val), p.typeOfPrimitive(it)
	case itemBool:
		switch it.val {
		case "true":
			return true, p.typeOfPrimitive(it)
		case "false":
			return false, p.typeOfPrimitive(it)
		}
		p.bug("Expected boolean value, but got '%s'.", it.val)
	case itemInteger:
		if !numUnderscoresOK(it.val) {
			p.panicf("Invalid integer %q: underscores must be surrounded by digits",
				it.val)
		}
		val := strings.Replace(it.val, "_", "", -1)
		num, err := strconv.ParseInt(val, 10, 64)
		if err != nil {
			// Distinguish integer values. Normally, it'd be a bug if the lexer
			// provides an invalid integer, but it's possible that the number is
			// out of range of valid values (which the lexer cannot determine).
			// So mark the former as a bug but the latter as a legitimate user
			// error.
			if e, ok := err.(*strconv.NumError); ok &&
				e.Err == strconv.ErrRange {

				p.panicf("Integer '%s' is out of the range of 64-bit "+
					"signed integers.", it.val)
			} else {
				p.bug("Expected integer value, but got '%s'.", it.val)
			}
		}
		return num, p.typeOfPrimitive(it)
	case itemFloat:
		parts := strings.FieldsFunc(it.val, func(r rune) bool {
			switch r {
			case '.', 'e', 'E':
				return true
			}
			return false
		})
		for _, part := range parts {
			if !numUnderscoresOK(part) {
				p.panicf("Invalid float %q: underscores must be "+
					"surrounded by digits", it.val)
			}
		}
		if !numPeriodsOK(it.val) {
			// As a special case, numbers like '123.' or '1.e2',
			// which are valid as far as Go/strconv are concerned,
			// must be rejected because TOML says that a fractional
			// part consists of '.' followed by 1+ digits.
			p.panicf("Invalid float %q: '.' must be followed "+
				"by one or more digits", it.val)
		}
		val := strings.Replace(it.val, "_", "", -1)
		num, err := strconv.ParseFloat(val, 64)
		if err != nil {
			if e, ok := err.(*strconv.NumError); ok &&
				e.Err == strconv.ErrRange {

				p.panicf("Float '%s' is out of the range of 64-bit "+
					"IEEE-754 floating-point numbers.", it.val)
			} else {
				p.panicf("Invalid float value: %q", it.val)
			}
		}
		return num, p.typeOfPrimitive(it)
	case itemDatetime:
		var t time.Time
		var ok bool
		var err error
		for _, format := range []string{
			"2006-01-02T15:04:05Z07:00",
			"2006-01-02T15:04:05",
			"2006-01-02",
		} {
			t, err = time.ParseInLocation(format, it.val, time.Local)
			if err == nil {
				ok = true
				break
			}
		}
		if !ok {
			p.panicf("Invalid TOML Datetime: %q.", it.val)
		}
		return t, p.typeOfPrimitive(it)
	case itemArray:
		array := make([]interface{}, 0)
		types := make([]tomlType, 0)

		for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
			if it.typ == itemCommentStart {
				p.expect(itemText)
				continue
			}

			val, typ := p.value(it)
			array = append(array, val)
			types = append(types, typ)
		}
		return array, p.typeOfArray(types)
	case itemInlineTableStart:
		var (
			hash         = make(map[string]interface{})
			outerContext = p.context
			outerKey     = p.currentKey
		)

		p.context = append(p.context, p.currentKey)
		p.currentKey = ""
		for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
			if it.typ != itemKeyStart {
				p.bug("Expected key start but instead found %q, around line %d",
					it.val, p.approxLine)
			}
			if it.typ == itemCommentStart {
				p.expect(itemText)
				continue
			}

			// retrieve key
			k := p.next()
			p.approxLine = k.line
			kname := p.keyString(k)

			// retrieve value
			p.currentKey = kname
			val, typ := p.value(p.next())
			// make sure we keep metadata up to date
			p.setType(kname, typ)
			p.ordered = append(p.ordered, p.context.add(p.currentKey))
			hash[kname] = val
		}
		p.context = outerContext
		p.currentKey = outerKey
		return hash, tomlHash
	}
	p.bug("Unexpected value type: %s", it.typ)
	panic("unreachable")
}

// numUnderscoresOK checks whether each underscore in s is surrounded by
// characters that are not underscores.
func numUnderscoresOK(s string) bool {
	accept := false
	for _, r := range s {
		if r == '_' {
			if !accept {
				return false
			}
			accept = false
			continue
		}
		accept = true
	}
	return accept
}

// numPeriodsOK checks whether every period in s is followed by a digit.
func numPeriodsOK(s string) bool {
	period := false
	for _, r := range s {
		if period && !isDigit(r) {
			return false
		}
		period = r == '.'
	}
	return !period
}
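
These two predicates encode TOML's digit-separator rules: every underscore must sit between digits (or at least non-underscore characters), and every '.' must be followed by at least one digit. A few package-internal examples (assumes fmt is imported):

fmt.Println(numUnderscoresOK("1_000"))  // true
fmt.Println(numUnderscoresOK("_1000"))  // false: leading underscore
fmt.Println(numUnderscoresOK("10__00")) // false: adjacent underscores
fmt.Println(numUnderscoresOK("1000_"))  // false: trailing underscore
fmt.Println(numPeriodsOK("3.14"))       // true
fmt.Println(numPeriodsOK("3."))         // false: '.' not followed by a digit
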
// establishContext sets the current context of the parser,
// where the context is either a hash or an array of hashes. Which one is
// set depends on the value of the `array` parameter.
//
// Establishing the context also makes sure that the key isn't a duplicate, and
// will create implicit hashes automatically.
func (p *parser) establishContext(key Key, array bool) {
	var ok bool

	// Always start at the top level and drill down for our context.
	hashContext := p.mapping
	keyContext := make(Key, 0)

	// We only need implicit hashes for key[0:-1]
	for _, k := range key[0 : len(key)-1] {
		_, ok = hashContext[k]
		keyContext = append(keyContext, k)

		// No key? Make an implicit hash and move on.
		if !ok {
			p.addImplicit(keyContext)
			hashContext[k] = make(map[string]interface{})
		}

		// If the hash context is actually an array of tables, then set
		// the hash context to the last element in that array.
		//
		// Otherwise, it better be a table, since this MUST be a key group (by
		// virtue of it not being the last element in a key).
		switch t := hashContext[k].(type) {
		case []map[string]interface{}:
			hashContext = t[len(t)-1]
		case map[string]interface{}:
			hashContext = t
		default:
			p.panicf("Key '%s' was already created as a hash.", keyContext)
		}
	}

	p.context = keyContext
	if array {
		// If this is the first element for this array, then allocate a new
		// list of tables for it.
		k := key[len(key)-1]
		if _, ok := hashContext[k]; !ok {
			hashContext[k] = make([]map[string]interface{}, 0, 5)
		}

		// Add a new table. But make sure the key hasn't already been used
		// for something else.
		if hash, ok := hashContext[k].([]map[string]interface{}); ok {
			hashContext[k] = append(hash, make(map[string]interface{}))
		} else {
			p.panicf("Key '%s' was already created and cannot be used as "+
				"an array.", keyContext)
		}
	} else {
		p.setValue(key[len(key)-1], make(map[string]interface{}))
	}
	p.context = append(p.context, key[len(key)-1])
}
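
To make the implicit-creation behavior concrete: a header such as [a.b.c] first materializes "a" and "a.b" as implicit hashes on the way down, and a later explicit [a.b] is then allowed to claim one of them. A package-internal sketch (illustrative document; assumes fmt is imported):

doc := `
[a.b.c]   # implicitly creates hashes "a" and "a.b" on the way down
answer = 42

[a.b]     # legal: "a.b" existed only implicitly until this point
other = "x"
`
if _, err := parse(doc); err != nil {
	fmt.Println(err) // not reached for this input
}
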
// setValue sets the given key to the given value in the current context.
// It will make sure that the key hasn't already been defined, accounting for
// implicit key groups.
func (p *parser) setValue(key string, value interface{}) {
	var tmpHash interface{}
	var ok bool

	hash := p.mapping
	keyContext := make(Key, 0)
	for _, k := range p.context {
		keyContext = append(keyContext, k)
		if tmpHash, ok = hash[k]; !ok {
			p.bug("Context for key '%s' has not been established.", keyContext)
		}
		switch t := tmpHash.(type) {
		case []map[string]interface{}:
			// The context is a table of hashes. Pick the most recent table
			// defined as the current hash.
			hash = t[len(t)-1]
		case map[string]interface{}:
			hash = t
		default:
			p.bug("Expected hash to have type 'map[string]interface{}', but "+
				"it has '%T' instead.", tmpHash)
		}
	}
	keyContext = append(keyContext, key)

	if _, ok := hash[key]; ok {
		// Typically, if the given key has already been set, then we have
		// to raise an error since duplicate keys are disallowed. However,
		// it's possible that a key was previously defined implicitly. In this
		// case, it is allowed to be redefined concretely. (See the
		// `tests/valid/implicit-and-explicit-after.toml` test in `toml-test`.)
		//
		// But we have to make sure to stop marking it as implicit. (So that
		// another redefinition provokes an error.)
		//
		// Note that since it has already been defined (as a hash), we don't
		// want to overwrite it. So our business is done.
		if p.isImplicit(keyContext) {
			p.removeImplicit(keyContext)
			return
		}

		// Otherwise, we have a concrete key trying to override a previous
		// key, which is *always* wrong.
		p.panicf("Key '%s' has already been defined.", keyContext)
	}
	hash[key] = value
}
// setType sets the type of a particular value at a given key.
// It should be called immediately AFTER setValue.
//
// Note that if `key` is empty, then the type given will be applied to the
// current context (which is either a table or an array of tables).
func (p *parser) setType(key string, typ tomlType) {
	keyContext := make(Key, 0, len(p.context)+1)
	for _, k := range p.context {
		keyContext = append(keyContext, k)
	}
	if len(key) > 0 { // allow type setting for hashes
		keyContext = append(keyContext, key)
	}
	p.types[keyContext.String()] = typ
}

// addImplicit sets the given Key as having been created implicitly.
func (p *parser) addImplicit(key Key) {
	p.implicits[key.String()] = true
}

// removeImplicit stops tagging the given key as having been implicitly
// created.
func (p *parser) removeImplicit(key Key) {
	p.implicits[key.String()] = false
}

// isImplicit returns true if the key group pointed to by the key was created
// implicitly.
func (p *parser) isImplicit(key Key) bool {
	return p.implicits[key.String()]
}

// current returns the full key name of the current context.
func (p *parser) current() string {
	if len(p.currentKey) == 0 {
		return p.context.String()
	}
	if len(p.context) == 0 {
		return p.currentKey
	}
	return fmt.Sprintf("%s.%s", p.context, p.currentKey)
}

func stripFirstNewline(s string) string {
	if len(s) == 0 || s[0] != '\n' {
		return s
	}
	return s[1:]
}

func stripEscapedWhitespace(s string) string {
	esc := strings.Split(s, "\\\n")
	if len(esc) > 1 {
		for i := 1; i < len(esc); i++ {
			esc[i] = strings.TrimLeftFunc(esc[i], unicode.IsSpace)
		}
	}
	return strings.Join(esc, "")
}
func (p *parser) replaceEscapes(str string) string {
	var replaced []rune
	s := []byte(str)
	r := 0
	for r < len(s) {
		if s[r] != '\\' {
			c, size := utf8.DecodeRune(s[r:])
			r += size
			replaced = append(replaced, c)
			continue
		}
		r += 1
		if r >= len(s) {
			p.bug("Escape sequence at end of string.")
			return ""
		}
		switch s[r] {
		default:
			p.bug("Expected valid escape code after \\, but got %q.", s[r])
			return ""
		case 'b':
			replaced = append(replaced, rune(0x0008))
			r += 1
		case 't':
			replaced = append(replaced, rune(0x0009))
			r += 1
		case 'n':
			replaced = append(replaced, rune(0x000A))
			r += 1
		case 'f':
			replaced = append(replaced, rune(0x000C))
			r += 1
		case 'r':
			replaced = append(replaced, rune(0x000D))
			r += 1
		case '"':
			replaced = append(replaced, rune(0x0022))
			r += 1
		case '\\':
			replaced = append(replaced, rune(0x005C))
			r += 1
		case 'u':
			// At this point, we know we have a Unicode escape of the form
			// `uXXXX` at [r, r+5). (Because the lexer guarantees this
			// for us.)
			escaped := p.asciiEscapeToUnicode(s[r+1 : r+5])
			replaced = append(replaced, escaped)
			r += 5
		case 'U':
			// At this point, we know we have a Unicode escape of the form
			// `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
			// for us.)
			escaped := p.asciiEscapeToUnicode(s[r+1 : r+9])
			replaced = append(replaced, escaped)
			r += 9
		}
	}
	return string(replaced)
}

func (p *parser) asciiEscapeToUnicode(bs []byte) rune {
	s := string(bs)
	hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
	if err != nil {
		p.bug("Could not parse '%s' as a hexadecimal number, but the "+
			"lexer claims it's OK: %s", s, err)
	}
	if !utf8.ValidRune(rune(hex)) {
		p.panicf("Escaped character '\\u%s' is not valid UTF-8.", s)
	}
	return rune(hex)
}

func isStringType(ty itemType) bool {
	return ty == itemString || ty == itemMultilineString ||
		ty == itemRawString || ty == itemRawMultilineString
}
1 vendor/github.com/BurntSushi/toml/session.vim generated vendored Normal file
@@ -0,0 +1 @@
au BufWritePost *.go silent!make tags > /dev/null 2>&1
91 vendor/github.com/BurntSushi/toml/type_check.go generated vendored Normal file
@@ -0,0 +1,91 @@
package toml

// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be militating
// toward adding real composite types.
type tomlType interface {
	typeString() string
}

// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
	if t1 == nil || t2 == nil {
		return false
	}
	return t1.typeString() == t2.typeString()
}

func typeIsHash(t tomlType) bool {
	return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}

type tomlBaseType string

func (btype tomlBaseType) typeString() string {
	return string(btype)
}

func (btype tomlBaseType) String() string {
	return btype.typeString()
}

var (
	tomlInteger   tomlBaseType = "Integer"
	tomlFloat     tomlBaseType = "Float"
	tomlDatetime  tomlBaseType = "Datetime"
	tomlString    tomlBaseType = "String"
	tomlBool      tomlBaseType = "Bool"
	tomlArray     tomlBaseType = "Array"
	tomlHash      tomlBaseType = "Hash"
	tomlArrayHash tomlBaseType = "ArrayHash"
)

// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
	switch lexItem.typ {
	case itemInteger:
		return tomlInteger
	case itemFloat:
		return tomlFloat
	case itemDatetime:
		return tomlDatetime
	case itemString:
		return tomlString
	case itemMultilineString:
		return tomlString
	case itemRawString:
		return tomlString
	case itemRawMultilineString:
		return tomlString
	case itemBool:
		return tomlBool
	}
	p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
	panic("unreachable")
}

// typeOfArray returns a tomlType for an array given a list of types of its
// values.
//
// In the current spec, if an array is homogeneous, then its type is always
// "Array". If the array is not homogeneous, an error is generated.
func (p *parser) typeOfArray(types []tomlType) tomlType {
	// Empty arrays are cool.
	if len(types) == 0 {
		return tomlArray
	}

	theType := types[0]
	for _, t := range types[1:] {
		if !typeEqual(theType, t) {
			p.panicf("Array contains values of type '%s' and '%s', but "+
				"arrays must be homogeneous.", theType, t)
		}
	}
	return tomlArray
}
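
Under this version of the spec check, mixing value types inside one array is an error, while nesting arrays is fine because each element is itself of type Array. A package-internal sketch (assumes fmt is imported):

if _, err := parse(`mixed = [1, "two"]`); err != nil {
	fmt.Println(err) // reports that Integer and String were mixed in one array
}
if _, err := parse(`nested = [[1, 2], ["a", "b"]]`); err == nil {
	fmt.Println("ok: homogeneous at the top level, both elements are arrays")
}
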
242 vendor/github.com/BurntSushi/toml/type_fields.go generated vendored Normal file
@@ -0,0 +1,242 @@
package toml

// Struct field handling is adapted from code in encoding/json:
//
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the Go distribution.

import (
	"reflect"
	"sort"
	"sync"
)

// A field represents a single field found in a struct.
type field struct {
	name  string       // the name of the field (`toml` tag included)
	tag   bool         // whether field has a `toml` tag
	index []int        // represents the depth of an anonymous field
	typ   reflect.Type // the type of the field
}

// byName sorts field by name, breaking ties with depth,
// then breaking ties with "name came from toml tag", then
// breaking ties with index sequence.
type byName []field

func (x byName) Len() int { return len(x) }

func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }

func (x byName) Less(i, j int) bool {
	if x[i].name != x[j].name {
		return x[i].name < x[j].name
	}
	if len(x[i].index) != len(x[j].index) {
		return len(x[i].index) < len(x[j].index)
	}
	if x[i].tag != x[j].tag {
		return x[i].tag
	}
	return byIndex(x).Less(i, j)
}

// byIndex sorts field by index sequence.
type byIndex []field

func (x byIndex) Len() int { return len(x) }

func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }

func (x byIndex) Less(i, j int) bool {
	for k, xik := range x[i].index {
		if k >= len(x[j].index) {
			return false
		}
		if xik != x[j].index[k] {
			return xik < x[j].index[k]
		}
	}
	return len(x[i].index) < len(x[j].index)
}

// typeFields returns a list of fields that TOML should recognize for the given
// type. The algorithm is breadth-first search over the set of structs to
// include - the top struct and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
	// Anonymous fields to explore at the current level and the next.
	current := []field{}
	next := []field{{typ: t}}

	// Count of queued names for current level and the next.
	count := map[reflect.Type]int{}
	nextCount := map[reflect.Type]int{}

	// Types already visited at an earlier level.
	visited := map[reflect.Type]bool{}

	// Fields found.
	var fields []field

	for len(next) > 0 {
		current, next = next, current[:0]
		count, nextCount = nextCount, map[reflect.Type]int{}

		for _, f := range current {
			if visited[f.typ] {
				continue
			}
			visited[f.typ] = true

			// Scan f.typ for fields to include.
			for i := 0; i < f.typ.NumField(); i++ {
				sf := f.typ.Field(i)
				if sf.PkgPath != "" && !sf.Anonymous { // unexported
					continue
				}
				opts := getOptions(sf.Tag)
				if opts.skip {
					continue
				}
				index := make([]int, len(f.index)+1)
				copy(index, f.index)
				index[len(f.index)] = i

				ft := sf.Type
				if ft.Name() == "" && ft.Kind() == reflect.Ptr {
					// Follow pointer.
					ft = ft.Elem()
				}

				// Record found field and index sequence.
				if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
					tagged := opts.name != ""
					name := opts.name
					if name == "" {
						name = sf.Name
					}
					fields = append(fields, field{name, tagged, index, ft})
					if count[f.typ] > 1 {
						// If there were multiple instances, add a second,
						// so that the annihilation code will see a duplicate.
						// It only cares about the distinction between 1 or 2,
						// so don't bother generating any more copies.
						fields = append(fields, fields[len(fields)-1])
					}
					continue
				}

				// Record new anonymous struct to explore in next round.
				nextCount[ft]++
				if nextCount[ft] == 1 {
					f := field{name: ft.Name(), index: index, typ: ft}
					next = append(next, f)
				}
			}
		}
	}

	sort.Sort(byName(fields))

	// Delete all fields that are hidden by the Go rules for embedded fields,
	// except that fields with TOML tags are promoted.

	// The fields are sorted in primary order of name, secondary order
	// of field index length. Loop over names; for each name, delete
	// hidden fields by choosing the one dominant field that survives.
	out := fields[:0]
	for advance, i := 0, 0; i < len(fields); i += advance {
		// One iteration per name.
		// Find the sequence of fields with the name of this first field.
		fi := fields[i]
		name := fi.name
		for advance = 1; i+advance < len(fields); advance++ {
			fj := fields[i+advance]
			if fj.name != name {
				break
			}
		}
		if advance == 1 { // Only one field with this name
			out = append(out, fi)
			continue
		}
		dominant, ok := dominantField(fields[i : i+advance])
		if ok {
			out = append(out, dominant)
		}
	}

	fields = out
	sort.Sort(byIndex(fields))

	return fields
}

// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// TOML tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
	// The fields are sorted in increasing index-length order. The winner
	// must therefore be one with the shortest index length. Drop all
	// longer entries, which is easy: just truncate the slice.
	length := len(fields[0].index)
	tagged := -1 // Index of first tagged field.
	for i, f := range fields {
		if len(f.index) > length {
			fields = fields[:i]
			break
		}
		if f.tag {
			if tagged >= 0 {
				// Multiple tagged fields at the same level: conflict.
				// Return no field.
				return field{}, false
			}
			tagged = i
		}
	}
	if tagged >= 0 {
		return fields[tagged], true
	}
	// All remaining fields have the same length. If there's more than one,
	// we have a conflict (two fields named "X" at the same level) and we
	// return no field.
	if len(fields) > 1 {
		return field{}, false
	}
	return fields[0], true
}
var fieldCache struct {
|
||||
sync.RWMutex
|
||||
m map[reflect.Type][]field
|
||||
}
|
||||
|
||||
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
|
||||
func cachedTypeFields(t reflect.Type) []field {
|
||||
fieldCache.RLock()
|
||||
f := fieldCache.m[t]
|
||||
fieldCache.RUnlock()
|
||||
if f != nil {
|
||||
return f
|
||||
}
|
||||
|
||||
// Compute fields without lock.
|
||||
// Might duplicate effort but won't hold other computations back.
|
||||
f = typeFields(t)
|
||||
if f == nil {
|
||||
f = []field{}
|
||||
}
|
||||
|
||||
fieldCache.Lock()
|
||||
if fieldCache.m == nil {
|
||||
fieldCache.m = map[reflect.Type][]field{}
|
||||
}
|
||||
fieldCache.m[t] = f
|
||||
fieldCache.Unlock()
|
||||
return f
|
||||
}
|
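For reference, a hedged sketch of how the dominance rules above surface through the public API, assuming this file belongs to the vendored github.com/BurntSushi/toml decoder (the struct names are illustrative):

```go
package main

import (
    "fmt"

    "github.com/BurntSushi/toml"
)

type A struct {
    Host string // untagged field, field name "Host"
}

type B struct {
    Addr string `toml:"Host"` // tagged to the same name, at the same depth
}

type Config struct {
    A
    B
}

func main() {
    var c Config
    // Both embedded fields sit at index length 1 with the name "Host";
    // dominantField prefers the tagged one, so B.Addr receives the value.
    if _, err := toml.Decode(`Host = "example.com"`, &c); err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(c.B.Addr, c.A.Host) // "example.com" ""
}
```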
117
vendor/github.com/Graylog2/go-gelf/README.md
generated
vendored
117
vendor/github.com/Graylog2/go-gelf/README.md
generated
vendored
|
@ -1,117 +0,0 @@
|
|||
go-gelf - GELF Library and Writer for Go
========================================

[GELF] (Graylog Extended Log Format) is an application-level logging
protocol that avoids many of the shortcomings of [syslog]. While it
can be run over any stream or datagram transport protocol, it has
special support ([chunking]) to allow long messages to be split over
multiple datagrams.

Versions
--------

In order to enable versioning of this package with Go, this project
uses GoPkg.in. The default branch of this project will be v1
for some time to prevent breaking clients. We encourage all projects
to change their imports to the new GoPkg.in URIs as soon as possible.

To see up-to-date code, make sure to switch to the master branch.

v1.0.0
------

This implementation currently supports UDP and TCP as transport
protocols. TLS is unsupported.

The library provides an API that applications can use to log messages
directly to a Graylog server and an `io.Writer` that can be used to
redirect the standard library's log messages (`os.Stderr`) to a
Graylog server.

[GELF]: http://docs.graylog.org/en/2.2/pages/gelf.html
[syslog]: https://tools.ietf.org/html/rfc5424
[chunking]: http://docs.graylog.org/en/2.2/pages/gelf.html#chunked-gelf


Installing
----------

go-gelf is go get-able:

    go get gopkg.in/Graylog2/go-gelf.v1/gelf

or

    go get github.com/Graylog2/go-gelf/gelf

This will get you version 1.0.0, with only UDP support and the legacy API.
Newer versions are available through GoPkg.in:

    go get gopkg.in/Graylog2/go-gelf.v2/gelf

Usage
-----

The easiest way to integrate Graylog logging into your Go app is by
having your `main` function (or even `init`) call `log.SetOutput()`.
By using an `io.MultiWriter`, we can log to both stderr and Graylog,
giving us both centralized and local logs. (Redundancy is nice.)

```golang
package main

import (
    "flag"
    "gopkg.in/Graylog2/go-gelf.v2/gelf"
    "io"
    "log"
    "os"
)

func main() {
    var graylogAddr string

    flag.StringVar(&graylogAddr, "graylog", "", "graylog server addr")
    flag.Parse()

    if graylogAddr != "" {
        // If using UDP
        gelfWriter, err := gelf.NewUDPWriter(graylogAddr)
        // If using TCP
        //gelfWriter, err := gelf.NewTCPWriter(graylogAddr)
        if err != nil {
            log.Fatalf("gelf.NewWriter: %s", err)
        }
        // log to both stderr and graylog2
        log.SetOutput(io.MultiWriter(os.Stderr, gelfWriter))
        log.Printf("logging to stderr & graylog2@'%s'", graylogAddr)
    }

    // From here on out, any calls to log.Print* functions
    // will appear on stderr, and be sent over UDP or TCP to the
    // specified Graylog2 server.

    log.Printf("Hello gray World")

    // ...
}
```

The above program can be invoked as:

    go run test.go -graylog=localhost:12201

When using UDP, messages may be dropped or re-ordered. However, Graylog
server availability will not impact application performance; there is
a small, fixed overhead per log call regardless of whether the target
server is reachable or not.


To Do
-----

- WriteMessage example

License
-------

go-gelf is offered under the MIT license, see LICENSE for details.
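Since the To Do above still lists a `WriteMessage` example, here is a hedged sketch of one; the `gelf.Message` field names are assumptions to verify against the vendored package before use:

```golang
package main

import (
    "log"
    "os"
    "time"

    "gopkg.in/Graylog2/go-gelf.v2/gelf"
)

func main() {
    w, err := gelf.NewUDPWriter("localhost:12201")
    if err != nil {
        log.Fatalf("gelf.NewUDPWriter: %s", err)
    }
    host, _ := os.Hostname()

    // Build a GELF message by hand instead of going through log.SetOutput.
    // Field names follow the v2 Message type as I understand it (assumption).
    err = w.WriteMessage(&gelf.Message{
        Version:  "1.1",
        Host:     host,
        Short:    "disk space low",
        Full:     "disk space low on /var: 93% used",
        TimeUnix: float64(time.Now().UnixNano()) / 1e9,
        Level:    3, // syslog severity: error
        Extra:    map[string]interface{}{"_volume": "/var"},
    })
    if err != nil {
        log.Fatalf("WriteMessage: %s", err)
    }
}
```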
1
vendor/github.com/Microsoft/go-winio/.gitignore
generated
vendored
Normal file
1
vendor/github.com/Microsoft/go-winio/.gitignore
generated
vendored
Normal file
|
@ -0,0 +1 @@
|
|||
*.exe
1
vendor/github.com/Microsoft/go-winio/CODEOWNERS
generated
vendored
Normal file
1
vendor/github.com/Microsoft/go-winio/CODEOWNERS
generated
vendored
Normal file
|
@ -0,0 +1 @@
|
|||
* @microsoft/containerplat
9
vendor/github.com/Microsoft/go-winio/go.mod
generated
vendored
9
vendor/github.com/Microsoft/go-winio/go.mod
generated
vendored
|
@ -1,9 +0,0 @@
|
|||
module github.com/Microsoft/go-winio

go 1.12

require (
    github.com/pkg/errors v0.9.1
    github.com/sirupsen/logrus v1.7.0
    golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c
)
18
vendor/github.com/Microsoft/go-winio/pkg/etwlogrus/HookTest.wprp
generated
vendored
Normal file
18
vendor/github.com/Microsoft/go-winio/pkg/etwlogrus/HookTest.wprp
generated
vendored
Normal file
|
@ -0,0 +1,18 @@
|
|||
<WindowsPerformanceRecorder Version="1">
  <Profiles>
    <EventCollector Id="Collector" Name="MyCollector">
      <BufferSize Value="256"/>
      <Buffers Value="100"/>
    </EventCollector>
    <EventProvider Id="HookTest" Name="5e50de03-107c-5a83-74c6-998c4491e7e9"/>
    <Profile Id="Test.Verbose.File" Name="Test" Description="Test" LoggingMode="File" DetailLevel="Verbose">
      <Collectors>
        <EventCollectorId Value="Collector">
          <EventProviders>
            <EventProviderId Value="HookTest"/>
          </EventProviders>
        </EventCollectorId>
      </Collectors>
    </Profile>
  </Profiles>
</WindowsPerformanceRecorder>
1
vendor/github.com/Microsoft/hcsshim/.gitattributes
generated
vendored
Normal file
1
vendor/github.com/Microsoft/hcsshim/.gitattributes
generated
vendored
Normal file
|
@ -0,0 +1 @@
|
|||
* text=auto eol=lf
3
vendor/github.com/Microsoft/hcsshim/.gitignore
generated
vendored
Normal file
3
vendor/github.com/Microsoft/hcsshim/.gitignore
generated
vendored
Normal file
|
@ -0,0 +1,3 @@
|
|||
*.exe
.idea
.vscode
1
vendor/github.com/Microsoft/hcsshim/CODEOWNERS
generated
vendored
Normal file
1
vendor/github.com/Microsoft/hcsshim/CODEOWNERS
generated
vendored
Normal file
|
@ -0,0 +1 @@
|
|||
* @microsoft/containerplat
49
vendor/github.com/Microsoft/hcsshim/Protobuild.toml
generated
vendored
Normal file
49
vendor/github.com/Microsoft/hcsshim/Protobuild.toml
generated
vendored
Normal file
|
@ -0,0 +1,49 @@
|
|||
version = "unstable"
|
||||
generator = "gogoctrd"
|
||||
plugins = ["grpc", "fieldpath"]
|
||||
|
||||
# Control protoc include paths. Below are usually some good defaults, but feel
|
||||
# free to try it without them if it works for your project.
|
||||
[includes]
|
||||
# Include paths that will be added before all others. Typically, you want to
|
||||
# treat the root of the project as an include, but this may not be necessary.
|
||||
before = ["./protobuf"]
|
||||
|
||||
# Paths that should be treated as include roots in relation to the vendor
|
||||
# directory. These will be calculated with the vendor directory nearest the
|
||||
# target package.
|
||||
packages = ["github.com/gogo/protobuf"]
|
||||
|
||||
# Paths that will be added untouched to the end of the includes. We use
|
||||
# `/usr/local/include` to pickup the common install location of protobuf.
|
||||
# This is the default.
|
||||
after = ["/usr/local/include"]
|
||||
|
||||
# This section maps protobuf imports to Go packages. These will become
|
||||
# `-M` directives in the call to the go protobuf generator.
|
||||
[packages]
|
||||
"gogoproto/gogo.proto" = "github.com/gogo/protobuf/gogoproto"
|
||||
"google/protobuf/any.proto" = "github.com/gogo/protobuf/types"
|
||||
"google/protobuf/empty.proto" = "github.com/gogo/protobuf/types"
|
||||
"google/protobuf/struct.proto" = "github.com/gogo/protobuf/types"
|
||||
"google/protobuf/descriptor.proto" = "github.com/gogo/protobuf/protoc-gen-gogo/descriptor"
|
||||
"google/protobuf/field_mask.proto" = "github.com/gogo/protobuf/types"
|
||||
"google/protobuf/timestamp.proto" = "github.com/gogo/protobuf/types"
|
||||
"google/protobuf/duration.proto" = "github.com/gogo/protobuf/types"
|
||||
"github/containerd/cgroups/stats/v1/metrics.proto" = "github.com/containerd/cgroups/stats/v1"
|
||||
|
||||
[[overrides]]
|
||||
prefixes = ["github.com/Microsoft/hcsshim/internal/shimdiag"]
|
||||
plugins = ["ttrpc"]
|
||||
|
||||
[[overrides]]
|
||||
prefixes = ["github.com/Microsoft/hcsshim/internal/computeagent"]
|
||||
plugins = ["ttrpc"]
|
||||
|
||||
[[overrides]]
|
||||
prefixes = ["github.com/Microsoft/hcsshim/internal/ncproxyttrpc"]
|
||||
plugins = ["ttrpc"]
|
||||
|
||||
[[overrides]]
|
||||
prefixes = ["github.com/Microsoft/hcsshim/internal/vmservice"]
|
||||
plugins = ["ttrpc"]
|
0
vendor/github.com/Microsoft/hcsshim/cmd/containerd-shim-runhcs-v1/options/next.pb.txt
generated
vendored
Normal file
0
vendor/github.com/Microsoft/hcsshim/cmd/containerd-shim-runhcs-v1/options/next.pb.txt
generated
vendored
Normal file
12
vendor/github.com/Microsoft/hcsshim/functional_tests.ps1
generated
vendored
Normal file
12
vendor/github.com/Microsoft/hcsshim/functional_tests.ps1
generated
vendored
Normal file
|
@ -0,0 +1,12 @@
|
|||
# Requirements so far:
# dockerd running
# - image microsoft/nanoserver (matching host base image) docker load -i c:\baseimages\nanoserver.tar
# - image alpine (linux) docker pull --platform=linux alpine


# TODO: Add this as a parameter for debugging, i.e. "functional-tests -debug=$true"
#$env:HCSSHIM_FUNCTIONAL_TESTS_DEBUG="yes please"

#pushd uvm
go test -v -tags "functional uvmcreate uvmscratch uvmscsi uvmvpmem uvmvsmb uvmp9" ./...
#popd
31
vendor/github.com/Microsoft/hcsshim/go.mod
generated
vendored
31
vendor/github.com/Microsoft/hcsshim/go.mod
generated
vendored
|
@ -1,31 +0,0 @@
|
|||
module github.com/Microsoft/hcsshim

go 1.13

require (
    github.com/Microsoft/go-winio v0.4.17
    github.com/cenkalti/backoff/v4 v4.1.1
    github.com/containerd/cgroups v1.0.1
    github.com/containerd/console v1.0.2
    github.com/containerd/containerd v1.4.9
    github.com/containerd/continuity v0.1.0 // indirect
    github.com/containerd/fifo v1.0.0 // indirect
    github.com/containerd/go-runc v1.0.0
    github.com/containerd/ttrpc v1.1.0
    github.com/containerd/typeurl v1.0.2
    github.com/gogo/protobuf v1.3.2
    github.com/opencontainers/runtime-spec v1.0.3-0.20200929063507-e6143ca7d51d
    github.com/pkg/errors v0.9.1
    github.com/sirupsen/logrus v1.8.1
    github.com/urfave/cli v1.22.2
    go.opencensus.io v0.22.3
    golang.org/x/sync v0.0.0-20201207232520-09787c993a3a
    golang.org/x/sys v0.0.0-20210324051608-47abb6519492
    google.golang.org/grpc v1.33.2
    gotest.tools/v3 v3.0.3 // indirect
)

replace (
    google.golang.org/genproto => google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63
    google.golang.org/grpc => google.golang.org/grpc v1.27.1
)
1
vendor/github.com/RackSec/srslog/.gitignore
generated
vendored
Normal file
1
vendor/github.com/RackSec/srslog/.gitignore
generated
vendored
Normal file
|
@ -0,0 +1 @@
|
|||
.cover
18
vendor/github.com/RackSec/srslog/.travis.yml
generated
vendored
Normal file
18
vendor/github.com/RackSec/srslog/.travis.yml
generated
vendored
Normal file
|
@ -0,0 +1,18 @@
|
|||
sudo: required
dist: trusty
group: edge
language: go
go:
  - 1.5
before_install:
  - pip install --user codecov
script:
  - |
    go get ./...
    go test -v -coverprofile=coverage.txt -covermode=atomic
    go vet
after_success:
  - codecov
notifications:
  slack:
    secure: dtDue9gP6CRR1jYjEf6raXXFak3QKGcCFvCf5mfvv5XScdpmc3udwgqc5TdyjC0goaC9OK/4jTcCD30dYZm/u6ux3E9mo3xwMl2xRLHx76p5r9rSQtloH19BDwA2+A+bpDfFQVz05k2YXuTiGSvNMMdwzx+Dr294Sl/z43RFB4+b9/R/6LlFpRW89IwftvpLAFnBy4K/ZcspQzKM+rQfQTL5Kk+iZ/KBsuR/VziDq6MoJ8t43i4ee8vwS06vFBKDbUiZ4FIZpLgc2RAL5qso5aWRKYXL6waXfoKHZWKPe0w4+9IY1rDJxG1jEb7YGgcbLaF9xzPRRs2b2yO/c87FKpkh6PDgYHfLjpgXotCoojZrL4p1x6MI1ldJr3NhARGPxS9r4liB9n6Y5nD+ErXi1IMf55fuUHcPY27Jc0ySeLFeM6cIWJ8OhFejCgGw6a5DnnmJo0PqopsaBDHhadpLejT1+K6bL2iGkT4SLcVNuRGLs+VyuNf1+5XpkWZvy32vquO7SZOngLLBv+GIem+t3fWm0Z9s/0i1uRCQei1iUutlYjoV/LBd35H2rhob4B5phIuJin9kb0zbHf6HnaoN0CtN8r0d8G5CZiInVlG5Xcid5Byb4dddf5U2EJTDuCMVyyiM7tcnfjqw9UbVYNxtYM9SzcqIq+uVqM8pYL9xSec=
50
vendor/github.com/RackSec/srslog/CODE_OF_CONDUCT.md
generated
vendored
Normal file
50
vendor/github.com/RackSec/srslog/CODE_OF_CONDUCT.md
generated
vendored
Normal file
|
@ -0,0 +1,50 @@
|
|||
# Contributor Code of Conduct

As contributors and maintainers of this project, and in the interest of
fostering an open and welcoming community, we pledge to respect all people who
contribute through reporting issues, posting feature requests, updating
documentation, submitting pull requests or patches, and other activities.

We are committed to making participation in this project a harassment-free
experience for everyone, regardless of level of experience, gender, gender
identity and expression, sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, or nationality.

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing others' private information, such as physical or electronic
  addresses, without explicit permission
* Other unethical or unprofessional conduct

Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.

By adopting this Code of Conduct, project maintainers commit themselves to
fairly and consistently applying these principles to every aspect of managing
this project. Project maintainers who do not follow or enforce the Code of
Conduct may be permanently removed from the project team.

This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting a project maintainer at [sirsean@gmail.com]. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. Maintainers are
obligated to maintain confidentiality with regard to the reporter of an
incident.


This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 1.3.0, available at
[http://contributor-covenant.org/version/1/3/0/][version]

[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/3/0/
22
vendor/github.com/armon/go-metrics/.gitignore
generated
vendored
Normal file
22
vendor/github.com/armon/go-metrics/.gitignore
generated
vendored
Normal file
|
@ -0,0 +1,22 @@
|
|||
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so

# Folders
_obj
_test

# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out

*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*

_testmain.go

*.exe
0
vendor/github.com/armon/go-metrics/metrics.go
generated
vendored
Executable file → Normal file
0
vendor/github.com/armon/go-metrics/metrics.go
generated
vendored
Executable file → Normal file
0
vendor/github.com/armon/go-metrics/sink.go
generated
vendored
Executable file → Normal file
0
vendor/github.com/armon/go-metrics/sink.go
generated
vendored
Executable file → Normal file
0
vendor/github.com/armon/go-metrics/start.go
generated
vendored
Executable file → Normal file
0
vendor/github.com/armon/go-metrics/start.go
generated
vendored
Executable file → Normal file
0
vendor/github.com/armon/go-metrics/statsite.go
generated
vendored
Executable file → Normal file
0
vendor/github.com/armon/go-metrics/statsite.go
generated
vendored
Executable file → Normal file
22
vendor/github.com/armon/go-radix/.gitignore
generated
vendored
Normal file
22
vendor/github.com/armon/go-radix/.gitignore
generated
vendored
Normal file
|
@ -0,0 +1,22 @@
|
|||
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so

# Folders
_obj
_test

# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out

*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*

_testmain.go

*.exe
3
vendor/github.com/armon/go-radix/.travis.yml
generated
vendored
Normal file
3
vendor/github.com/armon/go-radix/.travis.yml
generated
vendored
Normal file
|
@ -0,0 +1,3 @@
|
|||
language: go
go:
  - tip
502
vendor/github.com/aws/aws-sdk-go/README.md
generated
vendored
502
vendor/github.com/aws/aws-sdk-go/README.md
generated
vendored
|
@ -1,502 +0,0 @@
|
|||
[](https://docs.aws.amazon.com/sdk-for-go/api) [](https://gitter.im/aws/aws-sdk-go?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [](https://travis-ci.org/aws/aws-sdk-go) [](https://github.com/aws/aws-sdk-go/blob/master/LICENSE.txt)

# AWS SDK for Go

aws-sdk-go is the official AWS SDK for the Go programming language.

Check out our [release notes](https://github.com/aws/aws-sdk-go/releases) for
information about the latest bug fixes, updates, and features added to the SDK.

We [announced](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-2-0-developer-preview/) the Developer Preview for the [v2 AWS SDK for Go](https://github.com/aws/aws-sdk-go-v2). The v2 SDK source is available at https://github.com/aws/aws-sdk-go-v2, and can be added to your project with `go get github.com/aws/aws-sdk-go-v2`. Check out the v2 SDK's [changes and updates](https://github.com/aws/aws-sdk-go-v2/blob/master/CHANGELOG.md), and let us know what you think. We want your feedback.

We have a pilot redesign of the [AWS SDK for Go API reference documentation](https://docs.aws.amazon.com/sdk-for-go/v1/api/gosdk-apiref.html). Let us know what you think.

## Installing

Use `go get` to retrieve the SDK to add it to your `GOPATH` workspace, or
your project's Go module dependencies.

    go get github.com/aws/aws-sdk-go

To update the SDK use `go get -u` to retrieve the latest version of the SDK.

    go get -u github.com/aws/aws-sdk-go

### Dependencies

The SDK includes a `vendor` folder containing the runtime dependencies of the
SDK. The metadata of the SDK's dependencies can be found in the Go module file
`go.mod` or Dep file `Gopkg.toml`.

### Go Modules

If you are using Go modules, your `go get` will default to the latest tagged
release version of the SDK. To get a specific release version of the SDK use
`@<tag>` in your `go get` command.

    go get github.com/aws/aws-sdk-go@v1.15.77

To get the latest SDK repository change use `@latest`.

    go get github.com/aws/aws-sdk-go@latest

### Go 1.5

If you are using Go 1.5 without vendoring enabled (`GO15VENDOREXPERIMENT=1`),
you will need to use `...` when retrieving the SDK to get its dependencies.

    go get github.com/aws/aws-sdk-go/...

This will still include the `vendor` folder. The `vendor` folder can be deleted
if not used by your environment.

    rm -rf $GOPATH/src/github.com/aws/aws-sdk-go/vendor

## Getting Help

Please use these community resources for getting help. We use the GitHub issues
for tracking bugs and feature requests.

* Ask a question on [StackOverflow](http://stackoverflow.com/) and tag it with the [`aws-sdk-go`](http://stackoverflow.com/questions/tagged/aws-sdk-go) tag.
* Come join the AWS SDK for Go community chat on [gitter](https://gitter.im/aws/aws-sdk-go).
* Open a support ticket with [AWS Support](http://docs.aws.amazon.com/awssupport/latest/user/getting-started.html).
* If you think you may have found a bug, please open an [issue](https://github.com/aws/aws-sdk-go/issues/new).

## Opening Issues

If you encounter a bug with the AWS SDK for Go we would like to hear about it.
Search the [existing issues](https://github.com/aws/aws-sdk-go/issues) and see
if others are also experiencing the issue before opening a new issue. Please
include the version of the AWS SDK for Go, Go language, and OS you're using.
Please also include a reproduction case when appropriate.

The GitHub issues are intended for bug reports and feature requests. For help
and questions with using the AWS SDK for Go please make use of the resources listed
in the [Getting Help](https://github.com/aws/aws-sdk-go#getting-help) section.
Keeping the list of open issues lean will help us respond in a timely manner.

## Reference Documentation

[`Getting Started Guide`](https://aws.amazon.com/sdk-for-go/) - This document
is a general introduction on how to configure and make requests with the SDK.
If this is your first time using the SDK, this documentation and the API
documentation will help you get started. This document focuses on the syntax
and behavior of the SDK. The [Service Developer
Guide](https://aws.amazon.com/documentation/) will help you get started using
specific AWS services.

[`SDK API Reference
Documentation`](https://docs.aws.amazon.com/sdk-for-go/api/) - Use this
document to look up all API operation input and output parameters for AWS
services supported by the SDK. The API reference also includes documentation of
the SDK, examples of how to use the SDK, service client API operations, and
the parameters each API operation requires.

[`Service Developer Guide`](https://aws.amazon.com/documentation/) - Use this
documentation to learn how to interface with AWS services. These guides are
great for getting started with a service, or when looking for more
information about a service. While this document is not required for coding,
services may supply helpful samples worth looking at.

[`SDK Examples`](https://github.com/aws/aws-sdk-go/tree/master/example) -
Included in the SDK's repo are several hand crafted examples using the SDK
features and AWS services.

## Overview of SDK's Packages

The SDK is composed of two main components, SDK core, and service clients.
The SDK core packages are all available under the aws package at the root of
the SDK. Each client for a supported AWS service is available within its own
package under the service folder at the root of the SDK.

* aws - SDK core, provides common shared types such as Config, Logger,
  and utilities to make working with API parameters easier.

* awserr - Provides the error interface that the SDK will use for all
  errors that occur in the SDK's processing. This includes service API
  response errors as well. The Error type is made up of a code and message.
  Cast the SDK's returned error type to awserr.Error and call the Code
  method to compare the returned error to specific error codes. See the package's
  documentation for additional values that can be extracted such as RequestID.

* credentials - Provides the types and built in credentials providers
  the SDK will use to retrieve AWS credentials to make API requests with.
  Nested under this folder are also additional credentials providers such as
  stscreds for assuming IAM roles, and ec2rolecreds for EC2 Instance roles.

* endpoints - Provides the AWS Regions and Endpoints metadata for the SDK.
  Use this to look up AWS service endpoint information such as which services
  are in a region, and what regions a service is in. Constants are also provided
  for all region identifiers, e.g. UsWest2RegionID for "us-west-2".

* session - Provides initial default configuration, and loads
  configuration from external sources such as environment and shared
  credentials file.

* request - Provides the API request sending, and retry logic for the SDK.
  This package also includes utilities for defining your own request
  retryer, and configuring how the SDK processes the request.

* service - Clients for AWS services. All services supported by the SDK are
  available under this folder.

## How to Use the SDK's AWS Service Clients

The SDK includes the Go types and utilities you can use to make requests to
AWS service APIs. Within the service folder at the root of the SDK you'll find
a package for each AWS service the SDK supports. All service clients follow a common pattern of creation and usage.

When creating a client for an AWS service you'll first need to have a Session
value constructed. The Session provides shared configuration that can be shared
between your service clients. When service clients are created you can pass
in additional configuration via the aws.Config type to override the configuration
provided by the Session, creating service client instances with custom
configuration.

Once the service's client is created you can use it to make API requests to the
AWS service. These clients are safe to use concurrently.

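As a quick illustration of that creation pattern, here is a minimal sketch that only uses calls shown elsewhere in this README (the region string is illustrative):

```go
package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    // One Session, shared by every client created from it.
    sess := session.Must(session.NewSession())

    // An S3 client that inherits the Session's region and credentials.
    svc := s3.New(sess)

    // A second client that overrides just the region via aws.Config.
    svcWest := s3.New(sess, &aws.Config{Region: aws.String("us-west-2")})

    fmt.Println(svc != nil, svcWest != nil)
}
```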
## Configuring the SDK

In the AWS SDK for Go, you can configure settings for service clients, such
as the log level and maximum number of retries. Most settings are optional;
however, for each service client, you must specify a region and your credentials.
The SDK uses these values to send requests to the correct AWS region and sign
requests with the correct credentials. You can specify these values as part
of a session or as environment variables.

See the SDK's [configuration guide][config_guide] for more information.

See the [session][session_pkg] package documentation for more information on how to use Session
with the SDK.

See the [Config][config_typ] type in the [aws][aws_pkg] package for more information on configuration
options.

[config_guide]: https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html
[session_pkg]: https://docs.aws.amazon.com/sdk-for-go/api/aws/session/
[config_typ]: https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config
[aws_pkg]: https://docs.aws.amazon.com/sdk-for-go/api/aws/

### Configuring Credentials

When using the SDK you'll generally need your AWS credentials to authenticate
with AWS services. The SDK supports multiple methods of supplying these
credentials. By default the SDK will source credentials automatically from
its default credential chain. See the session package for more information
on this chain, and how to configure it. The common items in the credential
chain are the following:

* Environment Credentials - Set of environment variables that are useful
  when sub processes are created for specific roles.

* Shared Credentials file (~/.aws/credentials) - This file stores your
  credentials based on a profile name and is useful for local development.

* EC2 Instance Role Credentials - Use an EC2 Instance Role to assign credentials
  to an application running on an EC2 instance. This removes the need to manage
  credential files in production.

Credentials can be configured in code as well by setting the Config's Credentials
value to a custom provider or using one of the providers included with the
SDK to bypass the default credential chain and use a custom one. This is
helpful when you want to instruct the SDK to only use a specific set of
credentials or providers.

This example creates a credential provider for assuming an IAM role, "myRoleARN",
and configures the S3 service client to use that role for API requests.

```go
// Initial credentials loaded from SDK's default credential chain. Such as
// the environment, shared credentials (~/.aws/credentials), or EC2 Instance
// Role. These credentials will be used to make the STS Assume Role API call.
sess := session.Must(session.NewSession())

// Create the credentials from AssumeRoleProvider to assume the role
// referenced by the "myRoleARN" ARN.
creds := stscreds.NewCredentials(sess, "myRoleArn")

// Create service client value configured for credentials
// from assumed role.
svc := s3.New(sess, &aws.Config{Credentials: creds})
```

See the [credentials][credentials_pkg] package documentation for more information on credential
providers included with the SDK, and how to customize the SDK's usage of
credentials.

The SDK has support for the shared configuration file (~/.aws/config). This
support can be enabled by setting the environment variable "AWS_SDK_LOAD_CONFIG=1",
or enabling the feature in code when creating a Session via the
Option's SharedConfigState parameter.

```go
sess := session.Must(session.NewSessionWithOptions(session.Options{
    SharedConfigState: session.SharedConfigEnable,
}))
```

[credentials_pkg]: https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials

### Configuring AWS Region

In addition to the credentials you'll need to specify the region the SDK
will use to make AWS API requests. In the SDK you can specify the region
either with an environment variable, or directly in code when a Session or
service client is created. The last value specified in code wins if the region
is specified multiple ways.

To set the region via the environment variable, set "AWS_REGION" to the
region you want the SDK to use. Using this method to set the region will
allow you to run your application in multiple regions without needing additional
code in the application to select the region.

    AWS_REGION=us-west-2

The endpoints package includes constants for all regions the SDK knows. The
values are all suffixed with RegionID. These values are helpful, because they
reduce the need to type the region string manually.

To set the region on a Session, set the aws package's Config struct parameter
Region to the AWS region you want the service clients created from the session to
use. This is helpful when you want to create multiple service clients, and
all of the clients make API requests to the same region.

```go
sess := session.Must(session.NewSession(&aws.Config{
    Region: aws.String(endpoints.UsWest2RegionID),
}))
```

See the [endpoints][endpoints_pkg] package for the AWS Regions and Endpoints metadata.

In addition to setting the region when creating a Session you can also set
the region on a per service client basis. This overrides the region of a
Session. This is helpful when you want to create service clients in specific
regions different from the Session's region.

```go
svc := s3.New(sess, &aws.Config{
    Region: aws.String(endpoints.UsWest2RegionID),
})
```

See the [Config][config_typ] type in the [aws][aws_pkg] package for more information and additional
options such as setting the Endpoint, and other service client configuration options.

[endpoints_pkg]: https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/

## Making API Requests

Once the client is created you can make an API request to the service.
Each API method takes an input parameter, and returns the service response
and an error. The SDK provides methods for making the API call in multiple ways.

In this list we'll use the S3 ListObjects API as an example for the different
ways of making API requests.

* ListObjects - Base API operation that will make the API request to the service.

* ListObjectsRequest - API methods suffixed with Request will construct the
  API request, but not send it. This is also helpful when you want to get a
  presigned URL for a request, and share the presigned URL instead of your
  application making the request directly.

* ListObjectsPages - Same as the base API operation, but uses a callback to
  automatically handle pagination of the API's response.

* ListObjectsWithContext - Same as base API operation, but adds support for
  the Context pattern. This is helpful for controlling the canceling of in
  flight requests. See the Go standard library context package for more
  information. This method also takes the request package's Option functional
  options as the variadic argument for modifying how the request will be
  made, or extracting information from the raw HTTP response.

* ListObjectsPagesWithContext - Same as ListObjectsPages, but adds support for
  the Context pattern. Similar to ListObjectsWithContext this method also
  takes the request package's Option function option types as the variadic
  argument.

In addition to the API operations the SDK also includes several higher level
methods that abstract checking for and waiting for an AWS resource to be in
a desired state. In this list we'll use WaitUntilBucketExists to demonstrate
the different forms of waiters.

* WaitUntilBucketExists - Method that makes an API request to query an AWS service for
  a resource's state. Will return successfully when that state is reached.

* WaitUntilBucketExistsWithContext - Same as WaitUntilBucketExists, but adds
  support for the Context pattern. In addition these methods take the request
  package's WaiterOptions to configure the waiter, and how the underlying request
  will be made by the SDK.

The API method will document which error codes the service might return for
the operation. These errors will also be available as const strings prefixed
with "ErrCode" in the service client's package. If there are no errors listed
in the API's SDK documentation you'll need to consult the AWS service's API
documentation for the errors that could be returned.

```go
ctx := context.Background()

result, err := svc.GetObjectWithContext(ctx, &s3.GetObjectInput{
    Bucket: aws.String("my-bucket"),
    Key:    aws.String("my-key"),
})
if err != nil {
    // Cast err to awserr.Error to handle specific error codes.
    aerr, ok := err.(awserr.Error)
    if ok && aerr.Code() == s3.ErrCodeNoSuchKey {
        // Specific error code handling
    }
    return err
}

// Make sure to close the body when done with it for S3 GetObject APIs,
// or connections will be leaked.
defer result.Body.Close()

fmt.Println("Object Size:", aws.Int64Value(result.ContentLength))
```

### API Request Pagination and Resource Waiters

Pagination helper methods are suffixed with "Pages", and provide the
functionality needed to round trip API page requests. Pagination methods
take a callback function that will be called for each page of the API's response.

```go
objects := []string{}
err := svc.ListObjectsPagesWithContext(ctx, &s3.ListObjectsInput{
    Bucket: aws.String(myBucket),
}, func(p *s3.ListObjectsOutput, lastPage bool) bool {
    for _, o := range p.Contents {
        objects = append(objects, aws.StringValue(o.Key))
    }
    return true // continue paging
})
if err != nil {
    panic(fmt.Sprintf("failed to list objects for bucket, %s, %v", myBucket, err))
}

fmt.Println("Objects in bucket:", objects)
```

Waiter helper methods provide the functionality to wait for an AWS resource
state. These methods abstract the logic needed to check the state of an
AWS resource, and wait until that resource is in a desired state. The waiter
will block until the resource is in the state that is desired, an error occurs,
or the waiter times out. If the waiter times out, the error code returned will
be request.WaiterResourceNotReadyErrorCode.

```go
err := svc.WaitUntilBucketExistsWithContext(ctx, &s3.HeadBucketInput{
    Bucket: aws.String(myBucket),
})
if err != nil {
    aerr, ok := err.(awserr.Error)
    if ok && aerr.Code() == request.WaiterResourceNotReadyErrorCode {
        fmt.Fprintf(os.Stderr, "timed out while waiting for bucket to exist")
    }
    panic(fmt.Errorf("failed to wait for bucket to exist, %v", err))
}
fmt.Println("Bucket", myBucket, "exists")
```

## Complete SDK Example

This example shows a complete working Go file which will upload a file to S3
and use the Context pattern to implement timeout logic that will cancel the
request if it takes too long. This example highlights how to use sessions,
create a service client, make a request, handle the error, and process the
response.

```go
package main

import (
    "context"
    "flag"
    "fmt"
    "os"
    "time"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/awserr"
    "github.com/aws/aws-sdk-go/aws/request"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

// Uploads a file to S3 given a bucket and object key. Also takes a duration
// value to terminate the upload if it doesn't complete within that time.
//
// The AWS Region needs to be provided in the AWS shared config or on the
// environment variable as `AWS_REGION`. Credentials must also be provided;
// they default to the shared config file, but can be loaded from the
// environment if provided.
//
// Usage:
//   # Upload myfile.txt to myBucket/myKey. Must complete within 10 minutes or will fail
//   go run withContext.go -b mybucket -k myKey -d 10m < myfile.txt
func main() {
    var bucket, key string
    var timeout time.Duration

    flag.StringVar(&bucket, "b", "", "Bucket name.")
    flag.StringVar(&key, "k", "", "Object key name.")
    flag.DurationVar(&timeout, "d", 0, "Upload timeout.")
    flag.Parse()

    // All clients require a Session. The Session provides the client with
    // shared configuration such as region, endpoint, and credentials. A
    // Session should be shared where possible to take advantage of
    // configuration and credential caching. See the session package for
    // more information.
    sess := session.Must(session.NewSession())

    // Create a new instance of the service's client with a Session.
    // Optional aws.Config values can also be provided as variadic arguments
    // to the New function. This option allows you to provide service
    // specific configuration.
    svc := s3.New(sess)

    // Create a context with a timeout that will abort the upload if it takes
    // more than the passed in timeout.
    ctx := context.Background()
    var cancelFn func()
    if timeout > 0 {
        ctx, cancelFn = context.WithTimeout(ctx, timeout)
    }
    // Ensure the context is canceled to prevent leaking.
    // See context package for more information, https://golang.org/pkg/context/
    if cancelFn != nil {
        defer cancelFn()
    }

    // Uploads the object to S3. The Context will interrupt the request if the
    // timeout expires.
    _, err := svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
        Body:   os.Stdin,
    })
    if err != nil {
        if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode {
            // If the SDK can determine the request or retry delay was canceled
            // by a context the CanceledErrorCode error code will be returned.
            fmt.Fprintf(os.Stderr, "upload canceled due to timeout, %v\n", err)
        } else {
            fmt.Fprintf(os.Stderr, "failed to upload object, %v\n", err)
        }
        os.Exit(1)
    }

    fmt.Printf("successfully uploaded file to %s/%s\n", bucket, key)
}
```

## License

This SDK is distributed under the
[Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0),
see LICENSE.txt and NOTICE.txt for more information.
12
vendor/github.com/aws/aws-sdk-go/aws/credentials/example.ini
generated
vendored
Normal file
12
vendor/github.com/aws/aws-sdk-go/aws/credentials/example.ini
generated
vendored
Normal file
|
@ -0,0 +1,12 @@
|
|||
[default]
aws_access_key_id = accessKey
aws_secret_access_key = secret
aws_session_token = token

[no_token]
aws_access_key_id = accessKey
aws_secret_access_key = secret

[with_colon]
aws_access_key_id: accessKey
aws_secret_access_key: secret
3
vendor/github.com/aws/aws-sdk-go/go.mod
generated
vendored
3
vendor/github.com/aws/aws-sdk-go/go.mod
generated
vendored
|
@ -1,3 +0,0 @@
|
|||
module github.com/aws/aws-sdk-go

require github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af
4
vendor/github.com/aws/aws-sdk-go/private/README.md
generated
vendored
4
vendor/github.com/aws/aws-sdk-go/private/README.md
generated
vendored
|
@ -1,4 +0,0 @@
|
|||
## AWS SDK for Go Private packages ##
`private` is a collection of packages used internally by the SDK, and is subject to breaking changes. This package is not `internal` so that if you really need to use its functionality, and understand that breaking changes will be made, you are able to.

These packages will be refactored in the future so that the API generator and model parsers are exposed cleanly on their own, making it easier for you to generate your own code based on the API models.
31
vendor/github.com/beorn7/perks/README.md
generated
vendored
31
vendor/github.com/beorn7/perks/README.md
generated
vendored
|
@ -1,31 +0,0 @@
|
|||
# Perks for Go (golang.org)

Perks contains the Go package quantile that computes approximate quantiles over
an unbounded data stream within low memory and CPU bounds.

For more information and examples, see:
http://godoc.org/github.com/bmizerany/perks

A very special thank you and shout out to Graham Cormode (Rutgers University),
Flip Korn (AT&T Labs–Research), S. Muthukrishnan (Rutgers University), and
Divesh Srivastava (AT&T Labs–Research) for their research and publication of
[Effective Computation of Biased Quantiles over Data Streams](http://www.cs.rutgers.edu/~muthu/bquant.pdf)

Thank you, also:
* Armon Dadgar (@armon)
* Andrew Gerrand (@nf)
* Brad Fitzpatrick (@bradfitz)
* Keith Rarick (@kr)

FAQ:

Q: Why not move the quantile package into the project root?
A: I want to add more packages to perks later.

Copyright (C) 2013 Blake Mizerany

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
3
vendor/github.com/beorn7/perks/go.mod
generated
vendored
3
vendor/github.com/beorn7/perks/go.mod
generated
vendored
|
@ -1,3 +0,0 @@
|
|||
module github.com/beorn7/perks

go 1.11
2388
vendor/github.com/beorn7/perks/quantile/exampledata.txt
generated
vendored
Normal file
2388
vendor/github.com/beorn7/perks/quantile/exampledata.txt
generated
vendored
Normal file
File diff suppressed because it is too large
Load diff
23
vendor/github.com/bsphere/le_go/.gitignore
generated
vendored
Normal file
23
vendor/github.com/bsphere/le_go/.gitignore
generated
vendored
Normal file
|
@ -0,0 +1,23 @@
|
|||
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so

# Folders
_obj
_test

# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out

*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*

_testmain.go

*.exe
*.test
4
vendor/github.com/bsphere/le_go/.travis.yml
generated
vendored
Normal file
4
vendor/github.com/bsphere/le_go/.travis.yml
generated
vendored
Normal file
|
@ -0,0 +1,4 @@
|
|||
language: go

go:
  - 1.4
8
vendor/github.com/cespare/xxhash/v2/.travis.yml
generated
vendored
Normal file
8
vendor/github.com/cespare/xxhash/v2/.travis.yml
generated
vendored
Normal file
|
@ -0,0 +1,8 @@
|
|||
language: go
go:
  - "1.x"
  - master
env:
  - TAGS=""
  - TAGS="-tags purego"
script: go test $TAGS -v ./...
3
vendor/github.com/cespare/xxhash/v2/go.mod
generated
vendored
3
vendor/github.com/cespare/xxhash/v2/go.mod
generated
vendored
|
@ -1,3 +0,0 @@
|
|||
module github.com/cespare/xxhash/v2

go 1.11
17
vendor/github.com/cilium/ebpf/.clang-format
generated
vendored
Normal file
17
vendor/github.com/cilium/ebpf/.clang-format
generated
vendored
Normal file
|
@ -0,0 +1,17 @@
|
|||
---
Language: Cpp
BasedOnStyle: LLVM
AlignAfterOpenBracket: DontAlign
AlignConsecutiveAssignments: true
AlignEscapedNewlines: DontAlign
AlwaysBreakBeforeMultilineStrings: true
AlwaysBreakTemplateDeclarations: false
AllowAllParametersOfDeclarationOnNextLine: false
AllowShortFunctionsOnASingleLine: false
BreakBeforeBraces: Attach
IndentWidth: 4
KeepEmptyLinesAtTheStartOfBlocks: false
TabWidth: 4
UseTab: ForContinuationAndIndentation
ColumnLimit: 1000
...
13
vendor/github.com/cilium/ebpf/.gitignore
generated
vendored
Normal file
13
vendor/github.com/cilium/ebpf/.gitignore
generated
vendored
Normal file
|
@ -0,0 +1,13 @@
|
|||
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
*.o

# Test binary, build with `go test -c`
*.test

# Output of the go coverage tool, specifically when used with LiteIDE
*.out
29
vendor/github.com/cilium/ebpf/.golangci.yaml
generated
vendored
Normal file
29
vendor/github.com/cilium/ebpf/.golangci.yaml
generated
vendored
Normal file
|
@ -0,0 +1,29 @@
|
|||
---
issues:
  exclude-rules:
    # syscall param structs will have unused fields in Go code.
    - path: syscall.*.go
      linters:
        - structcheck

linters:
  disable-all: true
  enable:
    - deadcode
    - errcheck
    - goimports
    - gosimple
    - govet
    - ineffassign
    - misspell
    - staticcheck
    - structcheck
    - typecheck
    - unused
    - varcheck

    # Could be enabled later:
    # - gocyclo
    # - prealloc
    # - maligned
    # - gosec
80
vendor/github.com/cilium/ebpf/ARCHITECTURE.md
generated
vendored
Normal file
80
vendor/github.com/cilium/ebpf/ARCHITECTURE.md
generated
vendored
Normal file
|
@ -0,0 +1,80 @@
|
|||
Architecture of the library
===

    ELF -> Specifications -> Objects -> Links

ELF
---

BPF is usually produced by using Clang to compile a subset of C. Clang outputs
an ELF file which contains program byte code (aka BPF), but also metadata for
maps used by the program. The metadata follows the conventions set by libbpf
shipped with the kernel. Certain ELF sections have special meaning
and contain structures defined by libbpf. Newer versions of clang emit
additional metadata in BPF Type Format (aka BTF).

The library aims to be compatible with libbpf so that moving from a C toolchain
to a Go one creates little friction. To that end, the [ELF reader](elf_reader.go)
is tested against the Linux selftests and avoids introducing custom behaviour
if possible.

The output of the ELF reader is a `CollectionSpec` which encodes
all of the information contained in the ELF in a form that is easy to work with
in Go.

### BTF

The BPF Type Format describes more than just the types used by a BPF program. It
includes debug aids like which source line corresponds to which instruction and
what global variables are used.

[BTF parsing](internal/btf/) lives in a separate internal package since exposing
it would mean an additional maintenance burden, and because the API still
has sharp corners. The most important concept is the `btf.Type` interface, which
also describes things that aren't really types like `.rodata` or `.bss` sections.
`btf.Type`s can form cyclical graphs, which can easily lead to infinite loops if
one is not careful. Hopefully a safe pattern to work with `btf.Type` emerges as
we write more code that deals with it.

Specifications
---

`CollectionSpec`, `ProgramSpec` and `MapSpec` are blueprints for in-kernel
objects and contain everything necessary to execute the relevant `bpf(2)`
syscalls. Since the ELF reader outputs a `CollectionSpec` it's possible to
modify clang-compiled BPF code, for example to rewrite constants. At the same
time the [asm](asm/) package provides an assembler that can be used to generate
`ProgramSpec` on the fly.

Creating a spec should never require any privileges or be restricted in any way,
for example by only allowing programs in native endianness. This ensures that
the library stays flexible.

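A minimal sketch of that flow, assuming the usual `github.com/cilium/ebpf` import path; the object path and constant name are hypothetical:

```go
package main

import (
    "log"

    "github.com/cilium/ebpf"
)

func main() {
    // Parse the clang-compiled ELF into a CollectionSpec. No privileges needed.
    spec, err := ebpf.LoadCollectionSpec("bpf/program.o") // hypothetical path
    if err != nil {
        log.Fatalf("loading spec: %v", err)
    }

    // Rewrite a constant in the byte code before loading (name is illustrative).
    if err := spec.RewriteConstants(map[string]interface{}{
        "max_entries_limit": uint32(4096),
    }); err != nil {
        log.Fatalf("rewriting constants: %v", err)
    }

    // Loading the spec creates the in-kernel objects (Programs and Maps).
    coll, err := ebpf.NewCollection(spec)
    if err != nil {
        log.Fatalf("creating collection: %v", err)
    }
    defer coll.Close()
}
```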
Objects
---

`Program` and `Map` are the result of loading specs into the kernel. Sometimes
loading a spec will fail because the kernel is too old, or a feature is not
enabled. There are multiple ways the library deals with that:

* Fallback: older kernels don't allow naming programs and maps. The library
  automatically detects support for names, and omits them during load if
  necessary. This works since the name is primarily a debug aid.

* Sentinel error: sometimes it's possible to detect that a feature isn't available.
  In that case the library will return an error wrapping `ErrNotSupported`.
  This is also useful to skip tests that can't run on the current kernel
  (see the sketch after this section).

Once program and map objects are loaded they expose the kernel's low-level API,
e.g. `NextKey`. Often this API is awkward to use in Go, so there are safer
wrappers on top of the low-level API, like `MapIterator`. The low-level API is
useful as an out when our higher-level API doesn't support a particular use case.

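A hedged illustration of the sentinel-error pattern, assuming the package exposes `ErrNotSupported` at the top level (verify against the vendored version before relying on it):

```go
package main

import (
    "errors"
    "fmt"

    "github.com/cilium/ebpf"
    "github.com/cilium/ebpf/asm"
)

func main() {
    // Try to load a trivial named program; on older kernels a missing
    // feature should surface as an error wrapping ErrNotSupported.
    _, err := ebpf.NewProgram(&ebpf.ProgramSpec{
        Name: "example",
        Type: ebpf.SocketFilter,
        Instructions: asm.Instructions{
            asm.Mov.Imm(asm.R0, 0),
            asm.Return(),
        },
        License: "MIT",
    })
    if errors.Is(err, ebpf.ErrNotSupported) { // assumed export location
        fmt.Println("feature not supported on this kernel, skipping")
        return
    }
    if err != nil {
        fmt.Println("load failed:", err)
    }
}
```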
Links
---

BPF can be attached to many different points in the kernel and newer BPF hooks
tend to use bpf_link to do so. Older hooks unfortunately use a combination of
syscalls, netlink messages, etc. Adding support for a new link type should not
pull in large dependencies like netlink, so XDP programs or tracepoints are
out of scope.

46
vendor/github.com/cilium/ebpf/CODE_OF_CONDUCT.md
generated
vendored
Normal file
@@ -0,0 +1,46 @@
# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at nathanjsweet at gmail dot com or i at lmb dot io. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]

[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

40
vendor/github.com/cilium/ebpf/CONTRIBUTING.md
generated
vendored
Normal file
@@ -0,0 +1,40 @@
# How to contribute

Development is on [GitHub](https://github.com/cilium/ebpf) and contributions in
the form of pull requests and issues reporting bugs or suggesting new features
are welcome. Please take a look at [the architecture](ARCHITECTURE.md) to get
a better understanding of the high-level goals.

New features must be accompanied by tests. Before starting work on any large
feature, please [join](https://cilium.herokuapp.com/) the
[#libbpf-go](https://cilium.slack.com/messages/libbpf-go) channel on Slack to
discuss the design first.

When submitting pull requests, consider writing details about what problem you
are solving and why the proposed approach solves that problem in commit messages
and/or pull request description to help future library users and maintainers to
reason about the proposed changes.

## Running the tests

Many of the tests require privileges to set resource limits and load eBPF code.
The easiest way to obtain these is to run the tests with `sudo`.

To test the current package with your local kernel you can simply run:

```
go test -exec sudo ./...
```

To test the current package with a different kernel version you can use the [run-tests.sh](run-tests.sh) script.
It requires [virtme](https://github.com/amluto/virtme) and qemu to be installed.

Examples:

```bash
# Run all tests on a 5.4 kernel
./run-tests.sh 5.4

# Run a subset of tests:
./run-tests.sh 5.4 go test ./link
```

70
vendor/github.com/cilium/ebpf/Makefile
generated
vendored
Normal file
@@ -0,0 +1,70 @@
# The development version of clang is distributed as the 'clang' binary,
# while stable/released versions have a version number attached.
# Pin the default clang to a stable version.
CLANG ?= clang-12
CFLAGS := -target bpf -O2 -g -Wall -Werror $(CFLAGS)

# Obtain an absolute path to the directory of the Makefile.
# Assume the Makefile is in the root of the repository.
REPODIR := $(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
UIDGID := $(shell stat -c '%u:%g' ${REPODIR})

IMAGE := $(shell cat ${REPODIR}/testdata/docker/IMAGE)
VERSION := $(shell cat ${REPODIR}/testdata/docker/VERSION)

# clang <8 doesn't tag relocs properly (STT_NOTYPE)
# clang 9 is the first version emitting BTF
TARGETS := \
	testdata/loader-clang-7 \
	testdata/loader-clang-9 \
	testdata/loader-$(CLANG) \
	testdata/invalid_map \
	testdata/raw_tracepoint \
	testdata/invalid_map_static \
	testdata/initialized_btf_map \
	testdata/strings \
	internal/btf/testdata/relocs

.PHONY: all clean docker-all docker-shell

.DEFAULT_TARGET = docker-all

# Build all ELF binaries using a Dockerized LLVM toolchain.
docker-all:
	docker run --rm --user "${UIDGID}" \
		-v "${REPODIR}":/ebpf -w /ebpf --env MAKEFLAGS \
		--env CFLAGS="-fdebug-prefix-map=/ebpf=." \
		"${IMAGE}:${VERSION}" \
		make all

# (debug) Drop the user into a shell inside the Docker container as root.
docker-shell:
	docker run --rm -ti \
		-v "${REPODIR}":/ebpf -w /ebpf \
		"${IMAGE}:${VERSION}"

clean:
	-$(RM) testdata/*.elf
	-$(RM) internal/btf/testdata/*.elf

all: $(addsuffix -el.elf,$(TARGETS)) $(addsuffix -eb.elf,$(TARGETS))
	ln -srf testdata/loader-$(CLANG)-el.elf testdata/loader-el.elf
	ln -srf testdata/loader-$(CLANG)-eb.elf testdata/loader-eb.elf

testdata/loader-%-el.elf: testdata/loader.c
	$* $(CFLAGS) -mlittle-endian -c $< -o $@

testdata/loader-%-eb.elf: testdata/loader.c
	$* $(CFLAGS) -mbig-endian -c $< -o $@

%-el.elf: %.c
	$(CLANG) $(CFLAGS) -mlittle-endian -c $< -o $@

%-eb.elf : %.c
	$(CLANG) $(CFLAGS) -mbig-endian -c $< -o $@

# Usage: make VMLINUX=/path/to/vmlinux vmlinux-btf
.PHONY: vmlinux-btf
vmlinux-btf: internal/btf/testdata/vmlinux-btf.gz
internal/btf/testdata/vmlinux-btf.gz: $(VMLINUX)
	objcopy --dump-section .BTF=/dev/stdout "$<" /dev/null | gzip > "$@"

13
vendor/github.com/cilium/ebpf/examples/README.md
generated
vendored
@@ -1,13 +0,0 @@
# eBPF Examples

- [kprobe](kprobe/) - Attach a program to the entry or exit of an arbitrary kernel symbol (function).
- [uretprobe](uretprobe/) - Like a kprobe, but for symbols in userspace binaries (e.g. `bash`).
- [tracepoint](tracepoint/) - Attach a program to predetermined kernel tracepoints.
- Add your use case(s) here!

## How to run

```bash
cd ebpf/examples/
go run -exec sudo [./kprobe, ./uretprobe, ./tracepoint, ...]
```

8
vendor/github.com/cilium/ebpf/examples/go.mod
generated
vendored
@@ -1,8 +0,0 @@
module github.com/cilium/ebpf/examples

go 1.15

require (
	github.com/cilium/ebpf v0.6.1-0.20210610105443-1e7f01c7124c
	golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c
)

3265
vendor/github.com/cilium/ebpf/examples/headers/bpf_helper_defs.h
generated
vendored
File diff suppressed because it is too large

80
vendor/github.com/cilium/ebpf/examples/headers/bpf_helpers.h
generated
vendored
@@ -1,80 +0,0 @@
/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
#ifndef __BPF_HELPERS__
#define __BPF_HELPERS__

/*
 * Note that bpf programs need to include either
 * vmlinux.h (auto-generated from BTF) or linux/types.h
 * in advance since bpf_helper_defs.h uses such types
 * as __u64.
 */
#include "bpf_helper_defs.h"

#define __uint(name, val) int (*name)[val]
#define __type(name, val) typeof(val) *name
#define __array(name, val) typeof(val) *name[]

/* Helper macro to print out debug messages */
#define bpf_printk(fmt, ...)				\
({							\
	char ____fmt[] = fmt;				\
	bpf_trace_printk(____fmt, sizeof(____fmt),	\
			 ##__VA_ARGS__);		\
})

/*
 * Helper macro to place programs, maps, license in
 * different sections in elf_bpf file. Section names
 * are interpreted by elf_bpf loader
 */
#define SEC(NAME) __attribute__((section(NAME), used))

#ifndef __always_inline
#define __always_inline __attribute__((always_inline))
#endif
#ifndef __weak
#define __weak __attribute__((weak))
#endif

/*
 * Helper macro to manipulate data structures
 */
#ifndef offsetof
#define offsetof(TYPE, MEMBER) __builtin_offsetof(TYPE, MEMBER)
#endif
#ifndef container_of
#define container_of(ptr, type, member)			\
({							\
	void *__mptr = (void *)(ptr);			\
	((type *)(__mptr - offsetof(type, member)));	\
})
#endif

/*
 * Helper structure used by eBPF C program
 * to describe BPF map attributes to libbpf loader
 */
struct bpf_map_def {
	unsigned int type;
	unsigned int key_size;
	unsigned int value_size;
	unsigned int max_entries;
	unsigned int map_flags;
};

enum libbpf_pin_type {
	LIBBPF_PIN_NONE,
	/* PIN_BY_NAME: pin maps by name (in /sys/fs/bpf by default) */
	LIBBPF_PIN_BY_NAME,
};

enum libbpf_tristate {
	TRI_NO = 0,
	TRI_YES = 1,
	TRI_MODULE = 2,
};

#define __kconfig __attribute__((section(".kconfig")))
#define __ksym __attribute__((section(".ksyms")))

#endif

107
vendor/github.com/cilium/ebpf/examples/headers/common.h
generated
vendored
@@ -1,107 +0,0 @@
// This is a compact version of `vmlinux.h` to be used in the examples using C code.

#ifndef __VMLINUX_H__
#define __VMLINUX_H__

typedef unsigned char __u8;
typedef short int __s16;
typedef short unsigned int __u16;
typedef int __s32;
typedef unsigned int __u32;
typedef long long int __s64;
typedef long long unsigned int __u64;
typedef __u8 u8;
typedef __s16 s16;
typedef __u16 u16;
typedef __s32 s32;
typedef __u32 u32;
typedef __s64 s64;
typedef __u64 u64;
typedef __u16 __le16;
typedef __u16 __be16;
typedef __u32 __be32;
typedef __u64 __be64;
typedef __u32 __wsum;

enum bpf_map_type {
	BPF_MAP_TYPE_UNSPEC = 0,
	BPF_MAP_TYPE_HASH = 1,
	BPF_MAP_TYPE_ARRAY = 2,
	BPF_MAP_TYPE_PROG_ARRAY = 3,
	BPF_MAP_TYPE_PERF_EVENT_ARRAY = 4,
	BPF_MAP_TYPE_PERCPU_HASH = 5,
	BPF_MAP_TYPE_PERCPU_ARRAY = 6,
	BPF_MAP_TYPE_STACK_TRACE = 7,
	BPF_MAP_TYPE_CGROUP_ARRAY = 8,
	BPF_MAP_TYPE_LRU_HASH = 9,
	BPF_MAP_TYPE_LRU_PERCPU_HASH = 10,
	BPF_MAP_TYPE_LPM_TRIE = 11,
	BPF_MAP_TYPE_ARRAY_OF_MAPS = 12,
	BPF_MAP_TYPE_HASH_OF_MAPS = 13,
	BPF_MAP_TYPE_DEVMAP = 14,
	BPF_MAP_TYPE_SOCKMAP = 15,
	BPF_MAP_TYPE_CPUMAP = 16,
	BPF_MAP_TYPE_XSKMAP = 17,
	BPF_MAP_TYPE_SOCKHASH = 18,
	BPF_MAP_TYPE_CGROUP_STORAGE = 19,
	BPF_MAP_TYPE_REUSEPORT_SOCKARRAY = 20,
	BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE = 21,
	BPF_MAP_TYPE_QUEUE = 22,
	BPF_MAP_TYPE_STACK = 23,
	BPF_MAP_TYPE_SK_STORAGE = 24,
	BPF_MAP_TYPE_DEVMAP_HASH = 25,
	BPF_MAP_TYPE_STRUCT_OPS = 26,
	BPF_MAP_TYPE_RINGBUF = 27,
	BPF_MAP_TYPE_INODE_STORAGE = 28,
};

enum {
	BPF_ANY = 0,
	BPF_NOEXIST = 1,
	BPF_EXIST = 2,
	BPF_F_LOCK = 4,
};

/* BPF_FUNC_perf_event_output, BPF_FUNC_perf_event_read and
 * BPF_FUNC_perf_event_read_value flags.
 */
#define BPF_F_INDEX_MASK 0xffffffffULL
#define BPF_F_CURRENT_CPU BPF_F_INDEX_MASK

#define PT_REGS_RC(x) ((x)->rax)
struct pt_regs {
	/*
	 * C ABI says these regs are callee-preserved. They aren't saved on kernel entry
	 * unless syscall needs a complete, fully filled "struct pt_regs".
	 */
	unsigned long r15;
	unsigned long r14;
	unsigned long r13;
	unsigned long r12;
	unsigned long rbp;
	unsigned long rbx;
	/* These regs are callee-clobbered. Always saved on kernel entry. */
	unsigned long r11;
	unsigned long r10;
	unsigned long r9;
	unsigned long r8;
	unsigned long rax;
	unsigned long rcx;
	unsigned long rdx;
	unsigned long rsi;
	unsigned long rdi;
	/*
	 * On syscall entry, this is syscall#. On CPU exception, this is error code.
	 * On hw interrupt, it's IRQ number:
	 */
	unsigned long orig_rax;
	/* Return frame for iretq */
	unsigned long rip;
	unsigned long cs;
	unsigned long eflags;
	unsigned long rsp;
	unsigned long ss;
	/* top of stack page */
};

#endif /* __VMLINUX_H__ */

26
vendor/github.com/cilium/ebpf/examples/kprobe/bpf/kprobe_example.c
generated
vendored
@@ -1,26 +0,0 @@
#include "common.h"
#include "bpf_helpers.h"

char __license[] SEC("license") = "Dual MIT/GPL";

struct bpf_map_def SEC("maps") kprobe_map = {
	.type = BPF_MAP_TYPE_ARRAY,
	.key_size = sizeof(u32),
	.value_size = sizeof(u64),
	.max_entries = 1,
};

SEC("kprobe/sys_execve")
int kprobe_execve() {
	u32 key = 0;
	u64 initval = 1, *valp;

	valp = bpf_map_lookup_elem(&kprobe_map, &key);
	if (!valp) {
		bpf_map_update_elem(&kprobe_map, &key, &initval, BPF_ANY);
		return 0;
	}
	__sync_fetch_and_add(valp, 1);

	return 0;
}

25
vendor/github.com/cilium/ebpf/examples/uretprobe/bpf/uretprobe_example.c
generated
vendored
@@ -1,25 +0,0 @@
#include "common.h"
#include "bpf_helpers.h"

char __license[] SEC("license") = "Dual MIT/GPL";

struct event_t {
	u32 pid;
	char str[80];
};

struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
} events SEC(".maps");

SEC("uretprobe/bash_readline")
int uretprobe_bash_readline(struct pt_regs *ctx) {
	struct event_t event;

	event.pid = bpf_get_current_pid_tgid();
	bpf_probe_read(&event.str, sizeof(event.str), (void *)PT_REGS_RC(ctx));

	bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU, &event, sizeof(event));

	return 0;
}

9
vendor/github.com/cilium/ebpf/go.mod
generated
vendored
@@ -1,9 +0,0 @@
module github.com/cilium/ebpf

go 1.15

require (
	github.com/frankban/quicktest v1.11.3
	github.com/google/go-cmp v0.5.4
	golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c
)

123
vendor/github.com/cilium/ebpf/run-tests.sh
generated
vendored
Normal file
@@ -0,0 +1,123 @@
#!/bin/bash
# Test the current package under a different kernel.
# Requires virtme and qemu to be installed.
# Examples:
#     Run all tests on a 5.4 kernel
#     $ ./run-tests.sh 5.4
#     Run a subset of tests:
#     $ ./run-tests.sh 5.4 go test ./link

set -euo pipefail

script="$(realpath "$0")"
readonly script

# This script is a bit like a Matryoshka doll since it keeps re-executing itself
# in various different contexts:
#
#     1. invoked by the user like run-tests.sh 5.4
#     2. invoked by go test like run-tests.sh --exec-vm
#     3. invoked by init in the vm like run-tests.sh --exec-test
#
# This allows us to use all available CPU on the host machine to compile our
# code, and then only use the VM to execute the test. This is because the VM
# is usually slower at compiling than the host.
if [[ "${1:-}" = "--exec-vm" ]]; then
  shift

  input="$1"
  shift

  # Use sudo if /dev/kvm isn't accessible by the current user.
  sudo=""
  if [[ ! -r /dev/kvm || ! -w /dev/kvm ]]; then
    sudo="sudo"
  fi
  readonly sudo

  testdir="$(dirname "$1")"
  output="$(mktemp -d)"
  printf -v cmd "%q " "$@"

  if [[ "$(stat -c '%t:%T' -L /proc/$$/fd/0)" == "1:3" ]]; then
    # stdin is /dev/null, which doesn't play well with qemu. Use a fifo as a
    # blocking substitute.
    mkfifo "${output}/fake-stdin"
    # Open for reading and writing to avoid blocking.
    exec 0<> "${output}/fake-stdin"
    rm "${output}/fake-stdin"
  fi

  $sudo virtme-run --kimg "${input}/bzImage" --memory 768M --pwd \
    --rwdir="${testdir}=${testdir}" \
    --rodir=/run/input="${input}" \
    --rwdir=/run/output="${output}" \
    --script-sh "PATH=\"$PATH\" \"$script\" --exec-test $cmd" \
    --qemu-opts -smp 2 # need at least two CPUs for some tests

  if [[ ! -e "${output}/success" ]]; then
    exit 1
  fi

  $sudo rm -r "$output"
  exit 0
elif [[ "${1:-}" = "--exec-test" ]]; then
  shift

  mount -t bpf bpf /sys/fs/bpf
  mount -t tracefs tracefs /sys/kernel/debug/tracing

  if [[ -d "/run/input/bpf" ]]; then
    export KERNEL_SELFTESTS="/run/input/bpf"
  fi

  dmesg -C
  if ! "$@"; then
    dmesg
    exit 1
  fi
  touch "/run/output/success"
  exit 0
fi

readonly kernel_version="${1:-}"
if [[ -z "${kernel_version}" ]]; then
  echo "Expecting kernel version as first argument"
  exit 1
fi
shift

readonly kernel="linux-${kernel_version}.bz"
readonly selftests="linux-${kernel_version}-selftests-bpf.bz"
readonly input="$(mktemp -d)"
readonly tmp_dir="${TMPDIR:-/tmp}"
readonly branch="${BRANCH:-master}"

fetch() {
  echo Fetching "${1}"
  wget -nv -N -P "${tmp_dir}" "https://github.com/cilium/ci-kernels/raw/${branch}/${1}"
}

fetch "${kernel}"
cp "${tmp_dir}/${kernel}" "${input}/bzImage"

if fetch "${selftests}"; then
  mkdir "${input}/bpf"
  tar --strip-components=4 -xjf "${tmp_dir}/${selftests}" -C "${input}/bpf"
else
  echo "No selftests found, disabling"
fi

args=(-v -short -coverpkg=./... -coverprofile=coverage.out -count 1 ./...)
if (( $# > 0 )); then
  args=("$@")
fi

export GOFLAGS=-mod=readonly
export CGO_ENABLED=0

echo Testing on "${kernel_version}"
go test -exec "$script --exec-vm $input" "${args[@]}"
echo "Test successful on ${kernel_version}"

rm -r "${input}"

436
vendor/github.com/cloudflare/cfssl/README.md
generated
vendored
@@ -1,436 +0,0 @@
# CFSSL

[](https://travis-ci.org/cloudflare/cfssl)
[](http://codecov.io/github/cloudflare/cfssl?branch=master)
[](https://godoc.org/github.com/cloudflare/cfssl)

## CloudFlare's PKI/TLS toolkit

CFSSL is CloudFlare's PKI/TLS swiss army knife. It is both a command line
tool and an HTTP API server for signing, verifying, and bundling TLS
certificates. It requires Go 1.8+ to build.

Note that certain linux distributions have certain algorithms removed
(RHEL-based distributions in particular), so the golang from the
official repositories will not work. Users of these distributions should
[install go manually](//golang.org/dl) to install CFSSL.

CFSSL consists of:

* a set of packages useful for building custom TLS PKI tools
* the `cfssl` program, which is the canonical command line utility
  using the CFSSL packages.
* the `multirootca` program, which is a certificate authority server
  that can use multiple signing keys.
* the `mkbundle` program, which is used to build certificate pool bundles.
* the `cfssljson` program, which takes the JSON output from the
  `cfssl` and `multirootca` programs and writes certificates, keys,
  CSRs, and bundles to disk.

### Building

See [BUILDING](BUILDING.md)

### Installation

Installation requires a
[working Go 1.8+ installation](http://golang.org/doc/install) and a
properly set `GOPATH`.

```
$ go get -u github.com/cloudflare/cfssl/cmd/cfssl
```

will download and build the CFSSL tool, installing it in
`$GOPATH/bin/cfssl`.

To install any of the other utility programs that are
in this repo (for instance `cfssljson` in this case):

```
$ go get -u github.com/cloudflare/cfssl/cmd/cfssljson
```

This will download and build the CFSSLJSON tool, installing it in
`$GOPATH/bin/`.

And to simply install __all__ of the programs in this repo:

```
$ go get -u github.com/cloudflare/cfssl/cmd/...
```

This will download, build, and install all of the utility programs
(including `cfssl`, `cfssljson`, and `mkbundle` among others) into the
`$GOPATH/bin/` directory.

### Using the Command Line Tool

The `cfssl` command line tool takes a command to specify what
operation it should carry out:

    sign             signs a certificate
    bundle           build a certificate bundle
    genkey           generate a private key and a certificate request
    gencert          generate a private key and a certificate
    serve            start the API server
    version          prints out the current version
    selfsign         generates a self-signed certificate
    print-defaults   print default configurations

Use `cfssl [command] -help` to find out more about a command.
The `version` command takes no arguments.

#### Signing

```
cfssl sign [-ca cert] [-ca-key key] [-hostname comma,separated,hostnames] csr [subject]
```

The `csr` is the client's certificate request. The `-ca` and `-ca-key`
flags are the CA's certificate and private key, respectively. By
default, they are `ca.pem` and `ca_key.pem`. The `-hostname` is
a comma separated hostname list that overrides the DNS names and
IP address in the certificate SAN extension.
For example, assuming the CA's private key is in
`/etc/ssl/private/cfssl_key.pem` and the CA's certificate is in
`/etc/ssl/certs/cfssl.pem`, to sign the `cloudflare.pem` certificate
for cloudflare.com:

```
cfssl sign -ca /etc/ssl/certs/cfssl.pem \
           -ca-key /etc/ssl/private/cfssl_key.pem \
           -hostname cloudflare.com \
           ./cloudflare.pem
```

It is also possible to specify the CSR with the `-csr` flag. By doing so,
flag values take precedence and will overwrite the argument.

The subject is an optional file that contains subject information that
should be used in place of the information from the CSR. It should be
a JSON file as follows:

```json
{
    "CN": "example.com",
    "names": [
        {
            "C": "US",
            "L": "San Francisco",
            "O": "Internet Widgets, Inc.",
            "OU": "WWW",
            "ST": "California"
        }
    ]
}
```

**N.B.** As of Go 1.7, self-signed certificates will not include
[the AKI](https://go.googlesource.com/go/+/b623b71509b2d24df915d5bc68602e1c6edf38ca).

#### Bundling

```
cfssl bundle [-ca-bundle bundle] [-int-bundle bundle] \
      [-metadata metadata_file] [-flavor bundle_flavor] \
      -cert certificate_file [-key key_file]
```

The bundles are used for the root and intermediate certificate
pools. In addition, platform metadata is specified through `-metadata`.
The bundle files, metadata file (and auxiliary files) can be
found at:

    https://github.com/cloudflare/cfssl_trust

Specify the PEM-encoded client certificate and key through `-cert` and
`-key` respectively. If a key is specified, the bundle will be built
and verified with the key. Otherwise the bundle will be built
without a private key. Instead of a file path, use `-` for reading
certificate PEM from stdin. The certificate file may also contain
a (partial) certificate bundle.

Specify the bundling flavor through `-flavor`. There are three flavors:
`optimal` to generate a bundle with the shortest chain and most advanced
cryptographic algorithms, `ubiquitous` to generate a bundle with the
widest acceptance across different browsers and OS platforms, and
`force` to find an acceptable bundle which is identical to the
content of the input certificate file.

Alternatively, the client certificate can be pulled directly from
a domain. It is also possible to connect to the remote address
through `-ip`.

```
cfssl bundle [-ca-bundle bundle] [-int-bundle bundle] \
      [-metadata metadata_file] [-flavor bundle_flavor] \
      -domain domain_name [-ip ip_address]
```

The bundle output form should follow the example:

```json
{
    "bundle": "CERT_BUNDLE_IN_PEM",
    "crt": "LEAF_CERT_IN_PEM",
    "crl_support": true,
    "expires": "2015-12-31T23:59:59Z",
    "hostnames": ["example.com"],
    "issuer": "ISSUER CERT SUBJECT",
    "key": "KEY_IN_PEM",
    "key_size": 2048,
    "key_type": "2048-bit RSA",
    "ocsp": ["http://ocsp.example-ca.com"],
    "ocsp_support": true,
    "root": "ROOT_CA_CERT_IN_PEM",
    "signature": "SHA1WithRSA",
    "subject": "LEAF CERT SUBJECT",
    "status": {
        "rebundled": false,
        "expiring_SKIs": [],
        "untrusted_root_stores": [],
        "messages": [],
        "code": 0
    }
}
```

#### Generating certificate signing request and private key

```
cfssl genkey csr.json
```

To generate a private key and corresponding certificate request, specify
the key request as a JSON file. This file should follow the form:

```json
{
    "hosts": [
        "example.com",
        "www.example.com"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "US",
            "L": "San Francisco",
            "O": "Internet Widgets, Inc.",
            "OU": "WWW",
            "ST": "California"
        }
    ]
}
```

#### Generating self-signed root CA certificate and private key

```
cfssl genkey -initca csr.json | cfssljson -bare ca
```

To generate a self-signed root CA certificate, specify the key request as
a JSON file in the same format as in 'genkey'. Three PEM-encoded entities
will appear in the output: the private key, the csr, and the self-signed
certificate.

#### Generating a remote-issued certificate and private key.

```
cfssl gencert -remote=remote_server [-hostname=comma,separated,hostnames] csr.json
```

This calls `genkey` but has a remote CFSSL server sign and issue
the certificate. You may use `-hostname` to override certificate SANs.

#### Generating a local-issued certificate and private key.

```
cfssl gencert -ca cert -ca-key key [-hostname=comma,separated,hostnames] csr.json
```

This generates and issues a certificate and private key from a local CA
via a JSON request. You may use `-hostname` to override certificate SANs.

#### Updating an OCSP responses file with a newly issued certificate

```
cfssl ocspsign -ca cert -responder key -responder-key key -cert cert \
 | cfssljson -bare -stdout >> responses
```

This will generate an OCSP response for the `cert` and add it to the
`responses` file. You can then pass `responses` to `ocspserve` to start an
OCSP server.

### Starting the API Server

CFSSL comes with an HTTP-based API server; the endpoints are
documented in `doc/api/intro.txt`. The server is started with the `serve`
command:

```
cfssl serve [-address address] [-ca cert] [-ca-bundle bundle] \
      [-ca-key key] [-int-bundle bundle] [-int-dir dir] [-port port] \
      [-metadata file] [-remote remote_host] [-config config] \
      [-responder cert] [-responder-key key] [-db-config db-config]
```

Address and port default to "127.0.0.1:8888". The `-ca` and `-ca-key`
arguments should be the PEM-encoded certificate and private key to use
for signing; by default, they are `ca.pem` and `ca_key.pem`. The
`-ca-bundle` and `-int-bundle` should be the certificate bundles used
for the root and intermediate certificate pools, respectively. These
default to `ca-bundle.crt` and `int-bundle.crt` respectively. If the
`-remote` option is specified, all signature operations will be forwarded
to the remote CFSSL.

`-int-dir` specifies an intermediates directory. `-metadata` is a file for
root certificate presence. The content of the file is a json dictionary
(k,v) such that each key k is an SHA-1 digest of a root certificate while value v
is a list of key store filenames. `-config` specifies a path to a configuration
file. `-responder` and `-responder-key` are the certificate and the
private key for the OCSP responder, respectively.

The amount of logging can be controlled with the `-loglevel` option. This
comes *after* the serve command:

```
cfssl serve -loglevel 2
```

The levels are:

* 0 - DEBUG
* 1 - INFO (this is the default level)
* 2 - WARNING
* 3 - ERROR
* 4 - CRITICAL

### The multirootca

The `cfssl` program can act as an online certificate authority, but it
only uses a single key. If multiple signing keys are needed, the
`multirootca` program can be used. It only provides the `sign`,
`authsign` and `info` endpoints. The documentation contains instructions
for configuring and running the CA.

### The mkbundle Utility

`mkbundle` is used to build the root and intermediate bundles used in
verifying certificates. It can be installed with

```
go get -u github.com/cloudflare/cfssl/cmd/mkbundle
```

It takes a collection of certificates, checks for CRL revocation (OCSP
support is planned for the next release) and expired certificates, and
bundles them into one file. It takes directories of certificates and
certificate files (which may contain multiple certificates). For example,
if the directory `intermediates` contains a number of intermediate
certificates:

```
mkbundle -f int-bundle.crt intermediates
```

will check those certificates and combine valid certificates into a single
`int-bundle.crt` file.

The `-f` flag specifies an output name; `-loglevel` specifies the verbosity
of the logging (using the same loglevels as above), and `-nw` controls the
number of revocation-checking workers.

### The cfssljson Utility

Most of the output from `cfssl` is in JSON. The `cfssljson` utility can take
this output and split it out into separate `key`, `certificate`, `CSR`, and
`bundle` files as appropriate. The tool takes a single flag, `-f`, that
specifies the input file, and an argument that specifies the base name for
the files produced. If the input filename is `-` (which is the default),
cfssljson reads from standard input. It maps keys in the JSON file to
filenames in the following way:

* if __cert__ or __certificate__ is specified, __basename.pem__ will be produced.
* if __key__ or __private_key__ is specified, __basename-key.pem__ will be produced.
* if __csr__ or __certificate_request__ is specified, __basename.csr__ will be produced.
* if __bundle__ is specified, __basename-bundle.pem__ will be produced.
* if __ocspResponse__ is specified, __basename-response.der__ will be produced.

Instead of saving to a file, you can pass `-stdout` to output the encoded
contents to standard output.

### Static Builds

By default, the web assets are accessed from disk, based on their
relative locations. If you wish to distribute a single,
statically-linked, `cfssl` binary, you’ll want to embed these resources
before building. This can be done with the
[go.rice](https://github.com/GeertJohan/go.rice) tool.

```
pushd cli/serve && rice embed-go && popd
```

Then building with `go build` will use the embedded resources.

### Using a PKCS#11 hardware token / HSM

For better security, you may wish to store your private key in an HSM or
smartcard. The interface to both of these categories of device is described by
the PKCS#11 spec. If you need to do approximately one signing operation per
second or fewer, the Yubikey NEO and NEO-n are inexpensive smartcard options:

    https://www.yubico.com/products/yubikey-hardware/yubikey-neo/

In general you should look for a product that supports PIV (personal identity verification). If
your signing needs are in the hundreds of signatures per second, you will need
to purchase an expensive HSM (in the thousands to many thousands of USD).

If you wish to try out the PKCS#11 signing modes without a hardware token, you
can use the [SoftHSM](https://github.com/opendnssec/SoftHSMv1#softhsm)
implementation. Please note that using SoftHSM simply stores your private key in
a file on disk and does not increase security.

To get started with your PKCS#11 token you will need to initialize it with a
private key, PIN, and token label. The instructions to do this will be specific
to each hardware device, and you should follow the instructions provided by your
vendor. You will also need to find the path to your `module`, a shared object
file (.so). Having initialized your device, you can query it to check your token
label with:

    pkcs11-tool --module <module path> --list-token-slots

You'll also want to check the label of the private key you imported (or
generated). Run the following command and look for a `Private Key Object`:

    pkcs11-tool --module <module path> --pin <pin> \
            --list-token-slots --login --list-objects

You now have all the information you need to use your PKCS#11 token with CFSSL.
CFSSL supports PKCS#11 for certificate signing and OCSP signing. To create a
Signer (for certificate signing), import `signer/universal` and call NewSigner
with a Root object containing the module, pin, token label and private label
from above, plus a path to your certificate. The structure of the Root object is
documented in `universal.go`.

Alternately, you can construct a pkcs11key.Key or pkcs11key.Pool yourself, and
pass it to ocsp.NewSigner (for OCSP) or local.NewSigner (for certificate
signing). This will be necessary, for example, if you are using a single-session
token like the Yubikey and need both OCSP signing and certificate signing at the
same time.

### Additional Documentation

Additional documentation can be found in the "doc" directory:

* `api/intro.txt`: documents the API endpoints
* `bootstrap.txt`: a walkthrough from building the package to getting
  up and running

2
vendor/github.com/containerd/cgroups/.gitignore
generated
vendored
Normal file
@@ -0,0 +1,2 @@
example/example
cmd/cgctl/cgctl

24
vendor/github.com/containerd/cgroups/Makefile
generated
vendored
Normal file
@@ -0,0 +1,24 @@
# Copyright The containerd Authors.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

PACKAGES=$(shell go list ./... | grep -v /vendor/)

all: cgutil
	go build -v

cgutil:
	cd cmd/cgctl && go build -v

proto:
	protobuild --quiet ${PACKAGES}

46
vendor/github.com/containerd/cgroups/Protobuild.toml
generated
vendored
Normal file
@@ -0,0 +1,46 @@
version = "unstable"
|
||||
generator = "gogoctrd"
|
||||
plugins = ["grpc"]
|
||||
|
||||
# Control protoc include paths. Below are usually some good defaults, but feel
|
||||
# free to try it without them if it works for your project.
|
||||
[includes]
|
||||
# Include paths that will be added before all others. Typically, you want to
|
||||
# treat the root of the project as an include, but this may not be necessary.
|
||||
# before = ["."]
|
||||
|
||||
# Paths that should be treated as include roots in relation to the vendor
|
||||
# directory. These will be calculated with the vendor directory nearest the
|
||||
# target package.
|
||||
# vendored = ["github.com/gogo/protobuf"]
|
||||
packages = ["github.com/gogo/protobuf"]
|
||||
|
||||
# Paths that will be added untouched to the end of the includes. We use
|
||||
# `/usr/local/include` to pickup the common install location of protobuf.
|
||||
# This is the default.
|
||||
after = ["/usr/local/include", "/usr/include"]
|
||||
|
||||
# This section maps protobuf imports to Go packages. These will become
|
||||
# `-M` directives in the call to the go protobuf generator.
|
||||
[packages]
|
||||
"gogoproto/gogo.proto" = "github.com/gogo/protobuf/gogoproto"
|
||||
"google/protobuf/any.proto" = "github.com/gogo/protobuf/types"
|
||||
"google/protobuf/descriptor.proto" = "github.com/gogo/protobuf/protoc-gen-gogo/descriptor"
|
||||
"google/protobuf/field_mask.proto" = "github.com/gogo/protobuf/types"
|
||||
"google/protobuf/timestamp.proto" = "github.com/gogo/protobuf/types"
|
||||
|
||||
# Aggregrate the API descriptors to lock down API changes.
|
||||
[[descriptors]]
|
||||
prefix = "github.com/containerd/cgroups/stats/v1"
|
||||
target = "stats/v1/metrics.pb.txt"
|
||||
ignore_files = [
|
||||
"google/protobuf/descriptor.proto",
|
||||
"gogoproto/gogo.proto"
|
||||
]
|
||||
[[descriptors]]
|
||||
prefix = "github.com/containerd/cgroups/v2/stats"
|
||||
target = "v2/stats/metrics.pb.txt"
|
||||
ignore_files = [
|
||||
"google/protobuf/descriptor.proto",
|
||||
"gogoproto/gogo.proto"
|
||||
]
|

46
vendor/github.com/containerd/cgroups/Vagrantfile
generated
vendored
Normal file
@@ -0,0 +1,46 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  # Fedora box is used for testing cgroup v2 support
  config.vm.box = "fedora/32-cloud-base"
  config.vm.provider :virtualbox do |v|
    v.memory = 2048
    v.cpus = 2
  end
  config.vm.provider :libvirt do |v|
    v.memory = 2048
    v.cpus = 2
  end
  config.vm.provision "shell", inline: <<-SHELL
    set -eux -o pipefail
    # configuration
    GO_VERSION="1.15"

    # install gcc and Golang
    dnf -y install gcc
    curl -fsSL "https://dl.google.com/go/go${GO_VERSION}.linux-amd64.tar.gz" | tar Cxz /usr/local

    # setup env vars
    cat >> /etc/profile.d/sh.local <<EOF
PATH=/usr/local/go/bin:$PATH
GO111MODULE=on
export PATH GO111MODULE
EOF
    source /etc/profile.d/sh.local

    # enter /root/go/src/github.com/containerd/cgroups
    mkdir -p /root/go/src/github.com/containerd
    ln -s /vagrant /root/go/src/github.com/containerd/cgroups
    cd /root/go/src/github.com/containerd/cgroups

    # create /test.sh
    cat > /test.sh <<EOF
#!/bin/bash
set -eux -o pipefail
cd /root/go/src/github.com/containerd/cgroups
go test -v ./...
EOF
    chmod +x /test.sh
  SHELL
end

18
vendor/github.com/containerd/cgroups/go.mod
generated
vendored
@@ -1,18 +0,0 @@
module github.com/containerd/cgroups

go 1.13

require (
	github.com/cilium/ebpf v0.4.0
	github.com/coreos/go-systemd/v22 v22.1.0
	github.com/cpuguy83/go-md2man/v2 v2.0.0 // indirect
	github.com/docker/go-units v0.4.0
	github.com/godbus/dbus/v5 v5.0.3
	github.com/gogo/protobuf v1.3.2
	github.com/opencontainers/runtime-spec v1.0.2
	github.com/pkg/errors v0.9.1
	github.com/sirupsen/logrus v1.7.0
	github.com/stretchr/testify v1.6.1
	github.com/urfave/cli v1.22.2
	golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c
)

790
vendor/github.com/containerd/cgroups/stats/v1/metrics.pb.txt
generated
vendored
Normal file
@@ -0,0 +1,790 @@
file {
  name: "github.com/containerd/cgroups/stats/v1/metrics.proto"
  package: "io.containerd.cgroups.v1"
  dependency: "gogoproto/gogo.proto"
  message_type {
    name: "Metrics"
    field {
      name: "hugetlb"
      number: 1
      label: LABEL_REPEATED
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.HugetlbStat"
      json_name: "hugetlb"
    }
    field {
      name: "pids"
      number: 2
      label: LABEL_OPTIONAL
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.PidsStat"
      json_name: "pids"
    }
    field {
      name: "cpu"
      number: 3
      label: LABEL_OPTIONAL
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.CPUStat"
      options {
        65004: "CPU"
      }
      json_name: "cpu"
    }
    field {
      name: "memory"
      number: 4
      label: LABEL_OPTIONAL
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.MemoryStat"
      json_name: "memory"
    }
    field {
      name: "blkio"
      number: 5
      label: LABEL_OPTIONAL
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.BlkIOStat"
      json_name: "blkio"
    }
    field {
      name: "rdma"
      number: 6
      label: LABEL_OPTIONAL
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.RdmaStat"
      json_name: "rdma"
    }
    field {
      name: "network"
      number: 7
      label: LABEL_REPEATED
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.NetworkStat"
      json_name: "network"
    }
    field {
      name: "cgroup_stats"
      number: 8
      label: LABEL_OPTIONAL
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.CgroupStats"
      json_name: "cgroupStats"
    }
    field {
      name: "memory_oom_control"
      number: 9
      label: LABEL_OPTIONAL
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.MemoryOomControl"
      json_name: "memoryOomControl"
    }
  }
  message_type {
    name: "HugetlbStat"
    field {
      name: "usage"
      number: 1
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "usage"
    }
    field {
      name: "max"
      number: 2
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "max"
    }
    field {
      name: "failcnt"
      number: 3
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "failcnt"
    }
    field {
      name: "pagesize"
      number: 4
      label: LABEL_OPTIONAL
      type: TYPE_STRING
      json_name: "pagesize"
    }
  }
  message_type {
    name: "PidsStat"
    field {
      name: "current"
      number: 1
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "current"
    }
    field {
      name: "limit"
      number: 2
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "limit"
    }
  }
  message_type {
    name: "CPUStat"
    field {
      name: "usage"
      number: 1
      label: LABEL_OPTIONAL
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.CPUUsage"
      json_name: "usage"
    }
    field {
      name: "throttling"
      number: 2
      label: LABEL_OPTIONAL
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.Throttle"
      json_name: "throttling"
    }
  }
  message_type {
    name: "CPUUsage"
    field {
      name: "total"
      number: 1
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "total"
    }
    field {
      name: "kernel"
      number: 2
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "kernel"
    }
    field {
      name: "user"
      number: 3
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "user"
    }
    field {
      name: "per_cpu"
      number: 4
      label: LABEL_REPEATED
      type: TYPE_UINT64
      options {
        65004: "PerCPU"
      }
      json_name: "perCpu"
    }
  }
  message_type {
    name: "Throttle"
    field {
      name: "periods"
      number: 1
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "periods"
    }
    field {
      name: "throttled_periods"
      number: 2
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "throttledPeriods"
    }
    field {
      name: "throttled_time"
      number: 3
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "throttledTime"
    }
  }
  message_type {
    name: "MemoryStat"
    field {
      name: "cache"
      number: 1
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "cache"
    }
    field {
      name: "rss"
      number: 2
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      options {
        65004: "RSS"
      }
      json_name: "rss"
    }
    field {
      name: "rss_huge"
      number: 3
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      options {
        65004: "RSSHuge"
      }
      json_name: "rssHuge"
    }
    field {
      name: "mapped_file"
      number: 4
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "mappedFile"
    }
    field {
      name: "dirty"
      number: 5
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "dirty"
    }
    field {
      name: "writeback"
      number: 6
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "writeback"
    }
    field {
      name: "pg_pg_in"
      number: 7
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "pgPgIn"
    }
    field {
      name: "pg_pg_out"
      number: 8
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "pgPgOut"
    }
    field {
      name: "pg_fault"
      number: 9
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "pgFault"
    }
    field {
      name: "pg_maj_fault"
      number: 10
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "pgMajFault"
    }
    field {
      name: "inactive_anon"
      number: 11
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "inactiveAnon"
    }
    field {
      name: "active_anon"
      number: 12
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "activeAnon"
    }
    field {
      name: "inactive_file"
      number: 13
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "inactiveFile"
    }
    field {
      name: "active_file"
      number: 14
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "activeFile"
    }
    field {
      name: "unevictable"
      number: 15
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "unevictable"
    }
    field {
      name: "hierarchical_memory_limit"
      number: 16
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "hierarchicalMemoryLimit"
    }
    field {
      name: "hierarchical_swap_limit"
      number: 17
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "hierarchicalSwapLimit"
    }
    field {
      name: "total_cache"
      number: 18
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "totalCache"
    }
    field {
      name: "total_rss"
      number: 19
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      options {
        65004: "TotalRSS"
      }
      json_name: "totalRss"
    }
    field {
      name: "total_rss_huge"
      number: 20
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      options {
        65004: "TotalRSSHuge"
      }
      json_name: "totalRssHuge"
    }
    field {
      name: "total_mapped_file"
      number: 21
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "totalMappedFile"
    }
    field {
      name: "total_dirty"
      number: 22
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "totalDirty"
    }
    field {
      name: "total_writeback"
      number: 23
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "totalWriteback"
    }
    field {
      name: "total_pg_pg_in"
      number: 24
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "totalPgPgIn"
    }
    field {
      name: "total_pg_pg_out"
      number: 25
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "totalPgPgOut"
    }
    field {
      name: "total_pg_fault"
      number: 26
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "totalPgFault"
    }
    field {
      name: "total_pg_maj_fault"
      number: 27
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "totalPgMajFault"
    }
    field {
      name: "total_inactive_anon"
      number: 28
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "totalInactiveAnon"
    }
    field {
      name: "total_active_anon"
      number: 29
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "totalActiveAnon"
    }
    field {
      name: "total_inactive_file"
      number: 30
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "totalInactiveFile"
    }
    field {
      name: "total_active_file"
      number: 31
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "totalActiveFile"
    }
    field {
      name: "total_unevictable"
      number: 32
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "totalUnevictable"
    }
    field {
      name: "usage"
      number: 33
      label: LABEL_OPTIONAL
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.MemoryEntry"
      json_name: "usage"
    }
    field {
      name: "swap"
      number: 34
      label: LABEL_OPTIONAL
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.MemoryEntry"
      json_name: "swap"
    }
    field {
      name: "kernel"
      number: 35
      label: LABEL_OPTIONAL
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.MemoryEntry"
      json_name: "kernel"
    }
    field {
      name: "kernel_tcp"
      number: 36
      label: LABEL_OPTIONAL
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.MemoryEntry"
      options {
        65004: "KernelTCP"
      }
      json_name: "kernelTcp"
    }
  }
  message_type {
    name: "MemoryEntry"
    field {
      name: "limit"
      number: 1
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "limit"
    }
    field {
      name: "usage"
      number: 2
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "usage"
    }
    field {
      name: "max"
      number: 3
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "max"
    }
    field {
      name: "failcnt"
      number: 4
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "failcnt"
    }
  }
  message_type {
    name: "MemoryOomControl"
    field {
      name: "oom_kill_disable"
      number: 1
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "oomKillDisable"
    }
    field {
      name: "under_oom"
      number: 2
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "underOom"
    }
    field {
      name: "oom_kill"
      number: 3
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "oomKill"
    }
  }
  message_type {
    name: "BlkIOStat"
    field {
      name: "io_service_bytes_recursive"
      number: 1
      label: LABEL_REPEATED
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
      json_name: "ioServiceBytesRecursive"
    }
    field {
      name: "io_serviced_recursive"
      number: 2
      label: LABEL_REPEATED
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
      json_name: "ioServicedRecursive"
    }
    field {
      name: "io_queued_recursive"
      number: 3
      label: LABEL_REPEATED
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
      json_name: "ioQueuedRecursive"
    }
    field {
      name: "io_service_time_recursive"
      number: 4
      label: LABEL_REPEATED
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
      json_name: "ioServiceTimeRecursive"
    }
    field {
      name: "io_wait_time_recursive"
      number: 5
      label: LABEL_REPEATED
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
      json_name: "ioWaitTimeRecursive"
    }
    field {
      name: "io_merged_recursive"
      number: 6
      label: LABEL_REPEATED
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
      json_name: "ioMergedRecursive"
    }
    field {
      name: "io_time_recursive"
      number: 7
      label: LABEL_REPEATED
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
      json_name: "ioTimeRecursive"
    }
    field {
      name: "sectors_recursive"
      number: 8
      label: LABEL_REPEATED
      type: TYPE_MESSAGE
      type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
      json_name: "sectorsRecursive"
    }
  }
  message_type {
    name: "BlkIOEntry"
    field {
      name: "op"
      number: 1
      label: LABEL_OPTIONAL
      type: TYPE_STRING
      json_name: "op"
    }
    field {
      name: "device"
      number: 2
      label: LABEL_OPTIONAL
      type: TYPE_STRING
      json_name: "device"
    }
    field {
      name: "major"
      number: 3
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
      json_name: "major"
    }
    field {
      name: "minor"
      number: 4
      label: LABEL_OPTIONAL
      type: TYPE_UINT64
|
||||
json_name: "minor"
|
||||
}
|
||||
field {
|
||||
name: "value"
|
||||
number: 5
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "value"
|
||||
}
|
||||
}
|
||||
message_type {
|
||||
name: "RdmaStat"
|
||||
field {
|
||||
name: "current"
|
||||
number: 1
|
||||
label: LABEL_REPEATED
|
||||
type: TYPE_MESSAGE
|
||||
type_name: ".io.containerd.cgroups.v1.RdmaEntry"
|
||||
json_name: "current"
|
||||
}
|
||||
field {
|
||||
name: "limit"
|
||||
number: 2
|
||||
label: LABEL_REPEATED
|
||||
type: TYPE_MESSAGE
|
||||
type_name: ".io.containerd.cgroups.v1.RdmaEntry"
|
||||
json_name: "limit"
|
||||
}
|
||||
}
|
||||
message_type {
|
||||
name: "RdmaEntry"
|
||||
field {
|
||||
name: "device"
|
||||
number: 1
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_STRING
|
||||
json_name: "device"
|
||||
}
|
||||
field {
|
||||
name: "hca_handles"
|
||||
number: 2
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT32
|
||||
json_name: "hcaHandles"
|
||||
}
|
||||
field {
|
||||
name: "hca_objects"
|
||||
number: 3
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT32
|
||||
json_name: "hcaObjects"
|
||||
}
|
||||
}
|
||||
message_type {
|
||||
name: "NetworkStat"
|
||||
field {
|
||||
name: "name"
|
||||
number: 1
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_STRING
|
||||
json_name: "name"
|
||||
}
|
||||
field {
|
||||
name: "rx_bytes"
|
||||
number: 2
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "rxBytes"
|
||||
}
|
||||
field {
|
||||
name: "rx_packets"
|
||||
number: 3
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "rxPackets"
|
||||
}
|
||||
field {
|
||||
name: "rx_errors"
|
||||
number: 4
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "rxErrors"
|
||||
}
|
||||
field {
|
||||
name: "rx_dropped"
|
||||
number: 5
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "rxDropped"
|
||||
}
|
||||
field {
|
||||
name: "tx_bytes"
|
||||
number: 6
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "txBytes"
|
||||
}
|
||||
field {
|
||||
name: "tx_packets"
|
||||
number: 7
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "txPackets"
|
||||
}
|
||||
field {
|
||||
name: "tx_errors"
|
||||
number: 8
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "txErrors"
|
||||
}
|
||||
field {
|
||||
name: "tx_dropped"
|
||||
number: 9
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "txDropped"
|
||||
}
|
||||
}
|
||||
message_type {
|
||||
name: "CgroupStats"
|
||||
field {
|
||||
name: "nr_sleeping"
|
||||
number: 1
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "nrSleeping"
|
||||
}
|
||||
field {
|
||||
name: "nr_running"
|
||||
number: 2
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "nrRunning"
|
||||
}
|
||||
field {
|
||||
name: "nr_stopped"
|
||||
number: 3
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "nrStopped"
|
||||
}
|
||||
field {
|
||||
name: "nr_uninterruptible"
|
||||
number: 4
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "nrUninterruptible"
|
||||
}
|
||||
field {
|
||||
name: "nr_io_wait"
|
||||
number: 5
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "nrIoWait"
|
||||
}
|
||||
}
|
||||
syntax: "proto3"
|
||||
}
|
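The descriptor text above is the generated protobuf description of the cgroup v1 metrics types. As a rough illustration of how these messages are consumed, here is a minimal Go sketch (package and function names are hypothetical) that decodes a metrics payload into the v1 `Metrics` type; it assumes the vendored `github.com/containerd/typeurl` and `github.com/containerd/cgroups/stats/v1` packages and a payload whose type URL is registered by the generated code.

```go
package cgroupsutil // hypothetical helper package, for illustration only

import (
	"fmt"

	v1 "github.com/containerd/cgroups/stats/v1"
	"github.com/containerd/typeurl"
	"github.com/gogo/protobuf/types"
)

// DecodeV1Metrics resolves an Any payload (for example, the data field of a
// containerd task metrics response) into the cgroup v1 Metrics message
// described by the descriptor above.
func DecodeV1Metrics(data *types.Any) (*v1.Metrics, error) {
	v, err := typeurl.UnmarshalAny(data) // resolves via the registered type URL
	if err != nil {
		return nil, err
	}
	m, ok := v.(*v1.Metrics)
	if !ok {
		return nil, fmt.Errorf("unexpected metrics type %T", v)
	}
	return m, nil
}

// TotalRSS returns memory.total_rss if present. The Go field name TotalRSS
// comes from the gogoproto customname option (65004) shown in the descriptor.
func TotalRSS(m *v1.Metrics) uint64 {
	if m == nil || m.Memory == nil {
		return 0
	}
	return m.Memory.TotalRSS
}
```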
539
vendor/github.com/containerd/cgroups/v2/stats/metrics.pb.txt
generated
vendored
Normal file
539
vendor/github.com/containerd/cgroups/v2/stats/metrics.pb.txt
generated
vendored
Normal file
|
@@ -0,0 +1,539 @@
|
|||
file {
|
||||
name: "github.com/containerd/cgroups/v2/stats/metrics.proto"
|
||||
package: "io.containerd.cgroups.v2"
|
||||
dependency: "gogoproto/gogo.proto"
|
||||
message_type {
|
||||
name: "Metrics"
|
||||
field {
|
||||
name: "pids"
|
||||
number: 1
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_MESSAGE
|
||||
type_name: ".io.containerd.cgroups.v2.PidsStat"
|
||||
json_name: "pids"
|
||||
}
|
||||
field {
|
||||
name: "cpu"
|
||||
number: 2
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_MESSAGE
|
||||
type_name: ".io.containerd.cgroups.v2.CPUStat"
|
||||
options {
|
||||
65004: "CPU"
|
||||
}
|
||||
json_name: "cpu"
|
||||
}
|
||||
field {
|
||||
name: "memory"
|
||||
number: 4
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_MESSAGE
|
||||
type_name: ".io.containerd.cgroups.v2.MemoryStat"
|
||||
json_name: "memory"
|
||||
}
|
||||
field {
|
||||
name: "rdma"
|
||||
number: 5
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_MESSAGE
|
||||
type_name: ".io.containerd.cgroups.v2.RdmaStat"
|
||||
json_name: "rdma"
|
||||
}
|
||||
field {
|
||||
name: "io"
|
||||
number: 6
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_MESSAGE
|
||||
type_name: ".io.containerd.cgroups.v2.IOStat"
|
||||
json_name: "io"
|
||||
}
|
||||
field {
|
||||
name: "hugetlb"
|
||||
number: 7
|
||||
label: LABEL_REPEATED
|
||||
type: TYPE_MESSAGE
|
||||
type_name: ".io.containerd.cgroups.v2.HugeTlbStat"
|
||||
json_name: "hugetlb"
|
||||
}
|
||||
field {
|
||||
name: "memory_events"
|
||||
number: 8
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_MESSAGE
|
||||
type_name: ".io.containerd.cgroups.v2.MemoryEvents"
|
||||
json_name: "memoryEvents"
|
||||
}
|
||||
}
|
||||
message_type {
|
||||
name: "PidsStat"
|
||||
field {
|
||||
name: "current"
|
||||
number: 1
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "current"
|
||||
}
|
||||
field {
|
||||
name: "limit"
|
||||
number: 2
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "limit"
|
||||
}
|
||||
}
|
||||
message_type {
|
||||
name: "CPUStat"
|
||||
field {
|
||||
name: "usage_usec"
|
||||
number: 1
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "usageUsec"
|
||||
}
|
||||
field {
|
||||
name: "user_usec"
|
||||
number: 2
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "userUsec"
|
||||
}
|
||||
field {
|
||||
name: "system_usec"
|
||||
number: 3
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "systemUsec"
|
||||
}
|
||||
field {
|
||||
name: "nr_periods"
|
||||
number: 4
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "nrPeriods"
|
||||
}
|
||||
field {
|
||||
name: "nr_throttled"
|
||||
number: 5
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "nrThrottled"
|
||||
}
|
||||
field {
|
||||
name: "throttled_usec"
|
||||
number: 6
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "throttledUsec"
|
||||
}
|
||||
}
|
||||
message_type {
|
||||
name: "MemoryStat"
|
||||
field {
|
||||
name: "anon"
|
||||
number: 1
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "anon"
|
||||
}
|
||||
field {
|
||||
name: "file"
|
||||
number: 2
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "file"
|
||||
}
|
||||
field {
|
||||
name: "kernel_stack"
|
||||
number: 3
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "kernelStack"
|
||||
}
|
||||
field {
|
||||
name: "slab"
|
||||
number: 4
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "slab"
|
||||
}
|
||||
field {
|
||||
name: "sock"
|
||||
number: 5
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "sock"
|
||||
}
|
||||
field {
|
||||
name: "shmem"
|
||||
number: 6
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "shmem"
|
||||
}
|
||||
field {
|
||||
name: "file_mapped"
|
||||
number: 7
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "fileMapped"
|
||||
}
|
||||
field {
|
||||
name: "file_dirty"
|
||||
number: 8
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "fileDirty"
|
||||
}
|
||||
field {
|
||||
name: "file_writeback"
|
||||
number: 9
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "fileWriteback"
|
||||
}
|
||||
field {
|
||||
name: "anon_thp"
|
||||
number: 10
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "anonThp"
|
||||
}
|
||||
field {
|
||||
name: "inactive_anon"
|
||||
number: 11
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "inactiveAnon"
|
||||
}
|
||||
field {
|
||||
name: "active_anon"
|
||||
number: 12
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "activeAnon"
|
||||
}
|
||||
field {
|
||||
name: "inactive_file"
|
||||
number: 13
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "inactiveFile"
|
||||
}
|
||||
field {
|
||||
name: "active_file"
|
||||
number: 14
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "activeFile"
|
||||
}
|
||||
field {
|
||||
name: "unevictable"
|
||||
number: 15
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "unevictable"
|
||||
}
|
||||
field {
|
||||
name: "slab_reclaimable"
|
||||
number: 16
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "slabReclaimable"
|
||||
}
|
||||
field {
|
||||
name: "slab_unreclaimable"
|
||||
number: 17
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "slabUnreclaimable"
|
||||
}
|
||||
field {
|
||||
name: "pgfault"
|
||||
number: 18
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "pgfault"
|
||||
}
|
||||
field {
|
||||
name: "pgmajfault"
|
||||
number: 19
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "pgmajfault"
|
||||
}
|
||||
field {
|
||||
name: "workingset_refault"
|
||||
number: 20
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "workingsetRefault"
|
||||
}
|
||||
field {
|
||||
name: "workingset_activate"
|
||||
number: 21
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "workingsetActivate"
|
||||
}
|
||||
field {
|
||||
name: "workingset_nodereclaim"
|
||||
number: 22
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "workingsetNodereclaim"
|
||||
}
|
||||
field {
|
||||
name: "pgrefill"
|
||||
number: 23
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "pgrefill"
|
||||
}
|
||||
field {
|
||||
name: "pgscan"
|
||||
number: 24
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "pgscan"
|
||||
}
|
||||
field {
|
||||
name: "pgsteal"
|
||||
number: 25
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "pgsteal"
|
||||
}
|
||||
field {
|
||||
name: "pgactivate"
|
||||
number: 26
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "pgactivate"
|
||||
}
|
||||
field {
|
||||
name: "pgdeactivate"
|
||||
number: 27
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "pgdeactivate"
|
||||
}
|
||||
field {
|
||||
name: "pglazyfree"
|
||||
number: 28
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "pglazyfree"
|
||||
}
|
||||
field {
|
||||
name: "pglazyfreed"
|
||||
number: 29
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "pglazyfreed"
|
||||
}
|
||||
field {
|
||||
name: "thp_fault_alloc"
|
||||
number: 30
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "thpFaultAlloc"
|
||||
}
|
||||
field {
|
||||
name: "thp_collapse_alloc"
|
||||
number: 31
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "thpCollapseAlloc"
|
||||
}
|
||||
field {
|
||||
name: "usage"
|
||||
number: 32
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "usage"
|
||||
}
|
||||
field {
|
||||
name: "usage_limit"
|
||||
number: 33
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "usageLimit"
|
||||
}
|
||||
field {
|
||||
name: "swap_usage"
|
||||
number: 34
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "swapUsage"
|
||||
}
|
||||
field {
|
||||
name: "swap_limit"
|
||||
number: 35
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "swapLimit"
|
||||
}
|
||||
}
|
||||
message_type {
|
||||
name: "MemoryEvents"
|
||||
field {
|
||||
name: "low"
|
||||
number: 1
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "low"
|
||||
}
|
||||
field {
|
||||
name: "high"
|
||||
number: 2
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "high"
|
||||
}
|
||||
field {
|
||||
name: "max"
|
||||
number: 3
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "max"
|
||||
}
|
||||
field {
|
||||
name: "oom"
|
||||
number: 4
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "oom"
|
||||
}
|
||||
field {
|
||||
name: "oom_kill"
|
||||
number: 5
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "oomKill"
|
||||
}
|
||||
}
|
||||
message_type {
|
||||
name: "RdmaStat"
|
||||
field {
|
||||
name: "current"
|
||||
number: 1
|
||||
label: LABEL_REPEATED
|
||||
type: TYPE_MESSAGE
|
||||
type_name: ".io.containerd.cgroups.v2.RdmaEntry"
|
||||
json_name: "current"
|
||||
}
|
||||
field {
|
||||
name: "limit"
|
||||
number: 2
|
||||
label: LABEL_REPEATED
|
||||
type: TYPE_MESSAGE
|
||||
type_name: ".io.containerd.cgroups.v2.RdmaEntry"
|
||||
json_name: "limit"
|
||||
}
|
||||
}
|
||||
message_type {
|
||||
name: "RdmaEntry"
|
||||
field {
|
||||
name: "device"
|
||||
number: 1
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_STRING
|
||||
json_name: "device"
|
||||
}
|
||||
field {
|
||||
name: "hca_handles"
|
||||
number: 2
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT32
|
||||
json_name: "hcaHandles"
|
||||
}
|
||||
field {
|
||||
name: "hca_objects"
|
||||
number: 3
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT32
|
||||
json_name: "hcaObjects"
|
||||
}
|
||||
}
|
||||
message_type {
|
||||
name: "IOStat"
|
||||
field {
|
||||
name: "usage"
|
||||
number: 1
|
||||
label: LABEL_REPEATED
|
||||
type: TYPE_MESSAGE
|
||||
type_name: ".io.containerd.cgroups.v2.IOEntry"
|
||||
json_name: "usage"
|
||||
}
|
||||
}
|
||||
message_type {
|
||||
name: "IOEntry"
|
||||
field {
|
||||
name: "major"
|
||||
number: 1
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "major"
|
||||
}
|
||||
field {
|
||||
name: "minor"
|
||||
number: 2
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "minor"
|
||||
}
|
||||
field {
|
||||
name: "rbytes"
|
||||
number: 3
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "rbytes"
|
||||
}
|
||||
field {
|
||||
name: "wbytes"
|
||||
number: 4
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "wbytes"
|
||||
}
|
||||
field {
|
||||
name: "rios"
|
||||
number: 5
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "rios"
|
||||
}
|
||||
field {
|
||||
name: "wios"
|
||||
number: 6
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "wios"
|
||||
}
|
||||
}
|
||||
message_type {
|
||||
name: "HugeTlbStat"
|
||||
field {
|
||||
name: "current"
|
||||
number: 1
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "current"
|
||||
}
|
||||
field {
|
||||
name: "max"
|
||||
number: 2
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_UINT64
|
||||
json_name: "max"
|
||||
}
|
||||
field {
|
||||
name: "pagesize"
|
||||
number: 3
|
||||
label: LABEL_OPTIONAL
|
||||
type: TYPE_STRING
|
||||
json_name: "pagesize"
|
||||
}
|
||||
}
|
||||
syntax: "proto3"
|
||||
}
|
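The v2 descriptor just added corresponds to the Go types in `github.com/containerd/cgroups/v2/stats`. As a hedged sketch (package name hypothetical, message assumed already decoded, e.g. via `typeurl` as in the v1 example above), this shows how a few of the fields defined above map to Go:

```go
package cgroupsutil // hypothetical helper package, for illustration only

import (
	"fmt"

	v2 "github.com/containerd/cgroups/v2/stats"
)

// SummarizeV2 prints a few values from a decoded cgroup v2 Metrics message.
// Go field names follow the descriptor: usage_usec becomes UsageUsec, and
// the cpu field carries the gogoproto customname option (65004) "CPU".
func SummarizeV2(m *v2.Metrics) {
	if m.CPU != nil {
		fmt.Printf("cpu: %d usec total (%d user, %d system)\n",
			m.CPU.UsageUsec, m.CPU.UserUsec, m.CPU.SystemUsec)
	}
	if m.Memory != nil {
		fmt.Printf("memory: %d bytes used (limit %d), swap %d (limit %d)\n",
			m.Memory.Usage, m.Memory.UsageLimit,
			m.Memory.SwapUsage, m.Memory.SwapLimit)
	}
	if m.MemoryEvents != nil {
		fmt.Printf("oom: %d events, %d kills\n",
			m.MemoryEvents.Oom, m.MemoryEvents.OomKill)
	}
}
```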
20
vendor/github.com/containerd/console/.golangci.yml
generated
vendored
Normal file
20
vendor/github.com/containerd/console/.golangci.yml
generated
vendored
Normal file
|
@@ -0,0 +1,20 @@
|
|||
linters:
|
||||
enable:
|
||||
- structcheck
|
||||
- varcheck
|
||||
- staticcheck
|
||||
- unconvert
|
||||
- gofmt
|
||||
- goimports
|
||||
- golint
|
||||
- ineffassign
|
||||
- vet
|
||||
- unused
|
||||
- misspell
|
||||
disable:
|
||||
- errcheck
|
||||
|
||||
run:
|
||||
timeout: 3m
|
||||
skip-dirs:
|
||||
- vendor
|
8
vendor/github.com/containerd/console/go.mod
generated
vendored
8
vendor/github.com/containerd/console/go.mod
generated
vendored
|
@@ -1,8 +0,0 @@
|
|||
module github.com/containerd/console
|
||||
|
||||
go 1.13
|
||||
|
||||
require (
|
||||
github.com/pkg/errors v0.9.1
|
||||
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c
|
||||
)
|
1
vendor/github.com/containerd/containerd/.gitattributes
generated
vendored
Normal file
1
vendor/github.com/containerd/containerd/.gitattributes
generated
vendored
Normal file
|
@@ -0,0 +1 @@
|
|||
*.go text eol=lf
|
10
vendor/github.com/containerd/containerd/.gitignore
generated
vendored
Normal file
10
vendor/github.com/containerd/containerd/.gitignore
generated
vendored
Normal file
|
@@ -0,0 +1,10 @@
|
|||
/bin/
|
||||
/man/
|
||||
coverage.txt
|
||||
profile.out
|
||||
containerd.test
|
||||
_site/
|
||||
releases/*.tar.gz
|
||||
releases/*.tar.gz.sha256sum
|
||||
_output/
|
||||
.vagrant/
|
27
vendor/github.com/containerd/containerd/.golangci.yml
generated
vendored
Normal file
27
vendor/github.com/containerd/containerd/.golangci.yml
generated
vendored
Normal file
|
@@ -0,0 +1,27 @@
|
|||
linters:
|
||||
enable:
|
||||
- structcheck
|
||||
- varcheck
|
||||
- staticcheck
|
||||
- unconvert
|
||||
- gofmt
|
||||
- goimports
|
||||
- golint
|
||||
- ineffassign
|
||||
- vet
|
||||
- unused
|
||||
- misspell
|
||||
disable:
|
||||
- errcheck
|
||||
|
||||
issues:
|
||||
include:
|
||||
- EXC0002
|
||||
|
||||
run:
|
||||
timeout: 3m
|
||||
skip-dirs:
|
||||
- api
|
||||
- design
|
||||
- docs
|
||||
- docs/man
|
127
vendor/github.com/containerd/containerd/.mailmap
generated
vendored
Normal file
127
vendor/github.com/containerd/containerd/.mailmap
generated
vendored
Normal file
|
@@ -0,0 +1,127 @@
|
|||
Abhinandan Prativadi <abhi@docker.com>
|
||||
Abhinandan Prativadi <abhi@docker.com> <aprativadi@gmail.com>
|
||||
Ace-Tang <aceapril@126.com>
|
||||
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.akihiro@lab.ntt.co.jp>
|
||||
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.kyoto@gmail.com>
|
||||
Allen Sun <shlallen1990@gmail.com> <allensun@AllenSundeMacBook-Pro.local>
|
||||
Alexander Morozov <lk4d4math@gmail.com> <lk4d4@docker.com>
|
||||
Antonio Ojea <antonio.ojea.garcia@gmail.com> <aojea@redhat.com>
|
||||
Amit Krishnan <krish.amit@gmail.com> <amit.krishnan@oracle.com>
|
||||
Andrei Vagin <avagin@virtuozzo.com> <avagin@openvz.org>
|
||||
Andrey Kolomentsev <andrey.kolomentsev@gmail.com>
|
||||
Arnaud Porterie <icecrime@gmail.com>
|
||||
Arnaud Porterie <icecrime@gmail.com> <arnaud.porterie@docker.com>
|
||||
Bob Mader <swapdisk@users.noreply.github.com>
|
||||
Boris Popovschi <zyqsempai@mail.ru>
|
||||
Bowen Yan <loneybw@gmail.com>
|
||||
Brent Baude <bbaude@redhat.com>
|
||||
Cao Zhihao <caozhihao@163.com>
|
||||
Cao Zhihao <caozhihao@163.com> <caozhihao.xd@bytedance.com>
|
||||
Carlos Eduardo <me@carlosedp.com> <me@carlosedp.com>
|
||||
chenxiaoyu <weixian.cxy@alibaba-inc.com>
|
||||
Cory Bennett <cbennett@netflix.com>
|
||||
Cristian Staretu <cristian.staretu@gmail.com>
|
||||
Cristian Staretu <cristian.staretu@gmail.com> <unclejack@users.noreply.github.com>
|
||||
Daniel Dao <dqminh89@gmail.com>
|
||||
Derek McGowan <derek@mcg.dev> <derek@mcgstyle.net>
|
||||
Edgar Lee <edgarl@netflix.com> <edgar.lee@docker.com>
|
||||
Eric Ernst <eric@amperecomputing.com> <eric.ernst@intel.com>
|
||||
Eric Ren <renzhen.rz@linux.alibaba.com> <renzhen@linux.alibaba.com>
|
||||
Eric Ren <renzhen.rz@linux.alibaba.com> <renzhen.rz@alibaba-linux.com>
|
||||
Eric Ren <renzhen.rz@linux.alibaba.com> <renzhen.rz@alibaba-inc.com>
|
||||
Fahed Dorgaa <fahed.dorgaa@gmail.com>
|
||||
Frank Yang <yyb196@gmail.com>
|
||||
Fupan Li <lifupan@gmail.com>
|
||||
Fupan Li <lifupan@gmail.com> <fupan.lfp@antfin.com>
|
||||
Georgia Panoutsakopoulou <gpanoutsak@gmail.com>
|
||||
Guangming Wang <guangming.wang@daocloud.io>
|
||||
Haiyan Meng <haiyanmeng@google.com>
|
||||
Harry Zhang <harryz@hyper.sh> <harryzhang@zju.edu.cn>
|
||||
Hu Shuai <hus.fnst@cn.fujitsu.com>
|
||||
Hu Shuai <hus.fnst@cn.fujitsu.com> <hushuaiia@qq.com>
|
||||
Iceber Gu <wei.cai-nat@daocloud.io>
|
||||
Jaana Burcu Dogan <burcujdogan@gmail.com> <jbd@golang.org>
|
||||
Jess Valarezo <valarezo.jessica@gmail.com>
|
||||
Jess Valarezo <valarezo.jessica@gmail.com> <jessica.valarezo@docker.com>
|
||||
Jian Liao <jliao@alauda.io>
|
||||
Jian Liao <jliao@alauda.io> <liaojian@Dabllo.local>
|
||||
Ji'an Liu <anthonyliu@zju.edu.cn>
|
||||
Jie Zhang <iamkadisi@163.com>
|
||||
John Howard <github@lowenna.com>
|
||||
John Howard <github@lowenna.com> <john.howard@microsoft.com>
|
||||
John Howard <github@lowenna.com> <jhoward@microsoft.com>
|
||||
John Howard <github@lowenna.com> <jhowardmsft@users.noreply.github.com>
|
||||
Lorenz Brun <lorenz@brun.one> <lorenz@nexantic.com>
|
||||
Luc Perkins <lucperkins@gmail.com>
|
||||
Julien Balestra <julien.balestra@datadoghq.com>
|
||||
Jun Lin Chen <webmaster@mc256.com> <1913688+mc256@users.noreply.github.com>
|
||||
Justin Cormack <justin.cormack@docker.com> <justin@specialbusservice.com>
|
||||
Justin Terry <juterry@microsoft.com>
|
||||
Justin Terry <juterry@microsoft.com> <jterry75@users.noreply.github.com>
|
||||
Kenfe-Mickaël Laventure <mickael.laventure@gmail.com>
|
||||
Kevin Kern <kaiwentan@harmonycloud.cn>
|
||||
Kevin Parsons <kevpar@microsoft.com> <kevpar@users.noreply.github.com>
|
||||
Kevin Xu <cming.xu@gmail.com>
|
||||
Kohei Tokunaga <ktokunaga.mail@gmail.com>
|
||||
Krasi Georgiev <krasi.root@gmail.com> <krasi@vip-consult.solutions>
|
||||
Lantao Liu <lantaol@google.com>
|
||||
Lantao Liu <lantaol@google.com> <taotaotheripper@gmail.com>
|
||||
Li Yuxuan <liyuxuan04@baidu.com> <darfux@163.com>
|
||||
Lifubang <lifubang@aliyun.com> <lifubang@acmcoder.com>
|
||||
Lu Jingxiao <lujingxiao@huawei.com>
|
||||
Maksym Pavlenko <pavlenko.maksym@gmail.com> <makpav@amazon.com>
|
||||
Maksym Pavlenko <pavlenko.maksym@gmail.com> <mxpv@apple.com>
|
||||
Mario Hros <spam@k3a.me>
|
||||
Mario Hros <spam@k3a.me> <root@k3a.me>
|
||||
Mario Macias <mariomac@gmail.com> <mmacias@newrelic.com>
|
||||
Mark Gordon <msg555@gmail.com>
|
||||
Michael Crosby <crosbymichael@gmail.com> <michael@thepasture.io>
|
||||
Michael Katsoulis <michaelkatsoulis88@gmail.com>
|
||||
Mike Brown <brownwm@us.ibm.com> <mikebrow@users.noreply.github.com>
|
||||
Mohammad Asif Siddiqui <mohammad.asif.siddiqui1@huawei.com>
|
||||
Nishchay Kumar <mrawesomenix@gmail.com>
|
||||
Oliver Stenbom <oliver@stenbom.eu> <ostenbom@pivotal.io>
|
||||
Phil Estes <estesp@gmail.com> <estesp@linux.vnet.ibm.com>
|
||||
Phil Estes <estesp@gmail.com> <estesp@amazon.com>
|
||||
Reid Li <reid.li@utexas.edu>
|
||||
Robin Winkelewski <w9ncontact@gmail.com>
|
||||
Ross Boucher <rboucher@gmail.com>
|
||||
Ruediger Maass <ruediger.maass@de.ibm.com>
|
||||
Rui Cao <ruicao@alauda.io> <ruicao@alauda.io>
|
||||
Sakeven Jiang <jc5930@sina.cn>
|
||||
Samuel Karp <me@samuelkarp.com> <skarp@amazon.com>
|
||||
Seth Pellegrino <spellegrino@newrelic.com> <30441101+sethp-nr@users.noreply.github.com>
|
||||
Shaobao Feng <shaobao.feng@huawei.com>
|
||||
Shengbo Song <thomassong@tencent.com>
|
||||
Shengjing Zhu <i@zhsj.me> <zhsj@debian.org>
|
||||
Siddharth Yadav <sedflix@gmail.com>
|
||||
SiYu Zhao <d.chaser.zsy@gmail.com>
|
||||
Stefan Berger <stefanb@us.ibm.com> <stefanb@linux.ibm.com>
|
||||
Stefan Berger <stefanb@us.ibm.com> <stefanb@linux.vnet.ibm.com>
|
||||
Stephen J Day <stevvooe@gmail.com> <stephen.day@getcruise.com>
|
||||
Stephen J Day <stevvooe@gmail.com> <stevvooe@users.noreply.github.com>
|
||||
Stephen J Day <stevvooe@gmail.com> <stephen.day@docker.com>
|
||||
Sudeesh John <sudeesh@linux.vnet.ibm.com>
|
||||
Su Fei <fesu@ebay.com> <fesu@ebay.com>
|
||||
Su Xiaolin <linxxnil@126.com>
|
||||
Ted Yu <yuzhihong@gmail.com>
|
||||
Tõnis Tiigi <tonistiigi@gmail.com>
|
||||
Wade Lee <weidonglee27@gmail.com>
|
||||
Wade Lee <weidonglee27@gmail.com> <weidonglee29@gmail.com>
|
||||
Wade Lee <weidonglee27@gmail.com> <21621232@zju.edu.cn>
|
||||
wanglei <wllenyj@linux.alibaba.com>
|
||||
Wei Fu <fuweid89@gmail.com>
|
||||
Wei Fu <fuweid89@gmail.com> <fhfuwei@163.com>
|
||||
Xiaodong Zhang <a4012017@sina.com>
|
||||
Xuean Yan <yan.xuean@zte.com.cn>
|
||||
Yue Zhang <zy675793960@yeah.net>
|
||||
Yuxing Liu <starnop@163.com>
|
||||
Zhang Wei <zhangwei555@huawei.com>
|
||||
zhangyadong <zhangyadong.0808@bytedance.com>
|
||||
Zhenguang Zhu <zhengguang.zhu@daocloud.io>
|
||||
Zhiyu Li <payall4u@qq.com>
|
||||
Zhiyu Li <payall4u@qq.com> <404977848@qq.com>
|
||||
Zhongming Chang<zhongming.chang@daocloud.io>
|
||||
Zhoulin Xie <zhoulin.xie@daocloud.io>
|
||||
Zhoulin Xie <zhoulin.xie@daocloud.io> <42261994+JoeWrightss@users.noreply.github.com>
|
||||
张潇 <xiaozhang0210@hotmail.com>
|
35
vendor/github.com/containerd/containerd/.zuul.yaml
generated
vendored
Normal file
35
vendor/github.com/containerd/containerd/.zuul.yaml
generated
vendored
Normal file
|
@@ -0,0 +1,35 @@
|
|||
- project:
|
||||
name: containerd/containerd
|
||||
merge-mode: merge
|
||||
check:
|
||||
jobs:
|
||||
- containerd-build-arm64
|
||||
- containerd-test-arm64
|
||||
- containerd-integration-test-arm64
|
||||
|
||||
- job:
|
||||
name: containerd-build-arm64
|
||||
parent: init-test
|
||||
description: |
|
||||
Containerd build in openlab cluster.
|
||||
run: .zuul/playbooks/containerd-build/run.yaml
|
||||
nodeset: ubuntu-xenial-arm64-openlab
|
||||
voting: false
|
||||
|
||||
- job:
|
||||
name: containerd-test-arm64
|
||||
parent: init-test
|
||||
description: |
|
||||
Containerd unit tests in openlab cluster.
|
||||
run: .zuul/playbooks/containerd-build/unit-test.yaml
|
||||
nodeset: ubuntu-xenial-arm64-openlab
|
||||
voting: false
|
||||
|
||||
- job:
|
||||
name: containerd-integration-test-arm64
|
||||
parent: init-test
|
||||
description: |
|
||||
Containerd integration tests in openlab cluster.
|
||||
run: .zuul/playbooks/containerd-build/integration-test.yaml
|
||||
nodeset: ubuntu-xenial-arm64-openlab
|
||||
voting: false
|
48
vendor/github.com/containerd/containerd/ADOPTERS.md
generated
vendored
Normal file
48
vendor/github.com/containerd/containerd/ADOPTERS.md
generated
vendored
Normal file
|
@@ -0,0 +1,48 @@
|
|||
## containerd Adopters
|
||||
|
||||
A non-exhaustive list of containerd adopters is provided below.
|
||||
|
||||
**_Docker/Moby engine_** - Containerd began life prior to its CNCF adoption as a lower-layer
|
||||
runtime manager for `runc` processes below the Docker engine. Continuing today, containerd
|
||||
has extremely broad production usage as a component of the [Docker engine](https://github.com/docker/docker-ce)
|
||||
stack. Note that this includes any use of the open source [Moby engine project](https://github.com/moby/moby),
|
||||
including the Balena project listed below.
|
||||
|
||||
**_[IBM Cloud Kubernetes Service (IKS)](https://www.ibm.com/cloud/container-service)_** - offers containerd as the CRI runtime for v1.11 and higher versions.
|
||||
|
||||
**_[IBM Cloud Private (ICP)](https://www.ibm.com/cloud/private)_** - IBM's on-premises cloud offering has containerd as a "tech preview" CRI runtime for the Kubernetes offered within this product for the past two releases, and plans to fully migrate to containerd in a future release.
|
||||
|
||||
**_[Google Cloud Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/)_** - offers containerd as the CRI runtime in **beta** for recent versions of Kubernetes.
|
||||
|
||||
**_[AWS Fargate](https://aws.amazon.com/fargate)_** - uses containerd + Firecracker (noted below) as the runtime and isolation technology for containers run in the Fargate platform. Fargate is a serverless, container-native compute offering from Amazon Web Services.
|
||||
|
||||
**_Cloud Foundry_** - The [Guardian container manager](https://github.com/cloudfoundry/guardian) for CF has been using OCI runC directly with additional code from CF managing the container image and filesystem interactions, but has recently migrated to use containerd as a replacement for the extra code it had written around runC.
|
||||
|
||||
**_Alibaba's PouchContainer_** - The Alibaba [PouchContainer](https://github.com/alibaba/pouch) project uses containerd as its runtime for a cloud native offering that has unique isolation and image distribution capabilities.
|
||||
|
||||
**_Rancher's k3s project_** - Rancher Labs [k3s](https://github.com/rancher/k3s) is a lightweight Kubernetes distribution; in their words: "Easy to install, half the memory, all in a binary less than 40mb." k3s uses containerd as the embedded runtime for this popular lightweight Kubernetes variant.
|
||||
|
||||
**_Rancher's Rio project_** - Rancher Labs [Rio](https://github.com/rancher/rio) project uses containerd as the runtime for a combined Kubernetes, Istio, and container "Cloud Native Container Distribution" platform.
|
||||
|
||||
**_Eliot_** - The [Eliot](https://github.com/ernoaapa/eliot) container project for IoT device container management uses containerd as the runtime.
|
||||
|
||||
**_Balena_** - Resin's [Balena](https://github.com/resin-os/balena) container engine, based on moby/moby but for edge, embedded, and IoT use cases, uses the containerd and runc stack in the same way that the Docker engine uses containerd.
|
||||
|
||||
**_LinuxKit_** - the Moby project's [LinuxKit](https://github.com/linuxkit/linuxkit) for building secure, minimal Linux OS images in a container-native model uses containerd as the core runtime for system and service containers.
|
||||
|
||||
**_BuildKit_** - The Moby project's [BuildKit](https://github.com/moby/buildkit) can use either runC or containerd as build execution backends for building container images. BuildKit support has also been built into the Docker engine in recent releases, making BuildKit provide the backend to the `docker build` command.
|
||||
|
||||
**_Azure acs-engine_** - Microsoft Azure's [acs-engine](https://github.com/Azure/acs-engine) open source project offers customizable deployment of Kubernetes clusters, where containerd is a selectable container runtime. At some point in the future Azure's AKS service will default to using containerd as the CRI runtime for deployed Kubernetes clusters.
|
||||
|
||||
**_Amazon Firecracker_** - The AWS [Firecracker VMM project](http://firecracker-microvm.io/) has extended containerd with a new snapshotter and v2 shim to allow containerd to drive virtualized container processes via their VMM implementation. More details on their containerd integration are available in [their GitHub project](https://github.com/firecracker-microvm/firecracker-containerd).
|
||||
|
||||
**_Kata Containers_** - The [Kata containers](https://katacontainers.io/) lightweight-virtualized container runtime project integrates with containerd via a custom v2 shim implementation that drives the Kata container runtime.
|
||||
|
||||
**_D2iQ Konvoy_** - D2iQ Inc [Konvoy](https://d2iq.com/products/konvoy) product uses containerd as the container runtime for its Kubernetes distribution.
|
||||
|
||||
**_Inclavare Containers_** - [Inclavare Containers](https://github.com/alibaba/inclavare-containers) is an innovative container runtime that takes a novel approach to launching protected containers in hardware-assisted Trusted Execution Environment (TEE) technology, aka Enclave, which can prevent untrusted entities, such as the Cloud Service Provider (CSP), from accessing sensitive and confidential assets in use.
|
||||
|
||||
**_Other Projects_** - While the above list provides a cross-section of well known uses of containerd, the simplicity and clear API layer for containerd has inspired many smaller projects around providing simple container management platforms. Several examples of building higher layer functionality on top of the containerd base have come from various containerd community participants:
|
||||
- Michael Crosby's [boss](https://github.com/crosbymichael/boss) project,
|
||||
- Evan Hazlett's [stellar](https://github.com/ehazlett/stellar) project,
|
||||
- Paul Knopf's immutable Linux image builder project: [darch](https://github.com/godarch/darch).
|
279
vendor/github.com/containerd/containerd/BUILDING.md
generated
vendored
Normal file
279
vendor/github.com/containerd/containerd/BUILDING.md
generated
vendored
Normal file
|
@@ -0,0 +1,279 @@
|
|||
# Build containerd from source
|
||||
|
||||
This guide is useful if you intend to contribute to containerd. Thanks for your
|
||||
effort. Every contribution is greatly appreciated.
|
||||
|
||||
This doc includes:
|
||||
* [Build requirements](#build-requirements)
|
||||
* [Build the development environment](#build-the-development-environment)
|
||||
* [Build containerd](#build-containerd)
|
||||
* [Via docker container](#via-docker-container)
|
||||
* [Testing](#testing-containerd)
|
||||
|
||||
## Build requirements
|
||||
|
||||
To build the `containerd` daemon and the `ctr` simple test client, the following build system dependencies are required:
|
||||
|
||||
* Go 1.13.x or above except 1.14.x
|
||||
* Protoc 3.x compiler and headers (download at the [Google protobuf releases page](https://github.com/google/protobuf/releases))
|
||||
* Btrfs headers and libraries for your distribution. Note that building the btrfs driver can be disabled via the build tag `no_btrfs`, removing this dependency.
|
||||
|
||||
## Build the development environment
|
||||
|
||||
First you need to set up your Go development environment. You can follow this
|
||||
guide, [How to write go code](https://golang.org/doc/code.html), and at the
|
||||
end you will have the `go` command in your `PATH`.
|
||||
|
||||
You need `git` to check out the source code:
|
||||
|
||||
```sh
|
||||
git clone https://github.com/containerd/containerd
|
||||
```
|
||||
|
||||
For proper results, install the `protoc` release into `/usr/local` on your build system. For example, the following commands will download and install the 3.11.4 release for a 64-bit Linux host:
|
||||
|
||||
```
|
||||
$ wget -c https://github.com/google/protobuf/releases/download/v3.11.4/protoc-3.11.4-linux-x86_64.zip
|
||||
$ sudo unzip protoc-3.11.4-linux-x86_64.zip -d /usr/local
|
||||
```
|
||||
|
||||
`containerd` uses [Btrfs](https://en.wikipedia.org/wiki/Btrfs), which means that you
|
||||
need to satisfy these dependencies on your system:
|
||||
|
||||
* CentOS/Fedora: `yum install btrfs-progs-devel`
|
||||
* Debian/Ubuntu: `apt-get install btrfs-progs libbtrfs-dev`
|
||||
* Debian(before Buster)/Ubuntu(before 19.10): `apt-get install btrfs-tools`
|
||||
|
||||
At this point you are ready to build `containerd` yourself!
|
||||
|
||||
## Build runc
|
||||
|
||||
`runc` is the default container runtime used by `containerd` and is required to
|
||||
run containerd. While it is okay to download a runc binary and install that on
|
||||
the system, sometimes it is necessary to build runc directly when working with
|
||||
container runtime development. You can skip this step if you already have the
|
||||
correct version of `runc` installed.
|
||||
|
||||
`runc` requires `libseccomp`. You may need to install the missing dependencies:
|
||||
|
||||
* CentOS/Fedora: `yum install libseccomp libseccomp-devel`
|
||||
* Debian/Ubuntu: `apt-get install libseccomp libseccomp-dev`
|
||||
|
||||
|
||||
For the quick and dirty installation, you can use the following:
|
||||
|
||||
```
|
||||
git clone https://github.com/opencontainers/runc
|
||||
cd runc
|
||||
make
|
||||
sudo make install
|
||||
```
|
||||
|
||||
Make sure to follow the guidelines for versioning in [RUNC.md](/docs/RUNC.md) for the
|
||||
best results.
|
||||
|
||||
## Build containerd
|
||||
|
||||
`containerd` uses `make` to create a repeatable build flow. This means that you
|
||||
can run:
|
||||
|
||||
```
|
||||
cd containerd
|
||||
make
|
||||
```
|
||||
|
||||
This is going to build all the project binaries in the `./bin/` directory.
|
||||
|
||||
You can move them into your global path, `/usr/local/bin`, with:
|
||||
|
||||
```sh
|
||||
sudo make install
|
||||
```
|
||||
|
||||
When making any changes to the gRPC API, you can use the installed `protoc`
|
||||
compiler to regenerate the API generated code packages with:
|
||||
|
||||
```sh
|
||||
make generate
|
||||
```
|
||||
|
||||
> *Note*: Several build tags are currently available:
|
||||
> * `no_btrfs`: A build tag that disables building the btrfs snapshot driver.
|
||||
> * `no_cri`: A build tag that disables building Kubernetes [CRI](http://blog.kubernetes.io/2016/12/container-runtime-interface-cri-in-kubernetes.html) support into containerd.
|
||||
> See [here](https://github.com/containerd/cri-containerd#build-tags) for build tags of CRI plugin.
|
||||
> * `no_devmapper`: A build tag that disables building the device mapper snapshot driver.
|
||||
>
|
||||
> For example, adding `BUILDTAGS=no_btrfs` to your environment before calling the **binaries**
|
||||
> Makefile target will disable the btrfs driver within the containerd Go build.
|
||||
|
||||
Vendoring of external imports uses [Go Modules](https://golang.org/ref/mod#vendoring). You need
|
||||
to use the `go mod` command to modify the dependencies. After modification, you should run `go mod tidy`
|
||||
and `go mod vendor` to ensure the `go.mod`, `go.sum` files and `vendor` directory are up to date.
|
||||
Changes to these files should become a single commit for a PR which relies on vendored updates.
|
||||
|
||||
Please refer to [RUNC.md](/docs/RUNC.md) for the currently supported version of `runc` that is used by containerd.
|
||||
|
||||
### Static binaries
|
||||
|
||||
You can build static binaries by providing a few variables to `make`:
|
||||
|
||||
```sh
|
||||
make EXTRA_FLAGS="-buildmode pie" \
|
||||
EXTRA_LDFLAGS='-linkmode external -extldflags "-fno-PIC -static"' \
|
||||
BUILDTAGS="netgo osusergo static_build"
|
||||
```
|
||||
|
||||
> *Note*:
|
||||
> - static builds are discouraged
|
||||
> - a static containerd binary does not support loading shared object plugins (`*.so`)
|
||||
|
||||
# Via Docker container
|
||||
|
||||
The following instructions assume you are in the parent directory of the containerd source directory.
|
||||
|
||||
## Build containerd
|
||||
|
||||
You can build `containerd` via a Linux-based Docker container.
|
||||
You can build an image from this `Dockerfile`:
|
||||
|
||||
```
|
||||
FROM golang
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get install -y libbtrfs-dev
|
||||
```
|
||||
|
||||
Let's suppose that you built an image called `containerd/build`. From the
|
||||
containerd source root directory you can run the following command:
|
||||
|
||||
```sh
|
||||
docker run -it \
|
||||
-v ${PWD}/containerd:/go/src/github.com/containerd/containerd \
|
||||
-e GOPATH=/go \
|
||||
-w /go/src/github.com/containerd/containerd containerd/build sh
|
||||
```
|
||||
|
||||
This mounts the `containerd` repository in our Docker container.
|
||||
|
||||
You are now ready to [build](#build-containerd):
|
||||
|
||||
```sh
|
||||
make && make install
|
||||
```
|
||||
|
||||
## Build containerd and runc
|
||||
To have a complete core container runtime, you will need both `containerd` and `runc`. It is possible to build both of these via a Docker container.
|
||||
|
||||
You can use `git` to check out `runc`:
|
||||
|
||||
```sh
|
||||
git clone https://github.com/opencontainers/runc
|
||||
```
|
||||
|
||||
We can build an image from this `Dockerfile`:
|
||||
|
||||
```
|
||||
FROM golang
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get install -y libbtrfs-dev libseccomp-dev
|
||||
|
||||
```
|
||||
|
||||
In our Docker container we will build `runc`, which includes
|
||||
[seccomp](https://en.wikipedia.org/wiki/seccomp), [SELinux](https://en.wikipedia.org/wiki/Security-Enhanced_Linux),
|
||||
and [AppArmor](https://en.wikipedia.org/wiki/AppArmor) support. Seccomp support
|
||||
in runc requires `libseccomp-dev` as a dependency (AppArmor and SELinux support
|
||||
do not require external libraries at build time). Refer to [RUNC.md](docs/RUNC.md)
|
||||
in the docs directory for details about building runc, and to learn about
|
||||
supported versions of `runc` as used by containerd.
|
||||
|
||||
Let's suppose you built an image called `containerd/build` from the above Dockerfile. You can run the following command:
|
||||
|
||||
```sh
|
||||
docker run -it --privileged \
|
||||
-v /var/lib/containerd \
|
||||
-v ${PWD}/runc:/go/src/github.com/opencontainers/runc \
|
||||
-v ${PWD}/containerd:/go/src/github.com/containerd/containerd \
|
||||
-e GOPATH=/go \
|
||||
-w /go/src/github.com/containerd/containerd containerd/build sh
|
||||
```
|
||||
|
||||
This mounts both `runc` and `containerd` repositories in our Docker container.
|
||||
|
||||
From within our Docker container let's build `containerd`:
|
||||
|
||||
```sh
|
||||
cd /go/src/github.com/containerd/containerd
|
||||
make && make install
|
||||
```
|
||||
|
||||
These binaries can be found in the `./bin` directory on your host.
|
||||
`make install` will move the binaries into your `$PATH`.
|
||||
|
||||
Next, let's build `runc`:
|
||||
|
||||
```sh
|
||||
cd /go/src/github.com/opencontainers/runc
|
||||
make && make install
|
||||
```
|
||||
|
||||
For further details about building runc, refer to [RUNC.md](docs/RUNC.md) in the
|
||||
docs directory.
|
||||
|
||||
When working with `ctr`, the simple test client we just built, don't forget to start the daemon!
|
||||
|
||||
```sh
|
||||
containerd --config config.toml
|
||||
```
|
||||
|
||||
# Testing containerd
|
||||
|
||||
During automated CI, the unit and integration tests are run as part of PR validation. As a developer, you can run these tests locally by using any of the following `Makefile` targets:
|
||||
- `make test`: run all non-integration tests that do not require `root` privileges
|
||||
- `make root-test`: run all non-integration tests which require `root`
|
||||
- `make integration`: run all tests, including integration tests and those which require `root`. `TESTFLAGS_PARALLEL` can be used to control parallelism. For example, `TESTFLAGS_PARALLEL=1 make integration` will lead to a non-parallel execution. The default value of `TESTFLAGS_PARALLEL` is **8**.
|
||||
|
||||
To execute a specific test or set of tests you can use the `go test` capabilities
|
||||
without using the `Makefile` targets. The following examples show how to specify a test
|
||||
name and also how to use the flag directly against `go test` to run root-requiring tests.
|
||||
|
||||
```sh
|
||||
# run the test <TEST_NAME>:
|
||||
go test -v -run "<TEST_NAME>" .
|
||||
# enable the root-requiring tests:
|
||||
go test -v -run . -test.root
|
||||
```
|
||||
|
||||
Example output from directly running `go test` to execute the `TestContainerList` test:
|
||||
```sh
|
||||
sudo go test -v -run "TestContainerList" . -test.root
|
||||
INFO[0000] running tests against containerd revision=f2ae8a020a985a8d9862c9eb5ab66902c2888361 version=v1.0.0-beta.2-49-gf2ae8a0
|
||||
=== RUN TestContainerList
|
||||
--- PASS: TestContainerList (0.00s)
|
||||
PASS
|
||||
ok github.com/containerd/containerd 4.778s
|
||||
```
|
||||
|
||||
## Additional tools
|
||||
|
||||
### containerd-stress
|
||||
In addition to `go test`-based testing executed via the `Makefile` targets, the `containerd-stress` tool is available and built with the `all` or `binaries` targets and installed during `make install`.
|
||||
|
||||
With this tool you can stress a running containerd daemon for a specified period of time, selecting a concurrency level to generate stress against the daemon. The following command is an example of having five workers running for two hours against a default containerd gRPC socket address:
|
||||
|
||||
```sh
|
||||
containerd-stress -c 5 -t 120
|
||||
```
|
||||
|
||||
For more information on this tool's options please run `containerd-stress --help`.
|
||||
|
||||
### bucketbench
|
||||
[Bucketbench](https://github.com/estesp/bucketbench) is an external tool which can be used to drive load against a container runtime, specifying a particular set of lifecycle operations to run with a specified amount of concurrency. Bucketbench is more focused on generating performance details than simply inducing load against containerd.
|
||||
|
||||
Bucketbench differs from the `containerd-stress` tool in a few ways:
|
||||
- Bucketbench has support for testing the Docker engine, the `runc` binary, and containerd 0.2.x (via `ctr`) and 1.0 (via the client library) branches.
|
||||
- Bucketbench is driven via a configuration file that allows specifying a list of lifecycle operations to execute. This can be used to generate detailed per-command statistics (e.g. start, stop, pause, delete).
|
||||
- Bucketbench generates detailed reports and timing data at the end of the configured test run.
|
||||
|
||||
More details on how to install and run `bucketbench` are available at the [GitHub project page](https://github.com/estesp/bucketbench).
|
403
vendor/github.com/containerd/containerd/Makefile
generated
vendored
Normal file
403
vendor/github.com/containerd/containerd/Makefile
generated
vendored
Normal file
|
@@ -0,0 +1,403 @@
|
|||
# Copyright The containerd Authors.
|
||||
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
# Go command to use for build
|
||||
GO ?= go
|
||||
|
||||
# Root directory of the project (absolute path).
|
||||
ROOTDIR=$(dir $(abspath $(lastword $(MAKEFILE_LIST))))
|
||||
|
||||
# Base path used to install.
|
||||
DESTDIR ?= /usr/local
|
||||
TEST_IMAGE_LIST ?=
|
||||
|
||||
# Used to populate variables in version package.
|
||||
VERSION=$(shell git describe --match 'v[0-9]*' --dirty='.m' --always)
|
||||
REVISION=$(shell git rev-parse HEAD)$(shell if ! git diff --no-ext-diff --quiet --exit-code; then echo .m; fi)
|
||||
PACKAGE=github.com/containerd/containerd
|
||||
SHIM_CGO_ENABLED ?= 0
|
||||
|
||||
ifneq "$(strip $(shell command -v $(GO) 2>/dev/null))" ""
|
||||
GOOS ?= $(shell $(GO) env GOOS)
|
||||
GOARCH ?= $(shell $(GO) env GOARCH)
|
||||
else
|
||||
ifeq ($(GOOS),)
|
||||
# approximate GOOS for the platform if we don't have Go and GOOS isn't
|
||||
# set. We leave GOARCH unset, so that may need to be fixed.
|
||||
ifeq ($(OS),Windows_NT)
|
||||
GOOS = windows
|
||||
else
|
||||
UNAME_S := $(shell uname -s)
|
||||
ifeq ($(UNAME_S),Linux)
|
||||
GOOS = linux
|
||||
endif
|
||||
ifeq ($(UNAME_S),Darwin)
|
||||
GOOS = darwin
|
||||
endif
|
||||
ifeq ($(UNAME_S),FreeBSD)
|
||||
GOOS = freebsd
|
||||
endif
|
||||
endif
|
||||
else
|
||||
GOOS ?= $$GOOS
|
||||
GOARCH ?= $$GOARCH
|
||||
endif
|
||||
endif
|
||||
|
||||
ifndef GODEBUG
|
||||
EXTRA_LDFLAGS += -s -w
|
||||
DEBUG_GO_GCFLAGS :=
|
||||
DEBUG_TAGS :=
|
||||
else
|
||||
DEBUG_GO_GCFLAGS := -gcflags=all="-N -l"
|
||||
DEBUG_TAGS := static_build
|
||||
endif
|
||||
|
||||
WHALE = "🇩"
|
||||
ONI = "👹"
|
||||
|
||||
RELEASE=containerd-$(VERSION:v%=%).${GOOS}-${GOARCH}
|
||||
CRIRELEASE=cri-containerd-$(VERSION:v%=%)-${GOOS}-${GOARCH}
|
||||
CRICNIRELEASE=cri-containerd-cni-$(VERSION:v%=%)-${GOOS}-${GOARCH}
|
||||
|
||||
PKG=github.com/containerd/containerd
|
||||
|
||||
# Project binaries.
|
||||
COMMANDS=ctr containerd containerd-stress
|
||||
MANPAGES=ctr.8 containerd.8 containerd-config.8 containerd-config.toml.5
|
||||
|
||||
ifdef BUILDTAGS
|
||||
GO_BUILDTAGS = ${BUILDTAGS}
|
||||
endif
|
||||
GO_BUILDTAGS ?=
|
||||
GO_BUILDTAGS += ${DEBUG_TAGS}
|
||||
GO_TAGS=$(if $(GO_BUILDTAGS),-tags "$(GO_BUILDTAGS)",)
|
||||
GO_LDFLAGS=-ldflags '-X $(PKG)/version.Version=$(VERSION) -X $(PKG)/version.Revision=$(REVISION) -X $(PKG)/version.Package=$(PACKAGE) $(EXTRA_LDFLAGS)'
|
||||
SHIM_GO_LDFLAGS=-ldflags '-X $(PKG)/version.Version=$(VERSION) -X $(PKG)/version.Revision=$(REVISION) -X $(PKG)/version.Package=$(PACKAGE) -extldflags "-static" $(EXTRA_LDFLAGS)'
|
||||
|
||||
# Project packages.
|
||||
PACKAGES=$(shell $(GO) list ${GO_TAGS} ./... | grep -v /vendor/ | grep -v /integration)
|
||||
TEST_REQUIRES_ROOT_PACKAGES=$(filter \
|
||||
${PACKAGES}, \
|
||||
$(shell \
|
||||
for f in $$(git grep -l testutil.RequiresRoot | grep -v Makefile); do \
|
||||
d="$$(dirname $$f)"; \
|
||||
[ "$$d" = "." ] && echo "${PKG}" && continue; \
|
||||
echo "${PKG}/$$d"; \
|
||||
done | sort -u) \
|
||||
)
|
||||
|
||||
ifdef SKIPTESTS
|
||||
PACKAGES:=$(filter-out ${SKIPTESTS},${PACKAGES})
|
||||
TEST_REQUIRES_ROOT_PACKAGES:=$(filter-out ${SKIPTESTS},${TEST_REQUIRES_ROOT_PACKAGES})
|
||||
endif
|
||||
|
||||
# Replaces ":" (*nix), ";" (windows) with newline for easy parsing
|
||||
GOPATHS=$(shell echo ${GOPATH} | tr ":" "\n" | tr ";" "\n")
|
||||
|
||||
TESTFLAGS_RACE=
|
||||
GO_BUILD_FLAGS=
|
||||
# See Golang issue re: '-trimpath': https://github.com/golang/go/issues/13809
|
||||
GO_GCFLAGS=$(shell \
|
||||
set -- ${GOPATHS}; \
|
||||
echo "-gcflags=-trimpath=$${1}/src"; \
|
||||
)
|
||||
|
||||
BINARIES=$(addprefix bin/,$(COMMANDS))
|
||||
|
||||
#include platform specific makefile
|
||||
-include Makefile.$(GOOS)
|
||||
|
||||
# Flags passed to `go test`
|
||||
TESTFLAGS ?= $(TESTFLAGS_RACE) $(EXTRA_TESTFLAGS)
|
||||
TESTFLAGS_PARALLEL ?= 8
|
||||
|
||||
# Use this to replace `go test` with, for instance, `gotestsum`
|
||||
GOTEST ?= $(GO) test
|
||||
|
||||
OUTPUTDIR = $(join $(ROOTDIR), _output)
|
||||
CRIDIR=$(OUTPUTDIR)/cri
|
||||
|
||||
.PHONY: clean all AUTHORS build binaries test integration generate protos checkprotos coverage ci check help install uninstall vendor release mandir install-man genman install-cri-deps cri-release cri-cni-release cri-integration install-deps bin/cri-integration.test
|
||||
.DEFAULT: default
|
||||
|
||||
all: binaries
|
||||
|
||||
check: proto-fmt ## run all linters
|
||||
@echo "$(WHALE) $@"
|
||||
GOGC=75 golangci-lint run
|
||||
|
||||
ci: check binaries checkprotos coverage coverage-integration ## to be used by the CI
|
||||
|
||||
AUTHORS: .mailmap .git/HEAD
|
||||
git log --format='%aN <%aE>' | sort -fu > $@
|
||||
|
||||
generate: protos
|
||||
@echo "$(WHALE) $@"
|
||||
@PATH="${ROOTDIR}/bin:${PATH}" $(GO) generate -x ${PACKAGES}
|
||||
|
||||
protos: bin/protoc-gen-gogoctrd ## generate protobuf
|
||||
@echo "$(WHALE) $@"
|
||||
@PATH="${ROOTDIR}/bin:${PATH}" protobuild --quiet ${PACKAGES}
|
||||
|
||||
check-protos: protos ## check if protobufs needs to be generated again
|
||||
@echo "$(WHALE) $@"
|
||||
@test -z "$$(git status --short | grep ".pb.go" | tee /dev/stderr)" || \
|
||||
((git diff | cat) && \
|
||||
(echo "$(ONI) please run 'make protos' when making changes to proto files" && false))
|
||||
|
||||
check-api-descriptors: protos ## check that protobuf changes aren't present.
|
||||
@echo "$(WHALE) $@"
|
||||
@test -z "$$(git status --short | grep ".pb.txt" | tee /dev/stderr)" || \
|
||||
((git diff $$(find . -name '*.pb.txt') | cat) && \
|
||||
(echo "$(ONI) please run 'make protos' when making changes to proto files and check-in the generated descriptor file changes" && false))
|
||||
|
||||
proto-fmt: ## check format of proto files
|
||||
@echo "$(WHALE) $@"
|
||||
@test -z "$$(find . -path ./vendor -prune -o -path ./protobuf/google/rpc -prune -o -name '*.proto' -type f -exec grep -Hn -e "^ " {} \; | tee /dev/stderr)" || \
|
||||
(echo "$(ONI) please indent proto files with tabs only" && false)
|
||||
@test -z "$$(find . -path ./vendor -prune -o -name '*.proto' -type f -exec grep -Hn "Meta meta = " {} \; | grep -v '(gogoproto.nullable) = false' | tee /dev/stderr)" || \
|
||||
(echo "$(ONI) meta fields in proto files must have option (gogoproto.nullable) = false" && false)
|
||||
|
||||
build: ## build the go packages
|
||||
@echo "$(WHALE) $@"
|
||||
@$(GO) build ${DEBUG_GO_GCFLAGS} ${GO_GCFLAGS} ${GO_BUILD_FLAGS} ${EXTRA_FLAGS} ${GO_LDFLAGS} ${PACKAGES}
|
||||
|
||||
test: ## run tests, except integration tests and tests that require root
|
||||
@echo "$(WHALE) $@"
|
||||
@$(GOTEST) ${TESTFLAGS} ${PACKAGES}
|
||||
|
||||
root-test: ## run tests, except integration tests
|
||||
@echo "$(WHALE) $@"
|
||||
@$(GOTEST) ${TESTFLAGS} ${TEST_REQUIRES_ROOT_PACKAGES} -test.root
|
||||
|
||||
integration: ## run integration tests
|
||||
@echo "$(WHALE) $@"
|
||||
@cd "${ROOTDIR}/integration/client" && $(GO) mod download && $(GOTEST) -v ${TESTFLAGS} -test.root -parallel ${TESTFLAGS_PARALLEL} .
|
||||

# TODO integrate cri integration bucket with coverage
bin/cri-integration.test:
	@echo "$(WHALE) $@"
	@$(GO) test -c ./integration -o bin/cri-integration.test

cri-integration: binaries bin/cri-integration.test ## run cri integration tests
	@echo "$(WHALE) $@"
	@./script/test/cri-integration.sh
	@rm -rf bin/cri-integration.test

benchmark: ## run benchmark tests
	@echo "$(WHALE) $@"
	@$(GO) test ${TESTFLAGS} -bench . -run Benchmark -test.root

FORCE:

define BUILD_BINARY
@echo "$(WHALE) $@"
@$(GO) build ${DEBUG_GO_GCFLAGS} ${GO_GCFLAGS} ${GO_BUILD_FLAGS} -o $@ ${GO_LDFLAGS} ${GO_TAGS} ./$<
endef

# Build a binary from a cmd.
bin/%: cmd/% FORCE
	$(call BUILD_BINARY)
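# Example (not a default target): the pattern rule above lets you build one
# tool at a time, e.g.
#   make bin/ctr    # runs BUILD_BINARY against ./cmd/ctr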

bin/containerd-shim: cmd/containerd-shim FORCE # set !cgo and omit pie for a static shim build: https://github.com/golang/go/issues/17789#issuecomment-258542220
	@echo "$(WHALE) bin/containerd-shim"
	@CGO_ENABLED=${SHIM_CGO_ENABLED} $(GO) build ${GO_BUILD_FLAGS} -o bin/containerd-shim ${SHIM_GO_LDFLAGS} ${GO_TAGS} ./cmd/containerd-shim

bin/containerd-shim-runc-v1: cmd/containerd-shim-runc-v1 FORCE # set !cgo and omit pie for a static shim build: https://github.com/golang/go/issues/17789#issuecomment-258542220
	@echo "$(WHALE) bin/containerd-shim-runc-v1"
	@CGO_ENABLED=${SHIM_CGO_ENABLED} $(GO) build ${GO_BUILD_FLAGS} -o bin/containerd-shim-runc-v1 ${SHIM_GO_LDFLAGS} ${GO_TAGS} ./cmd/containerd-shim-runc-v1

bin/containerd-shim-runc-v2: cmd/containerd-shim-runc-v2 FORCE # set !cgo and omit pie for a static shim build: https://github.com/golang/go/issues/17789#issuecomment-258542220
	@echo "$(WHALE) bin/containerd-shim-runc-v2"
	@CGO_ENABLED=${SHIM_CGO_ENABLED} $(GO) build ${GO_BUILD_FLAGS} -o bin/containerd-shim-runc-v2 ${SHIM_GO_LDFLAGS} ${GO_TAGS} ./cmd/containerd-shim-runc-v2
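# Sketch of the effective command for the shim rules above when cgo is
# disabled for a static build (per the linked Go issue), i.e. with
# SHIM_CGO_ENABLED=0:
#   CGO_ENABLED=0 go build -o bin/containerd-shim-runc-v2 ./cmd/containerd-shim-runc-v2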

binaries: $(BINARIES) ## build binaries
	@echo "$(WHALE) $@"

man: mandir $(addprefix man/,$(MANPAGES))
	@echo "$(WHALE) $@"

mandir:
	@mkdir -p man

# Kept for backwards compatibility
genman: man/containerd.8 man/ctr.8

man/containerd.8: FORCE
	@echo "$(WHALE) $@"
	$(GO) run cmd/gen-manpages/main.go $(@F) $(@D)

man/ctr.8: FORCE
	@echo "$(WHALE) $@"
	$(GO) run cmd/gen-manpages/main.go $(@F) $(@D)

man/%: docs/man/%.md FORCE
	@echo "$(WHALE) $@"
	go-md2man -in "$<" -out "$@"
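# Illustrative expansion of the pattern rule above (the page name is an
# assumption, not taken from MANPAGES): building man/containerd-config.toml.5
# would run
#   go-md2man -in "docs/man/containerd-config.toml.5.md" -out "man/containerd-config.toml.5"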

define installmanpage
mkdir -p $(DESTDIR)/man/man$(2);
gzip -c $(1) >$(DESTDIR)/man/man$(2)/$(3).gz;
endef
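# Sketch of one expansion of the macro above: $(call installmanpage,man/ctr.8,8,ctr.8)
# becomes
#   mkdir -p $(DESTDIR)/man/man8; gzip -c man/ctr.8 >$(DESTDIR)/man/man8/ctr.8.gz;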

install-man:
	@echo "$(WHALE) $@"
	$(foreach manpage,$(addprefix man/,$(MANPAGES)), $(call installmanpage,$(manpage),$(subst .,,$(suffix $(manpage))),$(notdir $(manpage))))
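# How the $(foreach) above derives the macro arguments, for an assumed
# MANPAGES entry containerd.8:
#   $(suffix man/containerd.8)  -> .8
#   $(subst .,,.8)              -> 8             (the man section)
#   $(notdir man/containerd.8)  -> containerd.8  (the installed file name)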

releases/$(RELEASE).tar.gz: $(BINARIES)
	@echo "$(WHALE) $@"
	@rm -rf releases/$(RELEASE) releases/$(RELEASE).tar.gz
	@install -d releases/$(RELEASE)/bin
	@install $(BINARIES) releases/$(RELEASE)/bin
	@tar -czf releases/$(RELEASE).tar.gz -C releases/$(RELEASE) bin
	@rm -rf releases/$(RELEASE)
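# Per the tar invocation above, the archive unpacks to a single bin/
# directory; listing it (binary names are illustrative) looks like:
#   tar -tzf releases/$(RELEASE).tar.gz
#   bin/
#   bin/containerd
#   bin/ctr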

release: releases/$(RELEASE).tar.gz
	@echo "$(WHALE) $@"
	@cd releases && sha256sum $(RELEASE).tar.gz >$(RELEASE).tar.gz.sha256sum
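# To verify a downloaded tarball against the checksum file written above:
#   cd releases && sha256sum -c $(RELEASE).tar.gz.sha256sum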

# install cri dependencies into the release output directory
ifeq ($(GOOS),windows)
install-cri-deps: $(BINARIES)
	mkdir -p $(CRIDIR)
	DESTDIR=$(CRIDIR) script/setup/install-cni-windows
	cp bin/* $(CRIDIR)
else
install-cri-deps: $(BINARIES)
	@rm -rf ${CRIDIR}
	@install -d ${CRIDIR}/usr/local/bin
	@install -D -m 755 bin/* ${CRIDIR}/usr/local/bin
	@install -d ${CRIDIR}/opt/containerd/cluster
	@cp -r contrib/gce ${CRIDIR}/opt/containerd/cluster/
	@install -d ${CRIDIR}/etc/systemd/system
	@install -m 644 containerd.service ${CRIDIR}/etc/systemd/system
	echo "CONTAINERD_VERSION: '$(VERSION:v%=%)'" | tee ${CRIDIR}/opt/containerd/cluster/version

	DESTDIR=$(CRIDIR) script/setup/install-runc
	DESTDIR=$(CRIDIR) script/setup/install-cni
	DESTDIR=$(CRIDIR) script/setup/install-critools
	DESTDIR=$(CRIDIR) script/setup/install-imgcrypt

	@install -d $(CRIDIR)/bin
	@install $(BINARIES) $(CRIDIR)/bin
endif

ifeq ($(GOOS),windows)
releases/$(CRIRELEASE).tar.gz: install-cri-deps
	@echo "$(WHALE) $@"
	@cd $(CRIDIR) && tar -czf ../../releases/$(CRIRELEASE).tar.gz *

releases/$(CRICNIRELEASE).tar.gz: install-cri-deps
	@echo "$(WHALE) $@"
	@cd $(CRIDIR) && tar -czf ../../releases/$(CRICNIRELEASE).tar.gz *
else
releases/$(CRIRELEASE).tar.gz: install-cri-deps
	@echo "$(WHALE) $@"
	@tar -czf releases/$(CRIRELEASE).tar.gz -C $(CRIDIR) etc/crictl.yaml etc/systemd usr opt/containerd

releases/$(CRICNIRELEASE).tar.gz: install-cri-deps
	@echo "$(WHALE) $@"
	@tar -czf releases/$(CRICNIRELEASE).tar.gz -C $(CRIDIR) etc usr opt
endif

cri-release: releases/$(CRIRELEASE).tar.gz
	@echo "$(WHALE) $@"
	@cd releases && sha256sum $(CRIRELEASE).tar.gz >$(CRIRELEASE).tar.gz.sha256sum && ln -sf $(CRIRELEASE).tar.gz cri-containerd.tar.gz

cri-cni-release: releases/$(CRICNIRELEASE).tar.gz
	@echo "$(WHALE) $@"
	@cd releases && sha256sum $(CRICNIRELEASE).tar.gz >$(CRICNIRELEASE).tar.gz.sha256sum && ln -sf $(CRICNIRELEASE).tar.gz cri-cni-containerd.tar.gz

clean: ## clean up binaries
	@echo "$(WHALE) $@"
	@rm -f $(BINARIES)
	@rm -f releases/*.tar.gz*
	@rm -rf $(OUTPUTDIR)
	@rm -rf bin/cri-integration.test

clean-test: ## clean up debris from previously failed tests
	@echo "$(WHALE) $@"
	$(eval containers=$(shell find /run/containerd/runc -mindepth 2 -maxdepth 3 -type d -exec basename {} \;))
	$(shell pidof containerd containerd-shim runc | xargs -r -n 1 kill -9)
	@( for container in $(containers); do \
	    grep $$container /proc/self/mountinfo | while read -r mountpoint; do \
	        umount $$(echo $$mountpoint | awk '{print $$5}'); \
	    done; \
	    find /sys/fs/cgroup -name $$container -print0 | xargs -r -0 rmdir; \
	done )
	@rm -rf /run/containerd/runc/*
	@rm -rf /run/containerd/fifo/*
	@rm -rf /run/containerd-test/*
	@rm -rf bin/cri-integration.test

install: ## install binaries
	@echo "$(WHALE) $@ $(BINARIES)"
	@mkdir -p $(DESTDIR)/bin
	@install $(BINARIES) $(DESTDIR)/bin
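# Example (path is illustrative): stage the install into a packaging root
# instead of the system prefix:
#   make install DESTDIR=/tmp/pkgroot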

uninstall:
	@echo "$(WHALE) $@"
	@rm -f $(addprefix $(DESTDIR)/bin/,$(notdir $(BINARIES)))

ifeq ($(GOOS),windows)
install-deps:
	# TODO: need a script for hcsshim, something like containerd/cri/hack/install/windows/install-hcsshim.sh
	script/setup/install-critools
	script/setup/install-cni-windows
else
install-deps: ## install cri dependencies
	script/setup/install-seccomp
	script/setup/install-runc
	script/setup/install-critools
	script/setup/install-cni
endif

coverage: ## generate coverprofiles from the unit tests, except tests that require root
	@echo "$(WHALE) $@"
	@rm -f coverage.txt
	@$(GO) test -i ${TESTFLAGS} ${PACKAGES} 2> /dev/null
	@( for pkg in ${PACKAGES}; do \
	    $(GO) test ${TESTFLAGS} \
	        -cover \
	        -coverprofile=profile.out \
	        -covermode=atomic $$pkg || exit; \
	    if [ -f profile.out ]; then \
	        cat profile.out >> coverage.txt; \
	        rm profile.out; \
	    fi; \
	done )
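# The loop above stitches per-package profiles into one coverage.txt. A rough
# single-command equivalent (assumes Go >= 1.10, where -coverprofile accepts
# multiple packages) would be:
#   go test -covermode=atomic -coverprofile=coverage.txt ./...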

root-coverage: ## generate coverage profiles for unit tests that require root
	@echo "$(WHALE) $@"
	@$(GO) test -i ${TESTFLAGS} ${TEST_REQUIRES_ROOT_PACKAGES} 2> /dev/null
	@( for pkg in ${TEST_REQUIRES_ROOT_PACKAGES}; do \
	    $(GO) test ${TESTFLAGS} \
	        -cover \
	        -coverprofile=profile.out \
	        -covermode=atomic $$pkg -test.root || exit; \
	    if [ -f profile.out ]; then \
	        cat profile.out >> coverage.txt; \
	        rm profile.out; \
	    fi; \
	done )

vendor: ## tidy and re-vendor Go module dependencies
	@echo "$(WHALE) $@"
	@$(GO) mod tidy
	@$(GO) mod vendor
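# Typical flow after editing go.mod (commands are illustrative):
#   make vendor
#   git status vendor/   # review the regenerated tree before committing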

help: ## this help
	@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST) | sort
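# Sample `make help` output, reconstructed from this file's own `##`
# annotations (column width per the printf above):
#   binaries                       build binaries
#   build                          build the go packages
#   check                          run all linters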
22
vendor/github.com/containerd/containerd/Makefile.darwin
generated
vendored
Normal file
@@ -0,0 +1,22 @@
# Copyright The containerd Authors.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


# darwin specific settings
COMMANDS += containerd-shim

# amd64 supports go test -race
ifeq ($(GOARCH),amd64)
	TESTFLAGS_RACE= -race
endif