Merge branch 'master' of github.com:docker/docker into volumeMan

Signed-off-by: Dan Walsh <dwalsh@redhat.com>

commit 04e9087d5d
845 changed files with 50096 additions and 13216 deletions

51  .github/ISSUE_TEMPLATE.md  vendored  Normal file

@@ -0,0 +1,51 @@
<!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.

If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.

For more information about reporting issues, see
https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues

---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->

Output of `docker version`:

```
(paste your output here)
```


Output of `docker info`:

```
(paste your output here)
```

Provide additional environment details (AWS, VirtualBox, physical, etc.):



List the steps to reproduce the issue:
1.
2.
3.


Describe the results you received:


Describe the results you expected:


Provide additional info you think is important:

23  .github/PULL_REQUEST_TEMPLATE.md  vendored  Normal file

@@ -0,0 +1,23 @@
<!--
Please make sure you've read and understood our contributing guidelines;
https://github.com/docker/docker/blob/master/CONTRIBUTING.md

** Make sure all your commits include a signature generated with `git commit -s` **

For additional information on our contributing process, read our contributing
guide https://docs.docker.com/opensource/code/

If this is a bug fix, make sure your description includes "fixes #xxxx", or
"closes #xxxx"
-->

Please provide the following information:

- What did you do?

- How did you do it?

- How do I see it or verify it?

- A picture of a cute animal (not mandatory but encouraged)

69  CHANGELOG.md

@@ -5,6 +5,73 @@ information on the list of deprecated flags and APIs please have a look at
https://docs.docker.com/misc/deprecated/ where target removal dates can also
be found.

## 1.10.2 (2016-02-22)

### Runtime

- Prevent systemd from deleting containers' cgroups when its configuration is reloaded [#20518](https://github.com/docker/docker/pull/20518)
- Fix SELinux issues by disregarding `--read-only` when mounting `/dev/mqueue` [#20333](https://github.com/docker/docker/pull/20333)
- Fix chown permissions used during `docker cp` when userns is used [#20446](https://github.com/docker/docker/pull/20446)
- Fix configuration loading issue with all booleans defaulting to `true` [#20471](https://github.com/docker/docker/pull/20471)
- Fix occasional panic with `docker logs -f` [#20522](https://github.com/docker/docker/pull/20522)

### Distribution

- Keep layer reference if deletion failed to avoid a badly inconsistent state [#20513](https://github.com/docker/docker/pull/20513)
- Handle gracefully a corner case when canceling migration [#20372](https://github.com/docker/docker/pull/20372)
- Fix docker import on compressed data [#20367](https://github.com/docker/docker/pull/20367)
- Fix tar-split files corruption during migration that later cause docker push and docker save to fail [#20458](https://github.com/docker/docker/pull/20458)

### Networking

- Fix daemon crash if embedded DNS is sent garbage [#20510](https://github.com/docker/docker/pull/20510)

### Volumes

- Fix issue with multiple volume references with same name [#20381](https://github.com/docker/docker/pull/20381)

### Security

- Fix potential cache corruption and delegation conflict issues [#20523](https://github.com/docker/docker/pull/20523)

## 1.10.1 (2016-02-11)

### Runtime

* Do not stop daemon on migration hard failure [#20156](https://github.com/docker/docker/pull/20156)
- Fix various issues with migration to content-addressable images [#20058](https://github.com/docker/docker/pull/20058)
- Fix ZFS permission bug with user namespaces [#20045](https://github.com/docker/docker/pull/20045)
- Do not leak /dev/mqueue from the host to all containers, keep it container-specific [#19876](https://github.com/docker/docker/pull/19876) [#20133](https://github.com/docker/docker/pull/20133)
- Fix `docker ps --filter before=...` to not show stopped containers without providing `-a` flag [#20135](https://github.com/docker/docker/pull/20135)

### Security

- Fix issue preventing docker events to work properly with authorization plugin [#20002](https://github.com/docker/docker/pull/20002)

### Distribution

* Add additional verifications and prevent from uploading invalid data to registries [#20164](https://github.com/docker/docker/pull/20164)
- Fix regression preventing uppercase characters in image reference hostname [#20175](https://github.com/docker/docker/pull/20175)

### Networking

- Fix embedded DNS for user-defined networks in the presence of firewalld [#20060](https://github.com/docker/docker/pull/20060)
- Fix issue where removing a network during shutdown left Docker inoperable [#20181](https://github.com/docker/docker/issues/20181) [#20235](https://github.com/docker/docker/issues/20235)
- Embedded DNS is now able to return compressed results [#20181](https://github.com/docker/docker/issues/20181)
- Fix port-mapping issue with `userland-proxy=false` [#20181](https://github.com/docker/docker/issues/20181)

### Logging

- Fix bug where tcp+tls protocol would be rejected [#20109](https://github.com/docker/docker/pull/20109)

### Volumes

- Fix issue whereby older volume drivers would not receive volume options [#19983](https://github.com/docker/docker/pull/19983)

### Misc

- Remove TasksMax from Docker systemd service [#20167](https://github.com/docker/docker/pull/20167)

## 1.10.0 (2016-02-04)

**IMPORTANT**: Docker 1.10 uses a new content-addressable storage for images and layers.

@@ -1771,7 +1838,7 @@ With the ongoing changes to the networking and execution subsystems of docker te
+ Containers can expose public UDP ports (eg, '-p 123/udp')
+ Optionally specify an exact public port (eg. '-p 80:4500')
* 'docker login' supports additional options
- Dont save a container`s hostname when committing an image.
- Don't save a container`s hostname when committing an image.

#### Registry

@@ -154,6 +154,8 @@ However, there might be a way to implement that feature *on top of* Docker.
The <a href="https://groups.google.com/forum/#!forum/docker-dev" target="_blank">docker-dev</a>
group is for contributors and other people contributing to the Docker
project.
You can join them without an google account by sending an email to e.g. "docker-user+subscribe@googlegroups.com".
After receiving the join-request message, you can simply reply to that to confirm the subscribtion.
</td>
</tr>
<tr>

19  Dockerfile

@@ -23,14 +23,16 @@
# the case. Therefore, you don't have to disable it anymore.
#

FROM ubuntu:trusty
FROM debian:jessie

# add zfs ppa
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys E871F18B51E0147C77796AC81196BA81F6B0FC61
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys E871F18B51E0147C77796AC81196BA81F6B0FC61 \
    || apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys E871F18B51E0147C77796AC81196BA81F6B0FC61
RUN echo deb http://ppa.launchpad.net/zfs-native/stable/ubuntu trusty main > /etc/apt/sources.list.d/zfs.list

# add llvm repo
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 6084F3CF814B57C1CF12EFD515CF4D18AF4F7421
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 6084F3CF814B57C1CF12EFD515CF4D18AF4F7421 \
    || apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 6084F3CF814B57C1CF12EFD515CF4D18AF4F7421
RUN echo deb http://llvm.org/apt/trusty/ llvm-toolchain-trusty main > /etc/apt/sources.list.d/llvm.list

# Packaged dependencies

@@ -56,12 +58,13 @@ RUN apt-get update && apt-get install -y \
    libsystemd-journal-dev \
    libtool \
    mercurial \
    net-tools \
    pkg-config \
    python-dev \
    python-mock \
    python-pip \
    python-websocket \
    s3cmd=1.1.0* \
    s3cmd=1.5.0* \
    ubuntu-zfs \
    xfsprogs \
    libzfs-dev \

@@ -88,9 +91,11 @@ RUN cd /usr/local/lvm2 \

# Configure the container for OSX cross compilation
ENV OSX_SDK MacOSX10.11.sdk
ENV OSX_CROSS_COMMIT 8aa9b71a394905e6c5f4b59e2b97b87a004658a4
RUN set -x \
    && export OSXCROSS_PATH="/osxcross" \
    && git clone --depth 1 https://github.com/tpoechtrager/osxcross.git $OSXCROSS_PATH \
    && git clone https://github.com/tpoechtrager/osxcross.git $OSXCROSS_PATH \
    && ( cd $OSXCROSS_PATH && git checkout -q $OSX_CROSS_COMMIT) \
    && curl -sSL https://s3.dockerproject.org/darwin/${OSX_SDK}.tar.xz -o "${OSXCROSS_PATH}/tarballs/${OSX_SDK}.tar.xz" \
    && UNATTENDED=yes OSX_VERSION_MIN=10.6 ${OSXCROSS_PATH}/build.sh
ENV PATH /osxcross/target/bin:$PATH

@@ -114,7 +119,7 @@ RUN set -x \
# IMPORTANT: If the version of Go is updated, the Windows to Linux CI machines
# will need updating, to avoid errors. Ping #docker-maintainers on IRC
# with a heads-up.
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.6
RUN curl -fsSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" \
    | tar -xzC /usr/local
ENV PATH /go/bin:/usr/local/go/bin:$PATH

@@ -166,7 +171,7 @@ RUN set -x \
    && rm -rf "$GOPATH"

# Install notary server
ENV NOTARY_VERSION docker-v1.10-5
ENV NOTARY_VERSION v0.2.0
RUN set -x \
    && export GOPATH="$(mktemp -d)" \
    && git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \

@@ -11,19 +11,11 @@
# # Run the test suite:
# docker run --privileged docker hack/make.sh test
#
# # Publish a release:
# docker run --privileged \
# -e AWS_S3_BUCKET=baz \
# -e AWS_ACCESS_KEY=foo \
# -e AWS_SECRET_KEY=bar \
# -e GPG_PASSPHRASE=gloubiboulga \
# docker hack/release.sh
#
# Note: AppArmor used to mess with privileged mode, but this is no longer
# the case. Therefore, you don't have to disable it anymore.
#

FROM aarch64/ubuntu:trusty
FROM aarch64/ubuntu:wily

# Packaged dependencies
RUN apt-get update && apt-get install -y \

@@ -45,15 +37,16 @@ RUN apt-get update && apt-get install -y \
    libc6-dev \
    libcap-dev \
    libsqlite3-dev \
    libsystemd-journal-dev \
    libsystemd-dev \
    mercurial \
    net-tools \
    parallel \
    pkg-config \
    python-dev \
    python-mock \
    python-pip \
    python-websocket \
    s3cmd=1.1.0* \
    gccgo \
    --no-install-recommends

# Install armhf loader to use armv6 binaries on armv8

@@ -103,14 +96,11 @@ RUN set -x \
# We don't have official binary tarballs for ARM64, eigher for Go or bootstrap,
# so we use the official armv6 released binaries as a GOROOT_BOOTSTRAP, and
# build Go from source code.
ENV BOOT_STRAP_VERSION 1.6beta1
ENV GO_VERSION 1.5.3
RUN mkdir -p /usr/src/go-bootstrap \
    && curl -fsSL https://storage.googleapis.com/golang/go${BOOT_STRAP_VERSION}.linux-arm6.tar.gz | tar -v -C /usr/src/go-bootstrap -xz --strip-components=1 \
    && mkdir /usr/src/go \
    && curl -fsSL https://storage.googleapis.com/golang/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/src/go -xz --strip-components=1 \
RUN mkdir /usr/src/go && curl -fsSL https://storage.googleapis.com/golang/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/src/go -xz --strip-components=1 \
    && cd /usr/src/go/src \
    && GOOS=linux GOARCH=arm64 GOROOT_BOOTSTRAP=/usr/src/go-bootstrap ./make.bash
    && GOOS=linux GOARCH=arm64 GOROOT_BOOTSTRAP="$(go env GOROOT)" ./make.bash

ENV PATH /usr/src/go/bin:$PATH
ENV GOPATH /go:/go/src/github.com/docker/docker/vendor

@@ -127,7 +117,7 @@ RUN set -x \
    && rm -rf "$GOPATH"

# Install notary server
ENV NOTARY_VERSION docker-v1.10-5
ENV NOTARY_VERSION v0.2.0
RUN set -x \
    && export GOPATH="$(mktemp -d)" \
    && git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \

@@ -11,19 +11,11 @@
# # Run the test suite:
# docker run --privileged docker hack/make.sh test
#
# # Publish a release:
# docker run --privileged \
# -e AWS_S3_BUCKET=baz \
# -e AWS_ACCESS_KEY=foo \
# -e AWS_SECRET_KEY=bar \
# -e GPG_PASSPHRASE=gloubiboulga \
# docker hack/release.sh
#
# Note: AppArmor used to mess with privileged mode, but this is no longer
# the case. Therefore, you don't have to disable it anymore.
#

FROM armhf/ubuntu:trusty
FROM armhf/debian:jessie

# Packaged dependencies
RUN apt-get update && apt-get install -y \

@@ -143,7 +135,7 @@ RUN set -x \
    && rm -rf "$GOPATH"

# Install notary server
ENV NOTARY_VERSION docker-v1.10-5
ENV NOTARY_VERSION v0.2.0
RUN set -x \
    && export GOPATH="$(mktemp -d)" \
    && git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \

@@ -23,6 +23,7 @@ RUN apt-get update && apt-get install -y \
    libcap-dev \
    libsqlite3-dev \
    mercurial \
    net-tools \
    parallel \
    python-dev \
    python-mock \

@@ -11,14 +11,6 @@
# # Run the test suite:
# docker run --privileged docker hack/make.sh test
#
# # Publish a release:
# docker run --privileged \
# -e AWS_S3_BUCKET=baz \
# -e AWS_ACCESS_KEY=foo \
# -e AWS_SECRET_KEY=bar \
# -e GPG_PASSPHRASE=gloubiboulga \
# docker hack/release.sh
#
# Note: AppArmor used to mess with privileged mode, but this is no longer
# the case. Therefore, you don't have to disable it anymore.
#

@@ -82,7 +74,21 @@ RUN cd /usr/local/lvm2 \
# TODO install Go, using gccgo as GOROOT_BOOTSTRAP (Go 1.5+ supports ppc64le properly)
# possibly a ppc64le/golang image?

ENV PATH /go/bin:$PATH
## BUILD GOLANG 1.6
ENV GO_VERSION 1.6
ENV GO_DOWNLOAD_URL https://golang.org/dl/go${GO_VERSION}.src.tar.gz
ENV GO_DOWNLOAD_SHA256 a96cce8ce43a9bf9b2a4c7d470bc7ee0cb00410da815980681c8353218dcf146
ENV GOROOT_BOOTSTRAP /usr/local

RUN curl -fsSL "$GO_DOWNLOAD_URL" -o golang.tar.gz \
    && echo "$GO_DOWNLOAD_SHA256 golang.tar.gz" | sha256sum -c - \
    && tar -C /usr/src -xzf golang.tar.gz \
    && rm golang.tar.gz \
    && cd /usr/src/go/src && ./make.bash 2>&1

ENV GOROOT_BOOTSTRAP /usr/src/

ENV PATH /usr/src/go/bin/:/go/bin:$PATH
ENV GOPATH /go:/go/src/github.com/docker/docker/vendor

# This has been commented out and kept as reference because we don't support compiling with older Go anymore.

@@ -90,7 +96,7 @@ ENV GOPATH /go:/go/src/github.com/docker/docker/vendor
# RUN curl -sSL https://storage.googleapis.com/golang/go${GOFMT_VERSION}.$(go env GOOS)-$(go env GOARCH).tar.gz | tar -C /go/bin -xz --strip-components=2 go/bin/gofmt

# TODO update this sha when we upgrade to Go 1.5+
ENV GO_TOOLS_COMMIT 069d2f3bcb68257b627205f0486d6cc69a231ff9
ENV GO_TOOLS_COMMIT d02228d1857b9f49cd0252788516ff5584266eb6
# Grab Go's cover tool for dead-simple code coverage testing
# Grab Go's vet tool for examining go code to find suspicious constructs
# and help prevent errors that the compiler might not catch

@@ -99,7 +105,7 @@ RUN git clone https://github.com/golang/tools.git /go/src/golang.org/x/tools \
    && go install -v golang.org/x/tools/cmd/cover \
    && go install -v golang.org/x/tools/cmd/vet
# Grab Go's lint tool
ENV GO_LINT_COMMIT f42f5c1c440621302702cb0741e9d2ca547ae80f
ENV GO_LINT_COMMIT 32a87160691b3c96046c0c678fe57c5bef761456
RUN git clone https://github.com/golang/lint.git /go/src/github.com/golang/lint \
    && (cd /go/src/github.com/golang/lint && git checkout -q $GO_LINT_COMMIT) \
    && go install -v github.com/golang/lint/golint

@@ -121,16 +127,16 @@ RUN set -x \
    go build -o /usr/local/bin/registry-v2-schema1 github.com/docker/distribution/cmd/registry \
    && rm -rf "$GOPATH"

# TODO update this when we upgrade to Go 1.5.1+

# Install notary server
#ENV NOTARY_VERSION docker-v1.10-5
#RUN set -x \
# && export GOPATH="$(mktemp -d)" \
# && git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
# && (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_VERSION") \
# && GOPATH="$GOPATH/src/github.com/docker/notary/Godeps/_workspace:$GOPATH" \
# go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
# && rm -rf "$GOPATH"
ENV NOTARY_VERSION v0.2.0
RUN set -x \
    && export GOPATH="$(mktemp -d)" \
    && git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
    && (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_VERSION") \
    && GOPATH="$GOPATH/src/github.com/docker/notary/Godeps/_workspace:$GOPATH" \
    go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
    && rm -rf "$GOPATH"

# Get the "docker-py" source so we can run their integration tests
ENV DOCKER_PY_COMMIT e2878cbcc3a7eef99917adc1be252800b0e41ece

@@ -11,14 +11,6 @@
# # Run the test suite:
# docker run --privileged docker hack/make.sh test
#
# # Publish a release:
# docker run --privileged \
# -e AWS_S3_BUCKET=baz \
# -e AWS_ACCESS_KEY=foo \
# -e AWS_SECRET_KEY=bar \
# -e GPG_PASSPHRASE=gloubiboulga \
# docker hack/release.sh
#
# Note: AppArmor used to mess with privileged mode, but this is no longer
# the case. Therefore, you don't have to disable it anymore.
#

@@ -116,7 +108,7 @@ RUN set -x \
    && rm -rf "$GOPATH"

# Install notary server
ENV NOTARY_VERSION docker-v1.10-5
ENV NOTARY_VERSION v0.2.0
RUN set -x \
    && export GOPATH="$(mktemp -d)" \
    && git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \

@@ -19,14 +19,7 @@
# Important notes:
# ---------------
#
# Multiple commands in a single powershell RUN command are deliberately not done. This is
# because PS doesn't have a concept quite like set -e in bash. It would be possible to use
# try-catch script blocks, but that would make this file unreadable. The problem is that
# if there are two commands eg "RUN powershell -command fail; succeed", as far as docker
# would be concerned, the return code from the overall RUN is succeed. This doesn't apply to
# RUN which uses cmd as the command interpreter such as "RUN fail; succeed".
#
# 'sleep 5' is a deliberate workaround for a current problem on containers in Windows
# 'Start-Sleep' is a deliberate workaround for a current problem on containers in Windows
# Server 2016. It ensures that the network is up and available for when the command is
# network related. This bug is being tracked internally at Microsoft and exists in TP4.
# Generally sleep 1 or 2 is probably enough, but making it 5 to make the build file

@@ -39,55 +32,70 @@
# Don't try to use a volume for passing the source through. The cygwin posix utilities will
# balk at reparse points. Again, see the example at the top of this file on how use a volume
# to get the built binary out of the container.
#
# The steps are minimised dramatically to improve performance (TP4 is slow on commit)

FROM windowsservercore

# Environment variable notes:
# - GOLANG_VERSION should be updated to be consistent with the Linux dockerfile.
# - GOLANG_VERSION must consistent with 'Dockerfile' used by Linux'.
# - FROM_DOCKERFILE is used for detection of building within a container.
ENV GOLANG_VERSION=1.5.3 \
    GIT_VERSION=2.7.0 \
ENV GOLANG_VERSION=1.6 \
    GIT_LOCATION=https://github.com/git-for-windows/git/releases/download/v2.7.2.windows.1/Git-2.7.2-64-bit.exe \
    RSRC_COMMIT=ba14da1f827188454a4591717fff29999010887f \
    GOPATH=C:/go;C:/go/src/github.com/docker/docker/vendor \
    FROM_DOCKERFILE=1

# Make sure we're in temp for the downloads
WORKDIR c:/windows/temp

# Download everything else we need to install
# We want a 64-bit make.exe, not 16 or 32-bit. This was hard to find, so documenting the links
# - http://sourceforge.net/p/mingw-w64/wiki2/Make/ -->
# - http://sourceforge.net/projects/mingw-w64/files/External%20binary%20packages%20%28Win64%20hosted%29/ -->
# - http://sourceforge.net/projects/mingw-w64/files/External binary packages %28Win64 hosted%29/make/
RUN powershell -command sleep 5; Invoke-WebRequest -UserAgent 'DockerCI' -outfile make.zip http://downloads.sourceforge.net/project/mingw-w64/External%20binary%20packages%20%28Win64%20hosted%29/make/make-3.82.90-20111115.zip
RUN powershell -command sleep 5; Invoke-WebRequest -UserAgent 'DockerCI' -outfile gcc.zip http://downloads.sourceforge.net/project/tdm-gcc/TDM-GCC%205%20series/5.1.0-tdm64-1/gcc-5.1.0-tdm64-1-core.zip
RUN powershell -command sleep 5; Invoke-WebRequest -UserAgent 'DockerCI' -outfile runtime.zip http://downloads.sourceforge.net/project/tdm-gcc/MinGW-w64%20runtime/GCC%205%20series/mingw64runtime-v4-git20150618-gcc5-tdm64-1.zip
RUN powershell -command sleep 5; Invoke-WebRequest -UserAgent 'DockerCI' -outfile binutils.zip http://downloads.sourceforge.net/project/tdm-gcc/GNU%20binutils/binutils-2.25-tdm64-1.zip
RUN powershell -command sleep 5; Invoke-WebRequest -UserAgent 'DockerCI' -outfile 7zsetup.exe http://www.7-zip.org/a/7z1514-x64.exe
RUN powershell -command sleep 5; Invoke-WebRequest -UserAgent 'DockerCI' -outfile lzma.7z http://www.7-zip.org/a/lzma1514.7z
RUN powershell -command sleep 5; Invoke-WebRequest -UserAgent 'DockerCI' -outfile gitsetup.exe https://github.com/git-for-windows/git/releases/download/v%GIT_VERSION%.windows.1/Git-%GIT_VERSION%-64-bit.exe
RUN powershell -command sleep 5; Invoke-WebRequest -UserAgent 'DockerCI' -outfile go.msi https://storage.googleapis.com/golang/go%GOLANG_VERSION%.windows-amd64.msi

# Path
RUN setx /M Path "c:\git\cmd;c:\git\bin;c:\git\usr\bin;%Path%;c:\gcc\bin;c:\7zip"

# Install and expand the bits we downloaded.
# Note: The git, 7z and go.msi installers execute asynchronously.
RUN powershell -command start-process .\gitsetup.exe -ArgumentList '/VERYSILENT /SUPPRESSMSGBOXES /CLOSEAPPLICATIONS /DIR=c:\git' -Wait
RUN powershell -command start-process .\7zsetup -ArgumentList '/S /D=c:/7zip' -Wait
RUN powershell -command start-process .\go.msi -ArgumentList '/quiet' -Wait
RUN powershell -command Expand-Archive gcc.zip \gcc -Force
RUN powershell -command Expand-Archive runtime.zip \gcc -Force
RUN powershell -command Expand-Archive binutils.zip \gcc -Force
RUN powershell -command 7z e lzma.7z bin/lzma.exe
RUN powershell -command 7z x make.zip make-3.82.90-20111115/bin_amd64/make.exe
RUN powershell -command mv make-3.82.90-20111115/bin_amd64/make.exe /gcc/bin/

# RSRC for manifest and icon
RUN powershell -command sleep 5 ; git clone https://github.com/akavel/rsrc.git c:\go\src\github.com\akavel\rsrc
RUN cd c:/go/src/github.com/akavel/rsrc && git checkout -q %RSRC_COMMIT% && go install -v

# Prepare for building
WORKDIR c:/

# Everything downloaded/installed in one go (better performance, esp on TP4)
RUN \
    setx /M Path "c:\git\cmd;c:\git\bin;c:\git\usr\bin;%Path%;c:\gcc\bin;c:\go\bin" && \
    setx GOROOT "c:\go" && \
    powershell -command \
        $ErrorActionPreference = 'Stop'; \
        Start-Sleep -Seconds 5; \
        Function Download-File([string] $source, [string] $target) { \
            $wc = New-Object net.webclient; $wc.Downloadfile($source, $target) \
        } \
        \
        Write-Host INFO: Downloading git...; \
        Download-File %GIT_LOCATION% gitsetup.exe; \
        \
        Write-Host INFO: Downloading go...; \
        Download-File https://storage.googleapis.com/golang/go%GOLANG_VERSION%.windows-amd64.msi go.msi; \
        \
        Write-Host INFO: Downloading compiler 1 of 3...; \
        Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/gcc.zip gcc.zip; \
        \
        Write-Host INFO: Downloading compiler 2 of 3...; \
        Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/runtime.zip runtime.zip; \
        \
        Write-Host INFO: Downloading compiler 3 of 3...; \
        Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/binutils.zip binutils.zip; \
        \
        Write-Host INFO: Installing git...; \
        Start-Process gitsetup.exe -ArgumentList '/VERYSILENT /SUPPRESSMSGBOXES /CLOSEAPPLICATIONS /DIR=c:\git\' -Wait; \
        \
        Write-Host INFO: Installing go..."; \
        Start-Process msiexec -ArgumentList '-i go.msi -quiet' -Wait; \
        \
        Write-Host INFO: Unzipping compiler...; \
        c:\git\usr\bin\unzip.exe -q -o gcc.zip -d /c/gcc; \
        c:\git\usr\bin\unzip.exe -q -o runtime.zip -d /c/gcc; \
        c:\git\usr\bin\unzip.exe -q -o binutils.zip -d /c/gcc"; \
        \
        Write-Host INFO: Removing interim files; \
        Remove-Item *.zip; \
        Remove-Item go.msi; \
        Remove-Item gitsetup.exe; \
        \
        Write-Host INFO: Cloning and installing RSRC; \
        c:\git\bin\git.exe clone https://github.com/akavel/rsrc.git c:\go\src\github.com\akavel\rsrc; \
        cd \go\src\github.com\akavel\rsrc; c:\git\bin\git.exe checkout -q %RSRC_COMMIT%; c:\go\bin\go.exe install -v; \
        \
        Write-Host INFO: Completed

# Prepare for building
COPY . /go/src/github.com/docker/docker

11  MAINTAINERS

@@ -26,6 +26,7 @@
# the release process is clear and up-to-date.

people = [
    "aaronlehmann",
    "calavera",
    "coolljt0725",
    "cpuguy83",

@@ -111,6 +112,11 @@

# ADD YOURSELF HERE IN ALPHABETICAL ORDER

[people.aaronlehmann]
Name = "Aaron Lehmann"
Email = "aaron.lehmann@docker.com"
GitHub = "aaronlehmann"

[people.calavera]
Name = "David Calavera"
Email = "david.calavera@gmail.com"

@@ -196,11 +202,6 @@
Email = "github@gone.nl"
GitHub = "thaJeztah"

[people.theadactyl]
Name = "Thea Lamkin"
Email = "thea@docker.com"
GitHub = "theadactyl"

[people.tianon]
Name = "Tianon Gravi"
Email = "admwiggin@gmail.com"

14  Makefile

@@ -54,6 +54,10 @@ DOCKER_ENVS := \
BIND_DIR := $(if $(BINDDIR),$(BINDDIR),$(if $(DOCKER_HOST),,bundles))
DOCKER_MOUNT := $(if $(BIND_DIR),-v "$(CURDIR)/$(BIND_DIR):/go/src/github.com/docker/docker/$(BIND_DIR)")

# This allows the test suite to be able to run without worrying about the underlying fs used by the container running the daemon (e.g. aufs-on-aufs), so long as the host running the container is running a supported fs.
# The volume will be cleaned up when the container is removed due to `--rm`.
# Note that `BIND_DIR` will already be set to `bundles` if `DOCKER_HOST` is not set (see above BIND_DIR line), in such case this will do nothing since `DOCKER_MOUNT` will already be set.
DOCKER_MOUNT := $(if $(DOCKER_MOUNT),$(DOCKER_MOUNT),-v "/go/src/github.com/docker/docker/bundles")

GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
DOCKER_IMAGE := docker-dev$(if $(GIT_BRANCH),:$(GIT_BRANCH))

@@ -80,6 +84,16 @@ binary: build
	$(DOCKER_RUN_DOCKER) hack/make.sh binary

build: bundles
ifeq ($(DOCKER_OSARCH), linux/arm)
	# A few libnetwork integration tests require that the kernel be
	# configured with "dummy" network interface and has the module
	# loaded. However, the dummy module is not available by default
	# on arm images. This ensures that it's built and loaded.
	echo "Syncing kernel modules"
	oc-sync-kernel-modules
	depmod
	modprobe dummy
endif
	docker build ${DOCKER_BUILD_ARGS} -t "$(DOCKER_IMAGE)" -f "$(DOCKERFILE)" .

bundles:

@@ -216,7 +216,7 @@ We are always open to suggestions on process improvements, and are always lookin
<td>Internet Relay Chat (IRC)</td>
<td>
<p>
IRC a direct line to our most knowledgeable Docker users; we have
IRC is a direct line to our most knowledgeable Docker users; we have
both the <code>#docker</code> and <code>#docker-dev</code> group on
<strong>irc.freenode.net</strong>.
IRC is a rich chat protocol but it can overwhelm new users. You can search

@@ -234,6 +234,8 @@ We are always open to suggestions on process improvements, and are always lookin
The <a href="https://groups.google.com/forum/#!forum/docker-dev" target="_blank">docker-dev</a>
group is for contributors and other people contributing to the Docker
project.
You can join them without an google account by sending an email to e.g. "docker-user+subscribe@googlegroups.com".
After receiving the join-request message, you can simply reply to that to confirm the subscribtion.
</td>
</tr>
<tr>

179  ROADMAP.md

@@ -33,97 +33,58 @@ won't be accepting pull requests adding or removing items from this file.

# 1. Features and refactoring

## 1.1 Security
## 1.1 Runtime improvements

Security is a top objective for the Docker Engine. The most notable items we intend to provide in
the near future are:
We recently introduced [`runC`](https://runc.io) as a standalone low-level tool for container
execution. The initial goal was to integrate runC as a replacement in the Engine for the traditional
default libcontainer `execdriver`, but the Engine internals were not ready for this.

- Trusted distribution of images: the effort is driven by the [distribution](https://github.com/docker/distribution)
group but will have significant impact on the Engine
- [User namespaces](https://github.com/docker/docker/pull/12648)
- [Seccomp support](https://github.com/docker/libcontainer/pull/613)
As runC continued evolving, and the OCI specification along with it, we created
[`containerd`](https://containerd.tools/), a daemon to control and monitor multiple `runC`. This is
the new target for Engine integration, as it can entirely replace the whole `execdriver`
architecture, and container monitoring along with it.

## 1.2 Plumbing project
Docker Engine will rely on a long-running `containerd` companion daemon for all container execution
related operations. This could open the door in the future for Engine restarts without interrupting
running containers.

We define a plumbing tool as a standalone piece of software usable and meaningful on its own. In
the current state of the Docker Engine, most subsystems provide independent functionalities (such
the builder, pushing and pulling images, running applications in a containerized environment, etc)
but all are coupled in a single binary. We want to offer the users to flexibility to use only the
pieces they need, and we will also gain in maintainability by splitting the project among multiple
repositories.
## 1.2 Plugins improvements

As it currently stands, the rough design outlines is to have:
- Low level plumbing tools, each dealing with one responsibility (e.g., [runC](https://runc.io))
- Docker subsystems services, each exposing an elementary concept over an API, and relying on one or
multiple lower level plumbing tools for their implementation (e.g., network management)
- Docker Engine to expose higher level actions (e.g., create a container with volume `V` and network
`N`), while still providing pass-through access to the individual subsystems.
Docker Engine 1.7.0 introduced plugin support, initially for the use cases of volumes and networks
extensions. The plugin infrastructure was kept minimal as we were collecting use cases and real
world feedback before optimizing for any particular workflow.

The architectural details are still being worked on, but one thing we know for sure is that we need
to technically decouple the pieces.
In the future, we'd like plugins to become first class citizens, and encourage an ecosystem of
plugins. This implies in particular making it trivially easy to distribute plugins as containers
through any Registry instance, as well as solving the commonly heard pain points of plugins needing
to be treated as somewhat special (being active at all time, started before any other user
containers, and not as easily dismissed).

### 1.2.1 Runtime
## 1.3 Internal decoupling

A Runtime tool already exists today in the form of [runC](https://github.com/opencontainers/runc).
We intend to modify the Engine to directly call out to a binary implementing the Open Containers
Specification such as runC rather than relying on libcontainer to set the container runtime up.
A lot of work has been done in trying to decouple the Docker Engine's internals. In particular, the
API implementation has been refactored and ongoing work is happening to move the code to a separate
repository ([`docker/engine-api`](https://github.com/docker/engine-api)), and the Builder side of
the daemon is now [fully independent](https://github.com/docker/docker/tree/master/builder) while
still residing in the same repository.

This plan will deprecate the existing [`execdriver`](https://github.com/docker/docker/tree/master/daemon/execdriver)
as different runtime backends will be implemented as separated binaries instead of being compiled
into the Engine.
We are exploring ways to go further with that decoupling, capitalizing on the work introduced by the
runtime renovation and plugins improvement efforts. Indeed, the combination of `containerd` support
with the concept of "special" containers opens the door for bootstrapping more Engine internals
using the same facilities.

### 1.2.2 Builder
## 1.4 Cluster capable Engine

The Builder (i.e., the ability to build an image from a Dockerfile) is already nicely decoupled,
but would benefit from being entirely separated from the Engine, and rely on the standard Engine
API for its operations.
The community has been pushing for a more cluster capable Docker Engine, and a huge effort was spent
adding features such as multihost networking, and node discovery down at the Engine level. Yet, the
Engine is currently incapable of taking scheduling decisions alone, and continues relying on Swarm
for that.

### 1.2.3 Distribution

Distribution already has a [dedicated repository](https://github.com/docker/distribution) which
holds the implementation for Registry v2 and client libraries. We could imagine going further by
having the Engine call out to a binary providing image distribution related functionalities.

There are two short term goals related to image distribution. The first is stabilize and simplify
the push/pull code. Following that is the conversion to the more secure Registry V2 protocol.

### 1.2.4 Networking

Most of networking related code was already decoupled today in [libnetwork](https://github.com/docker/libnetwork).
As with other ingredients, we might want to take it a step further and make it a meaningful utility
that the Engine would call out to instead of a library.

## 1.3 Plugins

An initiative around plugins started with Docker 1.7.0, with the goal of allowing for out of
process extensibility of some Docker functionalities, starting with volumes and networking. The
approach is to provide specific extension points rather than generic hooking facilities. We also
deliberately keep the extensions API the simplest possible, expanding as we discover valid use
cases that cannot be implemented.

At the time of writing:

- Plugin support is merged as an experimental feature: real world use cases and user feedback will
help us refine the UX to make the feature more user friendly.
- There are no immediate plans to expand on the number of pluggable subsystems.
- Golang 1.5 might add language support for [plugins](https://docs.google.com/document/d/1nr-TQHw_er6GOQRsF6T43GGhFDelrAP0NqSS_00RgZQ)
which we consider supporting as an alternative to JSON/HTTP.

## 1.4 Volume management

Volumes are not a first class citizen in the Engine today: we would like better volume management,
similar to the way network are managed in the new [CNM](https://github.com/docker/docker/issues/9983).

## 1.5 Better API implementation

The current Engine API is insufficiently typed, versioned, and ultimately hard to maintain. We
also suffer from the lack of a common implementation with [Swarm](https://github.com/docker/swarm).

## 1.6 Checkpoint/restore

Support for checkpoint/restore was [merged](https://github.com/docker/libcontainer/pull/479) in
[libcontainer](https://github.com/docker/libcontainer) and made available through [runC](https://runc.io):
we intend to take advantage of it in the Engine.
We plan to complete this effort and make Engine fully cluster capable. Multiple instances of the
Docker Engine being already capable of discovering each other and establish overlay networking for
their container to communicate, the next step is for a given Engine to gain ability to dispatch work
to another node in the cluster. This will be introduced in a backward compatible way, such that a
`docker run` invocation on a particular node remains fully deterministic.

# 2 Frozen features

@@ -139,45 +100,41 @@ The Dockerfile syntax as we know it is simple, and has proven successful in supp
definitive move, we temporarily won't accept more patches to the Dockerfile syntax for several
reasons:

- Long term impact of syntax changes is a sensitive matter that require an amount of attention
the volume of Engine codebase and activity today doesn't allow us to provide.
- Allowing the Builder to be implemented as a separate utility consuming the Engine's API will
open the door for many possibilities, such as offering alternate syntaxes or DSL for existing
languages without cluttering the Engine's codebase.
- A standalone Builder will also offer the opportunity for a better dedicated group of maintainers
to own the Dockerfile syntax and decide collectively on the direction to give it.
- Our experience with official images tend to show that no new instruction or syntax expansion is
*strictly* necessary for the majority of use cases, and although we are aware many things are still
lacking for many, we cannot make it a priority yet for the above reasons.
- Long term impact of syntax changes is a sensitive matter that require an amount of attention the
volume of Engine codebase and activity today doesn't allow us to provide.
- Allowing the Builder to be implemented as a separate utility consuming the Engine's API will
open the door for many possibilities, such as offering alternate syntaxes or DSL for existing
languages without cluttering the Engine's codebase.
- A standalone Builder will also offer the opportunity for a better dedicated group of maintainers
to own the Dockerfile syntax and decide collectively on the direction to give it.
- Our experience with official images tend to show that no new instruction or syntax expansion is
*strictly* necessary for the majority of use cases, and although we are aware many things are
still lacking for many, we cannot make it a priority yet for the above reasons.

Again, this is not about saying that the Dockerfile syntax is done, it's about making choices about
what we want to do first!

## 2.3 Remote Registry Operations

A large amount of work is ongoing in the area of image distribution and
provenance. This includes moving to the V2 Registry API and heavily
refactoring the code that powers these features. The desired result is more
secure, reliable and easier to use image distribution.
A large amount of work is ongoing in the area of image distribution and provenance. This includes
moving to the V2 Registry API and heavily refactoring the code that powers these features. The
desired result is more secure, reliable and easier to use image distribution.

Part of the problem with this part of the code base is the lack of a stable
and flexible interface. If new features are added that access the registry
without solidifying these interfaces, achieving feature parity will continue
to be elusive. While we get a handle on this situation, we are imposing a
moratorium on new code that accesses the Registry API in commands that don't
already make remote calls.
Part of the problem with this part of the code base is the lack of a stable and flexible interface.
If new features are added that access the registry without solidifying these interfaces, achieving
feature parity will continue to be elusive. While we get a handle on this situation, we are imposing
a moratorium on new code that accesses the Registry API in commands that don't already make remote
calls.

Currently, only the following commands cause interaction with a remote
registry:
Currently, only the following commands cause interaction with a remote registry:

- push
- pull
- run
- build
- search
- login
- push
- pull
- run
- build
- search
- login

In the interest of stabilizing the registry access model during this ongoing
work, we are not accepting additions to other commands that will cause remote
interaction with the Registry API. This moratorium will lift when the goals of
the distribution project have been met.
In the interest of stabilizing the registry access model during this ongoing work, we are not
accepting additions to other commands that will cause remote interaction with the Registry API. This
moratorium will lift when the goals of the distribution project have been met.
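
The plugins discussion in the ROADMAP text above notes that the volume and network extension points deliberately speak a simple JSON-over-HTTP protocol. As a purely illustrative aside (not part of this commit), here is a minimal Go sketch of what a plugin's activation handshake looks like under that model; the endpoint name, content type, and payload follow the volume-plugin convention of that era and should be treated as assumptions rather than a definitive implementation.

```go
// Hypothetical, minimal sketch of the JSON-over-HTTP plugin handshake: the
// daemon POSTs to /Plugin.Activate and the plugin answers with the subsystems
// it implements. Endpoint name and payload shape are assumptions based on the
// volume plugin convention of the time.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// Activation handshake: tell the daemon which extension points we serve.
	mux.HandleFunc("/Plugin.Activate", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/vnd.docker.plugins.v1+json")
		json.NewEncoder(w).Encode(map[string][]string{
			"Implements": {"VolumeDriver"},
		})
	})

	// A real driver would also answer /VolumeDriver.Create, .Mount, .Path, etc.
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", mux))
}
```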

@@ -41,12 +41,6 @@ func (cli *DockerCli) CmdAttach(args ...string) error {
		return err
	}

	if c.Config.Tty && cli.isTerminalOut {
		if err := cli.monitorTtySize(cmd.Arg(0), false); err != nil {
			logrus.Debugf("Error monitoring TTY size: %s", err)
		}
	}

	if *detachKeys != "" {
		cli.configFile.DetachKeys = *detachKeys
	}

@@ -82,6 +76,21 @@ func (cli *DockerCli) CmdAttach(args ...string) error {
		defer cli.restoreTerminal(in)
	}

	if c.Config.Tty && cli.isTerminalOut {
		height, width := cli.getTtySize()
		// To handle the case where a user repeatedly attaches/detaches without resizing their
		// terminal, the only way to get the shell prompt to display for attaches 2+ is to artificially
		// resize it, then go back to normal. Without this, every attach after the first will
		// require the user to manually resize or hit enter.
		cli.resizeTtyTo(cmd.Arg(0), height+1, width+1, false)

		// After the above resizing occurs, the call to monitorTtySize below will handle resetting back
		// to the actual size.
		if err := cli.monitorTtySize(cmd.Arg(0), false); err != nil {
			logrus.Debugf("Error monitoring TTY size: %s", err)
		}
	}

	if err := cli.holdHijackedConnection(c.Config.Tty, in, cli.out, cli.err, resp); err != nil {
		return err
	}
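
The new attach code above works around a redraw problem by resizing the container's TTY to one row and column larger than the real terminal, then letting the size monitor set it back. A stripped-down sketch of that idea follows, with hypothetical `currentTtySize` and `resizeRemoteTty` helpers standing in for the CLI's `getTtySize` and `resizeTtyTo`; it illustrates the trick only, not the actual client plumbing.

```go
// Illustrative sketch of the "nudge the TTY size, then restore it" trick used
// above to force the remote shell to redraw its prompt on re-attach. The
// helpers below are hypothetical stand-ins, not the docker CLI's functions.
package main

import "fmt"

// currentTtySize would normally query the local terminal (e.g. via TIOCGWINSZ).
func currentTtySize() (height, width int) { return 24, 80 }

// resizeRemoteTty would normally call the daemon's container resize endpoint.
func resizeRemoteTty(containerID string, height, width int) {
	fmt.Printf("resize %s to %dx%d\n", containerID, height, width)
}

func main() {
	const containerID = "example"
	h, w := currentTtySize()

	// Grow the remote TTY by one in each dimension so the shell sees a size
	// change and prints a fresh prompt...
	resizeRemoteTty(containerID, h+1, w+1)

	// ...then restore the real size (in the CLI this is done by the
	// SIGWINCH-driven size monitor rather than an explicit second call).
	resizeRemoteTty(containerID, h, w)
}
```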
@ -6,25 +6,20 @@ import (
|
|||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"strings"
|
||||
|
||||
"golang.org/x/net/context"
|
||||
|
||||
"github.com/docker/docker/api"
|
||||
"github.com/docker/docker/builder"
|
||||
"github.com/docker/docker/builder/dockerignore"
|
||||
Cli "github.com/docker/docker/cli"
|
||||
"github.com/docker/docker/opts"
|
||||
"github.com/docker/docker/pkg/archive"
|
||||
"github.com/docker/docker/pkg/fileutils"
|
||||
"github.com/docker/docker/pkg/gitutils"
|
||||
"github.com/docker/docker/pkg/httputils"
|
||||
"github.com/docker/docker/pkg/ioutils"
|
||||
"github.com/docker/docker/pkg/jsonmessage"
|
||||
flag "github.com/docker/docker/pkg/mflag"
|
||||
"github.com/docker/docker/pkg/progress"
|
||||
|
@ -65,7 +60,7 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
|
|||
flCgroupParent := cmd.String([]string{"-cgroup-parent"}, "", "Optional parent cgroup for the container")
|
||||
flBuildArg := opts.NewListOpts(runconfigopts.ValidateEnv)
|
||||
cmd.Var(&flBuildArg, []string{"-build-arg"}, "Set build-time variables")
|
||||
isolation := cmd.String([]string{"-isolation"}, "", "Container isolation level")
|
||||
isolation := cmd.String([]string{"-isolation"}, "", "Container isolation technology")
|
||||
|
||||
ulimits := make(map[string]*units.Ulimit)
|
||||
flUlimits := runconfigopts.NewUlimitOpt(&ulimits)
|
||||
|
@ -102,13 +97,13 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
|
|||
|
||||
switch {
|
||||
case specifiedContext == "-":
|
||||
ctx, relDockerfile, err = getContextFromReader(cli.in, *dockerfileName)
|
||||
ctx, relDockerfile, err = builder.GetContextFromReader(cli.in, *dockerfileName)
|
||||
case urlutil.IsGitURL(specifiedContext):
|
||||
tempDir, relDockerfile, err = getContextFromGitURL(specifiedContext, *dockerfileName)
|
||||
tempDir, relDockerfile, err = builder.GetContextFromGitURL(specifiedContext, *dockerfileName)
|
||||
case urlutil.IsURL(specifiedContext):
|
||||
ctx, relDockerfile, err = getContextFromURL(progBuff, specifiedContext, *dockerfileName)
|
||||
ctx, relDockerfile, err = builder.GetContextFromURL(progBuff, specifiedContext, *dockerfileName)
|
||||
default:
|
||||
contextDir, relDockerfile, err = getContextFromLocalDir(specifiedContext, *dockerfileName)
|
||||
contextDir, relDockerfile, err = builder.GetContextFromLocalDir(specifiedContext, *dockerfileName)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
|
@ -143,7 +138,7 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
|
|||
}
|
||||
}
|
||||
|
||||
if err := validateContextDirectory(contextDir, excludes); err != nil {
|
||||
if err := builder.ValidateContextDirectory(contextDir, excludes); err != nil {
|
||||
return fmt.Errorf("Error checking context: '%s'.", err)
|
||||
}
|
||||
|
||||
|
@ -223,7 +218,7 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
|
|||
Remove: *rm,
|
||||
ForceRemove: *forceRm,
|
||||
PullParent: *pull,
|
||||
IsolationLevel: container.IsolationLevel(*isolation),
|
||||
Isolation: container.Isolation(*isolation),
|
||||
CPUSetCPUs: *flCPUSetCpus,
|
||||
CPUSetMems: *flCPUSetMems,
|
||||
CPUShares: *flCPUShares,
|
||||
|
@ -234,7 +229,7 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
|
|||
ShmSize: shmSize,
|
||||
Ulimits: flUlimits.GetList(),
|
||||
BuildArgs: runconfigopts.ConvertKVStringsToMap(flBuildArg.GetAll()),
|
||||
AuthConfigs: cli.configFile.AuthConfigs,
|
||||
AuthConfigs: cli.retrieveAuthConfigs(),
|
||||
}
|
||||
|
||||
response, err := cli.client.ImageBuild(context.Background(), options)
|
||||
|
@ -281,54 +276,6 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
// validateContextDirectory checks if all the contents of the directory
|
||||
// can be read and returns an error if some files can't be read
|
||||
// symlinks which point to non-existing files don't trigger an error
|
||||
func validateContextDirectory(srcPath string, excludes []string) error {
|
||||
contextRoot, err := getContextRoot(srcPath)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return filepath.Walk(contextRoot, func(filePath string, f os.FileInfo, err error) error {
|
||||
// skip this directory/file if it's not in the path, it won't get added to the context
|
||||
if relFilePath, err := filepath.Rel(contextRoot, filePath); err != nil {
|
||||
return err
|
||||
} else if skip, err := fileutils.Matches(relFilePath, excludes); err != nil {
|
||||
return err
|
||||
} else if skip {
|
||||
if f.IsDir() {
|
||||
return filepath.SkipDir
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
if os.IsPermission(err) {
|
||||
return fmt.Errorf("can't stat '%s'", filePath)
|
||||
}
|
||||
if os.IsNotExist(err) {
|
||||
return nil
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// skip checking if symlinks point to non-existing files, such symlinks can be useful
|
||||
// also skip named pipes, because they hanging on open
|
||||
if f.Mode()&(os.ModeSymlink|os.ModeNamedPipe) != 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
if !f.IsDir() {
|
||||
currentFile, err := os.Open(filePath)
|
||||
if err != nil && os.IsPermission(err) {
|
||||
return fmt.Errorf("no permission to read from '%s'", filePath)
|
||||
}
|
||||
currentFile.Close()
|
||||
}
|
||||
return nil
|
||||
})
|
||||
}
|
||||
|
||||
// validateTag checks if the given image name can be resolved.
|
||||
func validateTag(rawRepo string) (string, error) {
|
||||
_, err := reference.ParseNamed(rawRepo)
|
||||
|
@ -339,96 +286,6 @@ func validateTag(rawRepo string) (string, error) {
|
|||
return rawRepo, nil
|
||||
}
|
||||
|
||||
// isUNC returns true if the path is UNC (one starting \\). It always returns
|
||||
// false on Linux.
|
||||
func isUNC(path string) bool {
|
||||
return runtime.GOOS == "windows" && strings.HasPrefix(path, `\\`)
|
||||
}
|
||||
|
||||
// getDockerfileRelPath uses the given context directory for a `docker build`
|
||||
// and returns the absolute path to the context directory, the relative path of
|
||||
// the dockerfile in that context directory, and a non-nil error on success.
|
||||
func getDockerfileRelPath(givenContextDir, givenDockerfile string) (absContextDir, relDockerfile string, err error) {
|
||||
if absContextDir, err = filepath.Abs(givenContextDir); err != nil {
|
||||
return "", "", fmt.Errorf("unable to get absolute context directory: %v", err)
|
||||
}
|
||||
|
||||
// The context dir might be a symbolic link, so follow it to the actual
|
||||
// target directory.
|
||||
//
|
||||
// FIXME. We use isUNC (always false on non-Windows platforms) to workaround
|
||||
// an issue in golang. On Windows, EvalSymLinks does not work on UNC file
|
||||
// paths (those starting with \\). This hack means that when using links
|
||||
// on UNC paths, they will not be followed.
|
||||
if !isUNC(absContextDir) {
|
||||
absContextDir, err = filepath.EvalSymlinks(absContextDir)
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("unable to evaluate symlinks in context path: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
stat, err := os.Lstat(absContextDir)
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("unable to stat context directory %q: %v", absContextDir, err)
|
||||
}
|
||||
|
||||
if !stat.IsDir() {
|
||||
return "", "", fmt.Errorf("context must be a directory: %s", absContextDir)
|
||||
}
|
||||
|
||||
absDockerfile := givenDockerfile
|
||||
if absDockerfile == "" {
|
||||
// No -f/--file was specified so use the default relative to the
|
||||
// context directory.
|
||||
absDockerfile = filepath.Join(absContextDir, api.DefaultDockerfileName)
|
||||
|
||||
// Just to be nice ;-) look for 'dockerfile' too but only
|
||||
// use it if we found it, otherwise ignore this check
|
||||
if _, err = os.Lstat(absDockerfile); os.IsNotExist(err) {
|
||||
altPath := filepath.Join(absContextDir, strings.ToLower(api.DefaultDockerfileName))
|
||||
if _, err = os.Lstat(altPath); err == nil {
|
||||
absDockerfile = altPath
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// If not already an absolute path, the Dockerfile path should be joined to
|
||||
// the base directory.
|
||||
if !filepath.IsAbs(absDockerfile) {
|
||||
absDockerfile = filepath.Join(absContextDir, absDockerfile)
|
||||
}
|
||||
|
||||
// Evaluate symlinks in the path to the Dockerfile too.
|
||||
//
|
||||
// FIXME. We use isUNC (always false on non-Windows platforms) to workaround
|
||||
// an issue in golang. On Windows, EvalSymLinks does not work on UNC file
|
||||
// paths (those starting with \\). This hack means that when using links
|
||||
// on UNC paths, they will not be followed.
|
||||
if !isUNC(absDockerfile) {
|
||||
absDockerfile, err = filepath.EvalSymlinks(absDockerfile)
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("unable to evaluate symlinks in Dockerfile path: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
if _, err := os.Lstat(absDockerfile); err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
return "", "", fmt.Errorf("Cannot locate Dockerfile: %q", absDockerfile)
|
||||
}
|
||||
return "", "", fmt.Errorf("unable to stat Dockerfile: %v", err)
|
||||
}
|
||||
|
||||
if relDockerfile, err = filepath.Rel(absContextDir, absDockerfile); err != nil {
|
||||
return "", "", fmt.Errorf("unable to get relative Dockerfile path: %v", err)
|
||||
}
|
||||
|
||||
if strings.HasPrefix(relDockerfile, ".."+string(filepath.Separator)) {
|
||||
return "", "", fmt.Errorf("The Dockerfile (%s) must be within the build context (%s)", givenDockerfile, givenContextDir)
|
||||
}
|
||||
|
||||
return absContextDir, relDockerfile, nil
|
||||
}
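The final `filepath.Rel` check above is what rejects a `-f` Dockerfile that lives outside the build context. The helper below is a minimal, self-contained sketch of that check in isolation; the function name `isOutsideContext` is ours, not part of the CLI:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// isOutsideContext reports whether dockerfile does not live under contextDir,
// using the same ".." + separator prefix test as getDockerfileRelPath.
func isOutsideContext(contextDir, dockerfile string) (bool, error) {
	absContext, err := filepath.Abs(contextDir)
	if err != nil {
		return false, err
	}
	absDockerfile, err := filepath.Abs(dockerfile)
	if err != nil {
		return false, err
	}
	rel, err := filepath.Rel(absContext, absDockerfile)
	if err != nil {
		return false, err
	}
	return strings.HasPrefix(rel, ".."+string(filepath.Separator)), nil
}

func main() {
	outside, _ := isOutsideContext("/src/app", "/src/other/Dockerfile")
	fmt.Println(outside) // true: the Dockerfile is not inside the build context
}
```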
|
||||
|
||||
// writeToFile copies from the given reader and writes it to a file with the
|
||||
// given filename.
|
||||
func writeToFile(r io.Reader, filename string) error {
|
||||
|
@ -445,107 +302,6 @@ func writeToFile(r io.Reader, filename string) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
// getContextFromReader will read the contents of the given reader as either a
|
||||
// Dockerfile or tar archive. Returns a tar archive used as a context and a
|
||||
// path to the Dockerfile inside the tar.
|
||||
func getContextFromReader(r io.ReadCloser, dockerfileName string) (out io.ReadCloser, relDockerfile string, err error) {
|
||||
buf := bufio.NewReader(r)
|
||||
|
||||
magic, err := buf.Peek(archive.HeaderSize)
|
||||
if err != nil && err != io.EOF {
|
||||
return nil, "", fmt.Errorf("failed to peek context header from STDIN: %v", err)
|
||||
}
|
||||
|
||||
if archive.IsArchive(magic) {
|
||||
return ioutils.NewReadCloserWrapper(buf, func() error { return r.Close() }), dockerfileName, nil
|
||||
}
|
||||
|
||||
// Input should be read as a Dockerfile.
|
||||
tmpDir, err := ioutil.TempDir("", "docker-build-context-")
|
||||
if err != nil {
|
||||
return nil, "", fmt.Errorf("unbale to create temporary context directory: %v", err)
|
||||
}
|
||||
|
||||
f, err := os.Create(filepath.Join(tmpDir, api.DefaultDockerfileName))
|
||||
if err != nil {
|
||||
return nil, "", err
|
||||
}
|
||||
_, err = io.Copy(f, buf)
|
||||
if err != nil {
|
||||
f.Close()
|
||||
return nil, "", err
|
||||
}
|
||||
|
||||
if err := f.Close(); err != nil {
|
||||
return nil, "", err
|
||||
}
|
||||
if err := r.Close(); err != nil {
|
||||
return nil, "", err
|
||||
}
|
||||
|
||||
tar, err := archive.Tar(tmpDir, archive.Uncompressed)
|
||||
if err != nil {
|
||||
return nil, "", err
|
||||
}
|
||||
|
||||
return ioutils.NewReadCloserWrapper(tar, func() error {
|
||||
err := tar.Close()
|
||||
os.RemoveAll(tmpDir)
|
||||
return err
|
||||
}), api.DefaultDockerfileName, nil
|
||||
|
||||
}
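The tar-vs-Dockerfile decision above hinges on peeking at the stream's leading bytes without consuming them. Below is a simplified, self-contained sketch of that idiom; it only checks the gzip magic bytes as a stand-in for the full `archive.IsArchive` detection, so treat it as an illustration rather than the real helper:

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
	"strings"
)

// looksLikeArchive is a simplified stand-in for archive.IsArchive: it only
// recognizes the gzip magic bytes. The real helper also knows bzip2, xz, and tar.
func looksLikeArchive(magic []byte) bool {
	return bytes.HasPrefix(magic, []byte{0x1f, 0x8b})
}

func sniff(r io.Reader) (string, error) {
	buf := bufio.NewReader(r)
	// Peek does not consume the bytes, so the full stream can still be read
	// afterwards through buf, exactly as getContextFromReader does.
	magic, err := buf.Peek(2)
	if err != nil && err != io.EOF {
		return "", err
	}
	if looksLikeArchive(magic) {
		return "archive", nil
	}
	return "dockerfile", nil
}

func main() {
	kind, _ := sniff(strings.NewReader("FROM busybox\n"))
	fmt.Println(kind) // dockerfile
}
```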
|
||||
|
||||
// getContextFromGitURL uses a Git URL as context for a `docker build`. The
|
||||
// git repo is cloned into a temporary directory used as the context directory.
|
||||
// Returns the absolute path to the temporary context directory, the relative
// path of the dockerfile in that context directory, and a nil error on
// success.
|
||||
func getContextFromGitURL(gitURL, dockerfileName string) (absContextDir, relDockerfile string, err error) {
|
||||
if _, err := exec.LookPath("git"); err != nil {
|
||||
return "", "", fmt.Errorf("unable to find 'git': %v", err)
|
||||
}
|
||||
if absContextDir, err = gitutils.Clone(gitURL); err != nil {
|
||||
return "", "", fmt.Errorf("unable to 'git clone' to temporary context directory: %v", err)
|
||||
}
|
||||
|
||||
return getDockerfileRelPath(absContextDir, dockerfileName)
|
||||
}
|
||||
|
||||
// getContextFromURL uses a remote URL as context for a `docker build`. The
|
||||
// remote resource is downloaded as either a Dockerfile or a tar archive.
|
||||
// Returns the tar archive used for the context and a path of the
|
||||
// dockerfile inside the tar.
|
||||
func getContextFromURL(out io.Writer, remoteURL, dockerfileName string) (io.ReadCloser, string, error) {
|
||||
response, err := httputils.Download(remoteURL)
|
||||
if err != nil {
|
||||
return nil, "", fmt.Errorf("unable to download remote context %s: %v", remoteURL, err)
|
||||
}
|
||||
progressOutput := streamformatter.NewStreamFormatter().NewProgressOutput(out, true)
|
||||
|
||||
// Pass the response body through a progress reader.
|
||||
progReader := progress.NewProgressReader(response.Body, progressOutput, response.ContentLength, "", fmt.Sprintf("Downloading build context from remote url: %s", remoteURL))
|
||||
|
||||
return getContextFromReader(ioutils.NewReadCloserWrapper(progReader, func() error { return response.Body.Close() }), dockerfileName)
|
||||
}
|
||||
|
||||
// getContextFromLocalDir uses the given local directory as context for a
|
||||
// `docker build`. Returns the absolute path to the local context directory,
|
||||
// the relative path of the dockerfile in that context directory, and a nil
// error on success.
|
||||
func getContextFromLocalDir(localDir, dockerfileName string) (absContextDir, relDockerfile string, err error) {
|
||||
// When using a local context directory, when the Dockerfile is specified
|
||||
// with the `-f/--file` option then it is considered relative to the
|
||||
// current directory and not the context directory.
|
||||
if dockerfileName != "" {
|
||||
if dockerfileName, err = filepath.Abs(dockerfileName); err != nil {
|
||||
return "", "", fmt.Errorf("unable to get absolute path to Dockerfile: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
return getDockerfileRelPath(localDir, dockerfileName)
|
||||
}
|
||||
|
||||
var dockerfileFromLinePattern = regexp.MustCompile(`(?i)^[\s]*FROM[ \f\r\t\v]+(?P<image>[^ \f\r\t\v\n#]+)`)
|
||||
|
||||
// resolvedTag records the repository, tag, and resolved digest reference
|
||||
|
|
|
@ -11,6 +11,7 @@ import (
|
|||
"github.com/docker/docker/api"
|
||||
"github.com/docker/docker/cli"
|
||||
"github.com/docker/docker/cliconfig"
|
||||
"github.com/docker/docker/cliconfig/credentials"
|
||||
"github.com/docker/docker/dockerversion"
|
||||
"github.com/docker/docker/opts"
|
||||
"github.com/docker/docker/pkg/term"
|
||||
|
@ -125,6 +126,9 @@ func NewDockerCli(in io.ReadCloser, out, err io.Writer, clientFlags *cli.ClientF
|
|||
if e != nil {
|
||||
fmt.Fprintf(cli.err, "WARNING: Error loading config file:%v\n", e)
|
||||
}
|
||||
if !configFile.ContainsAuth() {
|
||||
credentials.DetectDefaultStore(configFile)
|
||||
}
|
||||
cli.configFile = configFile
|
||||
|
||||
host, err := getServerHost(clientFlags.Common.Hosts, clientFlags.Common.TLSOptions)
|
||||
|
|
|
@ -42,7 +42,7 @@ func (cli *DockerCli) pullImageCustomOut(image string, out io.Writer) error {
|
|||
return err
|
||||
}
|
||||
|
||||
authConfig := cli.resolveAuthConfig(cli.configFile.AuthConfigs, repoInfo.Index)
|
||||
authConfig := cli.resolveAuthConfig(repoInfo.Index)
|
||||
encodedAuth, err := encodeAuthToBase64(authConfig)
|
||||
if err != nil {
|
||||
return err
|
||||
|
|
|
@ -6,10 +6,12 @@ import (
|
|||
"io"
|
||||
"sort"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"golang.org/x/net/context"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
Cli "github.com/docker/docker/cli"
|
||||
"github.com/docker/docker/opts"
|
||||
"github.com/docker/docker/pkg/jsonlog"
|
||||
|
@ -115,3 +117,30 @@ func printOutput(event eventtypes.Message, output io.Writer) {
|
|||
}
|
||||
fmt.Fprint(output, "\n")
|
||||
}
|
||||
|
||||
type eventHandler struct {
|
||||
handlers map[string]func(eventtypes.Message)
|
||||
mu sync.Mutex
|
||||
}
|
||||
|
||||
func (w *eventHandler) Handle(action string, h func(eventtypes.Message)) {
|
||||
w.mu.Lock()
|
||||
w.handlers[action] = h
|
||||
w.mu.Unlock()
|
||||
}
|
||||
|
||||
// Watch ranges over the passed in event chan and processes the events based on the
|
||||
// handlers created for a given action.
|
||||
// To stop watching, close the event chan.
|
||||
func (w *eventHandler) Watch(c <-chan eventtypes.Message) {
|
||||
for e := range c {
|
||||
w.mu.Lock()
|
||||
h, exists := w.handlers[e.Action]
|
||||
w.mu.Unlock()
|
||||
if !exists {
|
||||
continue
|
||||
}
|
||||
logrus.Debugf("event handler: received event: %v", e)
|
||||
go h(e)
|
||||
}
|
||||
}
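For reference, here is a self-contained sketch of the same handler-map pattern; `eventtypes.Message` is replaced by a local `message` struct so the example compiles on its own, and the behaviour (unknown actions skipped, known ones dispatched on their own goroutine) mirrors `Watch` above:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// message is a local stand-in for eventtypes.Message so the sketch compiles
// without the engine-api dependency.
type message struct {
	ID     string
	Action string
}

type eventHandler struct {
	handlers map[string]func(message)
	mu       sync.Mutex
}

func (w *eventHandler) Handle(action string, h func(message)) {
	w.mu.Lock()
	w.handlers[action] = h
	w.mu.Unlock()
}

// Watch mirrors the loop above: unknown actions are skipped, known ones are
// dispatched on their own goroutine. Closing the channel stops the loop.
func (w *eventHandler) Watch(c <-chan message) {
	for e := range c {
		w.mu.Lock()
		h, exists := w.handlers[e.Action]
		w.mu.Unlock()
		if !exists {
			continue
		}
		go h(e)
	}
}

func main() {
	eh := &eventHandler{handlers: make(map[string]func(message))}
	eh.Handle("start", func(e message) { fmt.Println("started:", e.ID) })

	c := make(chan message)
	go eh.Watch(c)
	c <- message{ID: "abc123", Action: "start"}
	c <- message{ID: "def456", Action: "pause"} // no handler registered: ignored
	close(c)
	time.Sleep(100 * time.Millisecond) // give the dispatched goroutine time to print
}
```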
|
||||
|
|
|
@ -31,6 +31,7 @@ const (
|
|||
repositoryHeader = "REPOSITORY"
|
||||
tagHeader = "TAG"
|
||||
digestHeader = "DIGEST"
|
||||
mountsHeader = "MOUNTS"
|
||||
)
|
||||
|
||||
type containerContext struct {
|
||||
|
@ -142,6 +143,20 @@ func (c *containerContext) Label(name string) string {
|
|||
return c.c.Labels[name]
|
||||
}
|
||||
|
||||
func (c *containerContext) Mounts() string {
|
||||
c.addHeader(mountsHeader)
|
||||
|
||||
var mounts []string
|
||||
for _, m := range c.c.Mounts {
|
||||
name := m.Name
|
||||
if c.trunc {
|
||||
name = stringutils.Truncate(name, 15)
|
||||
}
|
||||
mounts = append(mounts, name)
|
||||
}
|
||||
return strings.Join(mounts, ",")
|
||||
}
|
||||
|
||||
type imageContext struct {
|
||||
baseSubContext
|
||||
trunc bool
|
||||
|
|
|
@ -12,7 +12,7 @@ import (
|
|||
|
||||
func TestContainerPsContext(t *testing.T) {
|
||||
containerID := stringid.GenerateRandomID()
|
||||
unix := time.Now().Unix()
|
||||
unix := time.Now().Add(-65 * time.Second).Unix()
|
||||
|
||||
var ctx containerContext
|
||||
cases := []struct {
|
||||
|
@ -55,7 +55,7 @@ func TestContainerPsContext(t *testing.T) {
|
|||
{types.Container{SizeRw: 10, SizeRootFs: 20}, true, "10 B (virtual 20 B)", sizeHeader, ctx.Size},
|
||||
{types.Container{}, true, "", labelsHeader, ctx.Labels},
|
||||
{types.Container{Labels: map[string]string{"cpu": "6", "storage": "ssd"}}, true, "cpu=6,storage=ssd", labelsHeader, ctx.Labels},
|
||||
{types.Container{Created: unix}, true, "Less than a second", runningForHeader, ctx.RunningFor},
|
||||
{types.Container{Created: unix}, true, "About a minute", runningForHeader, ctx.RunningFor},
|
||||
}
|
||||
|
||||
for _, c := range cases {
|
||||
|
|
|
@ -50,6 +50,7 @@ func (cli *DockerCli) CmdInfo(args ...string) error {
|
|||
}
|
||||
ioutils.FprintfIfNotEmpty(cli.out, "Execution Driver: %s\n", info.ExecutionDriver)
|
||||
ioutils.FprintfIfNotEmpty(cli.out, "Logging Driver: %s\n", info.LoggingDriver)
|
||||
ioutils.FprintfIfNotEmpty(cli.out, "Cgroup Driver: %s\n", info.CgroupDriver)
|
||||
|
||||
fmt.Fprintf(cli.out, "Plugins: \n")
|
||||
fmt.Fprintf(cli.out, " Volume:")
|
||||
|
@ -73,7 +74,7 @@ func (cli *DockerCli) CmdInfo(args ...string) error {
|
|||
fmt.Fprintf(cli.out, "Total Memory: %s\n", units.BytesSize(float64(info.MemTotal)))
|
||||
ioutils.FprintfIfNotEmpty(cli.out, "Name: %s\n", info.Name)
|
||||
ioutils.FprintfIfNotEmpty(cli.out, "ID: %s\n", info.ID)
|
||||
|
||||
fmt.Fprintf(cli.out, "Docker Root Dir: %s\n", info.DockerRootDir)
|
||||
fmt.Fprintf(cli.out, "Debug mode (client): %v\n", utils.IsDebugEnabled())
|
||||
fmt.Fprintf(cli.out, "Debug mode (server): %v\n", info.Debug)
|
||||
|
||||
|
@ -82,7 +83,6 @@ func (cli *DockerCli) CmdInfo(args ...string) error {
|
|||
fmt.Fprintf(cli.out, " Goroutines: %d\n", info.NGoroutines)
|
||||
fmt.Fprintf(cli.out, " System Time: %s\n", info.SystemTime)
|
||||
fmt.Fprintf(cli.out, " EventsListeners: %d\n", info.NEventsListener)
|
||||
fmt.Fprintf(cli.out, " Docker Root Dir: %s\n", info.DockerRootDir)
|
||||
}
|
||||
|
||||
ioutils.FprintfIfNotEmpty(cli.out, "Http Proxy: %s\n", info.HTTPProxy)
|
||||
|
@ -105,6 +105,9 @@ func (cli *DockerCli) CmdInfo(args ...string) error {
|
|||
if !info.SwapLimit {
|
||||
fmt.Fprintln(cli.err, "WARNING: No swap limit support")
|
||||
}
|
||||
if !info.KernelMemory {
|
||||
fmt.Fprintln(cli.err, "WARNING: No kernel memory limit support")
|
||||
}
|
||||
if !info.OomKillDisable {
|
||||
fmt.Fprintln(cli.err, "WARNING: No oom kill disable support")
|
||||
}
|
||||
|
|
|
@ -9,13 +9,14 @@ import (
|
|||
"strings"
|
||||
|
||||
Cli "github.com/docker/docker/cli"
|
||||
"github.com/docker/docker/cliconfig"
|
||||
"github.com/docker/docker/cliconfig/credentials"
|
||||
flag "github.com/docker/docker/pkg/mflag"
|
||||
"github.com/docker/docker/pkg/term"
|
||||
"github.com/docker/engine-api/client"
|
||||
"github.com/docker/engine-api/types"
|
||||
)
|
||||
|
||||
// CmdLogin logs in or registers a user to a Docker registry service.
|
||||
// CmdLogin logs in a user to a Docker registry service.
|
||||
//
|
||||
// If no server is specified, the user will be logged into the registry's index server.
|
||||
//
|
||||
|
@ -26,7 +27,9 @@ func (cli *DockerCli) CmdLogin(args ...string) error {
|
|||
|
||||
flUser := cmd.String([]string{"u", "-username"}, "", "Username")
|
||||
flPassword := cmd.String([]string{"p", "-password"}, "", "Password")
|
||||
flEmail := cmd.String([]string{"e", "-email"}, "", "Email")
|
||||
|
||||
// Deprecated in 1.11: Should be removed in docker 1.13
|
||||
cmd.String([]string{"#e", "#-email"}, "", "Email")
|
||||
|
||||
cmd.ParseFlags(args, true)
|
||||
|
||||
|
@ -36,32 +39,27 @@ func (cli *DockerCli) CmdLogin(args ...string) error {
|
|||
}
|
||||
|
||||
var serverAddress string
|
||||
var isDefaultRegistry bool
|
||||
if len(cmd.Args()) > 0 {
|
||||
serverAddress = cmd.Arg(0)
|
||||
} else {
|
||||
serverAddress = cli.electAuthServer()
|
||||
isDefaultRegistry = true
|
||||
}
|
||||
|
||||
authConfig, err := cli.configureAuth(*flUser, *flPassword, *flEmail, serverAddress)
|
||||
authConfig, err := cli.configureAuth(*flUser, *flPassword, serverAddress, isDefaultRegistry)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
response, err := cli.client.RegistryLogin(authConfig)
|
||||
if err != nil {
|
||||
if client.IsErrUnauthorized(err) {
|
||||
delete(cli.configFile.AuthConfigs, serverAddress)
|
||||
if err2 := cli.configFile.Save(); err2 != nil {
|
||||
fmt.Fprintf(cli.out, "WARNING: could not save config file: %v\n", err2)
|
||||
}
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
if err := cli.configFile.Save(); err != nil {
|
||||
return fmt.Errorf("Error saving config file: %v", err)
|
||||
if err := storeCredentials(cli.configFile, authConfig); err != nil {
|
||||
return fmt.Errorf("Error saving credentials: %v", err)
|
||||
}
|
||||
fmt.Fprintf(cli.out, "WARNING: login credentials saved in %s\n", cli.configFile.Filename())
|
||||
|
||||
if response.Status != "" {
|
||||
fmt.Fprintf(cli.out, "%s\n", response.Status)
|
||||
|
@ -77,14 +75,19 @@ func (cli *DockerCli) promptWithDefault(prompt string, configDefault string) {
|
|||
}
|
||||
}
|
||||
|
||||
func (cli *DockerCli) configureAuth(flUser, flPassword, flEmail, serverAddress string) (types.AuthConfig, error) {
|
||||
authconfig, ok := cli.configFile.AuthConfigs[serverAddress]
|
||||
if !ok {
|
||||
authconfig = types.AuthConfig{}
|
||||
func (cli *DockerCli) configureAuth(flUser, flPassword, serverAddress string, isDefaultRegistry bool) (types.AuthConfig, error) {
|
||||
authconfig, err := getCredentials(cli.configFile, serverAddress)
|
||||
if err != nil {
|
||||
return authconfig, err
|
||||
}
|
||||
|
||||
authconfig.Username = strings.TrimSpace(authconfig.Username)
|
||||
|
||||
if flUser = strings.TrimSpace(flUser); flUser == "" {
|
||||
if isDefaultRegistry {
|
||||
// if this is the default registry (Docker Hub), then display the following message.
|
||||
fmt.Fprintln(cli.out, "Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.")
|
||||
}
|
||||
cli.promptWithDefault("Username", authconfig.Username)
|
||||
flUser = readInput(cli.in, cli.out)
|
||||
flUser = strings.TrimSpace(flUser)
|
||||
|
@ -114,30 +117,10 @@ func (cli *DockerCli) configureAuth(flUser, flPassword, flEmail, serverAddress s
|
|||
}
|
||||
}
|
||||
|
||||
// Assume that a different username means they may not want to use
|
||||
// the email from the config file, so prompt it
|
||||
if flUser != authconfig.Username {
|
||||
if flEmail == "" {
|
||||
cli.promptWithDefault("Email", authconfig.Email)
|
||||
flEmail = readInput(cli.in, cli.out)
|
||||
if flEmail == "" {
|
||||
flEmail = authconfig.Email
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// However, if they don't override the username use the
|
||||
// email from the cmd line if specified. IOW, allow
|
||||
// then to change/override them. And if not specified, just
|
||||
// use what's in the config file
|
||||
if flEmail == "" {
|
||||
flEmail = authconfig.Email
|
||||
}
|
||||
}
|
||||
authconfig.Username = flUser
|
||||
authconfig.Password = flPassword
|
||||
authconfig.Email = flEmail
|
||||
authconfig.ServerAddress = serverAddress
|
||||
cli.configFile.AuthConfigs[serverAddress] = authconfig
|
||||
|
||||
return authconfig, nil
|
||||
}
|
||||
|
||||
|
@ -150,3 +133,38 @@ func readInput(in io.Reader, out io.Writer) string {
|
|||
}
|
||||
return string(line)
|
||||
}
|
||||
|
||||
// getCredentials loads the user credentials from a credentials store.
|
||||
// The store is determined by the config file settings.
|
||||
func getCredentials(c *cliconfig.ConfigFile, serverAddress string) (types.AuthConfig, error) {
|
||||
s := loadCredentialsStore(c)
|
||||
return s.Get(serverAddress)
|
||||
}
|
||||
|
||||
func getAllCredentials(c *cliconfig.ConfigFile) (map[string]types.AuthConfig, error) {
|
||||
s := loadCredentialsStore(c)
|
||||
return s.GetAll()
|
||||
}
|
||||
|
||||
// storeCredentials saves the user credentials in a credentials store.
|
||||
// The store is determined by the config file settings.
|
||||
func storeCredentials(c *cliconfig.ConfigFile, auth types.AuthConfig) error {
|
||||
s := loadCredentialsStore(c)
|
||||
return s.Store(auth)
|
||||
}
|
||||
|
||||
// eraseCredentials removes the user credentials from a credentials store.
|
||||
// The store is determined by the config file settings.
|
||||
func eraseCredentials(c *cliconfig.ConfigFile, serverAddress string) error {
|
||||
s := loadCredentialsStore(c)
|
||||
return s.Erase(serverAddress)
|
||||
}
|
||||
|
||||
// loadCredentialsStore initializes a new credentials store based
// on the settings provided in the configuration file.
|
||||
func loadCredentialsStore(c *cliconfig.ConfigFile) credentials.Store {
|
||||
if c.CredentialsStore != "" {
|
||||
return credentials.NewNativeStore(c)
|
||||
}
|
||||
return credentials.NewFileStore(c)
|
||||
}
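The helpers above all funnel through `loadCredentialsStore`, which only switches to a native helper when a credentials store is set in the config file. The sketch below mimics that selection with an in-memory stand-in; `store`, `memStore`, and `loadStore` are illustrative names and not part of the `credentials` package:

```go
package main

import "fmt"

// store mirrors the subset of credentials.Store exercised by the helpers above.
type store interface {
	Get(serverAddress string) (string, error)
	Store(serverAddress, secret string) error
	Erase(serverAddress string) error
}

// memStore is a toy stand-in for both the file-backed and native stores.
type memStore struct {
	name  string
	creds map[string]string
}

func (m *memStore) Get(addr string) (string, error) { return m.creds[addr], nil }
func (m *memStore) Store(addr, secret string) error { m.creds[addr] = secret; return nil }
func (m *memStore) Erase(addr string) error         { delete(m.creds, addr); return nil }

// loadStore picks the "native" store only when a credentials helper is
// configured, otherwise it falls back to the config-file store — the same
// decision loadCredentialsStore makes.
func loadStore(credentialsStore string) store {
	if credentialsStore != "" {
		return &memStore{name: "native:" + credentialsStore, creds: map[string]string{}}
	}
	return &memStore{name: "file", creds: map[string]string{}}
}

func main() {
	s := loadStore("") // no helper configured: file store
	s.Store("https://index.docker.io/v1/", "secret-token")
	secret, _ := s.Get("https://index.docker.io/v1/")
	fmt.Println(secret)
}
```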
|
||||
|
|
|
@ -25,15 +25,16 @@ func (cli *DockerCli) CmdLogout(args ...string) error {
|
|||
serverAddress = cli.electAuthServer()
|
||||
}
|
||||
|
||||
// Check whether we're logged in based on the records in the config file,
// which may not include user/pass because they could be in the credentials store.
|
||||
if _, ok := cli.configFile.AuthConfigs[serverAddress]; !ok {
|
||||
fmt.Fprintf(cli.out, "Not logged in to %s\n", serverAddress)
|
||||
return nil
|
||||
}
|
||||
|
||||
fmt.Fprintf(cli.out, "Remove login credentials for %s\n", serverAddress)
|
||||
delete(cli.configFile.AuthConfigs, serverAddress)
|
||||
if err := cli.configFile.Save(); err != nil {
|
||||
return fmt.Errorf("Failed to save docker config: %v", err)
|
||||
if err := eraseCredentials(cli.configFile, serverAddress); err != nil {
|
||||
fmt.Fprintf(cli.out, "WARNING: could not erase credentials: %v\n", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
|
|
|
@ -3,6 +3,7 @@ package client
|
|||
import (
|
||||
"fmt"
|
||||
"net"
|
||||
"sort"
|
||||
"strings"
|
||||
"text/tabwriter"
|
||||
|
||||
|
@ -50,6 +51,7 @@ func (cli *DockerCli) CmdNetworkCreate(args ...string) error {
|
|||
cmd.Var(flIpamOpt, []string{"-ipam-opt"}, "set IPAM driver specific options")
|
||||
|
||||
flInternal := cmd.Bool([]string{"-internal"}, false, "restricts external access to the network")
|
||||
flIPv6 := cmd.Bool([]string{"-ipv6"}, false, "enable IPv6 networking")
|
||||
|
||||
cmd.Require(flag.Exact, 1)
|
||||
err := cmd.ParseFlags(args, true)
|
||||
|
@ -77,6 +79,7 @@ func (cli *DockerCli) CmdNetworkCreate(args ...string) error {
|
|||
Options: flOpts.GetAll(),
|
||||
CheckDuplicate: true,
|
||||
Internal: *flInternal,
|
||||
EnableIPv6: *flIPv6,
|
||||
}
|
||||
|
||||
resp, err := cli.client.NetworkCreate(nc)
|
||||
|
@ -192,7 +195,7 @@ func (cli *DockerCli) CmdNetworkLs(args ...string) error {
|
|||
if !*quiet {
|
||||
fmt.Fprintln(wr, "NETWORK ID\tNAME\tDRIVER")
|
||||
}
|
||||
|
||||
sort.Sort(byNetworkName(networkResources))
|
||||
for _, networkResource := range networkResources {
|
||||
ID := networkResource.ID
|
||||
netName := networkResource.Name
|
||||
|
@ -214,6 +217,12 @@ func (cli *DockerCli) CmdNetworkLs(args ...string) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
type byNetworkName []types.NetworkResource

func (r byNetworkName) Len() int           { return len(r) }
func (r byNetworkName) Swap(i, j int)      { r[i], r[j] = r[j], r[i] }
func (r byNetworkName) Less(i, j int) bool { return r[i].Name < r[j].Name }
|
||||
|
||||
// CmdNetworkInspect inspects the network object for more details
|
||||
//
|
||||
// Usage: docker network inspect [OPTIONS] <NETWORK> [NETWORK...]
|
||||
|
|
|
@ -56,7 +56,7 @@ func (cli *DockerCli) CmdPull(args ...string) error {
|
|||
return err
|
||||
}
|
||||
|
||||
authConfig := cli.resolveAuthConfig(cli.configFile.AuthConfigs, repoInfo.Index)
|
||||
authConfig := cli.resolveAuthConfig(repoInfo.Index)
|
||||
requestPrivilege := cli.registryAuthenticationPrivilegedFunc(repoInfo.Index, "pull")
|
||||
|
||||
if isTrusted() && !ref.HasDigest() {
|
||||
|
|
|
@ -44,7 +44,7 @@ func (cli *DockerCli) CmdPush(args ...string) error {
|
|||
return err
|
||||
}
|
||||
// Resolve the Auth config relevant for this server
|
||||
authConfig := cli.resolveAuthConfig(cli.configFile.AuthConfigs, repoInfo.Index)
|
||||
authConfig := cli.resolveAuthConfig(repoInfo.Index)
|
||||
|
||||
requestPrivilege := cli.registryAuthenticationPrivilegedFunc(repoInfo.Index, "push")
|
||||
if isTrusted() {
|
||||
|
|
|
@ -11,7 +11,6 @@ import (
|
|||
|
||||
"github.com/Sirupsen/logrus"
|
||||
Cli "github.com/docker/docker/cli"
|
||||
derr "github.com/docker/docker/errors"
|
||||
"github.com/docker/docker/opts"
|
||||
"github.com/docker/docker/pkg/promise"
|
||||
"github.com/docker/docker/pkg/signal"
|
||||
|
@ -21,6 +20,11 @@ import (
|
|||
"github.com/docker/libnetwork/resolvconf/dns"
|
||||
)
|
||||
|
||||
const (
|
||||
errCmdNotFound = "Container command not found or does not exist."
|
||||
errCmdCouldNotBeInvoked = "Container command could not be invoked."
|
||||
)
|
||||
|
||||
func (cid *cidFile) Close() error {
|
||||
cid.file.Close()
|
||||
|
||||
|
@ -46,20 +50,13 @@ func (cid *cidFile) Write(id string) error {
|
|||
// return 125 for generic docker daemon failures
|
||||
func runStartContainerErr(err error) error {
|
||||
trimmedErr := strings.Trim(err.Error(), "Error response from daemon: ")
|
||||
statusError := Cli.StatusError{}
|
||||
derrCmdNotFound := derr.ErrorCodeCmdNotFound.Message()
|
||||
derrCouldNotInvoke := derr.ErrorCodeCmdCouldNotBeInvoked.Message()
|
||||
derrNoSuchImage := derr.ErrorCodeNoSuchImageHash.Message()
|
||||
derrNoSuchImageTag := derr.ErrorCodeNoSuchImageTag.Message()
|
||||
statusError := Cli.StatusError{StatusCode: 125}
|
||||
|
||||
switch trimmedErr {
|
||||
case derrCmdNotFound:
|
||||
case errCmdNotFound:
|
||||
statusError = Cli.StatusError{StatusCode: 127}
|
||||
case derrCouldNotInvoke:
|
||||
case errCmdCouldNotBeInvoked:
|
||||
statusError = Cli.StatusError{StatusCode: 126}
|
||||
case derrNoSuchImage, derrNoSuchImageTag:
|
||||
statusError = Cli.StatusError{StatusCode: 125}
|
||||
default:
|
||||
statusError = Cli.StatusError{StatusCode: 125}
|
||||
}
|
||||
return statusError
|
||||
}
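A compressed, runnable sketch of the exit-code mapping above. It uses `strings.TrimPrefix` rather than `strings.Trim` (which trims a character set, not a prefix, and arguably works above only because the daemon messages happen to start and end with characters outside the cut set); the error strings are copied from the constants introduced in this hunk:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

const (
	errCmdNotFound          = "Container command not found or does not exist."
	errCmdCouldNotBeInvoked = "Container command could not be invoked."
)

// exitCodeFor reproduces the mapping above: 127 when the command does not
// exist, 126 when it cannot be invoked, and 125 for any other daemon failure.
func exitCodeFor(err error) int {
	trimmed := strings.TrimPrefix(err.Error(), "Error response from daemon: ")
	switch trimmed {
	case errCmdNotFound:
		return 127
	case errCmdCouldNotBeInvoked:
		return 126
	default:
		return 125
	}
}

func main() {
	err := errors.New("Error response from daemon: " + errCmdNotFound)
	fmt.Println(exitCodeFor(err)) // 127
}
```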
|
||||
|
|
|
@ -36,7 +36,7 @@ func (cli *DockerCli) CmdSearch(args ...string) error {
|
|||
return err
|
||||
}
|
||||
|
||||
authConfig := cli.resolveAuthConfig(cli.configFile.AuthConfigs, indexInfo)
|
||||
authConfig := cli.resolveAuthConfig(indexInfo)
|
||||
requestPrivilege := cli.registryAuthenticationPrivilegedFunc(indexInfo, "search")
|
||||
|
||||
encodedAuth, err := encodeAuthToBase64(authConfig)
|
||||
|
|
|
@ -1,10 +1,8 @@
|
|||
package client
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"sort"
|
||||
"strings"
|
||||
"sync"
|
||||
"text/tabwriter"
|
||||
|
@ -16,125 +14,8 @@ import (
|
|||
"github.com/docker/engine-api/types"
|
||||
"github.com/docker/engine-api/types/events"
|
||||
"github.com/docker/engine-api/types/filters"
|
||||
"github.com/docker/go-units"
|
||||
)
|
||||
|
||||
type containerStats struct {
|
||||
Name string
|
||||
CPUPercentage float64
|
||||
Memory float64
|
||||
MemoryLimit float64
|
||||
MemoryPercentage float64
|
||||
NetworkRx float64
|
||||
NetworkTx float64
|
||||
BlockRead float64
|
||||
BlockWrite float64
|
||||
mu sync.RWMutex
|
||||
err error
|
||||
}
|
||||
|
||||
type stats struct {
|
||||
mu sync.Mutex
|
||||
cs []*containerStats
|
||||
}
|
||||
|
||||
func (s *containerStats) Collect(cli *DockerCli, streamStats bool) {
|
||||
responseBody, err := cli.client.ContainerStats(context.Background(), s.Name, streamStats)
|
||||
if err != nil {
|
||||
s.mu.Lock()
|
||||
s.err = err
|
||||
s.mu.Unlock()
|
||||
return
|
||||
}
|
||||
defer responseBody.Close()
|
||||
|
||||
var (
|
||||
previousCPU uint64
|
||||
previousSystem uint64
|
||||
dec = json.NewDecoder(responseBody)
|
||||
u = make(chan error, 1)
|
||||
)
|
||||
go func() {
|
||||
for {
|
||||
var v *types.StatsJSON
|
||||
if err := dec.Decode(&v); err != nil {
|
||||
u <- err
|
||||
return
|
||||
}
|
||||
|
||||
var memPercent = 0.0
|
||||
var cpuPercent = 0.0
|
||||
|
||||
// MemoryStats.Limit will never be 0 unless the container is not running and we haven't
|
||||
// got any data from cgroup
|
||||
if v.MemoryStats.Limit != 0 {
|
||||
memPercent = float64(v.MemoryStats.Usage) / float64(v.MemoryStats.Limit) * 100.0
|
||||
}
|
||||
|
||||
previousCPU = v.PreCPUStats.CPUUsage.TotalUsage
|
||||
previousSystem = v.PreCPUStats.SystemUsage
|
||||
cpuPercent = calculateCPUPercent(previousCPU, previousSystem, v)
|
||||
blkRead, blkWrite := calculateBlockIO(v.BlkioStats)
|
||||
s.mu.Lock()
|
||||
s.CPUPercentage = cpuPercent
|
||||
s.Memory = float64(v.MemoryStats.Usage)
|
||||
s.MemoryLimit = float64(v.MemoryStats.Limit)
|
||||
s.MemoryPercentage = memPercent
|
||||
s.NetworkRx, s.NetworkTx = calculateNetwork(v.Networks)
|
||||
s.BlockRead = float64(blkRead)
|
||||
s.BlockWrite = float64(blkWrite)
|
||||
s.mu.Unlock()
|
||||
u <- nil
|
||||
if !streamStats {
|
||||
return
|
||||
}
|
||||
}
|
||||
}()
|
||||
for {
|
||||
select {
|
||||
case <-time.After(2 * time.Second):
|
||||
// zero out the values if we have not received an update within
|
||||
// the specified duration.
|
||||
s.mu.Lock()
|
||||
s.CPUPercentage = 0
|
||||
s.Memory = 0
|
||||
s.MemoryPercentage = 0
|
||||
s.MemoryLimit = 0
|
||||
s.NetworkRx = 0
|
||||
s.NetworkTx = 0
|
||||
s.BlockRead = 0
|
||||
s.BlockWrite = 0
|
||||
s.mu.Unlock()
|
||||
case err := <-u:
|
||||
if err != nil {
|
||||
s.mu.Lock()
|
||||
s.err = err
|
||||
s.mu.Unlock()
|
||||
return
|
||||
}
|
||||
}
|
||||
if !streamStats {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (s *containerStats) Display(w io.Writer) error {
|
||||
s.mu.RLock()
|
||||
defer s.mu.RUnlock()
|
||||
if s.err != nil {
|
||||
return s.err
|
||||
}
|
||||
fmt.Fprintf(w, "%s\t%.2f%%\t%s / %s\t%.2f%%\t%s / %s\t%s / %s\n",
|
||||
s.Name,
|
||||
s.CPUPercentage,
|
||||
units.HumanSize(s.Memory), units.HumanSize(s.MemoryLimit),
|
||||
s.MemoryPercentage,
|
||||
units.HumanSize(s.NetworkRx), units.HumanSize(s.NetworkTx),
|
||||
units.HumanSize(s.BlockRead), units.HumanSize(s.BlockWrite))
|
||||
return nil
|
||||
}
|
||||
|
||||
// CmdStats displays a live stream of resource usage statistics for one or more containers.
|
||||
//
|
||||
// This shows real-time information on CPU usage, memory usage, and network I/O.
|
||||
|
@ -149,125 +30,143 @@ func (cli *DockerCli) CmdStats(args ...string) error {
|
|||
|
||||
names := cmd.Args()
|
||||
showAll := len(names) == 0
|
||||
closeChan := make(chan error)
|
||||
|
||||
if showAll {
|
||||
// monitorContainerEvents watches for container creation and removal (only
|
||||
// used when calling `docker stats` without arguments).
|
||||
monitorContainerEvents := func(started chan<- struct{}, c chan events.Message) {
|
||||
f := filters.NewArgs()
|
||||
f.Add("type", "container")
|
||||
options := types.EventsOptions{
|
||||
Filters: f,
|
||||
}
|
||||
resBody, err := cli.client.Events(context.Background(), options)
|
||||
// Whether we successfully subscribed to events or not, we can now
|
||||
// unblock the main goroutine.
|
||||
close(started)
|
||||
if err != nil {
|
||||
closeChan <- err
|
||||
return
|
||||
}
|
||||
defer resBody.Close()
|
||||
|
||||
decodeEvents(resBody, func(event events.Message, err error) error {
|
||||
if err != nil {
|
||||
closeChan <- err
|
||||
return nil
|
||||
}
|
||||
c <- event
|
||||
return nil
|
||||
})
|
||||
}
|
||||
|
||||
// waitFirst is a WaitGroup used to wait for the first stats sample of each container to arrive
|
||||
waitFirst := &sync.WaitGroup{}
|
||||
|
||||
cStats := stats{}
|
||||
// getContainerList simulates creation event for all previously existing
|
||||
// containers (only used when calling `docker stats` without arguments).
|
||||
getContainerList := func() {
|
||||
options := types.ContainerListOptions{
|
||||
All: *all,
|
||||
}
|
||||
cs, err := cli.client.ContainerList(options)
|
||||
if err != nil {
|
||||
return err
|
||||
closeChan <- err
|
||||
}
|
||||
for _, c := range cs {
|
||||
names = append(names, c.ID[:12])
|
||||
for _, container := range cs {
|
||||
s := &containerStats{Name: container.ID[:12]}
|
||||
if cStats.add(s) {
|
||||
waitFirst.Add(1)
|
||||
go s.Collect(cli.client, !*noStream, waitFirst)
|
||||
}
|
||||
}
|
||||
}
|
||||
if len(names) == 0 && !showAll {
|
||||
return fmt.Errorf("No containers found")
|
||||
}
|
||||
sort.Strings(names)
|
||||
|
||||
var (
|
||||
cStats = stats{}
|
||||
w = tabwriter.NewWriter(cli.out, 20, 1, 3, ' ', 0)
|
||||
)
|
||||
if showAll {
|
||||
// If no names were specified, start a long running goroutine which
|
||||
// monitors container events. We make sure we're subscribed before
|
||||
// retrieving the list of running containers to avoid a race where we
|
||||
// would "miss" a creation.
|
||||
started := make(chan struct{})
|
||||
eh := eventHandler{handlers: make(map[string]func(events.Message))}
|
||||
eh.Handle("create", func(e events.Message) {
|
||||
if *all {
|
||||
s := &containerStats{Name: e.ID[:12]}
|
||||
if cStats.add(s) {
|
||||
waitFirst.Add(1)
|
||||
go s.Collect(cli.client, !*noStream, waitFirst)
|
||||
}
|
||||
}
|
||||
})
|
||||
|
||||
eh.Handle("start", func(e events.Message) {
|
||||
s := &containerStats{Name: e.ID[:12]}
|
||||
if cStats.add(s) {
|
||||
waitFirst.Add(1)
|
||||
go s.Collect(cli.client, !*noStream, waitFirst)
|
||||
}
|
||||
})
|
||||
|
||||
eh.Handle("die", func(e events.Message) {
|
||||
if !*all {
|
||||
cStats.remove(e.ID[:12])
|
||||
}
|
||||
})
|
||||
|
||||
eventChan := make(chan events.Message)
|
||||
go eh.Watch(eventChan)
|
||||
go monitorContainerEvents(started, eventChan)
|
||||
defer close(eventChan)
|
||||
<-started
|
||||
|
||||
// Start a short-lived goroutine to retrieve the initial list of
|
||||
// containers.
|
||||
getContainerList()
|
||||
} else {
|
||||
// Artificially send creation events for the containers we were asked to
// monitor (same code path as the one used when monitoring all containers).
|
||||
for _, name := range names {
|
||||
s := &containerStats{Name: name}
|
||||
if cStats.add(s) {
|
||||
waitFirst.Add(1)
|
||||
go s.Collect(cli.client, !*noStream, waitFirst)
|
||||
}
|
||||
}
|
||||
|
||||
// We don't expect any asynchronous errors: closeChan can be closed.
|
||||
close(closeChan)
|
||||
|
||||
// Do a quick pause to detect any error with the provided list of
|
||||
// container names.
|
||||
time.Sleep(1500 * time.Millisecond)
|
||||
var errs []string
|
||||
cStats.mu.Lock()
|
||||
for _, c := range cStats.cs {
|
||||
c.mu.Lock()
|
||||
if c.err != nil {
|
||||
errs = append(errs, fmt.Sprintf("%s: %v", c.Name, c.err))
|
||||
}
|
||||
c.mu.Unlock()
|
||||
}
|
||||
cStats.mu.Unlock()
|
||||
if len(errs) > 0 {
|
||||
return fmt.Errorf("%s", strings.Join(errs, ", "))
|
||||
}
|
||||
}
|
||||
|
||||
// before printing to screen, make sure each container has received at least one valid stats sample
|
||||
waitFirst.Wait()
|
||||
|
||||
w := tabwriter.NewWriter(cli.out, 20, 1, 3, ' ', 0)
|
||||
printHeader := func() {
|
||||
if !*noStream {
|
||||
fmt.Fprint(cli.out, "\033[2J")
|
||||
fmt.Fprint(cli.out, "\033[H")
|
||||
}
|
||||
io.WriteString(w, "CONTAINER\tCPU %\tMEM USAGE / LIMIT\tMEM %\tNET I/O\tBLOCK I/O\n")
|
||||
io.WriteString(w, "CONTAINER\tCPU %\tMEM USAGE / LIMIT\tMEM %\tNET I/O\tBLOCK I/O\tPIDS\n")
|
||||
}
|
||||
for _, n := range names {
|
||||
s := &containerStats{Name: n}
|
||||
// no need to lock here since only the main goroutine is running here
|
||||
cStats.cs = append(cStats.cs, s)
|
||||
go s.Collect(cli, !*noStream)
|
||||
}
|
||||
closeChan := make(chan error)
|
||||
if showAll {
|
||||
type watch struct {
|
||||
cid string
|
||||
event string
|
||||
err error
|
||||
}
|
||||
getNewContainers := func(c chan<- watch) {
|
||||
f := filters.NewArgs()
|
||||
f.Add("type", "container")
|
||||
options := types.EventsOptions{
|
||||
Filters: f,
|
||||
}
|
||||
resBody, err := cli.client.Events(context.Background(), options)
|
||||
if err != nil {
|
||||
c <- watch{err: err}
|
||||
return
|
||||
}
|
||||
defer resBody.Close()
|
||||
|
||||
decodeEvents(resBody, func(event events.Message, err error) error {
|
||||
if err != nil {
|
||||
c <- watch{err: err}
|
||||
return nil
|
||||
}
|
||||
|
||||
c <- watch{event.ID[:12], event.Action, nil}
|
||||
return nil
|
||||
})
|
||||
}
|
||||
go func(stopChan chan<- error) {
|
||||
cChan := make(chan watch)
|
||||
go getNewContainers(cChan)
|
||||
for {
|
||||
c := <-cChan
|
||||
if c.err != nil {
|
||||
stopChan <- c.err
|
||||
return
|
||||
}
|
||||
switch c.event {
|
||||
case "create":
|
||||
s := &containerStats{Name: c.cid}
|
||||
cStats.mu.Lock()
|
||||
cStats.cs = append(cStats.cs, s)
|
||||
cStats.mu.Unlock()
|
||||
go s.Collect(cli, !*noStream)
|
||||
case "stop":
|
||||
case "die":
|
||||
if !*all {
|
||||
var remove int
|
||||
// cStats cannot be O(1) with a map because ranging over it would cause
// containers in stats to move up and down in the list.
|
||||
cStats.mu.Lock()
|
||||
for i, s := range cStats.cs {
|
||||
if s.Name == c.cid {
|
||||
remove = i
|
||||
break
|
||||
}
|
||||
}
|
||||
cStats.cs = append(cStats.cs[:remove], cStats.cs[remove+1:]...)
|
||||
cStats.mu.Unlock()
|
||||
}
|
||||
}
|
||||
}
|
||||
}(closeChan)
|
||||
} else {
|
||||
close(closeChan)
|
||||
}
|
||||
// do a quick pause so that any failed connections for containers that do not exist are able to be
|
||||
// evicted before we display the initial or default values.
|
||||
time.Sleep(1500 * time.Millisecond)
|
||||
var errs []string
|
||||
cStats.mu.Lock()
|
||||
for _, c := range cStats.cs {
|
||||
c.mu.Lock()
|
||||
if c.err != nil {
|
||||
errs = append(errs, fmt.Sprintf("%s: %v", c.Name, c.err))
|
||||
}
|
||||
c.mu.Unlock()
|
||||
}
|
||||
cStats.mu.Unlock()
|
||||
if len(errs) > 0 {
|
||||
return fmt.Errorf("%s", strings.Join(errs, ", "))
|
||||
}
|
||||
for range time.Tick(500 * time.Millisecond) {
|
||||
printHeader()
|
||||
toRemove := []int{}
|
||||
|
@ -307,40 +206,3 @@ func (cli *DockerCli) CmdStats(args ...string) error {
|
|||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func calculateCPUPercent(previousCPU, previousSystem uint64, v *types.StatsJSON) float64 {
|
||||
var (
|
||||
cpuPercent = 0.0
|
||||
// calculate the change for the cpu usage of the container in between readings
|
||||
cpuDelta = float64(v.CPUStats.CPUUsage.TotalUsage) - float64(previousCPU)
|
||||
// calculate the change for the entire system between readings
|
||||
systemDelta = float64(v.CPUStats.SystemUsage) - float64(previousSystem)
|
||||
)
|
||||
|
||||
if systemDelta > 0.0 && cpuDelta > 0.0 {
|
||||
cpuPercent = (cpuDelta / systemDelta) * float64(len(v.CPUStats.CPUUsage.PercpuUsage)) * 100.0
|
||||
}
|
||||
return cpuPercent
|
||||
}
|
||||
|
||||
func calculateBlockIO(blkio types.BlkioStats) (blkRead uint64, blkWrite uint64) {
|
||||
for _, bioEntry := range blkio.IoServiceBytesRecursive {
|
||||
switch strings.ToLower(bioEntry.Op) {
|
||||
case "read":
|
||||
blkRead = blkRead + bioEntry.Value
|
||||
case "write":
|
||||
blkWrite = blkWrite + bioEntry.Value
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func calculateNetwork(network map[string]types.NetworkStats) (float64, float64) {
|
||||
var rx, tx float64
|
||||
|
||||
for _, v := range network {
|
||||
rx += float64(v.RxBytes)
|
||||
tx += float64(v.TxBytes)
|
||||
}
|
||||
return rx, tx
|
||||
}
|
||||
|
|
217
api/client/stats_helpers.go
Normal file
|
@ -0,0 +1,217 @@
|
|||
package client
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/docker/engine-api/client"
|
||||
"github.com/docker/engine-api/types"
|
||||
"github.com/docker/go-units"
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
|
||||
type containerStats struct {
|
||||
Name string
|
||||
CPUPercentage float64
|
||||
Memory float64
|
||||
MemoryLimit float64
|
||||
MemoryPercentage float64
|
||||
NetworkRx float64
|
||||
NetworkTx float64
|
||||
BlockRead float64
|
||||
BlockWrite float64
|
||||
PidsCurrent uint64
|
||||
mu sync.RWMutex
|
||||
err error
|
||||
}
|
||||
|
||||
type stats struct {
|
||||
mu sync.Mutex
|
||||
cs []*containerStats
|
||||
}
|
||||
|
||||
func (s *stats) add(cs *containerStats) bool {
|
||||
s.mu.Lock()
|
||||
defer s.mu.Unlock()
|
||||
if _, exists := s.isKnownContainer(cs.Name); !exists {
|
||||
s.cs = append(s.cs, cs)
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (s *stats) remove(id string) {
|
||||
s.mu.Lock()
|
||||
if i, exists := s.isKnownContainer(id); exists {
|
||||
s.cs = append(s.cs[:i], s.cs[i+1:]...)
|
||||
}
|
||||
s.mu.Unlock()
|
||||
}
|
||||
|
||||
func (s *stats) isKnownContainer(cid string) (int, bool) {
|
||||
for i, c := range s.cs {
|
||||
if c.Name == cid {
|
||||
return i, true
|
||||
}
|
||||
}
|
||||
return -1, false
|
||||
}
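A trimmed-down, standalone copy of the bookkeeping above, showing why the event handlers can call `add` unconditionally: it only reports `true` the first time a container ID is seen, so exactly one `Collect` goroutine is started per container. The types here are local copies reduced to what the example needs:

```go
package main

import (
	"fmt"
	"sync"
)

type containerStats struct{ Name string }

type stats struct {
	mu sync.Mutex
	cs []*containerStats
}

func (s *stats) isKnownContainer(cid string) (int, bool) {
	for i, c := range s.cs {
		if c.Name == cid {
			return i, true
		}
	}
	return -1, false
}

// add only appends unknown containers and reports whether it did, so callers
// can start exactly one Collect goroutine per container.
func (s *stats) add(cs *containerStats) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, exists := s.isKnownContainer(cs.Name); !exists {
		s.cs = append(s.cs, cs)
		return true
	}
	return false
}

func main() {
	s := &stats{}
	fmt.Println(s.add(&containerStats{Name: "abc123"})) // true: first time seen
	fmt.Println(s.add(&containerStats{Name: "abc123"})) // false: duplicate ignored
}
```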
|
||||
|
||||
func (s *containerStats) Collect(cli client.APIClient, streamStats bool, waitFirst *sync.WaitGroup) {
|
||||
var (
|
||||
getFirst bool
|
||||
previousCPU uint64
|
||||
previousSystem uint64
|
||||
u = make(chan error, 1)
|
||||
)
|
||||
|
||||
defer func() {
|
||||
// if an error happens and we receive no stats at all, release the wait group regardless
|
||||
if !getFirst {
|
||||
getFirst = true
|
||||
waitFirst.Done()
|
||||
}
|
||||
}()
|
||||
|
||||
responseBody, err := cli.ContainerStats(context.Background(), s.Name, streamStats)
|
||||
if err != nil {
|
||||
s.mu.Lock()
|
||||
s.err = err
|
||||
s.mu.Unlock()
|
||||
return
|
||||
}
|
||||
defer responseBody.Close()
|
||||
|
||||
dec := json.NewDecoder(responseBody)
|
||||
go func() {
|
||||
for {
|
||||
var v *types.StatsJSON
|
||||
if err := dec.Decode(&v); err != nil {
|
||||
u <- err
|
||||
return
|
||||
}
|
||||
|
||||
var memPercent = 0.0
|
||||
var cpuPercent = 0.0
|
||||
|
||||
// MemoryStats.Limit will never be 0 unless the container is not running and we
// haven't received any data from the cgroup yet.
|
||||
if v.MemoryStats.Limit != 0 {
|
||||
memPercent = float64(v.MemoryStats.Usage) / float64(v.MemoryStats.Limit) * 100.0
|
||||
}
|
||||
|
||||
previousCPU = v.PreCPUStats.CPUUsage.TotalUsage
|
||||
previousSystem = v.PreCPUStats.SystemUsage
|
||||
cpuPercent = calculateCPUPercent(previousCPU, previousSystem, v)
|
||||
blkRead, blkWrite := calculateBlockIO(v.BlkioStats)
|
||||
s.mu.Lock()
|
||||
s.CPUPercentage = cpuPercent
|
||||
s.Memory = float64(v.MemoryStats.Usage)
|
||||
s.MemoryLimit = float64(v.MemoryStats.Limit)
|
||||
s.MemoryPercentage = memPercent
|
||||
s.NetworkRx, s.NetworkTx = calculateNetwork(v.Networks)
|
||||
s.BlockRead = float64(blkRead)
|
||||
s.BlockWrite = float64(blkWrite)
|
||||
s.mu.Unlock()
|
||||
u <- nil
|
||||
if !streamStats {
|
||||
return
|
||||
}
|
||||
}
|
||||
}()
|
||||
for {
|
||||
select {
|
||||
case <-time.After(2 * time.Second):
|
||||
// zero out the values if we have not received an update within
|
||||
// the specified duration.
|
||||
s.mu.Lock()
|
||||
s.CPUPercentage = 0
|
||||
s.Memory = 0
|
||||
s.MemoryPercentage = 0
|
||||
s.MemoryLimit = 0
|
||||
s.NetworkRx = 0
|
||||
s.NetworkTx = 0
|
||||
s.BlockRead = 0
|
||||
s.BlockWrite = 0
|
||||
s.mu.Unlock()
|
||||
// release the WaitGroup on the first pass even if no sample arrived, so the caller is not blocked
|
||||
if !getFirst {
|
||||
getFirst = true
|
||||
waitFirst.Done()
|
||||
}
|
||||
case err := <-u:
|
||||
if err != nil {
|
||||
s.mu.Lock()
|
||||
s.err = err
|
||||
s.mu.Unlock()
|
||||
return
|
||||
}
|
||||
// release the WaitGroup once the first stats sample has been received
|
||||
if !getFirst {
|
||||
getFirst = true
|
||||
waitFirst.Done()
|
||||
}
|
||||
}
|
||||
if !streamStats {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
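The 2-second window in `Collect` is the usual select-with-timeout idiom: take the fresh sample if one arrives in time, otherwise zero the values so stale numbers are not displayed. A compressed, standalone sketch with a 200 ms window:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	updates := make(chan int)

	// Simulate a producer that stops sending after two samples.
	go func() {
		updates <- 41
		updates <- 42
	}()

	value := 0
	for i := 0; i < 3; i++ {
		select {
		case v := <-updates:
			value = v // a fresh sample arrived in time
		case <-time.After(200 * time.Millisecond):
			value = 0 // no update within the window: zero out, like Collect does
		}
		fmt.Println(value)
	}
	// Prints 41, 42, then 0.
}
```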
|
||||
|
||||
func (s *containerStats) Display(w io.Writer) error {
|
||||
s.mu.RLock()
|
||||
defer s.mu.RUnlock()
|
||||
if s.err != nil {
|
||||
return s.err
|
||||
}
|
||||
fmt.Fprintf(w, "%s\t%.2f%%\t%s / %s\t%.2f%%\t%s / %s\t%s / %s\t%d\n",
|
||||
s.Name,
|
||||
s.CPUPercentage,
|
||||
units.HumanSize(s.Memory), units.HumanSize(s.MemoryLimit),
|
||||
s.MemoryPercentage,
|
||||
units.HumanSize(s.NetworkRx), units.HumanSize(s.NetworkTx),
|
||||
units.HumanSize(s.BlockRead), units.HumanSize(s.BlockWrite),
|
||||
s.PidsCurrent)
|
||||
return nil
|
||||
}
|
||||
|
||||
func calculateCPUPercent(previousCPU, previousSystem uint64, v *types.StatsJSON) float64 {
|
||||
var (
|
||||
cpuPercent = 0.0
|
||||
// calculate the change for the cpu usage of the container in between readings
|
||||
cpuDelta = float64(v.CPUStats.CPUUsage.TotalUsage) - float64(previousCPU)
|
||||
// calculate the change for the entire system between readings
|
||||
systemDelta = float64(v.CPUStats.SystemUsage) - float64(previousSystem)
|
||||
)
|
||||
|
||||
if systemDelta > 0.0 && cpuDelta > 0.0 {
|
||||
cpuPercent = (cpuDelta / systemDelta) * float64(len(v.CPUStats.CPUUsage.PercpuUsage)) * 100.0
|
||||
}
|
||||
return cpuPercent
|
||||
}
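A worked example of the CPU formula above, with hypothetical deltas: a container that consumed 200 ms of CPU time while the whole 4-CPU system accumulated 4 s of CPU time is reported as 20%, i.e. the equivalent of 0.2 CPUs:

```go
package main

import "fmt"

func main() {
	// Hypothetical values taken from two consecutive stats samples on a 4-CPU host.
	var (
		cpuDelta    = 2e8 // container CPU time consumed between samples, in nanoseconds
		systemDelta = 4e9 // total system CPU time elapsed between samples, in nanoseconds
		numCPUs     = 4.0
	)
	// Same arithmetic as calculateCPUPercent: (2e8 / 4e9) * 4 * 100 = 20%.
	fmt.Printf("%.2f%%\n", (cpuDelta/systemDelta)*numCPUs*100.0) // 20.00%
}
```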
|
||||
|
||||
func calculateBlockIO(blkio types.BlkioStats) (blkRead uint64, blkWrite uint64) {
|
||||
for _, bioEntry := range blkio.IoServiceBytesRecursive {
|
||||
switch strings.ToLower(bioEntry.Op) {
|
||||
case "read":
|
||||
blkRead = blkRead + bioEntry.Value
|
||||
case "write":
|
||||
blkWrite = blkWrite + bioEntry.Value
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func calculateNetwork(network map[string]types.NetworkStats) (float64, float64) {
|
||||
var rx, tx float64
|
||||
|
||||
for _, v := range network {
|
||||
rx += float64(v.RxBytes)
|
||||
tx += float64(v.TxBytes)
|
||||
}
|
||||
return rx, tx
|
||||
}
|
|
@ -19,6 +19,7 @@ func TestDisplay(t *testing.T) {
|
|||
NetworkTx: 800 * 1024 * 1024,
|
||||
BlockRead: 100 * 1024 * 1024,
|
||||
BlockWrite: 800 * 1024 * 1024,
|
||||
PidsCurrent: 1,
|
||||
mu: sync.RWMutex{},
|
||||
}
|
||||
var b bytes.Buffer
|
||||
|
@ -26,7 +27,7 @@ func TestDisplay(t *testing.T) {
|
|||
t.Fatalf("c.Display() gave error: %s", err)
|
||||
}
|
||||
got := b.String()
|
||||
want := "app\t30.00%\t104.9 MB / 2.147 GB\t4.88%\t104.9 MB / 838.9 MB\t104.9 MB / 838.9 MB\n"
|
||||
want := "app\t30.00%\t104.9 MB / 2.147 GB\t4.88%\t104.9 MB / 838.9 MB\t104.9 MB / 838.9 MB\t1\n"
|
||||
if got != want {
|
||||
t.Fatalf("c.Display() = %q, want %q", got, want)
|
||||
}
|
||||
|
|
|
@ -107,7 +107,10 @@ func (scs simpleCredentialStore) Basic(u *url.URL) (string, string) {
|
|||
return scs.auth.Username, scs.auth.Password
|
||||
}
|
||||
|
||||
func (cli *DockerCli) getNotaryRepository(repoInfo *registry.RepositoryInfo, authConfig types.AuthConfig) (*client.NotaryRepository, error) {
|
||||
// getNotaryRepository returns a NotaryRepository which stores all the
|
||||
// information needed to operate on a notary repository.
|
||||
// It creates a HTTP transport providing authentication support.
|
||||
func (cli *DockerCli) getNotaryRepository(repoInfo *registry.RepositoryInfo, authConfig types.AuthConfig, actions ...string) (*client.NotaryRepository, error) {
|
||||
server, err := trustServer(repoInfo.Index)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
|
@ -169,7 +172,7 @@ func (cli *DockerCli) getNotaryRepository(repoInfo *registry.RepositoryInfo, aut
|
|||
}
|
||||
|
||||
creds := simpleCredentialStore{auth: authConfig}
|
||||
tokenHandler := auth.NewTokenHandler(authTransport, creds, repoInfo.FullName(), "push", "pull")
|
||||
tokenHandler := auth.NewTokenHandler(authTransport, creds, repoInfo.FullName(), actions...)
|
||||
basicHandler := auth.NewBasicHandler(creds)
|
||||
modifiers = append(modifiers, transport.RequestModifier(auth.NewAuthorizer(challengeManager, tokenHandler, basicHandler)))
|
||||
tr := transport.NewTransport(base, modifiers...)
|
||||
|
@ -235,7 +238,7 @@ func (cli *DockerCli) trustedReference(ref reference.NamedTagged) (reference.Can
|
|||
}
|
||||
|
||||
// Resolve the Auth config relevant for this server
|
||||
authConfig := cli.resolveAuthConfig(cli.configFile.AuthConfigs, repoInfo.Index)
|
||||
authConfig := cli.resolveAuthConfig(repoInfo.Index)
|
||||
|
||||
notaryRepo, err := cli.getNotaryRepository(repoInfo, authConfig)
|
||||
if err != nil {
|
||||
|
@ -302,7 +305,7 @@ func notaryError(repoName string, err error) error {
|
|||
func (cli *DockerCli) trustedPull(repoInfo *registry.RepositoryInfo, ref registry.Reference, authConfig types.AuthConfig, requestPrivilege apiclient.RequestPrivilegeFunc) error {
|
||||
var refs []target
|
||||
|
||||
notaryRepo, err := cli.getNotaryRepository(repoInfo, authConfig)
|
||||
notaryRepo, err := cli.getNotaryRepository(repoInfo, authConfig, "pull")
|
||||
if err != nil {
|
||||
fmt.Fprintf(cli.out, "Error establishing connection to trust repository: %s\n", err)
|
||||
return err
|
||||
|
@ -372,60 +375,74 @@ func (cli *DockerCli) trustedPush(repoInfo *registry.RepositoryInfo, tag string,
|
|||
|
||||
defer responseBody.Close()
|
||||
|
||||
targets := []target{}
|
||||
// If it is a trusted push we would like to find the target entry which match the
|
||||
// tag provided in the function and then do an AddTarget later.
|
||||
target := &client.Target{}
|
||||
// Count the number of calls to handleTarget;
// if it is called more than once, that should be considered an error in a trusted push.
|
||||
cnt := 0
|
||||
handleTarget := func(aux *json.RawMessage) {
|
||||
cnt++
|
||||
if cnt > 1 {
|
||||
// handleTarget should only be called once. Additional calls will be treated as an error.
|
||||
return
|
||||
}
|
||||
|
||||
var pushResult distribution.PushResult
|
||||
err := json.Unmarshal(*aux, &pushResult)
|
||||
if err == nil && pushResult.Tag != "" && pushResult.Digest.Validate() == nil {
|
||||
targets = append(targets, target{
|
||||
reference: registry.ParseReference(pushResult.Tag),
|
||||
digest: pushResult.Digest,
|
||||
size: int64(pushResult.Size),
|
||||
})
|
||||
h, err := hex.DecodeString(pushResult.Digest.Hex())
|
||||
if err != nil {
|
||||
target = nil
|
||||
return
|
||||
}
|
||||
target.Name = registry.ParseReference(pushResult.Tag).String()
|
||||
target.Hashes = data.Hashes{string(pushResult.Digest.Algorithm()): h}
|
||||
target.Length = int64(pushResult.Size)
|
||||
}
|
||||
}
|
||||
|
||||
err = jsonmessage.DisplayJSONMessagesStream(responseBody, cli.out, cli.outFd, cli.isTerminalOut, handleTarget)
|
||||
if err != nil {
|
||||
// We want trust signatures to always take an explicit tag,
|
||||
// otherwise it will act as an untrusted push.
|
||||
if tag == "" {
|
||||
if err = jsonmessage.DisplayJSONMessagesStream(responseBody, cli.out, cli.outFd, cli.isTerminalOut, nil); err != nil {
|
||||
return err
|
||||
}
|
||||
fmt.Fprintln(cli.out, "No tag specified, skipping trust metadata push")
|
||||
return nil
|
||||
}
|
||||
|
||||
if err = jsonmessage.DisplayJSONMessagesStream(responseBody, cli.out, cli.outFd, cli.isTerminalOut, handleTarget); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if tag == "" {
|
||||
fmt.Fprintf(cli.out, "No tag specified, skipping trust metadata push\n")
|
||||
return nil
|
||||
if cnt > 1 {
|
||||
return fmt.Errorf("internal error: only one call to handleTarget expected")
|
||||
}
|
||||
if len(targets) == 0 {
|
||||
fmt.Fprintf(cli.out, "No targets found, skipping trust metadata push\n")
|
||||
|
||||
if target == nil {
|
||||
fmt.Fprintln(cli.out, "No targets found, please provide a specific tag in order to sign it")
|
||||
return nil
|
||||
}
|
||||
|
||||
fmt.Fprintf(cli.out, "Signing and pushing trust metadata\n")
|
||||
fmt.Fprintln(cli.out, "Signing and pushing trust metadata")
|
||||
|
||||
repo, err := cli.getNotaryRepository(repoInfo, authConfig)
|
||||
repo, err := cli.getNotaryRepository(repoInfo, authConfig, "push", "pull")
|
||||
if err != nil {
|
||||
fmt.Fprintf(cli.out, "Error establishing connection to notary repository: %s\n", err)
|
||||
return err
|
||||
}
|
||||
|
||||
for _, target := range targets {
|
||||
h, err := hex.DecodeString(target.digest.Hex())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
t := &client.Target{
|
||||
Name: target.reference.String(),
|
||||
Hashes: data.Hashes{
|
||||
string(target.digest.Algorithm()): h,
|
||||
},
|
||||
Length: int64(target.size),
|
||||
}
|
||||
if err := repo.AddTarget(t, releasesRole); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := repo.AddTarget(target, releasesRole); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
err = repo.Publish()
|
||||
if _, ok := err.(client.ErrRepoNotInitialized); !ok {
|
||||
if err == nil {
|
||||
fmt.Fprintf(cli.out, "Successfully signed %q:%s\n", repoInfo.FullName(), tag)
|
||||
return nil
|
||||
} else if _, ok := err.(client.ErrRepoNotInitialized); !ok {
|
||||
fmt.Fprintf(cli.out, "Failed to sign %q:%s - %s\n", repoInfo.FullName(), tag, err.Error())
|
||||
return notaryError(repoInfo.FullName(), err)
|
||||
}
|
||||
|
||||
|
@ -444,7 +461,8 @@ func (cli *DockerCli) trustedPush(repoInfo *registry.RepositoryInfo, tag string,
|
|||
rootKeyID = rootPublicKey.ID()
|
||||
}
|
||||
|
||||
if err := repo.Initialize(rootKeyID); err != nil {
|
||||
// Initialize the notary repository with a remotely managed snapshot key
|
||||
if err := repo.Initialize(rootKeyID, data.CanonicalSnapshotRole); err != nil {
|
||||
return notaryError(repoInfo.FullName(), err)
|
||||
}
|
||||
fmt.Fprintf(cli.out, "Finished initializing %q\n", repoInfo.FullName())
|
||||
|
|
|
@ -6,6 +6,7 @@ import (
|
|||
|
||||
Cli "github.com/docker/docker/cli"
|
||||
flag "github.com/docker/docker/pkg/mflag"
|
||||
"github.com/docker/docker/runconfig/opts"
|
||||
"github.com/docker/engine-api/types/container"
|
||||
"github.com/docker/go-units"
|
||||
)
|
||||
|
@ -25,6 +26,7 @@ func (cli *DockerCli) CmdUpdate(args ...string) error {
|
|||
flMemoryReservation := cmd.String([]string{"-memory-reservation"}, "", "Memory soft limit")
|
||||
flMemorySwap := cmd.String([]string{"-memory-swap"}, "", "Swap limit equal to memory plus swap: '-1' to enable unlimited swap")
|
||||
flKernelMemory := cmd.String([]string{"-kernel-memory"}, "", "Kernel memory limit")
|
||||
flRestartPolicy := cmd.String([]string{"-restart"}, "", "Restart policy to apply when a container exits")
|
||||
|
||||
cmd.Require(flag.Min, 1)
|
||||
cmd.ParseFlags(args, true)
|
||||
|
@ -69,6 +71,14 @@ func (cli *DockerCli) CmdUpdate(args ...string) error {
|
|||
}
|
||||
}
|
||||
|
||||
var restartPolicy container.RestartPolicy
|
||||
if *flRestartPolicy != "" {
|
||||
restartPolicy, err = opts.ParseRestartPolicy(*flRestartPolicy)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
resources := container.Resources{
|
||||
BlkioWeight: *flBlkioWeight,
|
||||
CpusetCpus: *flCpusetCpus,
|
||||
|
@ -83,7 +93,8 @@ func (cli *DockerCli) CmdUpdate(args ...string) error {
|
|||
}
|
||||
|
||||
updateConfig := container.UpdateConfig{
|
||||
Resources: resources,
|
||||
Resources: resources,
|
||||
RestartPolicy: restartPolicy,
|
||||
}
|
||||
|
||||
names := cmd.Args()
|
||||
|
|
|
@ -10,7 +10,6 @@ import (
|
|||
gosignal "os/signal"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
|
@ -49,7 +48,7 @@ func (cli *DockerCli) registryAuthenticationPrivilegedFunc(index *registrytypes.
|
|||
return func() (string, error) {
|
||||
fmt.Fprintf(cli.out, "\nPlease login prior to %s:\n", cmdName)
|
||||
indexServer := registry.GetAuthConfigKey(index)
|
||||
authConfig, err := cli.configureAuth("", "", "", indexServer)
|
||||
authConfig, err := cli.configureAuth("", "", indexServer, false)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
@ -59,6 +58,10 @@ func (cli *DockerCli) registryAuthenticationPrivilegedFunc(index *registrytypes.
|
|||
|
||||
func (cli *DockerCli) resizeTty(id string, isExec bool) {
|
||||
height, width := cli.getTtySize()
|
||||
cli.resizeTtyTo(id, height, width, isExec)
|
||||
}
|
||||
|
||||
func (cli *DockerCli) resizeTtyTo(id string, height, width int, isExec bool) {
|
||||
if height == 0 && width == 0 {
|
||||
return
|
||||
}
|
||||
|
@ -181,38 +184,17 @@ func copyToFile(outfile string, r io.Reader) error {
|
|||
// resolveAuthConfig is like registry.ResolveAuthConfig, but if using the
|
||||
// default index, it uses the default index name for the daemon's platform,
|
||||
// not the client's platform.
|
||||
func (cli *DockerCli) resolveAuthConfig(authConfigs map[string]types.AuthConfig, index *registrytypes.IndexInfo) types.AuthConfig {
|
||||
func (cli *DockerCli) resolveAuthConfig(index *registrytypes.IndexInfo) types.AuthConfig {
|
||||
configKey := index.Name
|
||||
if index.Official {
|
||||
configKey = cli.electAuthServer()
|
||||
}
|
||||
|
||||
// First try the happy case
|
||||
if c, found := authConfigs[configKey]; found || index.Official {
|
||||
return c
|
||||
}
|
||||
|
||||
convertToHostname := func(url string) string {
|
||||
stripped := url
|
||||
if strings.HasPrefix(url, "http://") {
|
||||
stripped = strings.Replace(url, "http://", "", 1)
|
||||
} else if strings.HasPrefix(url, "https://") {
|
||||
stripped = strings.Replace(url, "https://", "", 1)
|
||||
}
|
||||
|
||||
nameParts := strings.SplitN(stripped, "/", 2)
|
||||
|
||||
return nameParts[0]
|
||||
}
|
||||
|
||||
// Maybe they have a legacy config file, we will iterate the keys converting
|
||||
// them to the new format and testing
|
||||
for registry, ac := range authConfigs {
|
||||
if configKey == convertToHostname(registry) {
|
||||
return ac
|
||||
}
|
||||
}
|
||||
|
||||
// When all else fails, return an empty auth config
|
||||
return types.AuthConfig{}
|
||||
a, _ := getCredentials(cli.configFile, configKey)
|
||||
return a
|
||||
}
|
||||
|
||||
func (cli *DockerCli) retrieveAuthConfigs() map[string]types.AuthConfig {
|
||||
acs, _ := getAllCredentials(cli.configFile)
|
||||
return acs
|
||||
}
|
||||
|
|
|
@ -2,6 +2,7 @@ package client
|
|||
|
||||
import (
|
||||
"fmt"
|
||||
"sort"
|
||||
"text/tabwriter"
|
||||
|
||||
Cli "github.com/docker/docker/cli"
|
||||
|
@ -72,6 +73,7 @@ func (cli *DockerCli) CmdVolumeLs(args ...string) error {
|
|||
fmt.Fprintf(w, "\n")
|
||||
}
|
||||
|
||||
sort.Sort(byVolumeName(volumes.Volumes))
|
||||
for _, vol := range volumes.Volumes {
|
||||
if *quiet {
|
||||
fmt.Fprintln(w, vol.Name)
|
||||
|
@ -83,6 +85,14 @@ func (cli *DockerCli) CmdVolumeLs(args ...string) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
type byVolumeName []*types.Volume

func (r byVolumeName) Len() int      { return len(r) }
func (r byVolumeName) Swap(i, j int) { r[i], r[j] = r[j], r[i] }
func (r byVolumeName) Less(i, j int) bool {
	return r[i].Name < r[j].Name
}
|
||||
|
||||
// CmdVolumeInspect displays low-level information on one or more volumes.
|
||||
//
|
||||
// Usage: docker volume inspect [OPTIONS] VOLUME [VOLUME...]
|
||||
|
|
|
@ -23,9 +23,6 @@ const (
|
|||
// MinVersion represents Minimum REST API version supported
|
||||
MinVersion version.Version = "1.12"
|
||||
|
||||
// DefaultDockerfileName is the Default filename with Docker commands, read by docker build
|
||||
DefaultDockerfileName string = "Dockerfile"
|
||||
|
||||
// NoBaseImageSpecifier is the symbol used by the FROM
|
||||
// command to specify that no base image is to be used.
|
||||
NoBaseImageSpecifier string = "scratch"
|
||||
|
|
|
@ -310,7 +310,7 @@ func TestLoadOrCreateTrustKeyCreateKey(t *testing.T) {
|
|||
}
|
||||
|
||||
// With the need to create the folder hierarchy, as tmpKeyFile is in a path
|
||||
// where some folder do not exists.
|
||||
// where some folders do not exist.
|
||||
tmpKeyFile = filepath.Join(tmpKeyFolderPath, "folder/hierarchy/keyfile")
|
||||
|
||||
if key, err := LoadOrCreateTrustKey(tmpKeyFile); err != nil || key == nil {
|
||||
|
|
69
api/server/httputils/errors.go
Normal file
@ -0,0 +1,69 @@
|
|||
package httputils
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
"strings"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
)
|
||||
|
||||
// httpStatusError is an interface
|
||||
// that errors with custom status codes
|
||||
// implement to tell the api layer
|
||||
// which response status to set.
|
||||
type httpStatusError interface {
|
||||
HTTPErrorStatusCode() int
|
||||
}
|
||||
|
||||
// inputValidationError is an interface
|
||||
// that errors generated by invalid
|
||||
// inputs can implement to tell the
|
||||
// api layer to set a 400 status code
|
||||
// in the response.
|
||||
type inputValidationError interface {
|
||||
IsValidationError() bool
|
||||
}
|
||||
|
||||
// WriteError decodes a specific docker error and sends it in the response.
|
||||
func WriteError(w http.ResponseWriter, err error) {
|
||||
if err == nil || w == nil {
|
||||
logrus.WithFields(logrus.Fields{"error": err, "writer": w}).Error("unexpected HTTP error handling")
|
||||
return
|
||||
}
|
||||
|
||||
var statusCode int
|
||||
errMsg := err.Error()
|
||||
|
||||
switch e := err.(type) {
|
||||
case httpStatusError:
|
||||
statusCode = e.HTTPErrorStatusCode()
|
||||
case inputValidationError:
|
||||
statusCode = http.StatusBadRequest
|
||||
default:
|
||||
// FIXME: this is brittle and should not be necessary, but we still need to identify if
|
||||
// there are errors falling back into this logic.
|
||||
// If we need to differentiate between different possible error types,
|
||||
// we should create appropriate error types that implement the httpStatusError interface.
|
||||
errStr := strings.ToLower(errMsg)
|
||||
for keyword, status := range map[string]int{
|
||||
"not found": http.StatusNotFound,
|
||||
"no such": http.StatusNotFound,
|
||||
"bad parameter": http.StatusBadRequest,
|
||||
"conflict": http.StatusConflict,
|
||||
"impossible": http.StatusNotAcceptable,
|
||||
"wrong login/password": http.StatusUnauthorized,
|
||||
"hasn't been activated": http.StatusForbidden,
|
||||
} {
|
||||
if strings.Contains(errStr, keyword) {
|
||||
statusCode = status
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if statusCode == 0 {
|
||||
statusCode = http.StatusInternalServerError
|
||||
}
|
||||
|
||||
http.Error(w, errMsg, statusCode)
|
||||
}
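The two interfaces introduced here let an error pick its own HTTP status instead of relying on the keyword matching in the default branch. A hedged sketch of how an error type could opt in; the interface is copied locally for illustration and `notFoundError` is a hypothetical type, not one from the codebase.

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// httpStatusError is copied here for illustration; the real interface lives in
// api/server/httputils.
type httpStatusError interface {
	HTTPErrorStatusCode() int
}

// notFoundError is a hypothetical error type that maps itself to 404.
type notFoundError struct{ error }

func (notFoundError) HTTPErrorStatusCode() int { return http.StatusNotFound }

// statusFor mimics the first branch of WriteError: ask the error for its
// status and fall back to 500 when it has no opinion.
func statusFor(err error) int {
	if e, ok := err.(httpStatusError); ok {
		return e.HTTPErrorStatusCode()
	}
	return http.StatusInternalServerError
}

func main() {
	fmt.Println(statusFor(notFoundError{errors.New("no such container: web")})) // 404
	fmt.Println(statusFor(errors.New("boom")))                                  // 500
}
```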
@ -9,8 +9,6 @@ import (
|
|||
|
||||
"golang.org/x/net/context"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/distribution/registry/api/errcode"
|
||||
"github.com/docker/docker/api"
|
||||
"github.com/docker/docker/pkg/version"
|
||||
)
|
||||
|
@ -19,7 +17,7 @@ import (
|
|||
const APIVersionKey = "api-version"
|
||||
|
||||
// APIFunc is an adapter to allow the use of ordinary functions as Docker API endpoints.
|
||||
// Any function that has the appropriate signature can be register as a API endpoint (e.g. getVersion).
|
||||
// Any function that has the appropriate signature can be registered as an API endpoint (e.g. getVersion).
|
||||
type APIFunc func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error
|
||||
|
||||
// HijackConnection interrupts the http response writer to get the
|
||||
|
@ -77,7 +75,7 @@ func ParseForm(r *http.Request) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
// ParseMultipartForm ensure the request form is parsed, even with invalid content types.
|
||||
// ParseMultipartForm ensures the request form is parsed, even with invalid content types.
|
||||
func ParseMultipartForm(r *http.Request) error {
|
||||
if err := r.ParseMultipartForm(4096); err != nil && !strings.HasPrefix(err.Error(), "mime:") {
|
||||
return err
|
||||
|
@ -85,78 +83,6 @@ func ParseMultipartForm(r *http.Request) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
// WriteError decodes a specific docker error and sends it in the response.
|
||||
func WriteError(w http.ResponseWriter, err error) {
|
||||
if err == nil || w == nil {
|
||||
logrus.WithFields(logrus.Fields{"error": err, "writer": w}).Error("unexpected HTTP error handling")
|
||||
return
|
||||
}
|
||||
|
||||
statusCode := http.StatusInternalServerError
|
||||
errMsg := err.Error()
|
||||
|
||||
// Based on the type of error we get we need to process things
|
||||
// slightly differently to extract the error message.
|
||||
// In the 'errcode.*' cases there are two different type of
|
||||
// error that could be returned. errocode.ErrorCode is the base
|
||||
// type of error object - it is just an 'int' that can then be
|
||||
// used as the look-up key to find the message. errorcode.Error
|
||||
// extends errorcode.Error by adding error-instance specific
|
||||
// data, like 'details' or variable strings to be inserted into
|
||||
// the message.
|
||||
//
|
||||
// Ideally, we should just be able to call err.Error() for all
|
||||
// cases but the errcode package doesn't support that yet.
|
||||
//
|
||||
// Additionally, in both errcode cases, there might be an http
|
||||
// status code associated with it, and if so use it.
|
||||
switch err.(type) {
|
||||
case errcode.ErrorCode:
|
||||
daError, _ := err.(errcode.ErrorCode)
|
||||
statusCode = daError.Descriptor().HTTPStatusCode
|
||||
errMsg = daError.Message()
|
||||
|
||||
case errcode.Error:
|
||||
// For reference, if you're looking for a particular error
|
||||
// then you can do something like :
|
||||
// import ( derr "github.com/docker/docker/errors" )
|
||||
// if daError.ErrorCode() == derr.ErrorCodeNoSuchContainer { ... }
|
||||
|
||||
daError, _ := err.(errcode.Error)
|
||||
statusCode = daError.ErrorCode().Descriptor().HTTPStatusCode
|
||||
errMsg = daError.Message
|
||||
|
||||
default:
|
||||
// This part of will be removed once we've
|
||||
// converted everything over to use the errcode package
|
||||
|
||||
// FIXME: this is brittle and should not be necessary.
|
||||
// If we need to differentiate between different possible error types,
|
||||
// we should create appropriate error types with clearly defined meaning
|
||||
errStr := strings.ToLower(err.Error())
|
||||
for keyword, status := range map[string]int{
|
||||
"not found": http.StatusNotFound,
|
||||
"no such": http.StatusNotFound,
|
||||
"bad parameter": http.StatusBadRequest,
|
||||
"conflict": http.StatusConflict,
|
||||
"impossible": http.StatusNotAcceptable,
|
||||
"wrong login/password": http.StatusUnauthorized,
|
||||
"hasn't been activated": http.StatusForbidden,
|
||||
} {
|
||||
if strings.Contains(errStr, keyword) {
|
||||
statusCode = status
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if statusCode == 0 {
|
||||
statusCode = http.StatusInternalServerError
|
||||
}
|
||||
|
||||
http.Error(w, errMsg, statusCode)
|
||||
}
|
||||
|
||||
// WriteJSON writes the value v to the http response stream as json with standard json encoding.
|
||||
func WriteJSON(w http.ResponseWriter, code int, v interface{}) error {
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
|
|
|
@ -1,195 +1,41 @@
|
|||
package server
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"encoding/json"
|
||||
"io"
|
||||
"net/http"
|
||||
"runtime"
|
||||
"strings"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/docker/api"
|
||||
"github.com/docker/docker/api/server/httputils"
|
||||
"github.com/docker/docker/api/server/middleware"
|
||||
"github.com/docker/docker/dockerversion"
|
||||
"github.com/docker/docker/errors"
|
||||
"github.com/docker/docker/pkg/authorization"
|
||||
"github.com/docker/docker/pkg/ioutils"
|
||||
"github.com/docker/docker/pkg/version"
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
|
||||
// middleware is an adapter to allow the use of ordinary functions as Docker API filters.
|
||||
// Any function that has the appropriate signature can be register as a middleware.
|
||||
type middleware func(handler httputils.APIFunc) httputils.APIFunc
|
||||
|
||||
// debugRequestMiddleware dumps the request to logger
|
||||
func debugRequestMiddleware(handler httputils.APIFunc) httputils.APIFunc {
|
||||
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
|
||||
logrus.Debugf("%s %s", r.Method, r.RequestURI)
|
||||
|
||||
if r.Method != "POST" {
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
if err := httputils.CheckForJSON(r); err != nil {
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
maxBodySize := 4096 // 4KB
|
||||
if r.ContentLength > int64(maxBodySize) {
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
|
||||
body := r.Body
|
||||
bufReader := bufio.NewReaderSize(body, maxBodySize)
|
||||
r.Body = ioutils.NewReadCloserWrapper(bufReader, func() error { return body.Close() })
|
||||
|
||||
b, err := bufReader.Peek(maxBodySize)
|
||||
if err != io.EOF {
|
||||
// either there was an error reading, or the buffer is full (in which case the request is too large)
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
|
||||
var postForm map[string]interface{}
|
||||
if err := json.Unmarshal(b, &postForm); err == nil {
|
||||
if _, exists := postForm["password"]; exists {
|
||||
postForm["password"] = "*****"
|
||||
}
|
||||
formStr, errMarshal := json.Marshal(postForm)
|
||||
if errMarshal == nil {
|
||||
logrus.Debugf("form data: %s", string(formStr))
|
||||
} else {
|
||||
logrus.Debugf("form data: %q", postForm)
|
||||
}
|
||||
}
|
||||
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
}
|
||||
|
||||
// authorizationMiddleware perform authorization on the request.
|
||||
func (s *Server) authorizationMiddleware(handler httputils.APIFunc) httputils.APIFunc {
|
||||
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
|
||||
// FIXME: fill when authN gets in
|
||||
// User and UserAuthNMethod are taken from AuthN plugins
|
||||
// Currently tracked in https://github.com/docker/docker/pull/13994
|
||||
user := ""
|
||||
userAuthNMethod := ""
|
||||
authCtx := authorization.NewCtx(s.authZPlugins, user, userAuthNMethod, r.Method, r.RequestURI)
|
||||
|
||||
if err := authCtx.AuthZRequest(w, r); err != nil {
|
||||
logrus.Errorf("AuthZRequest for %s %s returned error: %s", r.Method, r.RequestURI, err)
|
||||
return err
|
||||
}
|
||||
|
||||
rw := authorization.NewResponseModifier(w)
|
||||
|
||||
if err := handler(ctx, rw, r, vars); err != nil {
|
||||
logrus.Errorf("Handler for %s %s returned error: %s", r.Method, r.RequestURI, err)
|
||||
return err
|
||||
}
|
||||
|
||||
if err := authCtx.AuthZResponse(rw, r); err != nil {
|
||||
logrus.Errorf("AuthZResponse for %s %s returned error: %s", r.Method, r.RequestURI, err)
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// userAgentMiddleware checks the User-Agent header looking for a valid docker client spec.
|
||||
func (s *Server) userAgentMiddleware(handler httputils.APIFunc) httputils.APIFunc {
|
||||
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
|
||||
if strings.Contains(r.Header.Get("User-Agent"), "Docker-Client/") {
|
||||
dockerVersion := version.Version(s.cfg.Version)
|
||||
|
||||
userAgent := strings.Split(r.Header.Get("User-Agent"), "/")
|
||||
|
||||
// v1.20 onwards includes the GOOS of the client after the version
|
||||
// such as Docker/1.7.0 (linux)
|
||||
if len(userAgent) == 2 && strings.Contains(userAgent[1], " ") {
|
||||
userAgent[1] = strings.Split(userAgent[1], " ")[0]
|
||||
}
|
||||
|
||||
if len(userAgent) == 2 && !dockerVersion.Equal(version.Version(userAgent[1])) {
|
||||
logrus.Warnf("Warning: client and server don't have the same version (client: %s, server: %s)", userAgent[1], dockerVersion)
|
||||
}
|
||||
}
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
}
|
||||
|
||||
// corsMiddleware sets the CORS header expectations in the server.
|
||||
func (s *Server) corsMiddleware(handler httputils.APIFunc) httputils.APIFunc {
|
||||
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
|
||||
// If "api-cors-header" is not given, but "api-enable-cors" is true, we set cors to "*"
|
||||
// otherwise, all head values will be passed to HTTP handler
|
||||
corsHeaders := s.cfg.CorsHeaders
|
||||
if corsHeaders == "" && s.cfg.EnableCors {
|
||||
corsHeaders = "*"
|
||||
}
|
||||
|
||||
if corsHeaders != "" {
|
||||
writeCorsHeaders(w, r, corsHeaders)
|
||||
}
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
}
|
||||
|
||||
// versionMiddleware checks the api version requirements before passing the request to the server handler.
|
||||
func versionMiddleware(handler httputils.APIFunc) httputils.APIFunc {
|
||||
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
|
||||
apiVersion := version.Version(vars["version"])
|
||||
if apiVersion == "" {
|
||||
apiVersion = api.DefaultVersion
|
||||
}
|
||||
|
||||
if apiVersion.GreaterThan(api.DefaultVersion) {
|
||||
return errors.ErrorCodeNewerClientVersion.WithArgs(apiVersion, api.DefaultVersion)
|
||||
}
|
||||
if apiVersion.LessThan(api.MinVersion) {
|
||||
return errors.ErrorCodeOldClientVersion.WithArgs(apiVersion, api.MinVersion)
|
||||
}
|
||||
|
||||
w.Header().Set("Server", "Docker/"+dockerversion.Version+" ("+runtime.GOOS+")")
|
||||
ctx = context.WithValue(ctx, httputils.APIVersionKey, apiVersion)
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
}
|
||||
|
||||
// handleWithGlobalMiddlwares wraps the handler function for a request with
|
||||
// the server's global middlewares. The order of the middlewares is backwards,
|
||||
// meaning that the first in the list will be evaluated last.
|
||||
//
|
||||
// Example: handleWithGlobalMiddlewares(s.getContainersName)
|
||||
//
|
||||
// s.loggingMiddleware(
|
||||
// s.userAgentMiddleware(
|
||||
// s.corsMiddleware(
|
||||
// versionMiddleware(s.getContainersName)
|
||||
// )
|
||||
// )
|
||||
// )
|
||||
// )
|
||||
func (s *Server) handleWithGlobalMiddlewares(handler httputils.APIFunc) httputils.APIFunc {
|
||||
middlewares := []middleware{
|
||||
versionMiddleware,
|
||||
s.corsMiddleware,
|
||||
s.userAgentMiddleware,
|
||||
next := handler
|
||||
|
||||
handleVersion := middleware.NewVersionMiddleware(dockerversion.Version, api.DefaultVersion, api.MinVersion)
|
||||
next = handleVersion(next)
|
||||
|
||||
if s.cfg.EnableCors {
|
||||
handleCORS := middleware.NewCORSMiddleware(s.cfg.CorsHeaders)
|
||||
next = handleCORS(next)
|
||||
}
|
||||
|
||||
handleUserAgent := middleware.NewUserAgentMiddleware(s.cfg.Version)
|
||||
next = handleUserAgent(next)
|
||||
|
||||
// Only want this on debug level
|
||||
if s.cfg.Logging && logrus.GetLevel() == logrus.DebugLevel {
|
||||
middlewares = append(middlewares, debugRequestMiddleware)
|
||||
next = middleware.DebugRequestMiddleware(next)
|
||||
}
|
||||
|
||||
if len(s.cfg.AuthorizationPluginNames) > 0 {
|
||||
s.authZPlugins = authorization.NewPlugins(s.cfg.AuthorizationPluginNames)
|
||||
middlewares = append(middlewares, s.authorizationMiddleware)
|
||||
handleAuthorization := middleware.NewAuthorizationMiddleware(s.authZPlugins)
|
||||
next = handleAuthorization(next)
|
||||
}
|
||||
|
||||
h := handler
|
||||
for _, m := range middlewares {
|
||||
h = m(h)
|
||||
}
|
||||
return h
|
||||
return next
|
||||
}
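The refactor replaces the `middlewares` slice with explicit wrapping: each constructor returns a wrapper around `next`, and the wrapper applied last ends up outermost, so it runs first. A minimal sketch of that pattern using plain `http.Handler`s rather than the `httputils.APIFunc` signature:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

type middleware func(http.Handler) http.Handler

// tag returns a middleware that logs its name before calling the next handler.
func tag(name string) middleware {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Println("enter", name)
			next.ServeHTTP(w, r)
		})
	}
}

func main() {
	var next http.Handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Println("handler")
	})

	// The wrapper applied last ends up outermost and therefore runs first.
	next = tag("inner")(next)
	next = tag("outer")(next)

	next.ServeHTTP(httptest.NewRecorder(), httptest.NewRequest("GET", "/containers/json", nil))
	// Prints: enter outer, enter inner, handler
}
```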
42
api/server/middleware/authorization.go
Normal file
@ -0,0 +1,42 @@
|
|||
package middleware
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/docker/api/server/httputils"
|
||||
"github.com/docker/docker/pkg/authorization"
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
|
||||
// NewAuthorizationMiddleware creates a new Authorization middleware.
|
||||
func NewAuthorizationMiddleware(plugins []authorization.Plugin) Middleware {
|
||||
return func(handler httputils.APIFunc) httputils.APIFunc {
|
||||
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
|
||||
// FIXME: fill when authN gets in
|
||||
// User and UserAuthNMethod are taken from AuthN plugins
|
||||
// Currently tracked in https://github.com/docker/docker/pull/13994
|
||||
user := ""
|
||||
userAuthNMethod := ""
|
||||
authCtx := authorization.NewCtx(plugins, user, userAuthNMethod, r.Method, r.RequestURI)
|
||||
|
||||
if err := authCtx.AuthZRequest(w, r); err != nil {
|
||||
logrus.Errorf("AuthZRequest for %s %s returned error: %s", r.Method, r.RequestURI, err)
|
||||
return err
|
||||
}
|
||||
|
||||
rw := authorization.NewResponseModifier(w)
|
||||
|
||||
if err := handler(ctx, rw, r, vars); err != nil {
|
||||
logrus.Errorf("Handler for %s %s returned error: %s", r.Method, r.RequestURI, err)
|
||||
return err
|
||||
}
|
||||
|
||||
if err := authCtx.AuthZResponse(rw, r); err != nil {
|
||||
logrus.Errorf("AuthZResponse for %s %s returned error: %s", r.Method, r.RequestURI, err)
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
}
|
||||
}
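A simplified sketch of the request/response authorization flow in the new middleware, using a hypothetical `plugin` interface in place of the real `pkg/authorization` types: every plugin sees the request before the handler runs and the response afterwards.

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// plugin is a hypothetical stand-in for an authorization plugin.
type plugin interface {
	AuthZRequest(r *http.Request) error
	AuthZResponse(r *http.Request) error
}

// denyDelete rejects DELETE requests and lets everything else through.
type denyDelete struct{}

func (denyDelete) AuthZRequest(r *http.Request) error {
	if r.Method == "DELETE" {
		return errors.New("DELETE requests are not authorized")
	}
	return nil
}
func (denyDelete) AuthZResponse(*http.Request) error { return nil }

// withAuthZ wraps a handler so every plugin checks the request, then the response.
func withAuthZ(plugins []plugin, handler func(*http.Request) error) func(*http.Request) error {
	return func(r *http.Request) error {
		for _, p := range plugins {
			if err := p.AuthZRequest(r); err != nil {
				return err
			}
		}
		if err := handler(r); err != nil {
			return err
		}
		for _, p := range plugins {
			if err := p.AuthZResponse(r); err != nil {
				return err
			}
		}
		return nil
	}
}

func main() {
	h := withAuthZ([]plugin{denyDelete{}}, func(*http.Request) error { return nil })
	req, _ := http.NewRequest("DELETE", "http://localhost/containers/web", nil)
	fmt.Println(h(req)) // DELETE requests are not authorized
}
```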
33
api/server/middleware/cors.go
Normal file
@ -0,0 +1,33 @@
|
|||
package middleware
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/docker/api/server/httputils"
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
|
||||
// NewCORSMiddleware creates a new CORS middleware.
|
||||
func NewCORSMiddleware(defaultHeaders string) Middleware {
|
||||
return func(handler httputils.APIFunc) httputils.APIFunc {
|
||||
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
|
||||
// If "api-cors-header" is not given, but "api-enable-cors" is true, we set cors to "*"
|
||||
// otherwise, all header values will be passed to the HTTP handler
|
||||
corsHeaders := defaultHeaders
|
||||
if corsHeaders == "" {
|
||||
corsHeaders = "*"
|
||||
}
|
||||
|
||||
writeCorsHeaders(w, r, corsHeaders)
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func writeCorsHeaders(w http.ResponseWriter, r *http.Request, corsHeaders string) {
|
||||
logrus.Debugf("CORS header is enabled and set to: %s", corsHeaders)
|
||||
w.Header().Add("Access-Control-Allow-Origin", corsHeaders)
|
||||
w.Header().Add("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept, X-Registry-Auth")
|
||||
w.Header().Add("Access-Control-Allow-Methods", "HEAD, GET, POST, DELETE, PUT, OPTIONS")
|
||||
}
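A small sketch of how the header behaviour could be exercised with `httptest`; the handler below just sets the same three headers, so this is illustrative rather than a test from the repository.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// setCORS writes the same three headers as the middleware above.
func setCORS(w http.ResponseWriter, origin string) {
	w.Header().Add("Access-Control-Allow-Origin", origin)
	w.Header().Add("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept, X-Registry-Auth")
	w.Header().Add("Access-Control-Allow-Methods", "HEAD, GET, POST, DELETE, PUT, OPTIONS")
}

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// "*" mirrors the fallback used when no explicit CORS header is configured.
		setCORS(w, "*")
	})

	rec := httptest.NewRecorder()
	handler.ServeHTTP(rec, httptest.NewRequest("OPTIONS", "/_ping", nil))
	fmt.Println(rec.Header().Get("Access-Control-Allow-Origin")) // *
}
```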
56
api/server/middleware/debug.go
Normal file
@ -0,0 +1,56 @@
|
|||
package middleware
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"encoding/json"
|
||||
"io"
|
||||
"net/http"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/docker/api/server/httputils"
|
||||
"github.com/docker/docker/pkg/ioutils"
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
|
||||
// DebugRequestMiddleware dumps the request to logger
|
||||
func DebugRequestMiddleware(handler httputils.APIFunc) httputils.APIFunc {
|
||||
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
|
||||
logrus.Debugf("Calling %s %s", r.Method, r.RequestURI)
|
||||
|
||||
if r.Method != "POST" {
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
if err := httputils.CheckForJSON(r); err != nil {
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
maxBodySize := 4096 // 4KB
|
||||
if r.ContentLength > int64(maxBodySize) {
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
|
||||
body := r.Body
|
||||
bufReader := bufio.NewReaderSize(body, maxBodySize)
|
||||
r.Body = ioutils.NewReadCloserWrapper(bufReader, func() error { return body.Close() })
|
||||
|
||||
b, err := bufReader.Peek(maxBodySize)
|
||||
if err != io.EOF {
|
||||
// either there was an error reading, or the buffer is full (in which case the request is too large)
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
|
||||
var postForm map[string]interface{}
|
||||
if err := json.Unmarshal(b, &postForm); err == nil {
|
||||
if _, exists := postForm["password"]; exists {
|
||||
postForm["password"] = "*****"
|
||||
}
|
||||
formStr, errMarshal := json.Marshal(postForm)
|
||||
if errMarshal == nil {
|
||||
logrus.Debugf("form data: %s", string(formStr))
|
||||
} else {
|
||||
logrus.Debugf("form data: %q", postForm)
|
||||
}
|
||||
}
|
||||
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
}
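The redaction step above only masks a top-level `password` field of the decoded form before it is logged. A standalone sketch of that behaviour:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// redactPassword masks a top-level "password" field before the form is logged.
func redactPassword(body []byte) string {
	var form map[string]interface{}
	if err := json.Unmarshal(body, &form); err != nil {
		return fmt.Sprintf("%q", body) // not JSON, log the raw body quoted
	}
	if _, ok := form["password"]; ok {
		form["password"] = "*****"
	}
	out, err := json.Marshal(form)
	if err != nil {
		return fmt.Sprintf("%q", form)
	}
	return string(out)
}

func main() {
	fmt.Println(redactPassword([]byte(`{"username":"alice","password":"s3cret"}`)))
	// {"password":"*****","username":"alice"}
}
```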
7
api/server/middleware/middleware.go
Normal file
@ -0,0 +1,7 @@
|
|||
package middleware
|
||||
|
||||
import "github.com/docker/docker/api/server/httputils"
|
||||
|
||||
// Middleware is an adapter to allow the use of ordinary functions as Docker API filters.
|
||||
// Any function that has the appropriate signature can be registered as a middleware.
|
||||
type Middleware func(handler httputils.APIFunc) httputils.APIFunc
|
35
api/server/middleware/user_agent.go
Normal file
@ -0,0 +1,35 @@
|
|||
package middleware
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
"strings"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/docker/api/server/httputils"
|
||||
"github.com/docker/docker/pkg/version"
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
|
||||
// NewUserAgentMiddleware creates a new UserAgent middleware.
|
||||
func NewUserAgentMiddleware(versionCheck string) Middleware {
|
||||
serverVersion := version.Version(versionCheck)
|
||||
|
||||
return func(handler httputils.APIFunc) httputils.APIFunc {
|
||||
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
|
||||
if strings.Contains(r.Header.Get("User-Agent"), "Docker-Client/") {
|
||||
userAgent := strings.Split(r.Header.Get("User-Agent"), "/")
|
||||
|
||||
// v1.20 onwards includes the GOOS of the client after the version
|
||||
// such as Docker/1.7.0 (linux)
|
||||
if len(userAgent) == 2 && strings.Contains(userAgent[1], " ") {
|
||||
userAgent[1] = strings.Split(userAgent[1], " ")[0]
|
||||
}
|
||||
|
||||
if len(userAgent) == 2 && !serverVersion.Equal(version.Version(userAgent[1])) {
|
||||
logrus.Debugf("Client and server don't have the same version (client: %s, server: %s)", userAgent[1], serverVersion)
|
||||
}
|
||||
}
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
}
|
||||
}
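A standalone sketch of the User-Agent parsing above: the version is the part after the slash, and from API v1.20 onwards a trailing ` (GOOS)` suffix is stripped before the comparison against the server version.

```go
package main

import (
	"fmt"
	"strings"
)

// clientVersion extracts the version from a header such as
// "Docker-Client/1.7.0 (linux)"; it returns "" if the header is not recognised.
func clientVersion(userAgent string) string {
	if !strings.Contains(userAgent, "Docker-Client/") {
		return ""
	}
	parts := strings.Split(userAgent, "/")
	if len(parts) != 2 {
		return ""
	}
	// v1.20 onwards appends the client GOOS, e.g. "1.7.0 (linux)".
	return strings.Split(parts[1], " ")[0]
}

func main() {
	fmt.Println(clientVersion("Docker-Client/1.7.0 (linux)")) // 1.7.0
}
```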
45
api/server/middleware/version.go
Normal file
@ -0,0 +1,45 @@
|
|||
package middleware
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
"runtime"
|
||||
|
||||
"github.com/docker/docker/api/server/httputils"
|
||||
"github.com/docker/docker/pkg/version"
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
|
||||
type badRequestError struct {
|
||||
error
|
||||
}
|
||||
|
||||
func (badRequestError) HTTPErrorStatusCode() int {
|
||||
return http.StatusBadRequest
|
||||
}
|
||||
|
||||
// NewVersionMiddleware creates a new Version middleware.
|
||||
func NewVersionMiddleware(versionCheck string, defaultVersion, minVersion version.Version) Middleware {
|
||||
serverVersion := version.Version(versionCheck)
|
||||
|
||||
return func(handler httputils.APIFunc) httputils.APIFunc {
|
||||
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
|
||||
apiVersion := version.Version(vars["version"])
|
||||
if apiVersion == "" {
|
||||
apiVersion = defaultVersion
|
||||
}
|
||||
|
||||
if apiVersion.GreaterThan(defaultVersion) {
|
||||
return badRequestError{fmt.Errorf("client is newer than server (client API version: %s, server API version: %s)", apiVersion, defaultVersion)}
|
||||
}
|
||||
if apiVersion.LessThan(minVersion) {
|
||||
return badRequestError{fmt.Errorf("client version %s is too old. Minimum supported API version is %s, please upgrade your client to a newer version", apiVersion, minVersion)}
|
||||
}
|
||||
|
||||
header := fmt.Sprintf("Docker/%s (%s)", serverVersion, runtime.GOOS)
|
||||
w.Header().Set("Server", header)
|
||||
ctx = context.WithValue(ctx, httputils.APIVersionKey, apiVersion)
|
||||
return handler(ctx, w, r, vars)
|
||||
}
|
||||
}
|
||||
}
|
|
@ -1,13 +1,13 @@
|
|||
package server
|
||||
package middleware
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/docker/distribution/registry/api/errcode"
|
||||
"github.com/docker/docker/api/server/httputils"
|
||||
"github.com/docker/docker/errors"
|
||||
"github.com/docker/docker/pkg/version"
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
|
||||
|
@ -19,7 +19,10 @@ func TestVersionMiddleware(t *testing.T) {
|
|||
return nil
|
||||
}
|
||||
|
||||
h := versionMiddleware(handler)
|
||||
defaultVersion := version.Version("1.10.0")
|
||||
minVersion := version.Version("1.2.0")
|
||||
m := NewVersionMiddleware(defaultVersion.String(), defaultVersion, minVersion)
|
||||
h := m(handler)
|
||||
|
||||
req, _ := http.NewRequest("GET", "/containers/json", nil)
|
||||
resp := httptest.NewRecorder()
|
||||
|
@ -37,7 +40,10 @@ func TestVersionMiddlewareWithErrors(t *testing.T) {
|
|||
return nil
|
||||
}
|
||||
|
||||
h := versionMiddleware(handler)
|
||||
defaultVersion := version.Version("1.10.0")
|
||||
minVersion := version.Version("1.2.0")
|
||||
m := NewVersionMiddleware(defaultVersion.String(), defaultVersion, minVersion)
|
||||
h := m(handler)
|
||||
|
||||
req, _ := http.NewRequest("GET", "/containers/json", nil)
|
||||
resp := httptest.NewRecorder()
|
||||
|
@ -45,13 +51,14 @@ func TestVersionMiddlewareWithErrors(t *testing.T) {
|
|||
|
||||
vars := map[string]string{"version": "0.1"}
|
||||
err := h(ctx, resp, req, vars)
|
||||
if derr, ok := err.(errcode.Error); !ok || derr.ErrorCode() != errors.ErrorCodeOldClientVersion {
|
||||
t.Fatalf("Expected ErrorCodeOldClientVersion, got %v", err)
|
||||
|
||||
if !strings.Contains(err.Error(), "client version 0.1 is too old. Minimum supported API version is 1.2.0") {
|
||||
t.Fatalf("Expected too old client error, got %v", err)
|
||||
}
|
||||
|
||||
vars["version"] = "100000"
|
||||
err = h(ctx, resp, req, vars)
|
||||
if derr, ok := err.(errcode.Error); !ok || derr.ErrorCode() != errors.ErrorCodeNewerClientVersion {
|
||||
t.Fatalf("Expected ErrorCodeNewerClientVersion, got %v", err)
|
||||
if !strings.Contains(err.Error(), "client is newer than server") {
|
||||
t.Fatalf("Expected client newer than server error, got %v", err)
|
||||
}
|
||||
}
|
|
@ -9,8 +9,10 @@ import (
|
|||
"github.com/gorilla/mux"
|
||||
)
|
||||
|
||||
func profilerSetup(mainRouter *mux.Router, path string) {
|
||||
var r = mainRouter.PathPrefix(path).Subrouter()
|
||||
const debugPathPrefix = "/debug/"
|
||||
|
||||
func profilerSetup(mainRouter *mux.Router) {
|
||||
var r = mainRouter.PathPrefix(debugPathPrefix).Subrouter()
|
||||
r.HandleFunc("/vars", expVars)
|
||||
r.HandleFunc("/pprof/", pprof.Index)
|
||||
r.HandleFunc("/pprof/cmdline", pprof.Cmdline)
@ -4,7 +4,6 @@ import (
|
|||
"bytes"
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
|
@ -17,7 +16,6 @@ import (
|
|||
"github.com/docker/docker/pkg/ioutils"
|
||||
"github.com/docker/docker/pkg/progress"
|
||||
"github.com/docker/docker/pkg/streamformatter"
|
||||
"github.com/docker/docker/utils"
|
||||
"github.com/docker/engine-api/types"
|
||||
"github.com/docker/engine-api/types/container"
|
||||
"github.com/docker/go-units"
|
||||
|
@ -60,11 +58,11 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
|
|||
options.ShmSize = shmSize
|
||||
}
|
||||
|
||||
if i := container.IsolationLevel(r.FormValue("isolation")); i != "" {
|
||||
if !container.IsolationLevel.IsValid(i) {
|
||||
if i := container.Isolation(r.FormValue("isolation")); i != "" {
|
||||
if !container.Isolation.IsValid(i) {
|
||||
return nil, fmt.Errorf("Unsupported isolation: %q", i)
|
||||
}
|
||||
options.IsolationLevel = i
|
||||
options.Isolation = i
|
||||
}
|
||||
|
||||
var buildUlimits = []*units.Ulimit{}
|
||||
|
@ -117,7 +115,7 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
|
|||
if !output.Flushed() {
|
||||
return err
|
||||
}
|
||||
_, err = w.Write(sf.FormatError(errors.New(utils.GetErrorMessage(err))))
|
||||
_, err = w.Write(sf.FormatError(err))
|
||||
if err != nil {
|
||||
logrus.Warnf("could not write error response: %v", err)
|
||||
}
|
||||
|
@ -159,6 +157,8 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
|
|||
buildOptions.Dockerfile = dockerfileName
|
||||
}
|
||||
|
||||
buildOptions.AuthConfigs = authConfigs
|
||||
|
||||
out = output
|
||||
if buildOptions.SuppressOutput {
|
||||
out = notVerboseBuffer
|
||||
|
|
|
@ -5,7 +5,6 @@ import (
|
|||
"time"
|
||||
|
||||
"github.com/docker/docker/api/types/backend"
|
||||
"github.com/docker/docker/daemon/exec"
|
||||
"github.com/docker/docker/pkg/archive"
|
||||
"github.com/docker/docker/pkg/version"
|
||||
"github.com/docker/engine-api/types"
|
||||
|
@ -15,7 +14,7 @@ import (
|
|||
// execBackend includes functions to implement to provide exec functionality.
|
||||
type execBackend interface {
|
||||
ContainerExecCreate(config *types.ExecConfig) (string, error)
|
||||
ContainerExecInspect(id string) (*exec.Config, error)
|
||||
ContainerExecInspect(id string) (*backend.ExecInspect, error)
|
||||
ContainerExecResize(name string, height, width int) error
|
||||
ContainerExecStart(name string, stdin io.ReadCloser, stdout io.Writer, stderr io.Writer) error
|
||||
ExecExists(name string) (bool, error)
|
||||
|
|
|
@ -11,15 +11,12 @@ import (
|
|||
"time"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/distribution/registry/api/errcode"
|
||||
"github.com/docker/docker/api/server/httputils"
|
||||
"github.com/docker/docker/api/types/backend"
|
||||
derr "github.com/docker/docker/errors"
|
||||
"github.com/docker/docker/pkg/ioutils"
|
||||
"github.com/docker/docker/pkg/signal"
|
||||
"github.com/docker/docker/pkg/term"
|
||||
"github.com/docker/docker/runconfig"
|
||||
"github.com/docker/docker/utils"
|
||||
"github.com/docker/engine-api/types"
|
||||
"github.com/docker/engine-api/types/container"
|
||||
"github.com/docker/engine-api/types/filters"
|
||||
|
@ -126,7 +123,7 @@ func (s *containerRouter) getContainersLogs(ctx context.Context, w http.Response
|
|||
// The client may be expecting all of the data we're sending to
|
||||
// be multiplexed, so send it through OutStream, which will
|
||||
// have been set up to handle that if needed.
|
||||
fmt.Fprintf(logsConfig.OutStream, "Error running logs job: %s\n", utils.GetErrorMessage(err))
|
||||
fmt.Fprintf(logsConfig.OutStream, "Error running logs job: %v\n", err)
|
||||
default:
|
||||
return err
|
||||
}
|
||||
|
@ -182,6 +179,10 @@ func (s *containerRouter) postContainersStop(ctx context.Context, w http.Respons
|
|||
return nil
|
||||
}
|
||||
|
||||
type errContainerIsRunning interface {
|
||||
ContainerIsRunning() bool
|
||||
}
|
||||
|
||||
func (s *containerRouter) postContainersKill(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
|
||||
if err := httputils.ParseForm(r); err != nil {
|
||||
return err
|
||||
|
@ -199,15 +200,17 @@ func (s *containerRouter) postContainersKill(ctx context.Context, w http.Respons
|
|||
}
|
||||
|
||||
if err := s.backend.ContainerKill(name, uint64(sig)); err != nil {
|
||||
theErr, isDerr := err.(errcode.ErrorCoder)
|
||||
isStopped := isDerr && theErr.ErrorCode() == derr.ErrorCodeNotRunning
|
||||
var isStopped bool
|
||||
if e, ok := err.(errContainerIsRunning); ok {
|
||||
isStopped = !e.ContainerIsRunning()
|
||||
}
|
||||
|
||||
// Return error that's not caused because the container is stopped.
|
||||
// Return error if the container is not running and the api is >= 1.20
|
||||
// to keep backwards compatibility.
|
||||
version := httputils.VersionFromContext(ctx)
|
||||
if version.GreaterThanOrEqualTo("1.20") || !isStopped {
|
||||
return fmt.Errorf("Cannot kill container %s: %v", name, utils.GetErrorMessage(err))
|
||||
return fmt.Errorf("Cannot kill container %s: %v", name, err)
|
||||
}
|
||||
}
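The kill handler now inspects error behaviour instead of matching registry error codes: any error that implements `ContainerIsRunning() bool` can report whether the container was already stopped. A self-contained sketch of that pattern with a hypothetical error type:

```go
package main

import (
	"errors"
	"fmt"
)

// errContainerIsRunning mirrors the interface used by the kill handler above.
type errContainerIsRunning interface {
	ContainerIsRunning() bool
}

// notRunningError is a hypothetical error returned when a container is stopped.
type notRunningError struct{ error }

func (notRunningError) ContainerIsRunning() bool { return false }

// isStopped reports whether the error says the container was not running.
func isStopped(err error) bool {
	if e, ok := err.(errContainerIsRunning); ok {
		return !e.ContainerIsRunning()
	}
	return false
}

func main() {
	fmt.Println(isStopped(notRunningError{errors.New("container web is not running")})) // true
	fmt.Println(isStopped(errors.New("some other failure")))                            // false
}
```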
@ -322,7 +325,8 @@ func (s *containerRouter) postContainerUpdate(ctx context.Context, w http.Respon
|
|||
}
|
||||
|
||||
hostConfig := &container.HostConfig{
|
||||
Resources: updateConfig.Resources,
|
||||
Resources: updateConfig.Resources,
|
||||
RestartPolicy: updateConfig.RestartPolicy,
|
||||
}
|
||||
|
||||
name := vars["name"]
|
||||
|
@ -429,7 +433,7 @@ func (s *containerRouter) postContainersAttach(ctx context.Context, w http.Respo
|
|||
|
||||
hijacker, ok := w.(http.Hijacker)
|
||||
if !ok {
|
||||
return derr.ErrorCodeNoHijackConnection.WithArgs(containerName)
|
||||
return fmt.Errorf("error attaching to container %s, hijack connection missing", containerName)
|
||||
}
|
||||
|
||||
setupStreams := func() (io.ReadCloser, io.Writer, io.Writer, error) {
|
||||
|
|
|
@ -10,7 +10,6 @@ import (
|
|||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/docker/api/server/httputils"
|
||||
"github.com/docker/docker/pkg/stdcopy"
|
||||
"github.com/docker/docker/utils"
|
||||
"github.com/docker/engine-api/types"
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
|
@ -46,7 +45,7 @@ func (s *containerRouter) postContainerExecCreate(ctx context.Context, w http.Re
|
|||
// Register an instance of Exec in container.
|
||||
id, err := s.backend.ContainerExecCreate(execConfig)
|
||||
if err != nil {
|
||||
logrus.Errorf("Error setting up exec command in container %s: %s", name, utils.GetErrorMessage(err))
|
||||
logrus.Errorf("Error setting up exec command in container %s: %v", name, err)
|
||||
return err
|
||||
}
|
||||
|
||||
|
@ -113,7 +112,7 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
|
|||
if execStartCheck.Detach {
|
||||
return err
|
||||
}
|
||||
logrus.Errorf("Error running exec in container: %v\n", utils.GetErrorMessage(err))
|
||||
logrus.Errorf("Error running exec in container: %v\n", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
|
|
@ -20,7 +20,6 @@ type Backend interface {
|
|||
|
||||
type containerBackend interface {
|
||||
Commit(name string, config *types.ContainerCommitConfig) (imageID string, err error)
|
||||
Exists(containerName string) bool
|
||||
}
|
||||
|
||||
type imageBackend interface {
|
||||
|
|
|
@ -4,14 +4,14 @@ import "github.com/docker/docker/api/server/router"
|
|||
|
||||
// imageRouter is a router to talk with the image controller
|
||||
type imageRouter struct {
|
||||
daemon Backend
|
||||
routes []router.Route
|
||||
backend Backend
|
||||
routes []router.Route
|
||||
}
|
||||
|
||||
// NewRouter initializes a new image router
|
||||
func NewRouter(daemon Backend) router.Router {
|
||||
func NewRouter(backend Backend) router.Router {
|
||||
r := &imageRouter{
|
||||
daemon: daemon,
|
||||
backend: backend,
|
||||
}
|
||||
r.initRoutes()
|
||||
return r
|
||||
|
|
|
@ -14,7 +14,6 @@ import (
|
|||
"github.com/docker/distribution/registry/api/errcode"
|
||||
"github.com/docker/docker/api/server/httputils"
|
||||
"github.com/docker/docker/builder/dockerfile"
|
||||
derr "github.com/docker/docker/errors"
|
||||
"github.com/docker/docker/pkg/ioutils"
|
||||
"github.com/docker/docker/pkg/streamformatter"
|
||||
"github.com/docker/docker/reference"
|
||||
|
@ -49,10 +48,6 @@ func (s *imageRouter) postCommit(ctx context.Context, w http.ResponseWriter, r *
|
|||
c = &container.Config{}
|
||||
}
|
||||
|
||||
if !s.daemon.Exists(cname) {
|
||||
return derr.ErrorCodeNoSuchContainer.WithArgs(cname)
|
||||
}
|
||||
|
||||
newConfig, err := dockerfile.BuildFromConfig(c, r.Form["changes"])
|
||||
if err != nil {
|
||||
return err
|
||||
|
@ -68,7 +63,7 @@ func (s *imageRouter) postCommit(ctx context.Context, w http.ResponseWriter, r *
|
|||
MergeConfigs: true,
|
||||
}
|
||||
|
||||
imgID, err := s.daemon.Commit(cname, commitCfg)
|
||||
imgID, err := s.backend.Commit(cname, commitCfg)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -134,7 +129,7 @@ func (s *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrite
|
|||
}
|
||||
}
|
||||
|
||||
err = s.daemon.PullImage(ref, metaHeaders, authConfig, output)
|
||||
err = s.backend.PullImage(ref, metaHeaders, authConfig, output)
|
||||
}
|
||||
}
|
||||
// Check the error from pulling an image to make sure the request
|
||||
|
@ -175,7 +170,7 @@ func (s *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrite
|
|||
return err
|
||||
}
|
||||
|
||||
err = s.daemon.ImportImage(src, newRef, message, r.Body, output, newConfig)
|
||||
err = s.backend.ImportImage(src, newRef, message, r.Body, output, newConfig)
|
||||
}
|
||||
if err != nil {
|
||||
if !output.Flushed() {
|
||||
|
@ -233,7 +228,7 @@ func (s *imageRouter) postImagesPush(ctx context.Context, w http.ResponseWriter,
|
|||
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
|
||||
if err := s.daemon.PushImage(ref, metaHeaders, authConfig, output); err != nil {
|
||||
if err := s.backend.PushImage(ref, metaHeaders, authConfig, output); err != nil {
|
||||
if !output.Flushed() {
|
||||
return err
|
||||
}
|
||||
|
@ -259,7 +254,7 @@ func (s *imageRouter) getImagesGet(ctx context.Context, w http.ResponseWriter, r
|
|||
names = r.Form["names"]
|
||||
}
|
||||
|
||||
if err := s.daemon.ExportImage(names, output); err != nil {
|
||||
if err := s.backend.ExportImage(names, output); err != nil {
|
||||
if !output.Flushed() {
|
||||
return err
|
||||
}
|
||||
|
@ -275,7 +270,7 @@ func (s *imageRouter) postImagesLoad(ctx context.Context, w http.ResponseWriter,
|
|||
}
|
||||
quiet := httputils.BoolValueOrDefault(r, "quiet", true)
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
return s.daemon.LoadImage(r.Body, w, quiet)
|
||||
return s.backend.LoadImage(r.Body, w, quiet)
|
||||
}
|
||||
|
||||
func (s *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
|
||||
|
@ -292,7 +287,7 @@ func (s *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter, r
|
|||
force := httputils.BoolValue(r, "force")
|
||||
prune := !httputils.BoolValue(r, "noprune")
|
||||
|
||||
list, err := s.daemon.ImageDelete(name, force, prune)
|
||||
list, err := s.backend.ImageDelete(name, force, prune)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -301,7 +296,7 @@ func (s *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter, r
|
|||
}
|
||||
|
||||
func (s *imageRouter) getImagesByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
|
||||
imageInspect, err := s.daemon.LookupImage(vars["name"])
|
||||
imageInspect, err := s.backend.LookupImage(vars["name"])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -315,7 +310,7 @@ func (s *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter,
|
|||
}
|
||||
|
||||
// FIXME: The filter parameter could just be a match filter
|
||||
images, err := s.daemon.Images(r.Form.Get("filters"), r.Form.Get("filter"), httputils.BoolValue(r, "all"))
|
||||
images, err := s.backend.Images(r.Form.Get("filters"), r.Form.Get("filter"), httputils.BoolValue(r, "all"))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -325,7 +320,7 @@ func (s *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter,
|
|||
|
||||
func (s *imageRouter) getImagesHistory(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
|
||||
name := vars["name"]
|
||||
history, err := s.daemon.ImageHistory(name)
|
||||
history, err := s.backend.ImageHistory(name)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -348,7 +343,7 @@ func (s *imageRouter) postImagesTag(ctx context.Context, w http.ResponseWriter,
|
|||
return err
|
||||
}
|
||||
}
|
||||
if err := s.daemon.TagImage(newTag, vars["name"]); err != nil {
|
||||
if err := s.backend.TagImage(newTag, vars["name"]); err != nil {
|
||||
return err
|
||||
}
|
||||
w.WriteHeader(http.StatusCreated)
|
||||
|
@ -378,7 +373,7 @@ func (s *imageRouter) getImagesSearch(ctx context.Context, w http.ResponseWriter
|
|||
headers[k] = v
|
||||
}
|
||||
}
|
||||
query, err := s.daemon.SearchRegistryForImages(r.Form.Get("term"), config, headers)
|
||||
query, err := s.backend.SearchRegistryForImages(r.Form.Get("term"), config, headers)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
|
|
@ -8,13 +8,11 @@ import (
|
|||
// Backend is all the methods that need to be implemented
|
||||
// to provide network specific functionality.
|
||||
type Backend interface {
|
||||
NetworkControllerEnabled() bool
|
||||
|
||||
FindNetwork(idName string) (libnetwork.Network, error)
|
||||
GetNetworkByName(idName string) (libnetwork.Network, error)
|
||||
GetNetworksByID(partialID string) []libnetwork.Network
|
||||
GetAllNetworks() []libnetwork.Network
|
||||
CreateNetwork(name, driver string, ipam network.IPAM, options map[string]string, internal bool) (libnetwork.Network, error)
|
||||
CreateNetwork(name, driver string, ipam network.IPAM, options map[string]string, internal bool, enableIPv6 bool) (libnetwork.Network, error)
|
||||
ConnectContainerToNetwork(containerName, networkName string, endpointConfig *network.EndpointSettings) error
|
||||
DisconnectContainerFromNetwork(containerName string, network libnetwork.Network, force bool) error
|
||||
DeleteNetwork(name string) error
|
||||
|
|
|
@ -84,8 +84,8 @@ func filterNetworkByID(nws []libnetwork.Network, id string) (retNws []libnetwork
|
|||
return retNws, nil
|
||||
}
|
||||
|
||||
// filterAllNetworks filter network list according to user specified filter
|
||||
// and return user chosen networks
|
||||
// filterNetworks filters network list according to user specified filter
|
||||
// and returns user chosen networks
|
||||
func filterNetworks(nws []libnetwork.Network, filter filters.Args) ([]libnetwork.Network, error) {
|
||||
// if filter is empty, return original network list
|
||||
if filter.Len() == 0 {
|
||||
|
|
|
@ -1,13 +1,6 @@
|
|||
package network
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
|
||||
"github.com/docker/docker/api/server/httputils"
|
||||
"github.com/docker/docker/api/server/router"
|
||||
"github.com/docker/docker/errors"
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
import "github.com/docker/docker/api/server/router"
|
||||
|
||||
// networkRouter is a router to talk with the network controller
|
||||
type networkRouter struct {
|
||||
|
@ -32,24 +25,13 @@ func (r *networkRouter) Routes() []router.Route {
|
|||
func (r *networkRouter) initRoutes() {
|
||||
r.routes = []router.Route{
|
||||
// GET
|
||||
router.NewGetRoute("/networks", r.controllerEnabledMiddleware(r.getNetworksList)),
|
||||
router.NewGetRoute("/networks/{id:.*}", r.controllerEnabledMiddleware(r.getNetwork)),
|
||||
router.NewGetRoute("/networks", r.getNetworksList),
|
||||
router.NewGetRoute("/networks/{id:.*}", r.getNetwork),
|
||||
// POST
|
||||
router.NewPostRoute("/networks/create", r.controllerEnabledMiddleware(r.postNetworkCreate)),
|
||||
router.NewPostRoute("/networks/{id:.*}/connect", r.controllerEnabledMiddleware(r.postNetworkConnect)),
|
||||
router.NewPostRoute("/networks/{id:.*}/disconnect", r.controllerEnabledMiddleware(r.postNetworkDisconnect)),
|
||||
router.NewPostRoute("/networks/create", r.postNetworkCreate),
|
||||
router.NewPostRoute("/networks/{id:.*}/connect", r.postNetworkConnect),
|
||||
router.NewPostRoute("/networks/{id:.*}/disconnect", r.postNetworkDisconnect),
|
||||
// DELETE
|
||||
router.NewDeleteRoute("/networks/{id:.*}", r.controllerEnabledMiddleware(r.deleteNetwork)),
|
||||
router.NewDeleteRoute("/networks/{id:.*}", r.deleteNetwork),
|
||||
}
|
||||
}
|
||||
|
||||
func (r *networkRouter) controllerEnabledMiddleware(handler httputils.APIFunc) httputils.APIFunc {
|
||||
if r.backend.NetworkControllerEnabled() {
|
||||
return handler
|
||||
}
|
||||
return networkControllerDisabled
|
||||
}
|
||||
|
||||
func networkControllerDisabled(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
|
||||
return errors.ErrorNetworkControllerNotEnabled.WithArgs()
|
||||
}
|
||||
|
|
|
@ -91,7 +91,7 @@ func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWr
|
|||
warning = fmt.Sprintf("Network with name %s (id : %s) already exists", nw.Name(), nw.ID())
|
||||
}
|
||||
|
||||
nw, err = n.backend.CreateNetwork(create.Name, create.Driver, create.IPAM, create.Options, create.Internal)
|
||||
nw, err = n.backend.CreateNetwork(create.Name, create.Driver, create.IPAM, create.Options, create.Internal, create.EnableIPv6)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -160,6 +160,8 @@ func buildNetworkResource(nw libnetwork.Network) *types.NetworkResource {
|
|||
r.ID = nw.ID()
|
||||
r.Scope = nw.Info().Scope()
|
||||
r.Driver = nw.Type()
|
||||
r.EnableIPv6 = nw.Info().IPv6Enabled()
|
||||
r.Internal = nw.Info().Internal()
|
||||
r.Options = nw.Info().DriverOptions()
|
||||
r.Containers = make(map[string]types.EndpointResource)
|
||||
buildIpamResources(r, nw)
|
||||
|
|
|
@ -7,7 +7,7 @@ import (
|
|||
"github.com/gorilla/mux"
|
||||
)
|
||||
|
||||
// routerSwapper is an http.Handler that allow you to swap
|
||||
// routerSwapper is an http.Handler that allows you to swap
|
||||
// mux routers.
|
||||
type routerSwapper struct {
|
||||
mu sync.Mutex
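A minimal sketch of the swappable-handler idea behind `routerSwapper`: a mutex-guarded wrapper whose `ServeHTTP` always dispatches to the router installed by the most recent `Swap`. The field names follow the snippet above; the rest is illustrative.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"sync"

	"github.com/gorilla/mux"
)

// routerSwapper lets a server replace its mux (for example to add or remove
// the profiler routes) without tearing down its listeners.
type routerSwapper struct {
	mu     sync.Mutex
	router *mux.Router
}

func (rs *routerSwapper) Swap(newRouter *mux.Router) {
	rs.mu.Lock()
	rs.router = newRouter
	rs.mu.Unlock()
}

func (rs *routerSwapper) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	rs.mu.Lock()
	router := rs.router
	rs.mu.Unlock()
	router.ServeHTTP(w, r)
}

func main() {
	old := mux.NewRouter()
	old.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) { fmt.Fprint(w, "v1") })
	rs := &routerSwapper{router: old}

	replacement := mux.NewRouter()
	replacement.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) { fmt.Fprint(w, "v2") })
	rs.Swap(replacement)

	rec := httptest.NewRecorder()
	rs.ServeHTTP(rec, httptest.NewRequest("GET", "/ping", nil))
	fmt.Println(rec.Body.String()) // v2
}
```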
@ -9,17 +9,7 @@ import (
|
|||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/docker/api/server/httputils"
|
||||
"github.com/docker/docker/api/server/router"
|
||||
"github.com/docker/docker/api/server/router/build"
|
||||
"github.com/docker/docker/api/server/router/container"
|
||||
"github.com/docker/docker/api/server/router/image"
|
||||
"github.com/docker/docker/api/server/router/network"
|
||||
"github.com/docker/docker/api/server/router/system"
|
||||
"github.com/docker/docker/api/server/router/volume"
|
||||
"github.com/docker/docker/builder/dockerfile"
|
||||
"github.com/docker/docker/daemon"
|
||||
"github.com/docker/docker/pkg/authorization"
|
||||
"github.com/docker/docker/utils"
|
||||
"github.com/docker/go-connections/sockets"
|
||||
"github.com/gorilla/mux"
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
|
@ -37,7 +27,6 @@ type Config struct {
|
|||
Version string
|
||||
SocketGroup string
|
||||
TLSConfig *tls.Config
|
||||
Addrs []Addr
|
||||
}
|
||||
|
||||
// Server contains instance details for the server
|
||||
|
@ -49,27 +38,25 @@ type Server struct {
|
|||
routerSwapper *routerSwapper
|
||||
}
|
||||
|
||||
// Addr contains string representation of address and its protocol (tcp, unix...).
|
||||
type Addr struct {
|
||||
Proto string
|
||||
Addr string
|
||||
}
|
||||
|
||||
// New returns a new instance of the server based on the specified configuration.
|
||||
// It allocates resources which will be needed for ServeAPI(ports, unix-sockets).
|
||||
func New(cfg *Config) (*Server, error) {
|
||||
s := &Server{
|
||||
func New(cfg *Config) *Server {
|
||||
return &Server{
|
||||
cfg: cfg,
|
||||
}
|
||||
for _, addr := range cfg.Addrs {
|
||||
srv, err := s.newServer(addr.Proto, addr.Addr)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Accept sets a listener the server accepts connections into.
|
||||
func (s *Server) Accept(addr string, listeners ...net.Listener) {
|
||||
for _, listener := range listeners {
|
||||
httpServer := &HTTPServer{
|
||||
srv: &http.Server{
|
||||
Addr: addr,
|
||||
},
|
||||
l: listener,
|
||||
}
|
||||
logrus.Debugf("Server created for HTTP on %s (%s)", addr.Proto, addr.Addr)
|
||||
s.servers = append(s.servers, srv...)
|
||||
s.servers = append(s.servers, httpServer)
|
||||
}
|
||||
return s, nil
|
||||
}
|
||||
|
||||
// Close closes servers and thus stop receiving requests
|
||||
|
@ -84,8 +71,6 @@ func (s *Server) Close() {
|
|||
// serveAPI loops through all initialized servers and spawns goroutine
|
||||
// with Server method for each. It sets createMux() as Handler also.
|
||||
func (s *Server) serveAPI() error {
|
||||
s.initRouterSwapper()
|
||||
|
||||
var chErrors = make(chan error, len(s.servers))
|
||||
for _, srv := range s.servers {
|
||||
srv.srv.Handler = s.routerSwapper
|
||||
|
@ -127,31 +112,8 @@ func (s *HTTPServer) Close() error {
|
|||
return s.l.Close()
|
||||
}
|
||||
|
||||
func writeCorsHeaders(w http.ResponseWriter, r *http.Request, corsHeaders string) {
|
||||
logrus.Debugf("CORS header is enabled and set to: %s", corsHeaders)
|
||||
w.Header().Add("Access-Control-Allow-Origin", corsHeaders)
|
||||
w.Header().Add("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept, X-Registry-Auth")
|
||||
w.Header().Add("Access-Control-Allow-Methods", "HEAD, GET, POST, DELETE, PUT, OPTIONS")
|
||||
}
|
||||
|
||||
func (s *Server) initTCPSocket(addr string) (l net.Listener, err error) {
|
||||
if s.cfg.TLSConfig == nil || s.cfg.TLSConfig.ClientAuth != tls.RequireAndVerifyClientCert {
|
||||
logrus.Warn("/!\\ DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING /!\\")
|
||||
}
|
||||
if l, err = sockets.NewTCPSocket(addr, s.cfg.TLSConfig); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := allocateDaemonPort(addr); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (s *Server) makeHTTPHandler(handler httputils.APIFunc) http.HandlerFunc {
|
||||
return func(w http.ResponseWriter, r *http.Request) {
|
||||
// log the handler call
|
||||
logrus.Debugf("Calling %s %s", r.Method, r.URL.Path)
|
||||
|
||||
// Define the context that we'll pass around to share info
|
||||
// like the docker-request-id.
|
||||
//
|
||||
|
@ -168,33 +130,31 @@ func (s *Server) makeHTTPHandler(handler httputils.APIFunc) http.HandlerFunc {
|
|||
}
|
||||
|
||||
if err := handlerFunc(ctx, w, r, vars); err != nil {
|
||||
logrus.Errorf("Handler for %s %s returned error: %s", r.Method, r.URL.Path, utils.GetErrorMessage(err))
|
||||
logrus.Errorf("Handler for %s %s returned error: %v", r.Method, r.URL.Path, err)
|
||||
httputils.WriteError(w, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// InitRouters initializes a list of routers for the server.
|
||||
func (s *Server) InitRouters(d *daemon.Daemon) {
|
||||
s.addRouter(container.NewRouter(d))
|
||||
s.addRouter(image.NewRouter(d))
|
||||
s.addRouter(network.NewRouter(d))
|
||||
s.addRouter(system.NewRouter(d))
|
||||
s.addRouter(volume.NewRouter(d))
|
||||
s.addRouter(build.NewRouter(dockerfile.NewBuildManager(d)))
|
||||
}
|
||||
// InitRouter initializes the list of routers for the server.
|
||||
// This method also enables the Go profiler if enableProfiler is true.
|
||||
func (s *Server) InitRouter(enableProfiler bool, routers ...router.Router) {
|
||||
for _, r := range routers {
|
||||
s.routers = append(s.routers, r)
|
||||
}
|
||||
|
||||
// addRouter adds a new router to the server.
|
||||
func (s *Server) addRouter(r router.Router) {
|
||||
s.routers = append(s.routers, r)
|
||||
m := s.createMux()
|
||||
if enableProfiler {
|
||||
profilerSetup(m)
|
||||
}
|
||||
s.routerSwapper = &routerSwapper{
|
||||
router: m,
|
||||
}
|
||||
}
|
||||
|
||||
// createMux initializes the main router the server uses.
|
||||
func (s *Server) createMux() *mux.Router {
|
||||
m := mux.NewRouter()
|
||||
if utils.IsDebugEnabled() {
|
||||
profilerSetup(m, "/debug/")
|
||||
}
|
||||
|
||||
logrus.Debugf("Registering routers")
|
||||
for _, apiRouter := range s.routers {
|
||||
|
@ -222,23 +182,14 @@ func (s *Server) Wait(waitChan chan error) {
|
|||
waitChan <- nil
|
||||
}
|
||||
|
||||
func (s *Server) initRouterSwapper() {
|
||||
s.routerSwapper = &routerSwapper{
|
||||
router: s.createMux(),
|
||||
}
|
||||
// DisableProfiler reloads the server mux without adding the profiler routes.
|
||||
func (s *Server) DisableProfiler() {
|
||||
s.routerSwapper.Swap(s.createMux())
|
||||
}
|
||||
|
||||
// Reload reads configuration changes and modifies the
|
||||
// server according to those changes.
|
||||
// Currently, only the --debug configuration is taken into account.
|
||||
func (s *Server) Reload(config *daemon.Config) {
|
||||
debugEnabled := utils.IsDebugEnabled()
|
||||
switch {
|
||||
case debugEnabled && !config.Debug: // disable debug
|
||||
utils.DisableDebug()
|
||||
s.routerSwapper.Swap(s.createMux())
|
||||
case config.Debug && !debugEnabled: // enable debug
|
||||
utils.EnableDebug()
|
||||
s.routerSwapper.Swap(s.createMux())
|
||||
}
|
||||
// EnableProfiler reloads the server mux adding the profiler routes.
|
||||
func (s *Server) EnableProfiler() {
|
||||
m := s.createMux()
|
||||
profilerSetup(m)
|
||||
s.routerSwapper.Swap(m)
|
||||
}
|
||||
|
|
|
@ -42,3 +42,28 @@ type ContainerStatsConfig struct {
|
|||
Stop <-chan bool
|
||||
Version string
|
||||
}
|
||||
|
||||
// ExecInspect holds information about a running process started
|
||||
// with docker exec.
|
||||
type ExecInspect struct {
|
||||
ID string
|
||||
Running bool
|
||||
ExitCode *int
|
||||
ProcessConfig *ExecProcessConfig
|
||||
OpenStdin bool
|
||||
OpenStderr bool
|
||||
OpenStdout bool
|
||||
CanRemove bool
|
||||
ContainerID string
|
||||
DetachKeys []byte
|
||||
}
|
||||
|
||||
// ExecProcessConfig holds information about the exec process
|
||||
// running on the host.
|
||||
type ExecProcessConfig struct {
|
||||
Tty bool `json:"tty"`
|
||||
Entrypoint string `json:"entrypoint"`
|
||||
Arguments []string `json:"arguments"`
|
||||
Privileged *bool `json:"privileged,omitempty"`
|
||||
User string `json:"user,omitempty"`
|
||||
}
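The struct tags on `ExecProcessConfig` shape the inspect output; a quick sketch of how the `omitempty` fields behave when marshalled (the struct is copied here purely for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ExecProcessConfig is copied from the snippet above for illustration.
type ExecProcessConfig struct {
	Tty        bool     `json:"tty"`
	Entrypoint string   `json:"entrypoint"`
	Arguments  []string `json:"arguments"`
	Privileged *bool    `json:"privileged,omitempty"`
	User       string   `json:"user,omitempty"`
}

func main() {
	cfg := ExecProcessConfig{
		Tty:        true,
		Entrypoint: "/bin/sh",
		Arguments:  []string{"-c", "echo hello"},
		// Privileged and User are left unset, so omitempty drops them.
	}
	out, _ := json.Marshal(cfg)
	fmt.Println(string(out))
	// {"tty":true,"entrypoint":"/bin/sh","arguments":["-c","echo hello"]}
}
```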
@ -14,6 +14,11 @@ import (
|
|||
"github.com/docker/engine-api/types/container"
|
||||
)
|
||||
|
||||
const (
|
||||
// DefaultDockerfileName is the Default filename with Docker commands, read by docker build
|
||||
DefaultDockerfileName string = "Dockerfile"
|
||||
)
|
||||
|
||||
// Context represents a file system tree.
|
||||
type Context interface {
|
||||
// Close allows to signal that the filesystem tree won't be used anymore.
|
||||
|
@ -141,7 +146,7 @@ type Image interface {
|
|||
// ImageCache abstracts an image cache store.
|
||||
// (parent image, child runconfig) -> child image
|
||||
type ImageCache interface {
|
||||
// GetCachedImage returns a reference to a cached image whose parent equals `parent`
|
||||
// GetCachedImageOnBuild returns a reference to a cached image whose parent equals `parent`
|
||||
// and runconfig equals `cfg`. A cache miss is expected to return an empty ID and a nil error.
|
||||
GetCachedImageOnBuild(parentID string, cfg *container.Config) (imageID string, err error)
|
||||
}
|
||||
|
|
260
builder/context.go
Normal file
@ -0,0 +1,260 @@
|
|||
package builder
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"strings"
|
||||
|
||||
"github.com/docker/docker/pkg/archive"
|
||||
"github.com/docker/docker/pkg/fileutils"
|
||||
"github.com/docker/docker/pkg/gitutils"
|
||||
"github.com/docker/docker/pkg/httputils"
|
||||
"github.com/docker/docker/pkg/ioutils"
|
||||
"github.com/docker/docker/pkg/progress"
|
||||
"github.com/docker/docker/pkg/streamformatter"
|
||||
)
|
||||
|
||||
// ValidateContextDirectory checks if all the contents of the directory
|
||||
// can be read and returns an error if some files can't be read.
// Symlinks which point to non-existing files don't trigger an error.
|
||||
func ValidateContextDirectory(srcPath string, excludes []string) error {
|
||||
contextRoot, err := getContextRoot(srcPath)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return filepath.Walk(contextRoot, func(filePath string, f os.FileInfo, err error) error {
|
||||
// skip this directory/file if it's not in the path, it won't get added to the context
|
||||
if relFilePath, err := filepath.Rel(contextRoot, filePath); err != nil {
|
||||
return err
|
||||
} else if skip, err := fileutils.Matches(relFilePath, excludes); err != nil {
|
||||
return err
|
||||
} else if skip {
|
||||
if f.IsDir() {
|
||||
return filepath.SkipDir
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
if os.IsPermission(err) {
|
||||
return fmt.Errorf("can't stat '%s'", filePath)
|
||||
}
|
||||
if os.IsNotExist(err) {
|
||||
return nil
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// skip checking if symlinks point to non-existing files, such symlinks can be useful
|
||||
// also skip named pipes, because they hang on open
|
||||
if f.Mode()&(os.ModeSymlink|os.ModeNamedPipe) != 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
if !f.IsDir() {
|
||||
currentFile, err := os.Open(filePath)
|
||||
if err != nil && os.IsPermission(err) {
|
||||
return fmt.Errorf("no permission to read from '%s'", filePath)
|
||||
}
|
||||
currentFile.Close()
|
||||
}
|
||||
return nil
|
||||
})
|
||||
}
|
||||
|
||||
// GetContextFromReader will read the contents of the given reader as either a
|
||||
// Dockerfile or tar archive. Returns a tar archive used as a context and a
|
||||
// path to the Dockerfile inside the tar.
|
||||
func GetContextFromReader(r io.ReadCloser, dockerfileName string) (out io.ReadCloser, relDockerfile string, err error) {
|
||||
buf := bufio.NewReader(r)
|
||||
|
||||
magic, err := buf.Peek(archive.HeaderSize)
|
||||
if err != nil && err != io.EOF {
|
||||
return nil, "", fmt.Errorf("failed to peek context header from STDIN: %v", err)
|
||||
}
|
||||
|
||||
if archive.IsArchive(magic) {
|
||||
return ioutils.NewReadCloserWrapper(buf, func() error { return r.Close() }), dockerfileName, nil
|
||||
}
|
||||
|
||||
// Input should be read as a Dockerfile.
|
||||
tmpDir, err := ioutil.TempDir("", "docker-build-context-")
|
||||
if err != nil {
|
||||
return nil, "", fmt.Errorf("unbale to create temporary context directory: %v", err)
|
||||
}
|
||||
|
||||
f, err := os.Create(filepath.Join(tmpDir, DefaultDockerfileName))
|
||||
if err != nil {
|
||||
return nil, "", err
|
||||
}
|
||||
_, err = io.Copy(f, buf)
|
||||
if err != nil {
|
||||
f.Close()
|
||||
return nil, "", err
|
||||
}
|
||||
|
||||
if err := f.Close(); err != nil {
|
||||
return nil, "", err
|
||||
}
|
||||
if err := r.Close(); err != nil {
|
||||
return nil, "", err
|
||||
}
|
||||
|
||||
tar, err := archive.Tar(tmpDir, archive.Uncompressed)
|
||||
if err != nil {
|
||||
return nil, "", err
|
||||
}
|
||||
|
||||
return ioutils.NewReadCloserWrapper(tar, func() error {
|
||||
err := tar.Close()
|
||||
os.RemoveAll(tmpDir)
|
||||
return err
|
||||
}), DefaultDockerfileName, nil
|
||||
|
||||
}
|
||||
|
||||
// GetContextFromGitURL uses a Git URL as context for a `docker build`. The
|
||||
// git repo is cloned into a temporary directory used as the context directory.
|
||||
// Returns the absolute path to the temporary context directory, the relative
|
||||
// path of the dockerfile in that context directory, and a non-nil error on
// failure.
|
||||
func GetContextFromGitURL(gitURL, dockerfileName string) (absContextDir, relDockerfile string, err error) {
|
||||
if _, err := exec.LookPath("git"); err != nil {
|
||||
return "", "", fmt.Errorf("unable to find 'git': %v", err)
|
||||
}
|
||||
if absContextDir, err = gitutils.Clone(gitURL); err != nil {
|
||||
return "", "", fmt.Errorf("unable to 'git clone' to temporary context directory: %v", err)
|
||||
}
|
||||
|
||||
return getDockerfileRelPath(absContextDir, dockerfileName)
|
||||
}
|
||||
|
||||
// GetContextFromURL uses a remote URL as context for a `docker build`. The
|
||||
// remote resource is downloaded as either a Dockerfile or a tar archive.
|
||||
// Returns the tar archive used for the context and a path of the
|
||||
// dockerfile inside the tar.
|
||||
func GetContextFromURL(out io.Writer, remoteURL, dockerfileName string) (io.ReadCloser, string, error) {
|
||||
response, err := httputils.Download(remoteURL)
|
||||
if err != nil {
|
||||
return nil, "", fmt.Errorf("unable to download remote context %s: %v", remoteURL, err)
|
||||
}
|
||||
progressOutput := streamformatter.NewStreamFormatter().NewProgressOutput(out, true)
|
||||
|
||||
// Pass the response body through a progress reader.
|
||||
progReader := progress.NewProgressReader(response.Body, progressOutput, response.ContentLength, "", fmt.Sprintf("Downloading build context from remote url: %s", remoteURL))
|
||||
|
||||
return GetContextFromReader(ioutils.NewReadCloserWrapper(progReader, func() error { return response.Body.Close() }), dockerfileName)
|
||||
}
|
||||
|
||||
// GetContextFromLocalDir uses the given local directory as context for a
|
||||
// `docker build`. Returns the absolute path to the local context directory,
|
||||
// the relative path of the dockerfile in that context directory, and a non-nil
// error on failure.
|
||||
func GetContextFromLocalDir(localDir, dockerfileName string) (absContextDir, relDockerfile string, err error) {
|
||||
// When using a local context directory, when the Dockerfile is specified
|
||||
// with the `-f/--file` option then it is considered relative to the
|
||||
// current directory and not the context directory.
|
||||
if dockerfileName != "" {
|
||||
if dockerfileName, err = filepath.Abs(dockerfileName); err != nil {
|
||||
return "", "", fmt.Errorf("unable to get absolute path to Dockerfile: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
return getDockerfileRelPath(localDir, dockerfileName)
|
||||
}
|
||||
|
||||
// getDockerfileRelPath uses the given context directory for a `docker build`
|
||||
// and returns the absolute path to the context directory, the relative path of
// the dockerfile in that context directory, and a non-nil error on failure.
|
||||
func getDockerfileRelPath(givenContextDir, givenDockerfile string) (absContextDir, relDockerfile string, err error) {
|
||||
if absContextDir, err = filepath.Abs(givenContextDir); err != nil {
|
||||
return "", "", fmt.Errorf("unable to get absolute context directory: %v", err)
|
||||
}
|
||||
|
||||
// The context dir might be a symbolic link, so follow it to the actual
|
||||
// target directory.
|
||||
//
|
||||
// FIXME. We use isUNC (always false on non-Windows platforms) to workaround
|
||||
// an issue in golang. On Windows, EvalSymLinks does not work on UNC file
|
||||
// paths (those starting with \\). This hack means that when using links
|
||||
// on UNC paths, they will not be followed.
|
||||
if !isUNC(absContextDir) {
|
||||
absContextDir, err = filepath.EvalSymlinks(absContextDir)
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("unable to evaluate symlinks in context path: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
stat, err := os.Lstat(absContextDir)
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("unable to stat context directory %q: %v", absContextDir, err)
|
||||
}
|
||||
|
||||
if !stat.IsDir() {
|
||||
return "", "", fmt.Errorf("context must be a directory: %s", absContextDir)
|
||||
}
|
||||
|
||||
absDockerfile := givenDockerfile
|
||||
if absDockerfile == "" {
|
||||
// No -f/--file was specified so use the default relative to the
|
||||
// context directory.
|
||||
absDockerfile = filepath.Join(absContextDir, DefaultDockerfileName)
|
||||
|
||||
// Just to be nice ;-) look for 'dockerfile' too but only
|
||||
// use it if we found it, otherwise ignore this check
|
||||
if _, err = os.Lstat(absDockerfile); os.IsNotExist(err) {
|
||||
altPath := filepath.Join(absContextDir, strings.ToLower(DefaultDockerfileName))
|
||||
if _, err = os.Lstat(altPath); err == nil {
|
||||
absDockerfile = altPath
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// If not already an absolute path, the Dockerfile path should be joined to
|
||||
// the base directory.
|
||||
if !filepath.IsAbs(absDockerfile) {
|
||||
absDockerfile = filepath.Join(absContextDir, absDockerfile)
|
||||
}
|
||||
|
||||
// Evaluate symlinks in the path to the Dockerfile too.
|
||||
//
|
||||
// FIXME. We use isUNC (always false on non-Windows platforms) to workaround
|
||||
// an issue in golang. On Windows, EvalSymLinks does not work on UNC file
|
||||
// paths (those starting with \\). This hack means that when using links
|
||||
// on UNC paths, they will not be followed.
|
||||
if !isUNC(absDockerfile) {
|
||||
absDockerfile, err = filepath.EvalSymlinks(absDockerfile)
|
||||
if err != nil {
|
||||
return "", "", fmt.Errorf("unable to evaluate symlinks in Dockerfile path: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
if _, err := os.Lstat(absDockerfile); err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
return "", "", fmt.Errorf("Cannot locate Dockerfile: %q", absDockerfile)
|
||||
}
|
||||
return "", "", fmt.Errorf("unable to stat Dockerfile: %v", err)
|
||||
}
|
||||
|
||||
if relDockerfile, err = filepath.Rel(absContextDir, absDockerfile); err != nil {
|
||||
return "", "", fmt.Errorf("unable to get relative Dockerfile path: %v", err)
|
||||
}
|
||||
|
||||
if strings.HasPrefix(relDockerfile, ".."+string(filepath.Separator)) {
|
||||
return "", "", fmt.Errorf("The Dockerfile (%s) must be within the build context (%s)", givenDockerfile, givenContextDir)
|
||||
}
|
||||
|
||||
return absContextDir, relDockerfile, nil
|
||||
}
|
||||
|
||||
// isUNC returns true if the path is UNC (one starting \\). It always returns
|
||||
// false on Linux.
|
||||
func isUNC(path string) bool {
|
||||
return runtime.GOOS == "windows" && strings.HasPrefix(path, `\\`)
|
||||
}
|
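As a rough usage sketch (not part of the diff), a CLI caller would typically resolve the context and Dockerfile paths and then validate that the tree is readable before tarring it. The program below is illustrative only and assumes this `builder` package is importable at the shown path.

```
package main

import (
	"fmt"
	"log"

	"github.com/docker/docker/builder"
)

func main() {
	// Resolve the local context directory and the Dockerfile path relative to it.
	absContext, relDockerfile, err := builder.GetContextFromLocalDir(".", "")
	if err != nil {
		log.Fatal(err)
	}
	// Make sure every file in the context can actually be read before building.
	if err := builder.ValidateContextDirectory(absContext, nil); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("building %s with %s\n", absContext, relDockerfile)
}
```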
|
@@ -1,6 +1,6 @@
// +build !windows

package client
package builder

import (
	"path/filepath"
@@ -1,6 +1,6 @@
// +build windows

package client
package builder

import (
	"path/filepath"
@@ -19,7 +19,6 @@ import (
	"github.com/Sirupsen/logrus"
	"github.com/docker/docker/api"
	"github.com/docker/docker/builder"
	derr "github.com/docker/docker/errors"
	"github.com/docker/docker/pkg/signal"
	"github.com/docker/docker/pkg/system"
	runconfigopts "github.com/docker/docker/runconfig/opts"
@ -40,12 +39,12 @@ func nullDispatch(b *Builder, args []string, attributes map[string]bool, origina
|
|||
//
|
||||
func env(b *Builder, args []string, attributes map[string]bool, original string) error {
|
||||
if len(args) == 0 {
|
||||
return derr.ErrorCodeAtLeastOneArg.WithArgs("ENV")
|
||||
return errAtLeastOneArgument("ENV")
|
||||
}
|
||||
|
||||
if len(args)%2 != 0 {
|
||||
// should never get here, but just in case
|
||||
return derr.ErrorCodeTooManyArgs.WithArgs("ENV")
|
||||
return errTooManyArguments("ENV")
|
||||
}
|
||||
|
||||
if err := b.flags.Parse(); err != nil {
|
||||
|
@ -99,7 +98,7 @@ func env(b *Builder, args []string, attributes map[string]bool, original string)
|
|||
// Sets the maintainer metadata.
|
||||
func maintainer(b *Builder, args []string, attributes map[string]bool, original string) error {
|
||||
if len(args) != 1 {
|
||||
return derr.ErrorCodeExactlyOneArg.WithArgs("MAINTAINER")
|
||||
return errExactlyOneArgument("MAINTAINER")
|
||||
}
|
||||
|
||||
if err := b.flags.Parse(); err != nil {
|
||||
|
@ -116,11 +115,11 @@ func maintainer(b *Builder, args []string, attributes map[string]bool, original
|
|||
//
|
||||
func label(b *Builder, args []string, attributes map[string]bool, original string) error {
|
||||
if len(args) == 0 {
|
||||
return derr.ErrorCodeAtLeastOneArg.WithArgs("LABEL")
|
||||
return errAtLeastOneArgument("LABEL")
|
||||
}
|
||||
if len(args)%2 != 0 {
|
||||
// should never get here, but just in case
|
||||
return derr.ErrorCodeTooManyArgs.WithArgs("LABEL")
|
||||
return errTooManyArguments("LABEL")
|
||||
}
|
||||
|
||||
if err := b.flags.Parse(); err != nil {
|
||||
|
@ -152,7 +151,7 @@ func label(b *Builder, args []string, attributes map[string]bool, original strin
|
|||
//
|
||||
func add(b *Builder, args []string, attributes map[string]bool, original string) error {
|
||||
if len(args) < 2 {
|
||||
return derr.ErrorCodeAtLeastTwoArgs.WithArgs("ADD")
|
||||
return errAtLeastOneArgument("ADD")
|
||||
}
|
||||
|
||||
if err := b.flags.Parse(); err != nil {
|
||||
|
@ -168,7 +167,7 @@ func add(b *Builder, args []string, attributes map[string]bool, original string)
|
|||
//
|
||||
func dispatchCopy(b *Builder, args []string, attributes map[string]bool, original string) error {
|
||||
if len(args) < 2 {
|
||||
return derr.ErrorCodeAtLeastTwoArgs.WithArgs("COPY")
|
||||
return errAtLeastOneArgument("COPY")
|
||||
}
|
||||
|
||||
if err := b.flags.Parse(); err != nil {
|
||||
|
@ -184,7 +183,7 @@ func dispatchCopy(b *Builder, args []string, attributes map[string]bool, origina
|
|||
//
|
||||
func from(b *Builder, args []string, attributes map[string]bool, original string) error {
|
||||
if len(args) != 1 {
|
||||
return derr.ErrorCodeExactlyOneArg.WithArgs("FROM")
|
||||
return errExactlyOneArgument("FROM")
|
||||
}
|
||||
|
||||
if err := b.flags.Parse(); err != nil {
|
||||
|
@ -233,7 +232,7 @@ func from(b *Builder, args []string, attributes map[string]bool, original string
|
|||
//
|
||||
func onbuild(b *Builder, args []string, attributes map[string]bool, original string) error {
|
||||
if len(args) == 0 {
|
||||
return derr.ErrorCodeAtLeastOneArg.WithArgs("ONBUILD")
|
||||
return errAtLeastOneArgument("ONBUILD")
|
||||
}
|
||||
|
||||
if err := b.flags.Parse(); err != nil {
|
||||
|
@ -243,9 +242,9 @@ func onbuild(b *Builder, args []string, attributes map[string]bool, original str
|
|||
triggerInstruction := strings.ToUpper(strings.TrimSpace(args[0]))
|
||||
switch triggerInstruction {
|
||||
case "ONBUILD":
|
||||
return derr.ErrorCodeChainOnBuild
|
||||
return fmt.Errorf("Chaining ONBUILD via `ONBUILD ONBUILD` isn't allowed")
|
||||
case "MAINTAINER", "FROM":
|
||||
return derr.ErrorCodeBadOnBuildCmd.WithArgs(triggerInstruction)
|
||||
return fmt.Errorf("%s isn't allowed as an ONBUILD trigger", triggerInstruction)
|
||||
}
|
||||
|
||||
original = regexp.MustCompile(`(?i)^\s*ONBUILD\s*`).ReplaceAllString(original, "")
|
||||
|
@ -260,7 +259,7 @@ func onbuild(b *Builder, args []string, attributes map[string]bool, original str
|
|||
//
|
||||
func workdir(b *Builder, args []string, attributes map[string]bool, original string) error {
|
||||
if len(args) != 1 {
|
||||
return derr.ErrorCodeExactlyOneArg.WithArgs("WORKDIR")
|
||||
return errExactlyOneArgument("WORKDIR")
|
||||
}
|
||||
|
||||
if err := b.flags.Parse(); err != nil {
|
||||
|
@ -293,7 +292,7 @@ func workdir(b *Builder, args []string, attributes map[string]bool, original str
|
|||
//
|
||||
func run(b *Builder, args []string, attributes map[string]bool, original string) error {
|
||||
if b.image == "" && !b.noBaseImage {
|
||||
return derr.ErrorCodeMissingFrom
|
||||
return fmt.Errorf("Please provide a source image with `from` prior to run")
|
||||
}
|
||||
|
||||
if err := b.flags.Parse(); err != nil {
|
||||
|
@ -311,20 +310,20 @@ func run(b *Builder, args []string, attributes map[string]bool, original string)
|
|||
}
|
||||
|
||||
config := &container.Config{
|
||||
Cmd: strslice.New(args...),
|
||||
Cmd: strslice.StrSlice(args),
|
||||
Image: b.image,
|
||||
}
|
||||
|
||||
// stash the cmd
|
||||
cmd := b.runConfig.Cmd
|
||||
if b.runConfig.Entrypoint.Len() == 0 && b.runConfig.Cmd.Len() == 0 {
|
||||
if len(b.runConfig.Entrypoint) == 0 && len(b.runConfig.Cmd) == 0 {
|
||||
b.runConfig.Cmd = config.Cmd
|
||||
}
|
||||
|
||||
// stash the config environment
|
||||
env := b.runConfig.Env
|
||||
|
||||
defer func(cmd *strslice.StrSlice) { b.runConfig.Cmd = cmd }(cmd)
|
||||
defer func(cmd strslice.StrSlice) { b.runConfig.Cmd = cmd }(cmd)
|
||||
defer func(env []string) { b.runConfig.Env = env }(env)
|
||||
|
||||
// derive the net build-time environment for this run. We let config
|
||||
|
@ -367,7 +366,7 @@ func run(b *Builder, args []string, attributes map[string]bool, original string)
|
|||
if len(cmdBuildEnv) > 0 {
|
||||
sort.Strings(cmdBuildEnv)
|
||||
tmpEnv := append([]string{fmt.Sprintf("|%d", len(cmdBuildEnv))}, cmdBuildEnv...)
|
||||
saveCmd = strslice.New(append(tmpEnv, saveCmd.Slice()...)...)
|
||||
saveCmd = strslice.StrSlice(append(tmpEnv, saveCmd...))
|
||||
}
|
||||
|
||||
b.runConfig.Cmd = saveCmd
|
||||
|
@ -425,7 +424,7 @@ func cmd(b *Builder, args []string, attributes map[string]bool, original string)
|
|||
}
|
||||
}
|
||||
|
||||
b.runConfig.Cmd = strslice.New(cmdSlice...)
|
||||
b.runConfig.Cmd = strslice.StrSlice(cmdSlice)
|
||||
|
||||
if err := b.commit("", b.runConfig.Cmd, fmt.Sprintf("CMD %q", cmdSlice)); err != nil {
|
||||
return err
|
||||
|
@ -456,16 +455,16 @@ func entrypoint(b *Builder, args []string, attributes map[string]bool, original
|
|||
switch {
|
||||
case attributes["json"]:
|
||||
// ENTRYPOINT ["echo", "hi"]
|
||||
b.runConfig.Entrypoint = strslice.New(parsed...)
|
||||
b.runConfig.Entrypoint = strslice.StrSlice(parsed)
|
||||
case len(parsed) == 0:
|
||||
// ENTRYPOINT []
|
||||
b.runConfig.Entrypoint = nil
|
||||
default:
|
||||
// ENTRYPOINT echo hi
|
||||
if runtime.GOOS != "windows" {
|
||||
b.runConfig.Entrypoint = strslice.New("/bin/sh", "-c", parsed[0])
|
||||
b.runConfig.Entrypoint = strslice.StrSlice{"/bin/sh", "-c", parsed[0]}
|
||||
} else {
|
||||
b.runConfig.Entrypoint = strslice.New("cmd", "/S", "/C", parsed[0])
|
||||
b.runConfig.Entrypoint = strslice.StrSlice{"cmd", "/S", "/C", parsed[0]}
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -491,7 +490,7 @@ func expose(b *Builder, args []string, attributes map[string]bool, original stri
|
|||
portsTab := args
|
||||
|
||||
if len(args) == 0 {
|
||||
return derr.ErrorCodeAtLeastOneArg.WithArgs("EXPOSE")
|
||||
return errAtLeastOneArgument("EXPOSE")
|
||||
}
|
||||
|
||||
if err := b.flags.Parse(); err != nil {
|
||||
|
@ -530,7 +529,7 @@ func expose(b *Builder, args []string, attributes map[string]bool, original stri
|
|||
//
|
||||
func user(b *Builder, args []string, attributes map[string]bool, original string) error {
|
||||
if len(args) != 1 {
|
||||
return derr.ErrorCodeExactlyOneArg.WithArgs("USER")
|
||||
return errExactlyOneArgument("USER")
|
||||
}
|
||||
|
||||
if err := b.flags.Parse(); err != nil {
|
||||
|
@ -547,7 +546,7 @@ func user(b *Builder, args []string, attributes map[string]bool, original string
|
|||
//
|
||||
func volume(b *Builder, args []string, attributes map[string]bool, original string) error {
|
||||
if len(args) == 0 {
|
||||
return derr.ErrorCodeAtLeastOneArg.WithArgs("VOLUME")
|
||||
return errAtLeastOneArgument("VOLUME")
|
||||
}
|
||||
|
||||
if err := b.flags.Parse(); err != nil {
|
||||
|
@ -560,7 +559,7 @@ func volume(b *Builder, args []string, attributes map[string]bool, original stri
|
|||
for _, v := range args {
|
||||
v = strings.TrimSpace(v)
|
||||
if v == "" {
|
||||
return derr.ErrorCodeVolumeEmpty
|
||||
return fmt.Errorf("Volume specified can not be an empty string")
|
||||
}
|
||||
b.runConfig.Volumes[v] = struct{}{}
|
||||
}
|
||||
|
@ -631,3 +630,15 @@ func arg(b *Builder, args []string, attributes map[string]bool, original string)
|
|||
|
||||
return b.commit("", b.runConfig.Cmd, fmt.Sprintf("ARG %s", arg))
|
||||
}
|
||||
|
||||
func errAtLeastOneArgument(command string) error {
|
||||
return fmt.Errorf("%s requires at least one argument", command)
|
||||
}
|
||||
|
||||
func errExactlyOneArgument(command string) error {
|
||||
return fmt.Errorf("%s requires exactly one argument", command)
|
||||
}
|
||||
|
||||
func errTooManyArguments(command string) error {
|
||||
return fmt.Errorf("Bad input to %s, too many arguments", command)
|
||||
}
|
||||
|
|
|
@ -19,7 +19,6 @@ import (
|
|||
"time"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/docker/api"
|
||||
"github.com/docker/docker/builder"
|
||||
"github.com/docker/docker/builder/dockerfile/parser"
|
||||
"github.com/docker/docker/pkg/archive"
|
||||
|
@ -38,7 +37,7 @@ import (
|
|||
"github.com/docker/engine-api/types/strslice"
|
||||
)
|
||||
|
||||
func (b *Builder) commit(id string, autoCmd *strslice.StrSlice, comment string) error {
|
||||
func (b *Builder) commit(id string, autoCmd strslice.StrSlice, comment string) error {
|
||||
if b.disableCommit {
|
||||
return nil
|
||||
}
|
||||
|
@ -49,11 +48,11 @@ func (b *Builder) commit(id string, autoCmd *strslice.StrSlice, comment string)
|
|||
if id == "" {
|
||||
cmd := b.runConfig.Cmd
|
||||
if runtime.GOOS != "windows" {
|
||||
b.runConfig.Cmd = strslice.New("/bin/sh", "-c", "#(nop) "+comment)
|
||||
b.runConfig.Cmd = strslice.StrSlice{"/bin/sh", "-c", "#(nop) " + comment}
|
||||
} else {
|
||||
b.runConfig.Cmd = strslice.New("cmd", "/S /C", "REM (nop) "+comment)
|
||||
b.runConfig.Cmd = strslice.StrSlice{"cmd", "/S /C", "REM (nop) " + comment}
|
||||
}
|
||||
defer func(cmd *strslice.StrSlice) { b.runConfig.Cmd = cmd }(cmd)
|
||||
defer func(cmd strslice.StrSlice) { b.runConfig.Cmd = cmd }(cmd)
|
||||
|
||||
hit, err := b.probeCache()
|
||||
if err != nil {
|
||||
|
@ -172,11 +171,11 @@ func (b *Builder) runContextCommand(args []string, allowRemote bool, allowLocalD
|
|||
|
||||
cmd := b.runConfig.Cmd
|
||||
if runtime.GOOS != "windows" {
|
||||
b.runConfig.Cmd = strslice.New("/bin/sh", "-c", fmt.Sprintf("#(nop) %s %s in %s", cmdName, srcHash, dest))
|
||||
b.runConfig.Cmd = strslice.StrSlice{"/bin/sh", "-c", fmt.Sprintf("#(nop) %s %s in %s", cmdName, srcHash, dest)}
|
||||
} else {
|
||||
b.runConfig.Cmd = strslice.New("cmd", "/S", "/C", fmt.Sprintf("REM (nop) %s %s in %s", cmdName, srcHash, dest))
|
||||
b.runConfig.Cmd = strslice.StrSlice{"cmd", "/S", "/C", fmt.Sprintf("REM (nop) %s %s in %s", cmdName, srcHash, dest)}
|
||||
}
|
||||
defer func(cmd *strslice.StrSlice) { b.runConfig.Cmd = cmd }(cmd)
|
||||
defer func(cmd strslice.StrSlice) { b.runConfig.Cmd = cmd }(cmd)
|
||||
|
||||
if hit, err := b.probeCache(); err != nil {
|
||||
return err
|
||||
|
@ -506,7 +505,7 @@ func (b *Builder) create() (string, error) {
|
|||
|
||||
// TODO: why not embed a hostconfig in builder?
|
||||
hostConfig := &container.HostConfig{
|
||||
Isolation: b.options.IsolationLevel,
|
||||
Isolation: b.options.Isolation,
|
||||
ShmSize: b.options.ShmSize,
|
||||
Resources: resources,
|
||||
}
|
||||
|
@ -528,9 +527,9 @@ func (b *Builder) create() (string, error) {
|
|||
b.tmpContainers[c.ID] = struct{}{}
|
||||
fmt.Fprintf(b.Stdout, " ---> Running in %s\n", stringid.TruncateID(c.ID))
|
||||
|
||||
if config.Cmd.Len() > 0 {
|
||||
if len(config.Cmd) > 0 {
|
||||
// override the entry point that may have been picked up from the base image
|
||||
if err := b.docker.ContainerUpdateCmdOnBuild(c.ID, config.Cmd.Slice()); err != nil {
|
||||
if err := b.docker.ContainerUpdateCmdOnBuild(c.ID, config.Cmd); err != nil {
|
||||
return "", err
|
||||
}
|
||||
}
|
||||
|
@ -568,7 +567,7 @@ func (b *Builder) run(cID string) (err error) {
|
|||
if ret, _ := b.docker.ContainerWait(cID, -1); ret != 0 {
|
||||
// TODO: change error type, because jsonmessage.JSONError assumes HTTP
|
||||
return &jsonmessage.JSONError{
|
||||
Message: fmt.Sprintf("The command '%s' returned a non-zero code: %d", b.runConfig.Cmd.ToString(), ret),
|
||||
Message: fmt.Sprintf("The command '%s' returned a non-zero code: %d", strings.Join(b.runConfig.Cmd, " "), ret),
|
||||
Code: ret,
|
||||
}
|
||||
}
|
||||
|
@ -604,7 +603,7 @@ func (b *Builder) readDockerfile() error {
|
|||
// that then look for 'dockerfile'. If neither are found then default
|
||||
// back to 'Dockerfile' and use that in the error message.
|
||||
if b.options.Dockerfile == "" {
|
||||
b.options.Dockerfile = api.DefaultDockerfileName
|
||||
b.options.Dockerfile = builder.DefaultDockerfileName
|
||||
if _, _, err := b.context.Stat(b.options.Dockerfile); os.IsNotExist(err) {
|
||||
lowercase := strings.ToLower(b.options.Dockerfile)
|
||||
if _, _, err := b.context.Stat(lowercase); err == nil {
|
||||
|
|
|
@ -71,7 +71,7 @@ func parseWords(rest string) []string {
|
|||
if unicode.IsSpace(ch) { // skip spaces
|
||||
continue
|
||||
}
|
||||
phase = inWord // found it, fall thru
|
||||
phase = inWord // found it, fall through
|
||||
}
|
||||
if (phase == inWord || phase == inQuote) && (pos == len(rest)) {
|
||||
if blankOK || len(word) > 0 {
|
||||
|
|
|
@ -118,7 +118,7 @@ func extractBuilderFlags(line string) (string, []string, error) {
|
|||
return line[pos:], words, nil
|
||||
}
|
||||
|
||||
phase = inWord // found something with "--", fall thru
phase = inWord // found something with "--", fall through
|
||||
}
|
||||
if (phase == inWord || phase == inQuote) && (pos == len(line)) {
|
||||
if word != "--" && (blankOK || len(word) > 0) {
|
||||
|
|
|
@ -8,7 +8,6 @@ import (
|
|||
"io/ioutil"
|
||||
"regexp"
|
||||
|
||||
"github.com/docker/docker/api"
|
||||
"github.com/docker/docker/pkg/archive"
|
||||
"github.com/docker/docker/pkg/httputils"
|
||||
"github.com/docker/docker/pkg/urlutil"
|
||||
|
@ -87,7 +86,7 @@ func DetectContextFromRemoteURL(r io.ReadCloser, remoteURL string, createProgres
|
|||
|
||||
// dockerfileName is set to signal that the remote was interpreted as a single Dockerfile, in which case the caller
|
||||
// should use dockerfileName as the new name for the Dockerfile, irrespective of any other user input.
|
||||
dockerfileName = api.DefaultDockerfileName
|
||||
dockerfileName = DefaultDockerfileName
|
||||
|
||||
// TODO: return a context without tarsum
|
||||
return archive.Generate(dockerfileName, string(dockerfile))
|
||||
|
|
|
@ -19,7 +19,7 @@ type CommonFlags struct {
|
|||
TrustKey string
|
||||
}
|
||||
|
||||
// Command is the struct contains command name and description
|
||||
// Command is the struct containing the command name and description
|
||||
type Command struct {
|
||||
Name string
|
||||
Description string
|
||||
|
@ -42,7 +42,7 @@ var dockerCommands = []Command{
|
|||
{"inspect", "Return low-level information on a container or image"},
|
||||
{"kill", "Kill a running container"},
|
||||
{"load", "Load an image from a tar archive or STDIN"},
|
||||
{"login", "Register or log in to a Docker registry"},
|
||||
{"login", "Log in to a Docker registry"},
|
||||
{"logout", "Log out from a Docker registry"},
|
||||
{"logs", "Fetch the logs of a container"},
|
||||
{"network", "Manage Docker networks"},
|
||||
|
@ -64,7 +64,7 @@ var dockerCommands = []Command{
|
|||
{"tag", "Tag an image into a repository"},
|
||||
{"top", "Display the running processes of a container"},
|
||||
{"unpause", "Unpause all processes within a container"},
|
||||
{"update", "Update resources of one or more containers"},
|
||||
{"update", "Update configuration of one or more containers"},
|
||||
{"version", "Show the Docker version information"},
|
||||
{"volume", "Manage Docker volumes"},
|
||||
{"wait", "Block until a container stops, then print its exit code"},
|
||||
|
|
|
@ -17,6 +17,7 @@ import (
|
|||
const (
|
||||
// ConfigFileName is the name of config file
|
||||
ConfigFileName = "config.json"
|
||||
configFileDir = ".docker"
|
||||
oldConfigfile = ".dockercfg"
|
||||
|
||||
// This constant is only used for really old config files when the
|
||||
|
@ -31,7 +32,7 @@ var (
|
|||
|
||||
func init() {
|
||||
if configDir == "" {
|
||||
configDir = filepath.Join(homedir.Get(), ".docker")
|
||||
configDir = filepath.Join(homedir.Get(), configFileDir)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -47,12 +48,13 @@ func SetConfigDir(dir string) {
|
|||
|
||||
// ConfigFile ~/.docker/config.json file info
|
||||
type ConfigFile struct {
|
||||
AuthConfigs map[string]types.AuthConfig `json:"auths"`
|
||||
HTTPHeaders map[string]string `json:"HttpHeaders,omitempty"`
|
||||
PsFormat string `json:"psFormat,omitempty"`
|
||||
ImagesFormat string `json:"imagesFormat,omitempty"`
|
||||
DetachKeys string `json:"detachKeys,omitempty"`
|
||||
filename string // Note: not serialized - for internal use only
|
||||
AuthConfigs map[string]types.AuthConfig `json:"auths"`
|
||||
HTTPHeaders map[string]string `json:"HttpHeaders,omitempty"`
|
||||
PsFormat string `json:"psFormat,omitempty"`
|
||||
ImagesFormat string `json:"imagesFormat,omitempty"`
|
||||
DetachKeys string `json:"detachKeys,omitempty"`
|
||||
CredentialsStore string `json:"credsStore,omitempty"`
|
||||
filename string // Note: not serialized - for internal use only
|
||||
}
|
||||
|
||||
// NewConfigFile initializes an empty configuration file for the given filename 'fn'
|
||||
|
@ -86,11 +88,6 @@ func (configFile *ConfigFile) LegacyLoadFromReader(configData io.Reader) error {
|
|||
if err != nil {
|
||||
return err
|
||||
}
|
||||
origEmail := strings.Split(arr[1], " = ")
|
||||
if len(origEmail) != 2 {
|
||||
return fmt.Errorf("Invalid Auth config file")
|
||||
}
|
||||
authConfig.Email = origEmail[1]
|
||||
authConfig.ServerAddress = defaultIndexserver
|
||||
configFile.AuthConfigs[defaultIndexserver] = authConfig
|
||||
} else {
|
||||
|
@ -126,6 +123,13 @@ func (configFile *ConfigFile) LoadFromReader(configData io.Reader) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
// ContainsAuth returns whether there is authentication configured
|
||||
// in this file or not.
|
||||
func (configFile *ConfigFile) ContainsAuth() bool {
|
||||
return configFile.CredentialsStore != "" ||
|
||||
(configFile.AuthConfigs != nil && len(configFile.AuthConfigs) > 0)
|
||||
}
|
||||
|
||||
// LegacyLoadFromReader is a convenience function that creates a ConfigFile object from
|
||||
// a non-nested reader
|
||||
func LegacyLoadFromReader(configData io.Reader) (*ConfigFile, error) {
|
||||
|
@ -249,6 +253,10 @@ func (configFile *ConfigFile) Filename() string {
|
|||
|
||||
// encodeAuth creates a base64 encoded string containing authorization information
|
||||
func encodeAuth(authConfig *types.AuthConfig) string {
|
||||
if authConfig.Username == "" && authConfig.Password == "" {
|
||||
return ""
|
||||
}
|
||||
|
||||
authStr := authConfig.Username + ":" + authConfig.Password
|
||||
msg := []byte(authStr)
|
||||
encoded := make([]byte, base64.StdEncoding.EncodedLen(len(msg)))
|
||||
|
@ -258,6 +266,10 @@ func encodeAuth(authConfig *types.AuthConfig) string {
|
|||
|
||||
// decodeAuth decodes a base64 encoded string and returns username and password
|
||||
func decodeAuth(authStr string) (string, string, error) {
|
||||
if authStr == "" {
|
||||
return "", "", nil
|
||||
}
|
||||
|
||||
decLen := base64.StdEncoding.DecodedLen(len(authStr))
|
||||
decoded := make([]byte, decLen)
|
||||
authByte := []byte(authStr)
|
||||
|
|
|
@ -111,12 +111,9 @@ func TestOldInvalidsAuth(t *testing.T) {
|
|||
invalids := map[string]string{
|
||||
`username = test`: "The Auth config file is empty",
|
||||
`username
|
||||
password
|
||||
email`: "Invalid Auth config file",
|
||||
password`: "Invalid Auth config file",
|
||||
`username = test
|
||||
email`: "Invalid auth configuration file",
|
||||
`username = am9lam9lOmhlbGxv
|
||||
email`: "Invalid Auth config file",
|
||||
}
|
||||
|
||||
tmpHome, err := ioutil.TempDir("", "config-test")
|
||||
|
@ -164,7 +161,7 @@ func TestOldValidAuth(t *testing.T) {
|
|||
|
||||
fn := filepath.Join(tmpHome, oldConfigfile)
|
||||
js := `username = am9lam9lOmhlbGxv
|
||||
email = user@example.com`
|
||||
email = user@example.com`
|
||||
if err := ioutil.WriteFile(fn, []byte(js), 0600); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
@ -176,15 +173,23 @@ email = user@example.com`
|
|||
|
||||
// defaultIndexserver is https://index.docker.io/v1/
|
||||
ac := config.AuthConfigs["https://index.docker.io/v1/"]
|
||||
if ac.Email != "user@example.com" || ac.Username != "joejoe" || ac.Password != "hello" {
|
||||
if ac.Username != "joejoe" || ac.Password != "hello" {
|
||||
t.Fatalf("Missing data from parsing:\n%q", config)
|
||||
}
|
||||
|
||||
// Now save it and make sure it shows up in new form
|
||||
configStr := saveConfigAndValidateNewFormat(t, config, tmpHome)
|
||||
|
||||
if !strings.Contains(configStr, "user@example.com") {
|
||||
t.Fatalf("Should have save in new form: %s", configStr)
|
||||
expConfStr := `{
|
||||
"auths": {
|
||||
"https://index.docker.io/v1/": {
|
||||
"auth": "am9lam9lOmhlbGxv"
|
||||
}
|
||||
}
|
||||
}`
|
||||
|
||||
if configStr != expConfStr {
|
||||
t.Fatalf("Should have save in new form: \n%s\n not \n%s", configStr, expConfStr)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -239,15 +244,24 @@ func TestOldJson(t *testing.T) {
|
|||
}
|
||||
|
||||
ac := config.AuthConfigs["https://index.docker.io/v1/"]
|
||||
if ac.Email != "user@example.com" || ac.Username != "joejoe" || ac.Password != "hello" {
|
||||
if ac.Username != "joejoe" || ac.Password != "hello" {
|
||||
t.Fatalf("Missing data from parsing:\n%q", config)
|
||||
}
|
||||
|
||||
// Now save it and make sure it shows up in new form
|
||||
configStr := saveConfigAndValidateNewFormat(t, config, tmpHome)
|
||||
|
||||
if !strings.Contains(configStr, "user@example.com") {
|
||||
t.Fatalf("Should have save in new form: %s", configStr)
|
||||
expConfStr := `{
|
||||
"auths": {
|
||||
"https://index.docker.io/v1/": {
|
||||
"auth": "am9lam9lOmhlbGxv",
|
||||
"email": "user@example.com"
|
||||
}
|
||||
}
|
||||
}`
|
||||
|
||||
if configStr != expConfStr {
|
||||
t.Fatalf("Should have save in new form: \n'%s'\n not \n'%s'\n", configStr, expConfStr)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -259,7 +273,7 @@ func TestNewJson(t *testing.T) {
|
|||
defer os.RemoveAll(tmpHome)
|
||||
|
||||
fn := filepath.Join(tmpHome, ConfigFileName)
|
||||
js := ` { "auths": { "https://index.docker.io/v1/": { "auth": "am9lam9lOmhlbGxv", "email": "user@example.com" } } }`
|
||||
js := ` { "auths": { "https://index.docker.io/v1/": { "auth": "am9lam9lOmhlbGxv" } } }`
|
||||
if err := ioutil.WriteFile(fn, []byte(js), 0600); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
@ -270,15 +284,62 @@ func TestNewJson(t *testing.T) {
|
|||
}
|
||||
|
||||
ac := config.AuthConfigs["https://index.docker.io/v1/"]
|
||||
if ac.Email != "user@example.com" || ac.Username != "joejoe" || ac.Password != "hello" {
|
||||
if ac.Username != "joejoe" || ac.Password != "hello" {
|
||||
t.Fatalf("Missing data from parsing:\n%q", config)
|
||||
}
|
||||
|
||||
// Now save it and make sure it shows up in new form
|
||||
configStr := saveConfigAndValidateNewFormat(t, config, tmpHome)
|
||||
|
||||
if !strings.Contains(configStr, "user@example.com") {
|
||||
t.Fatalf("Should have save in new form: %s", configStr)
|
||||
expConfStr := `{
|
||||
"auths": {
|
||||
"https://index.docker.io/v1/": {
|
||||
"auth": "am9lam9lOmhlbGxv"
|
||||
}
|
||||
}
|
||||
}`
|
||||
|
||||
if configStr != expConfStr {
|
||||
t.Fatalf("Should have save in new form: \n%s\n not \n%s", configStr, expConfStr)
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewJsonNoEmail(t *testing.T) {
|
||||
tmpHome, err := ioutil.TempDir("", "config-test")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer os.RemoveAll(tmpHome)
|
||||
|
||||
fn := filepath.Join(tmpHome, ConfigFileName)
|
||||
js := ` { "auths": { "https://index.docker.io/v1/": { "auth": "am9lam9lOmhlbGxv" } } }`
|
||||
if err := ioutil.WriteFile(fn, []byte(js), 0600); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
config, err := Load(tmpHome)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed loading on empty json file: %q", err)
|
||||
}
|
||||
|
||||
ac := config.AuthConfigs["https://index.docker.io/v1/"]
|
||||
if ac.Username != "joejoe" || ac.Password != "hello" {
|
||||
t.Fatalf("Missing data from parsing:\n%q", config)
|
||||
}
|
||||
|
||||
// Now save it and make sure it shows up in new form
|
||||
configStr := saveConfigAndValidateNewFormat(t, config, tmpHome)
|
||||
|
||||
expConfStr := `{
|
||||
"auths": {
|
||||
"https://index.docker.io/v1/": {
|
||||
"auth": "am9lam9lOmhlbGxv"
|
||||
}
|
||||
}
|
||||
}`
|
||||
|
||||
if configStr != expConfStr {
|
||||
t.Fatalf("Should have save in new form: \n%s\n not \n%s", configStr, expConfStr)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -366,7 +427,7 @@ func TestJsonReaderNoFile(t *testing.T) {
|
|||
}
|
||||
|
||||
ac := config.AuthConfigs["https://index.docker.io/v1/"]
|
||||
if ac.Email != "user@example.com" || ac.Username != "joejoe" || ac.Password != "hello" {
|
||||
if ac.Username != "joejoe" || ac.Password != "hello" {
|
||||
t.Fatalf("Missing data from parsing:\n%q", config)
|
||||
}
|
||||
|
||||
|
@ -381,7 +442,7 @@ func TestOldJsonReaderNoFile(t *testing.T) {
|
|||
}
|
||||
|
||||
ac := config.AuthConfigs["https://index.docker.io/v1/"]
|
||||
if ac.Email != "user@example.com" || ac.Username != "joejoe" || ac.Password != "hello" {
|
||||
if ac.Username != "joejoe" || ac.Password != "hello" {
|
||||
t.Fatalf("Missing data from parsing:\n%q", config)
|
||||
}
|
||||
}
|
||||
|
@ -404,7 +465,7 @@ func TestJsonWithPsFormatNoFile(t *testing.T) {
|
|||
|
||||
func TestJsonSaveWithNoFile(t *testing.T) {
|
||||
js := `{
|
||||
"auths": { "https://index.docker.io/v1/": { "auth": "am9lam9lOmhlbGxv", "email": "user@example.com" } },
|
||||
"auths": { "https://index.docker.io/v1/": { "auth": "am9lam9lOmhlbGxv" } },
|
||||
"psFormat": "table {{.ID}}\\t{{.Label \"com.docker.label.cpu\"}}"
|
||||
}`
|
||||
config, err := LoadFromReader(strings.NewReader(js))
|
||||
|
@ -426,9 +487,16 @@ func TestJsonSaveWithNoFile(t *testing.T) {
|
|||
t.Fatalf("Failed saving to file: %q", err)
|
||||
}
|
||||
buf, err := ioutil.ReadFile(filepath.Join(tmpHome, ConfigFileName))
|
||||
if !strings.Contains(string(buf), `"auths":`) ||
|
||||
!strings.Contains(string(buf), "user@example.com") {
|
||||
t.Fatalf("Should have save in new form: %s", string(buf))
|
||||
expConfStr := `{
|
||||
"auths": {
|
||||
"https://index.docker.io/v1/": {
|
||||
"auth": "am9lam9lOmhlbGxv"
|
||||
}
|
||||
},
|
||||
"psFormat": "table {{.ID}}\\t{{.Label \"com.docker.label.cpu\"}}"
|
||||
}`
|
||||
if string(buf) != expConfStr {
|
||||
t.Fatalf("Should have save in new form: \n%s\nnot \n%s", string(buf), expConfStr)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -454,14 +522,23 @@ func TestLegacyJsonSaveWithNoFile(t *testing.T) {
|
|||
t.Fatalf("Failed saving to file: %q", err)
|
||||
}
|
||||
buf, err := ioutil.ReadFile(filepath.Join(tmpHome, ConfigFileName))
|
||||
if !strings.Contains(string(buf), `"auths":`) ||
|
||||
!strings.Contains(string(buf), "user@example.com") {
|
||||
t.Fatalf("Should have save in new form: %s", string(buf))
|
||||
|
||||
expConfStr := `{
|
||||
"auths": {
|
||||
"https://index.docker.io/v1/": {
|
||||
"auth": "am9lam9lOmhlbGxv",
|
||||
"email": "user@example.com"
|
||||
}
|
||||
}
|
||||
}`
|
||||
|
||||
if string(buf) != expConfStr {
|
||||
t.Fatalf("Should have save in new form: \n%s\n not \n%s", string(buf), expConfStr)
|
||||
}
|
||||
}
|
||||
|
||||
func TestEncodeAuth(t *testing.T) {
|
||||
newAuthConfig := &types.AuthConfig{Username: "ken", Password: "test", Email: "test@example.com"}
|
||||
newAuthConfig := &types.AuthConfig{Username: "ken", Password: "test"}
|
||||
authStr := encodeAuth(newAuthConfig)
|
||||
decAuthConfig := &types.AuthConfig{}
|
||||
var err error
|
||||
|
|
17
cliconfig/credentials/credentials.go
Normal file
|
@@ -0,0 +1,17 @@
package credentials

import (
	"github.com/docker/engine-api/types"
)

// Store is the interface that any credentials store must implement.
type Store interface {
	// Erase removes credentials from the store for a given server.
	Erase(serverAddress string) error
	// Get retrieves credentials from the store for a given server.
	Get(serverAddress string) (types.AuthConfig, error)
	// GetAll retrieves all the credentials from the store.
	GetAll() (map[string]types.AuthConfig, error)
	// Store saves credentials in the store.
	Store(authConfig types.AuthConfig) error
}
|
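To make the contract concrete, here is a minimal in-memory Store sketch of the interface above, the kind of stub a unit test might use. It is illustrative only, not part of this commit, and the `memStore` name is made up.

```
// memStore is a hypothetical test double for Store.
type memStore struct {
	auths map[string]types.AuthConfig
}

func (m *memStore) Erase(serverAddress string) error {
	delete(m.auths, serverAddress)
	return nil
}

func (m *memStore) Get(serverAddress string) (types.AuthConfig, error) {
	return m.auths[serverAddress], nil
}

func (m *memStore) GetAll() (map[string]types.AuthConfig, error) {
	return m.auths, nil
}

func (m *memStore) Store(authConfig types.AuthConfig) error {
	m.auths[authConfig.ServerAddress] = authConfig
	return nil
}
```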
22
cliconfig/credentials/default_store.go
Normal file
|
@@ -0,0 +1,22 @@
package credentials

import (
	"os/exec"

	"github.com/docker/docker/cliconfig"
)

// DetectDefaultStore sets the default credentials store
// if the host includes the default store helper program.
func DetectDefaultStore(c *cliconfig.ConfigFile) {
	if c.CredentialsStore != "" {
		// user defined
		return
	}

	if defaultCredentialsStore != "" {
		if _, err := exec.LookPath(remoteCredentialsPrefix + defaultCredentialsStore); err == nil {
			c.CredentialsStore = defaultCredentialsStore
		}
	}
}
|
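How a caller picks between the two store implementations is not shown in this hunk; the helper below is a hedged sketch of the likely selection logic after DetectDefaultStore has run, not the actual CLI wiring, and the `selectStore` name is invented for illustration.

```
// selectStore is illustrative; the real selection lives elsewhere in the CLI.
func selectStore(c *cliconfig.ConfigFile) Store {
	if c.CredentialsStore != "" {
		// A helper program such as docker-credential-osxkeychain is configured.
		return NewNativeStore(c)
	}
	// Fall back to plain-text credentials in the config file.
	return NewFileStore(c)
}
```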
3
cliconfig/credentials/default_store_darwin.go
Normal file
@@ -0,0 +1,3 @@
package credentials

const defaultCredentialsStore = "osxkeychain"
3
cliconfig/credentials/default_store_linux.go
Normal file
@@ -0,0 +1,3 @@
package credentials

const defaultCredentialsStore = "secretservice"
5
cliconfig/credentials/default_store_unsupported.go
Normal file
@@ -0,0 +1,5 @@
// +build !windows,!darwin,!linux

package credentials

const defaultCredentialsStore = ""
3
cliconfig/credentials/default_store_windows.go
Normal file
@@ -0,0 +1,3 @@
package credentials

const defaultCredentialsStore = "wincred"
|
67
cliconfig/credentials/file_store.go
Normal file
|
@@ -0,0 +1,67 @@
package credentials

import (
	"strings"

	"github.com/docker/docker/cliconfig"
	"github.com/docker/engine-api/types"
)

// fileStore implements a credentials store using
// the docker configuration file to keep the credentials in plain text.
type fileStore struct {
	file *cliconfig.ConfigFile
}

// NewFileStore creates a new file credentials store.
func NewFileStore(file *cliconfig.ConfigFile) Store {
	return &fileStore{
		file: file,
	}
}

// Erase removes the given credentials from the file store.
func (c *fileStore) Erase(serverAddress string) error {
	delete(c.file.AuthConfigs, serverAddress)
	return c.file.Save()
}

// Get retrieves credentials for a specific server from the file store.
func (c *fileStore) Get(serverAddress string) (types.AuthConfig, error) {
	authConfig, ok := c.file.AuthConfigs[serverAddress]
	if !ok {
		// Maybe they have a legacy config file, we will iterate the keys converting
		// them to the new format and testing
		for registry, ac := range c.file.AuthConfigs {
			if serverAddress == convertToHostname(registry) {
				return ac, nil
			}
		}

		authConfig = types.AuthConfig{}
	}
	return authConfig, nil
}

func (c *fileStore) GetAll() (map[string]types.AuthConfig, error) {
	return c.file.AuthConfigs, nil
}

// Store saves the given credentials in the file store.
func (c *fileStore) Store(authConfig types.AuthConfig) error {
	c.file.AuthConfigs[authConfig.ServerAddress] = authConfig
	return c.file.Save()
}

func convertToHostname(url string) string {
	stripped := url
	if strings.HasPrefix(url, "http://") {
		stripped = strings.Replace(url, "http://", "", 1)
	} else if strings.HasPrefix(url, "https://") {
		stripped = strings.Replace(url, "https://", "", 1)
	}

	nameParts := strings.SplitN(stripped, "/", 2)

	return nameParts[0]
}
|
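The legacy fallback in fileStore.Get above is worth a concrete example: a lookup by bare hostname still matches an entry keyed by the old URL form. This is a sketch only, assuming the cliconfig and engine-api types packages are imported and that NewConfigFile initializes the AuthConfigs map; the registry URL is made up.

```
f := cliconfig.NewConfigFile("/tmp/config.json")
f.AuthConfigs["https://registry.example.com/v1/"] = types.AuthConfig{Username: "joe"}

s := NewFileStore(f)
ac, _ := s.Get("registry.example.com") // matched via convertToHostname
fmt.Println(ac.Username)               // prints "joe"
```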
138
cliconfig/credentials/file_store_test.go
Normal file
|
@ -0,0 +1,138 @@
|
|||
package credentials
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"testing"
|
||||
|
||||
"github.com/docker/docker/cliconfig"
|
||||
"github.com/docker/engine-api/types"
|
||||
)
|
||||
|
||||
func newConfigFile(auths map[string]types.AuthConfig) *cliconfig.ConfigFile {
|
||||
tmp, _ := ioutil.TempFile("", "docker-test")
|
||||
name := tmp.Name()
|
||||
tmp.Close()
|
||||
|
||||
c := cliconfig.NewConfigFile(name)
|
||||
c.AuthConfigs = auths
|
||||
return c
|
||||
}
|
||||
|
||||
func TestFileStoreAddCredentials(t *testing.T) {
|
||||
f := newConfigFile(make(map[string]types.AuthConfig))
|
||||
|
||||
s := NewFileStore(f)
|
||||
err := s.Store(types.AuthConfig{
|
||||
Auth: "super_secret_token",
|
||||
Email: "foo@example.com",
|
||||
ServerAddress: "https://example.com",
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
if len(f.AuthConfigs) != 1 {
|
||||
t.Fatalf("expected 1 auth config, got %d", len(f.AuthConfigs))
|
||||
}
|
||||
|
||||
a, ok := f.AuthConfigs["https://example.com"]
|
||||
if !ok {
|
||||
t.Fatalf("expected auth for https://example.com, got %v", f.AuthConfigs)
|
||||
}
|
||||
if a.Auth != "super_secret_token" {
|
||||
t.Fatalf("expected auth `super_secret_token`, got %s", a.Auth)
|
||||
}
|
||||
if a.Email != "foo@example.com" {
|
||||
t.Fatalf("expected email `foo@example.com`, got %s", a.Email)
|
||||
}
|
||||
}
|
||||
|
||||
func TestFileStoreGet(t *testing.T) {
|
||||
f := newConfigFile(map[string]types.AuthConfig{
|
||||
"https://example.com": {
|
||||
Auth: "super_secret_token",
|
||||
Email: "foo@example.com",
|
||||
ServerAddress: "https://example.com",
|
||||
},
|
||||
})
|
||||
|
||||
s := NewFileStore(f)
|
||||
a, err := s.Get("https://example.com")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if a.Auth != "super_secret_token" {
|
||||
t.Fatalf("expected auth `super_secret_token`, got %s", a.Auth)
|
||||
}
|
||||
if a.Email != "foo@example.com" {
|
||||
t.Fatalf("expected email `foo@example.com`, got %s", a.Email)
|
||||
}
|
||||
}
|
||||
|
||||
func TestFileStoreGetAll(t *testing.T) {
|
||||
s1 := "https://example.com"
|
||||
s2 := "https://example2.com"
|
||||
f := newConfigFile(map[string]types.AuthConfig{
|
||||
s1: {
|
||||
Auth: "super_secret_token",
|
||||
Email: "foo@example.com",
|
||||
ServerAddress: "https://example.com",
|
||||
},
|
||||
s2: {
|
||||
Auth: "super_secret_token2",
|
||||
Email: "foo@example2.com",
|
||||
ServerAddress: "https://example2.com",
|
||||
},
|
||||
})
|
||||
|
||||
s := NewFileStore(f)
|
||||
as, err := s.GetAll()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if len(as) != 2 {
|
||||
t.Fatalf("wanted 2, got %d", len(as))
|
||||
}
|
||||
if as[s1].Auth != "super_secret_token" {
|
||||
t.Fatalf("expected auth `super_secret_token`, got %s", as[s1].Auth)
|
||||
}
|
||||
if as[s1].Email != "foo@example.com" {
|
||||
t.Fatalf("expected email `foo@example.com`, got %s", as[s1].Email)
|
||||
}
|
||||
if as[s2].Auth != "super_secret_token2" {
|
||||
t.Fatalf("expected auth `super_secret_token2`, got %s", as[s2].Auth)
|
||||
}
|
||||
if as[s2].Email != "foo@example2.com" {
|
||||
t.Fatalf("expected email `foo@example2.com`, got %s", as[s2].Email)
|
||||
}
|
||||
}
|
||||
|
||||
func TestFileStoreErase(t *testing.T) {
|
||||
f := newConfigFile(map[string]types.AuthConfig{
|
||||
"https://example.com": {
|
||||
Auth: "super_secret_token",
|
||||
Email: "foo@example.com",
|
||||
ServerAddress: "https://example.com",
|
||||
},
|
||||
})
|
||||
|
||||
s := NewFileStore(f)
|
||||
err := s.Erase("https://example.com")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// file store never returns errors, check that the auth config is empty
|
||||
a, err := s.Get("https://example.com")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
if a.Auth != "" {
|
||||
t.Fatalf("expected empty auth token, got %s", a.Auth)
|
||||
}
|
||||
if a.Email != "" {
|
||||
t.Fatalf("expected empty email, got %s", a.Email)
|
||||
}
|
||||
}
|
180
cliconfig/credentials/native_store.go
Normal file
|
@ -0,0 +1,180 @@
|
|||
package credentials
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"strings"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/docker/cliconfig"
|
||||
"github.com/docker/engine-api/types"
|
||||
)
|
||||
|
||||
const remoteCredentialsPrefix = "docker-credential-"
|
||||
|
||||
// Standardize the not found error, so every helper returns
|
||||
// the same message and docker can handle it properly.
|
||||
var errCredentialsNotFound = errors.New("credentials not found in native keychain")
|
||||
|
||||
// command is an interface that remote executed commands implement.
|
||||
type command interface {
|
||||
Output() ([]byte, error)
|
||||
Input(in io.Reader)
|
||||
}
|
||||
|
||||
// credentialsRequest holds information shared between docker and a remote credential store.
|
||||
type credentialsRequest struct {
|
||||
ServerURL string
|
||||
Username string
|
||||
Password string
|
||||
}
|
||||
|
||||
// credentialsGetResponse is the information serialized from a remote store
|
||||
// when the plugin sends requests to get the user credentials.
|
||||
type credentialsGetResponse struct {
|
||||
Username string
|
||||
Password string
|
||||
}
|
||||
|
||||
// nativeStore implements a credentials store
|
||||
// using native keychain to keep credentials secure.
|
||||
// It piggybacks into a file store to keep users' emails.
|
||||
type nativeStore struct {
|
||||
commandFn func(args ...string) command
|
||||
fileStore Store
|
||||
}
|
||||
|
||||
// NewNativeStore creates a new native store that
|
||||
// uses a remote helper program to manage credentials.
|
||||
func NewNativeStore(file *cliconfig.ConfigFile) Store {
|
||||
return &nativeStore{
|
||||
commandFn: shellCommandFn(file.CredentialsStore),
|
||||
fileStore: NewFileStore(file),
|
||||
}
|
||||
}
|
||||
|
||||
// Erase removes the given credentials from the native store.
|
||||
func (c *nativeStore) Erase(serverAddress string) error {
|
||||
if err := c.eraseCredentialsFromStore(serverAddress); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Fallback to plain text store to remove email
|
||||
return c.fileStore.Erase(serverAddress)
|
||||
}
|
||||
|
||||
// Get retrieves credentials for a specific server from the native store.
|
||||
func (c *nativeStore) Get(serverAddress string) (types.AuthConfig, error) {
|
||||
// load the user email if it exists, or an empty auth config.
|
||||
auth, _ := c.fileStore.Get(serverAddress)
|
||||
|
||||
creds, err := c.getCredentialsFromStore(serverAddress)
|
||||
if err != nil {
|
||||
return auth, err
|
||||
}
|
||||
auth.Username = creds.Username
|
||||
auth.Password = creds.Password
|
||||
|
||||
return auth, nil
|
||||
}
|
||||
|
||||
// GetAll retrieves all the credentials from the native store.
|
||||
func (c *nativeStore) GetAll() (map[string]types.AuthConfig, error) {
|
||||
auths, _ := c.fileStore.GetAll()
|
||||
|
||||
for s, ac := range auths {
|
||||
creds, _ := c.getCredentialsFromStore(s)
|
||||
ac.Username = creds.Username
|
||||
ac.Password = creds.Password
|
||||
auths[s] = ac
|
||||
}
|
||||
|
||||
return auths, nil
|
||||
}
|
||||
|
||||
// Store saves the given credentials in the file store.
|
||||
func (c *nativeStore) Store(authConfig types.AuthConfig) error {
|
||||
if err := c.storeCredentialsInStore(authConfig); err != nil {
|
||||
return err
|
||||
}
|
||||
authConfig.Username = ""
|
||||
authConfig.Password = ""
|
||||
|
||||
// Fallback to old credential in plain text to save only the email
|
||||
return c.fileStore.Store(authConfig)
|
||||
}
|
||||
|
||||
// storeCredentialsInStore executes the command to store the credentials in the native store.
|
||||
func (c *nativeStore) storeCredentialsInStore(config types.AuthConfig) error {
|
||||
cmd := c.commandFn("store")
|
||||
creds := &credentialsRequest{
|
||||
ServerURL: config.ServerAddress,
|
||||
Username: config.Username,
|
||||
Password: config.Password,
|
||||
}
|
||||
|
||||
buffer := new(bytes.Buffer)
|
||||
if err := json.NewEncoder(buffer).Encode(creds); err != nil {
|
||||
return err
|
||||
}
|
||||
cmd.Input(buffer)
|
||||
|
||||
out, err := cmd.Output()
|
||||
if err != nil {
|
||||
t := strings.TrimSpace(string(out))
|
||||
logrus.Debugf("error adding credentials - err: %v, out: `%s`", err, t)
|
||||
return fmt.Errorf(t)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// getCredentialsFromStore executes the command to get the credentials from the native store.
|
||||
func (c *nativeStore) getCredentialsFromStore(serverAddress string) (types.AuthConfig, error) {
|
||||
var ret types.AuthConfig
|
||||
|
||||
cmd := c.commandFn("get")
|
||||
cmd.Input(strings.NewReader(serverAddress))
|
||||
|
||||
out, err := cmd.Output()
|
||||
if err != nil {
|
||||
t := strings.TrimSpace(string(out))
|
||||
|
||||
// do not return an error if the credentials are not
|
||||
// in the keychain. Let docker ask for new credentials.
|
||||
if t == errCredentialsNotFound.Error() {
|
||||
return ret, nil
|
||||
}
|
||||
|
||||
logrus.Debugf("error getting credentials - err: %v, out: `%s`", err, t)
|
||||
return ret, fmt.Errorf(t)
|
||||
}
|
||||
|
||||
var resp credentialsGetResponse
|
||||
if err := json.NewDecoder(bytes.NewReader(out)).Decode(&resp); err != nil {
|
||||
return ret, err
|
||||
}
|
||||
|
||||
ret.Username = resp.Username
|
||||
ret.Password = resp.Password
|
||||
ret.ServerAddress = serverAddress
|
||||
return ret, nil
|
||||
}
|
||||
|
||||
// eraseCredentialsFromStore executes the command to remove the server credentials from the native store.
|
||||
func (c *nativeStore) eraseCredentialsFromStore(serverURL string) error {
|
||||
cmd := c.commandFn("erase")
|
||||
cmd.Input(strings.NewReader(serverURL))
|
||||
|
||||
out, err := cmd.Output()
|
||||
if err != nil {
|
||||
t := strings.TrimSpace(string(out))
|
||||
logrus.Debugf("error erasing credentials - err: %v, out: `%s`", err, t)
|
||||
return fmt.Errorf(t)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
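For clarity, the JSON exchanged with a `docker-credential-*` helper follows directly from the credentialsRequest and credentialsGetResponse types above; the values below are illustrative only.

```
// Encoding a "store" request; the helper reads this JSON on stdin.
req := credentialsRequest{
	ServerURL: "https://index.docker.io/v1",
	Username:  "foo",
	Password:  "bar",
}
buf := new(bytes.Buffer)
_ = json.NewEncoder(buf).Encode(req)
// buf now holds: {"ServerURL":"https://index.docker.io/v1","Username":"foo","Password":"bar"}

// A "get" call writes only the server URL on stdin and expects back:
//   {"Username":"foo","Password":"bar"}
```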
309
cliconfig/credentials/native_store_test.go
Normal file
|
@ -0,0 +1,309 @@
|
|||
package credentials
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/docker/engine-api/types"
|
||||
)
|
||||
|
||||
const (
|
||||
validServerAddress = "https://index.docker.io/v1"
|
||||
validServerAddress2 = "https://example.com:5002"
|
||||
invalidServerAddress = "https://foobar.example.com"
|
||||
missingCredsAddress = "https://missing.docker.io/v1"
|
||||
)
|
||||
|
||||
var errCommandExited = fmt.Errorf("exited 1")
|
||||
|
||||
// mockCommand simulates interactions between the docker client and a remote
|
||||
// credentials helper.
|
||||
// Unit tests inject this mocked command into the remote to control execution.
|
||||
type mockCommand struct {
|
||||
arg string
|
||||
input io.Reader
|
||||
}
|
||||
|
||||
// Output returns responses from the remote credentials helper.
|
||||
// It mocks those responses based on the input in the mock.
|
||||
func (m *mockCommand) Output() ([]byte, error) {
|
||||
in, err := ioutil.ReadAll(m.input)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
inS := string(in)
|
||||
|
||||
switch m.arg {
|
||||
case "erase":
|
||||
switch inS {
|
||||
case validServerAddress:
|
||||
return nil, nil
|
||||
default:
|
||||
return []byte("error erasing credentials"), errCommandExited
|
||||
}
|
||||
case "get":
|
||||
switch inS {
|
||||
case validServerAddress, validServerAddress2:
|
||||
return []byte(`{"Username": "foo", "Password": "bar"}`), nil
|
||||
case missingCredsAddress:
|
||||
return []byte(errCredentialsNotFound.Error()), errCommandExited
|
||||
case invalidServerAddress:
|
||||
return []byte("error getting credentials"), errCommandExited
|
||||
}
|
||||
case "store":
|
||||
var c credentialsRequest
|
||||
err := json.NewDecoder(strings.NewReader(inS)).Decode(&c)
|
||||
if err != nil {
|
||||
return []byte("error storing credentials"), errCommandExited
|
||||
}
|
||||
switch c.ServerURL {
|
||||
case validServerAddress:
|
||||
return nil, nil
|
||||
default:
|
||||
return []byte("error storing credentials"), errCommandExited
|
||||
}
|
||||
}
|
||||
|
||||
return []byte(fmt.Sprintf("unknown argument %q with %q", m.arg, inS)), errCommandExited
|
||||
}
|
||||
|
||||
// Input sets the input to send to a remote credentials helper.
|
||||
func (m *mockCommand) Input(in io.Reader) {
|
||||
m.input = in
|
||||
}
|
||||
|
||||
func mockCommandFn(args ...string) command {
|
||||
return &mockCommand{
|
||||
arg: args[0],
|
||||
}
|
||||
}
|
||||
|
||||
func TestNativeStoreAddCredentials(t *testing.T) {
|
||||
f := newConfigFile(make(map[string]types.AuthConfig))
|
||||
f.CredentialsStore = "mock"
|
||||
|
||||
s := &nativeStore{
|
||||
commandFn: mockCommandFn,
|
||||
fileStore: NewFileStore(f),
|
||||
}
|
||||
err := s.Store(types.AuthConfig{
|
||||
Username: "foo",
|
||||
Password: "bar",
|
||||
Email: "foo@example.com",
|
||||
ServerAddress: validServerAddress,
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
if len(f.AuthConfigs) != 1 {
|
||||
t.Fatalf("expected 1 auth config, got %d", len(f.AuthConfigs))
|
||||
}
|
||||
|
||||
a, ok := f.AuthConfigs[validServerAddress]
|
||||
if !ok {
|
||||
t.Fatalf("expected auth for %s, got %v", validServerAddress, f.AuthConfigs)
|
||||
}
|
||||
if a.Auth != "" {
|
||||
t.Fatalf("expected auth to be empty, got %s", a.Auth)
|
||||
}
|
||||
if a.Username != "" {
|
||||
t.Fatalf("expected username to be empty, got %s", a.Username)
|
||||
}
|
||||
if a.Password != "" {
|
||||
t.Fatalf("expected password to be empty, got %s", a.Password)
|
||||
}
|
||||
if a.Email != "foo@example.com" {
|
||||
t.Fatalf("expected email `foo@example.com`, got %s", a.Email)
|
||||
}
|
||||
}
|
||||
|
||||
func TestNativeStoreAddInvalidCredentials(t *testing.T) {
|
||||
f := newConfigFile(make(map[string]types.AuthConfig))
|
||||
f.CredentialsStore = "mock"
|
||||
|
||||
s := &nativeStore{
|
||||
commandFn: mockCommandFn,
|
||||
fileStore: NewFileStore(f),
|
||||
}
|
||||
err := s.Store(types.AuthConfig{
|
||||
Username: "foo",
|
||||
Password: "bar",
|
||||
Email: "foo@example.com",
|
||||
ServerAddress: invalidServerAddress,
|
||||
})
|
||||
|
||||
if err == nil {
|
||||
t.Fatal("expected error, got nil")
|
||||
}
|
||||
|
||||
if err.Error() != "error storing credentials" {
|
||||
t.Fatalf("expected `error storing credentials`, got %v", err)
|
||||
}
|
||||
|
||||
if len(f.AuthConfigs) != 0 {
|
||||
t.Fatalf("expected 0 auth config, got %d", len(f.AuthConfigs))
|
||||
}
|
||||
}
|
||||
|
||||
func TestNativeStoreGet(t *testing.T) {
|
||||
f := newConfigFile(map[string]types.AuthConfig{
|
||||
validServerAddress: {
|
||||
Email: "foo@example.com",
|
||||
},
|
||||
})
|
||||
f.CredentialsStore = "mock"
|
||||
|
||||
s := &nativeStore{
|
||||
commandFn: mockCommandFn,
|
||||
fileStore: NewFileStore(f),
|
||||
}
|
||||
a, err := s.Get(validServerAddress)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
if a.Username != "foo" {
|
||||
t.Fatalf("expected username `foo`, got %s", a.Username)
|
||||
}
|
||||
if a.Password != "bar" {
|
||||
t.Fatalf("expected password `bar`, got %s", a.Password)
|
||||
}
|
||||
if a.Email != "foo@example.com" {
|
||||
t.Fatalf("expected email `foo@example.com`, got %s", a.Email)
|
||||
}
|
||||
}
|
||||
|
||||
func TestNativeStoreGetAll(t *testing.T) {
|
||||
f := newConfigFile(map[string]types.AuthConfig{
|
||||
validServerAddress: {
|
||||
Email: "foo@example.com",
|
||||
},
|
||||
validServerAddress2: {
|
||||
Email: "foo@example2.com",
|
||||
},
|
||||
})
|
||||
f.CredentialsStore = "mock"
|
||||
|
||||
s := &nativeStore{
|
||||
commandFn: mockCommandFn,
|
||||
fileStore: NewFileStore(f),
|
||||
}
|
||||
as, err := s.GetAll()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
if len(as) != 2 {
|
||||
t.Fatalf("wanted 2, got %d", len(as))
|
||||
}
|
||||
|
||||
if as[validServerAddress].Username != "foo" {
|
||||
t.Fatalf("expected username `foo` for %s, got %s", validServerAddress, as[validServerAddress].Username)
|
||||
}
|
||||
if as[validServerAddress].Password != "bar" {
|
||||
t.Fatalf("expected password `bar` for %s, got %s", validServerAddress, as[validServerAddress].Password)
|
||||
}
|
||||
if as[validServerAddress].Email != "foo@example.com" {
|
||||
t.Fatalf("expected email `foo@example.com` for %s, got %s", validServerAddress, as[validServerAddress].Email)
|
||||
}
|
||||
if as[validServerAddress2].Username != "foo" {
|
||||
t.Fatalf("expected username `foo` for %s, got %s", validServerAddress2, as[validServerAddress2].Username)
|
||||
}
|
||||
if as[validServerAddress2].Password != "bar" {
|
||||
t.Fatalf("expected password `bar` for %s, got %s", validServerAddress2, as[validServerAddress2].Password)
|
||||
}
|
||||
if as[validServerAddress2].Email != "foo@example2.com" {
|
||||
t.Fatalf("expected email `foo@example2.com` for %s, got %s", validServerAddress2, as[validServerAddress2].Email)
|
||||
}
|
||||
}
|
||||
|
||||
func TestNativeStoreGetMissingCredentials(t *testing.T) {
|
||||
f := newConfigFile(map[string]types.AuthConfig{
|
||||
validServerAddress: {
|
||||
Email: "foo@example.com",
|
||||
},
|
||||
})
|
||||
f.CredentialsStore = "mock"
|
||||
|
||||
s := &nativeStore{
|
||||
commandFn: mockCommandFn,
|
||||
fileStore: NewFileStore(f),
|
||||
}
|
||||
_, err := s.Get(missingCredsAddress)
|
||||
if err != nil {
|
||||
// missing credentials do not produce an error
|
||||
t.Fatal(err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestNativeStoreGetInvalidAddress(t *testing.T) {
|
||||
f := newConfigFile(map[string]types.AuthConfig{
|
||||
validServerAddress: {
|
||||
Email: "foo@example.com",
|
||||
},
|
||||
})
|
||||
f.CredentialsStore = "mock"
|
||||
|
||||
s := &nativeStore{
|
||||
commandFn: mockCommandFn,
|
||||
fileStore: NewFileStore(f),
|
||||
}
|
||||
_, err := s.Get(invalidServerAddress)
|
||||
if err == nil {
|
||||
t.Fatal("expected error, got nil")
|
||||
}
|
||||
|
||||
if err.Error() != "error getting credentials" {
|
||||
t.Fatalf("expected `error getting credentials`, got %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestNativeStoreErase(t *testing.T) {
|
||||
f := newConfigFile(map[string]types.AuthConfig{
|
||||
validServerAddress: {
|
||||
Email: "foo@example.com",
|
||||
},
|
||||
})
|
||||
f.CredentialsStore = "mock"
|
||||
|
||||
s := &nativeStore{
|
||||
commandFn: mockCommandFn,
|
||||
fileStore: NewFileStore(f),
|
||||
}
|
||||
err := s.Erase(validServerAddress)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
if len(f.AuthConfigs) != 0 {
|
||||
t.Fatalf("expected 0 auth configs, got %d", len(f.AuthConfigs))
|
||||
}
|
||||
}
|
||||
|
||||
func TestNativeStoreEraseInvalidAddress(t *testing.T) {
|
||||
f := newConfigFile(map[string]types.AuthConfig{
|
||||
validServerAddress: {
|
||||
Email: "foo@example.com",
|
||||
},
|
||||
})
|
||||
f.CredentialsStore = "mock"
|
||||
|
||||
s := &nativeStore{
|
||||
commandFn: mockCommandFn,
|
||||
fileStore: NewFileStore(f),
|
||||
}
|
||||
err := s.Erase(invalidServerAddress)
|
||||
if err == nil {
|
||||
t.Fatal("expected error, got nil")
|
||||
}
|
||||
|
||||
if err.Error() != "error erasing credentials" {
|
||||
t.Fatalf("expected `error erasing credentials`, got %v", err)
|
||||
}
|
||||
}
|
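The tests above never launch a real helper binary: the native store takes its command constructor as a field, and the suite simply swaps in `mockCommandFn`. For readers unfamiliar with this kind of test seam, here is a minimal self-contained sketch of the same factory-plus-interface pattern; the names below are illustrative stand-ins, not the actual Docker types.

```
package main

import (
	"fmt"
	"io"
	"os/exec"
	"strings"
)

// runner is the seam: both the exec-backed command and the test fake satisfy it.
type runner interface {
	Output() ([]byte, error)
	Input(in io.Reader)
}

// execRunner wraps a real process, the way the credential-helper shell does.
type execRunner struct{ cmd *exec.Cmd }

func (e *execRunner) Output() ([]byte, error) { return e.cmd.Output() }
func (e *execRunner) Input(in io.Reader)      { e.cmd.Stdin = in }

// fakeRunner echoes its input back, standing in for a helper binary in tests.
type fakeRunner struct{ in io.Reader }

func (f *fakeRunner) Output() ([]byte, error) { return io.ReadAll(f.in) }
func (f *fakeRunner) Input(in io.Reader)      { f.in = in }

// newRunner is the production factory; tests replace it wholesale.
func newRunner(name string, args ...string) runner {
	return &execRunner{cmd: exec.Command(name, args...)}
}

func main() {
	// Swap the factory for a fake, exactly as mockCommandFn does above.
	factory := func(name string, args ...string) runner { return &fakeRunner{} }
	r := factory("docker-credential-example", "get")
	r.Input(strings.NewReader("https://index.docker.io/v1"))
	out, _ := r.Output()
	fmt.Println(string(out))
	_ = newRunner // the real factory would be wired in outside of tests
}
```

The store logic stays identical in production and in tests; only the factory changes.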

cliconfig/credentials/shell_command.go (new file, 28 lines)
@@ -0,0 +1,28 @@
package credentials

import (
    "io"
    "os/exec"
)

func shellCommandFn(storeName string) func(args ...string) command {
    name := remoteCredentialsPrefix + storeName
    return func(args ...string) command {
        return &shell{cmd: exec.Command(name, args...)}
    }
}

// shell invokes shell commands to talk with a remote credentials helper.
type shell struct {
    cmd *exec.Cmd
}

// Output returns responses from the remote credentials helper.
func (s *shell) Output() ([]byte, error) {
    return s.cmd.Output()
}

// Input sets the input to send to a remote credentials helper.
func (s *shell) Input(in io.Reader) {
    s.cmd.Stdin = in
}
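`shell` is the production counterpart of the mock used in the tests: it shells out to an external helper, writes the request to the helper's stdin and reads the reply from stdout. A standalone sketch of that call pattern is below; the binary name assumes the usual `docker-credential-<store>` naming convention and "example" is a placeholder, not a real helper.

```
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// getCredentials drives an external credential helper the same way shell.Output
// does above: server URL on stdin, helper's reply on stdout.
// "docker-credential-example" is a hypothetical helper name for illustration.
func getCredentials(store, serverURL string) (string, error) {
	cmd := exec.Command("docker-credential-"+store, "get")
	cmd.Stdin = strings.NewReader(serverURL)
	out, err := cmd.Output()
	if err != nil {
		return "", fmt.Errorf("credential helper failed: %v", err)
	}
	return string(out), nil
}

func main() {
	creds, err := getCredentials("example", "https://index.docker.io/v1")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(creds)
}
```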

@@ -16,13 +16,12 @@ import (
    "github.com/docker/docker/daemon/logger"
    "github.com/docker/docker/daemon/logger/jsonfilelog"
    "github.com/docker/docker/daemon/network"
    derr "github.com/docker/docker/errors"
    "github.com/docker/docker/image"
    "github.com/docker/docker/layer"
    "github.com/docker/docker/pkg/idtools"
    "github.com/docker/docker/pkg/promise"
    "github.com/docker/docker/pkg/signal"
    "github.com/docker/docker/pkg/symlink"
    "github.com/docker/docker/pkg/system"
    "github.com/docker/docker/runconfig"
    "github.com/docker/docker/volume"
    containertypes "github.com/docker/engine-api/types/container"

@@ -185,10 +184,17 @@ func (container *Container) WriteHostConfig() error {
}

// SetupWorkingDirectory sets up the container's working directory as set in container.Config.WorkingDir
func (container *Container) SetupWorkingDirectory() error {
func (container *Container) SetupWorkingDirectory(rootUID, rootGID int) error {
    if container.Config.WorkingDir == "" {
        return nil
    }

    // If can't mount container FS at this point (eg Hyper-V Containers on
    // Windows) bail out now with no action.
    if !container.canMountFS() {
        return nil
    }

    container.Config.WorkingDir = filepath.Clean(container.Config.WorkingDir)

    pth, err := container.GetResourcePath(container.Config.WorkingDir)

@@ -196,10 +202,10 @@ func (container *Container) SetupWorkingDirectory() error {
        return err
    }

    if err := system.MkdirAll(pth, 0755); err != nil {
    if err := idtools.MkdirAllNewAs(pth, 0755, rootUID, rootGID); err != nil {
        pthInfo, err2 := os.Stat(pth)
        if err2 == nil && pthInfo != nil && !pthInfo.IsDir() {
            return derr.ErrorCodeNotADir.WithArgs(container.Config.WorkingDir)
            return fmt.Errorf("Cannot mkdir: %s is not a directory", container.Config.WorkingDir)
        }

        return err

@@ -277,37 +283,17 @@ func (container *Container) ConfigPath() (string, error) {
    return container.GetRootResourcePath(configFileName)
}

func validateID(id string) error {
    if id == "" {
        return derr.ErrorCodeEmptyID
    }
    return nil
}

// Returns true if the container exposes a certain port
func (container *Container) exposes(p nat.Port) bool {
    _, exists := container.Config.ExposedPorts[p]
    return exists
}

// GetLogConfig returns the log configuration for the container.
func (container *Container) GetLogConfig(defaultConfig containertypes.LogConfig) containertypes.LogConfig {
    cfg := container.HostConfig.LogConfig
    if cfg.Type != "" || len(cfg.Config) > 0 { // container has log driver configured
        if cfg.Type == "" {
            cfg.Type = jsonfilelog.Name
        }
        return cfg
    }
    // Use daemon's default log config for containers
    return defaultConfig
}

// StartLogger starts a new logger driver for the container.
func (container *Container) StartLogger(cfg containertypes.LogConfig) (logger.Logger, error) {
    c, err := logger.GetLogDriver(cfg.Type)
    if err != nil {
        return nil, derr.ErrorCodeLoggingFactory.WithArgs(err)
        return nil, fmt.Errorf("Failed to get logging factory: %v", err)
    }
    ctx := logger.Context{
        Config: cfg.Config,

@@ -594,3 +580,20 @@ func (container *Container) InitDNSHostConfig() {
        container.HostConfig.DNSOptions = make([]string, 0)
    }
}

// UpdateMonitor updates monitor configure for running container
func (container *Container) UpdateMonitor(restartPolicy containertypes.RestartPolicy) {
    monitor := container.monitor
    // No need to update monitor if container hasn't got one
    // monitor will be generated correctly according to container
    if monitor == nil {
        return
    }

    monitor.mux.Lock()
    // to check whether restart policy has changed.
    if restartPolicy.Name != "" && !monitor.restartPolicy.IsSame(&restartPolicy) {
        monitor.restartPolicy = restartPolicy
    }
    monitor.mux.Unlock()
}
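The new `UpdateMonitor` is what lets a restart-policy change take effect on a container that is already running: the monitor's copy of the policy is replaced under its mutex, and if the container has no monitor yet the next start simply reads the policy from HostConfig. A small sketch of the same "update shared config under a lock" shape, using simplified stand-in types rather than the real monitor:

```
package main

import (
	"fmt"
	"sync"
)

// restartPolicy and monitor are simplified stand-ins; only the locking shape
// used by UpdateMonitor above is of interest here.
type restartPolicy struct{ Name string }

type monitor struct {
	mux           sync.Mutex
	restartPolicy restartPolicy
}

// updatePolicy mirrors UpdateMonitor: take the lock, then replace the policy
// only if the caller supplied one and it differs from the current value.
func (m *monitor) updatePolicy(p restartPolicy) {
	m.mux.Lock()
	defer m.mux.Unlock()
	if p.Name != "" && p.Name != m.restartPolicy.Name {
		m.restartPolicy = p
	}
}

func main() {
	m := &monitor{restartPolicy: restartPolicy{Name: "no"}}
	m.updatePolicy(restartPolicy{Name: "always"})
	fmt.Println(m.restartPolicy.Name) // always
	m.updatePolicy(restartPolicy{})   // empty name: no change
	fmt.Println(m.restartPolicy.Name) // always
}
```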

@@ -14,7 +14,6 @@ import (

    "github.com/Sirupsen/logrus"
    "github.com/docker/docker/daemon/execdriver"
    derr "github.com/docker/docker/errors"
    "github.com/docker/docker/pkg/chrootarchive"
    "github.com/docker/docker/pkg/symlink"
    "github.com/docker/docker/pkg/system"

@@ -34,6 +33,11 @@ import (
// DefaultSHMSize is the default size (64MB) of the SHM which will be mounted in the container
const DefaultSHMSize int64 = 67108864

var (
    errInvalidEndpoint = fmt.Errorf("invalid endpoint while building port map info")
    errInvalidNetwork  = fmt.Errorf("invalid network settings while building port map info")
)

// Container holds the fields specific to unixen implementations.
// See CommonContainer for standard fields common to all containers.
type Container struct {

@@ -46,6 +50,7 @@ type Container struct {
    ShmPath         string
    ResolvConfPath  string
    SeccompProfile  string
    NoNewPrivileges bool
}

// CreateDaemonEnvironment returns the list of all environment variables given the list of

@@ -116,12 +121,12 @@ func (container *Container) GetEndpointInNetwork(n libnetwork.Network) (libnetwo

func (container *Container) buildPortMapInfo(ep libnetwork.Endpoint) error {
    if ep == nil {
        return derr.ErrorCodeEmptyEndpoint
        return errInvalidEndpoint
    }

    networkSettings := container.NetworkSettings
    if networkSettings == nil {
        return derr.ErrorCodeEmptyNetwork
        return errInvalidNetwork
    }

    if len(networkSettings.Ports) == 0 {

@@ -151,7 +156,7 @@ func getEndpointPortMapInfo(ep libnetwork.Endpoint) (nat.PortMap, error) {
    for _, tp := range exposedPorts {
        natPort, err := nat.NewPort(tp.Proto.String(), strconv.Itoa(int(tp.Port)))
        if err != nil {
            return pm, derr.ErrorCodeParsingPort.WithArgs(tp.Port, err)
            return pm, fmt.Errorf("Error parsing Port value(%v):%v", tp.Port, err)
        }
        pm[natPort] = nil
    }

@@ -195,12 +200,12 @@ func getSandboxPortMapInfo(sb libnetwork.Sandbox) nat.PortMap {
// BuildEndpointInfo sets endpoint-related fields on container.NetworkSettings based on the provided network and endpoint.
func (container *Container) BuildEndpointInfo(n libnetwork.Network, ep libnetwork.Endpoint) error {
    if ep == nil {
        return derr.ErrorCodeEmptyEndpoint
        return errInvalidEndpoint
    }

    networkSettings := container.NetworkSettings
    if networkSettings == nil {
        return derr.ErrorCodeEmptyNetwork
        return errInvalidNetwork
    }

    epInfo := ep.Info()

@@ -285,7 +290,6 @@ func (container *Container) BuildJoinOptions(n libnetwork.Network) ([]libnetwork
// BuildCreateEndpointOptions builds endpoint options from a given network.
func (container *Container) BuildCreateEndpointOptions(n libnetwork.Network, epConfig *network.EndpointSettings, sb libnetwork.Sandbox) ([]libnetwork.EndpointOption, error) {
    var (
        portSpecs  = make(nat.PortSet)
        bindings   = make(nat.PortMap)
        pbList     []types.PortBinding
        exposeList []types.TransportPort

@@ -338,10 +342,6 @@ func (container *Container) BuildCreateEndpointOptions(n libnetwork.Network, epC
        return createOptions, nil
    }

    if container.Config.ExposedPorts != nil {
        portSpecs = container.Config.ExposedPorts
    }

    if container.HostConfig.PortBindings != nil {
        for p, b := range container.HostConfig.PortBindings {
            bindings[p] = []nat.PortBinding{}

@@ -354,6 +354,7 @@ func (container *Container) BuildCreateEndpointOptions(n libnetwork.Network, epC
        }
    }

    portSpecs := container.Config.ExposedPorts
    ports := make([]nat.Port, len(portSpecs))
    var i int
    for p := range portSpecs {

@@ -377,7 +378,7 @@ func (container *Container) BuildCreateEndpointOptions(n libnetwork.Network, epC
            portStart, portEnd, err = newP.Range()
        }
        if err != nil {
            return nil, derr.ErrorCodeHostPort.WithArgs(binding[i].HostPort, err)
            return nil, fmt.Errorf("Error parsing HostPort value(%s):%v", binding[i].HostPort, err)
        }
        pbCopy.HostPort = uint16(portStart)
        pbCopy.HostPortEnd = uint16(portEnd)

@@ -498,11 +499,6 @@ func (container *Container) ShmResourcePath() (string, error) {
    return container.GetRootResourcePath("shm")
}

// MqueueResourcePath returns path to mqueue
func (container *Container) MqueueResourcePath() (string, error) {
    return container.GetRootResourcePath("mqueue")
}

// HasMountFor checks if path is a mountpoint
func (container *Container) HasMountFor(path string) bool {
    _, exists := container.MountPoints[path]

@@ -564,10 +560,11 @@ func updateCommand(c *execdriver.Command, resources containertypes.Resources) {
    c.Resources.KernelMemory = resources.KernelMemory
}

// UpdateContainer updates resources of a container.
// UpdateContainer updates configuration of a container.
func (container *Container) UpdateContainer(hostConfig *containertypes.HostConfig) error {
    container.Lock()

    // update resources of container
    resources := hostConfig.Resources
    cResources := &container.HostConfig.Resources
    if resources.BlkioWeight != 0 {

@@ -600,6 +597,11 @@ func (container *Container) UpdateContainer(hostConfig *containertypes.HostConfi
    if resources.KernelMemory != 0 {
        cResources.KernelMemory = resources.KernelMemory
    }

    // update HostConfig of container
    if hostConfig.RestartPolicy.Name != "" {
        container.HostConfig.RestartPolicy = hostConfig.RestartPolicy
    }
    container.Unlock()

    // If container is not running, update hostConfig struct is enough,

@@ -722,3 +724,9 @@ func (container *Container) TmpfsMounts() []execdriver.Mount {
func cleanResourcePath(path string) string {
    return filepath.Join(string(os.PathSeparator), path)
}

// canMountFS determines if the file system for the container
// can be mounted locally. A no-op on non-Windows platforms
func (container *Container) canMountFS() bool {
    return true
}

@@ -3,12 +3,13 @@
package container

import (
    "fmt"
    "os"
    "path/filepath"

    "github.com/docker/docker/daemon/execdriver"
    "github.com/docker/docker/volume"
    "github.com/docker/engine-api/types/container"
    containertypes "github.com/docker/engine-api/types/container"
)

// Container holds fields specific to the Windows implementation. See

@@ -45,8 +46,22 @@ func (container *Container) TmpfsMounts() []execdriver.Mount {
    return nil
}

// UpdateContainer updates resources of a container
func (container *Container) UpdateContainer(hostConfig *container.HostConfig) error {
// UpdateContainer updates configuration of a container
func (container *Container) UpdateContainer(hostConfig *containertypes.HostConfig) error {
    container.Lock()
    defer container.Unlock()
    resources := hostConfig.Resources
    if resources.BlkioWeight != 0 || resources.CPUShares != 0 ||
        resources.CPUPeriod != 0 || resources.CPUQuota != 0 ||
        resources.CpusetCpus != "" || resources.CpusetMems != "" ||
        resources.Memory != 0 || resources.MemorySwap != 0 ||
        resources.MemoryReservation != 0 || resources.KernelMemory != 0 {
        return fmt.Errorf("Resource updating isn't supported on Windows")
    }
    // update HostConfig of container
    if hostConfig.RestartPolicy.Name != "" {
        container.HostConfig.RestartPolicy = hostConfig.RestartPolicy
    }
    return nil
}

@@ -68,3 +83,10 @@ func cleanResourcePath(path string) string {
    }
    return filepath.Join(string(os.PathSeparator), path)
}

// canMountFS determines if the file system for the container
// can be mounted locally. In the case of Windows, this is not possible
// for Hyper-V containers during WORKDIR execution for example.
func (container *Container) canMountFS() bool {
    return !containertypes.Isolation.IsHyperV(container.HostConfig.Isolation)
}
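The Unix and Windows files above supply the same `canMountFS` method with different bodies; which one is compiled is decided at build time by the per-OS source files. The sketch below collapses that decision into a single file purely for illustration, using a runtime check instead of build-time file selection (which is what the real code does), so the effect of the Hyper-V isolation mode is visible in one place.

```
package main

import (
	"fmt"
	"runtime"
)

// canMountFS mirrors the platform split above: on Linux and other Unix-likes
// the daemon can always mount the container filesystem, while on Windows it
// cannot for Hyper-V isolated containers. The real code makes this choice via
// per-OS source files rather than a runtime switch; this is an illustration.
func canMountFS(isolationIsHyperV bool) bool {
	if runtime.GOOS != "windows" {
		return true
	}
	return !isolationIsHyperV
}

func main() {
	fmt.Println("process-isolated container:", canMountFS(false))
	fmt.Println("Hyper-V container:", canMountFS(true))
}
```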

@@ -1,6 +1,7 @@
package container

import (
    "fmt"
    "io"
    "os/exec"
    "strings"

@@ -10,10 +11,8 @@ import (

    "github.com/Sirupsen/logrus"
    "github.com/docker/docker/daemon/execdriver"
    derr "github.com/docker/docker/errors"
    "github.com/docker/docker/pkg/promise"
    "github.com/docker/docker/pkg/stringid"
    "github.com/docker/docker/utils"
    "github.com/docker/engine-api/types/container"
)

@@ -79,11 +78,11 @@ type containerMonitor struct {

// StartMonitor initializes a containerMonitor for this container with the provided supervisor and restart policy
// and starts the container's process.
func (container *Container) StartMonitor(s supervisor, policy container.RestartPolicy) error {
func (container *Container) StartMonitor(s supervisor) error {
    container.monitor = &containerMonitor{
        supervisor:    s,
        container:     container,
        restartPolicy: policy,
        restartPolicy: container.HostConfig.RestartPolicy,
        timeIncrement: defaultTimeIncrement,
        stopChan:      make(chan struct{}),
        startSignal:   make(chan struct{}),

@@ -126,9 +125,6 @@ func (m *containerMonitor) Close() error {
    // Cleanup networking and mounts
    m.supervisor.Cleanup(m.container)

    // FIXME: here is race condition between two RUN instructions in Dockerfile
    // because they share same runconfig and change image. Must be fixed
    // in builder/builder.go
    if err := m.container.ToDisk(); err != nil {
        logrus.Errorf("Error dumping container %s state to disk: %s", m.container.ID, err)

@@ -190,7 +186,7 @@ func (m *containerMonitor) start() error {
            if m.container.RestartCount == 0 {
                m.container.ExitCode = 127
                m.resetContainer(false)
                return derr.ErrorCodeCmdNotFound
                return fmt.Errorf("Container command not found or does not exist.")
            }
        }
        // set to 126 for container cmd can't be invoked errors

@@ -198,7 +194,7 @@ func (m *containerMonitor) start() error {
            if m.container.RestartCount == 0 {
                m.container.ExitCode = 126
                m.resetContainer(false)
                return derr.ErrorCodeCmdCouldNotBeInvoked
                return fmt.Errorf("Container command could not be invoked.")
            }
        }

@@ -206,7 +202,7 @@ func (m *containerMonitor) start() error {
            m.container.ExitCode = -1
            m.resetContainer(false)

            return derr.ErrorCodeCantStart.WithArgs(m.container.ID, utils.GetErrorMessage(err))
            return fmt.Errorf("Cannot start container %s: %v", m.container.ID, err)
        }

        logrus.Errorf("Error running container: %s", err)

@@ -6,7 +6,6 @@ import (
    "time"

    "github.com/docker/docker/daemon/execdriver"
    derr "github.com/docker/docker/errors"
    "github.com/docker/go-units"
)

@@ -113,17 +112,17 @@ func wait(waitChan <-chan struct{}, timeout time.Duration) error {
    }
    select {
    case <-time.After(timeout):
        return derr.ErrorCodeTimedOut.WithArgs(timeout)
        return fmt.Errorf("Timed out: %v", timeout)
    case <-waitChan:
        return nil
    }
}

// waitRunning waits until state is running. If state is already
// WaitRunning waits until state is running. If state is already
// running it returns immediately. If you want wait forever you must
// supply negative timeout. Returns pid, that was passed to
// SetRunning.
func (s *State) waitRunning(timeout time.Duration) (int, error) {
func (s *State) WaitRunning(timeout time.Duration) (int, error) {
    s.Lock()
    if s.Running {
        pid := s.Pid

@@ -256,14 +255,15 @@ func (s *State) IsRestarting() bool {
}

// SetRemovalInProgress sets the container state as being removed.
func (s *State) SetRemovalInProgress() error {
// It returns true if the container was already in that state.
func (s *State) SetRemovalInProgress() bool {
    s.Lock()
    defer s.Unlock()
    if s.RemovalInProgress {
        return derr.ErrorCodeAlreadyRemoving
        return true
    }
    s.RemovalInProgress = true
    return nil
    return false
}

// ResetRemovalInProgress make the RemovalInProgress state to false.
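The new `SetRemovalInProgress` is a test-and-set under the state lock: instead of returning a typed error it reports whether removal was already claimed, so exactly one caller "wins" and the others can decide for themselves how to react. A minimal self-contained sketch of the same pattern:

```
package main

import (
	"fmt"
	"sync"
)

// State is a cut-down stand-in for the container State type; only the field
// relevant to the removal flag is shown.
type State struct {
	sync.Mutex
	RemovalInProgress bool
}

// SetRemovalInProgress returns true if removal was already in progress,
// claiming the flag otherwise. The lock makes the check-and-set atomic.
func (s *State) SetRemovalInProgress() bool {
	s.Lock()
	defer s.Unlock()
	if s.RemovalInProgress {
		return true
	}
	s.RemovalInProgress = true
	return false
}

func main() {
	s := &State{}
	fmt.Println(s.SetRemovalInProgress()) // false: this caller claimed removal
	fmt.Println(s.SetRemovalInProgress()) // true: already in progress
}
```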

@@ -14,7 +14,7 @@ func TestStateRunStop(t *testing.T) {
    started := make(chan struct{})
    var pid int64
    go func() {
        runPid, _ := s.waitRunning(-1 * time.Second)
        runPid, _ := s.WaitRunning(-1 * time.Second)
        atomic.StoreInt64(&pid, int64(runPid))
        close(started)
    }()

@@ -41,8 +41,8 @@ func TestStateRunStop(t *testing.T) {
        if runPid != i+100 {
            t.Fatalf("Pid %v, expected %v", runPid, i+100)
        }
        if pid, err := s.waitRunning(-1 * time.Second); err != nil || pid != i+100 {
            t.Fatalf("waitRunning returned pid: %v, err: %v, expected pid: %v, err: %v", pid, err, i+100, nil)
        if pid, err := s.WaitRunning(-1 * time.Second); err != nil || pid != i+100 {
            t.Fatalf("WaitRunning returned pid: %v, err: %v, expected pid: %v, err: %v", pid, err, i+100, nil)
        }

        stopped := make(chan struct{})

@@ -82,7 +82,7 @@ func TestStateTimeoutWait(t *testing.T) {
    s := NewState()
    started := make(chan struct{})
    go func() {
        s.waitRunning(100 * time.Millisecond)
        s.WaitRunning(100 * time.Millisecond)
        close(started)
    }()
    select {

@@ -98,7 +98,7 @@ func TestStateTimeoutWait(t *testing.T) {

    stopped := make(chan struct{})
    go func() {
        s.waitRunning(100 * time.Millisecond)
        s.WaitRunning(100 * time.Millisecond)
        close(stopped)
    }()
    select {

@@ -11,8 +11,7 @@ import (
)

type profileData struct {
    MajorVersion int
    MinorVersion int
    Version int
}

func main() {

@@ -23,13 +22,12 @@ func main() {
    // parse the arg
    apparmorProfilePath := os.Args[1]

    majorVersion, minorVersion, err := aaparser.GetVersion()
    version, err := aaparser.GetVersion()
    if err != nil {
        log.Fatal(err)
    }
    data := profileData{
        MajorVersion: majorVersion,
        MinorVersion: minorVersion,
        Version: version,
    }
    fmt.Printf("apparmor_parser is of version %+v\n", data)
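The template checks in the profile below switch from comparing major and minor versions separately to a single `ge .Version 209000` test, which only works if `GetVersion` folds the parser version into one monotonically ordered integer in which 2.9 comes out as at least 209000. The exact encoding used by aaparser is not shown in this diff, so the multipliers in the sketch below are an assumption chosen to be consistent with that threshold.

```
package main

import "fmt"

// encodeVersion folds major.minor.patch into one comparable integer so a
// single "ge" test can replace nested major/minor comparisons. The multipliers
// are an illustrative assumption consistent with 2.9.x mapping to >= 209000.
func encodeVersion(major, minor, patch int) int {
	return major*100000 + minor*1000 + patch
}

func main() {
	fmt.Println(encodeVersion(2, 9, 0))   // 209000: passes "ge .Version 209000"
	fmt.Println(encodeVersion(2, 10, 95)) // 210095: also passes
	fmt.Println(encodeVersion(2, 8, 96))  // 208096: does not
}
```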

@@ -20,11 +20,11 @@ profile /usr/bin/docker (attach_disconnected, complain) {

  umount,
  pivot_root,
{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
{{if ge .Version 209000}}
  signal (receive) peer=@{profile_name},
  signal (receive) peer=unconfined,
  signal (send),
{{end}}{{end}}
{{end}}
  network,
  capability,
  owner /** rw,

@@ -46,12 +46,12 @@ profile /usr/bin/docker (attach_disconnected, complain) {
    /etc/ld.so.cache r,
    /etc/passwd r,

{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
{{if ge .Version 209000}}
    ptrace peer=@{profile_name},
    ptrace (read) peer=docker-default,
    deny ptrace (trace) peer=docker-default,
    deny ptrace peer=/usr/bin/docker///bin/ps,
{{end}}{{end}}
{{end}}

    /usr/lib/** rm,
    /lib/** rm,

@@ -72,11 +72,11 @@ profile /usr/bin/docker (attach_disconnected, complain) {
    /sbin/zfs rCx,
    /sbin/apparmor_parser rCx,

{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
{{if ge .Version 209000}}
    # Transitions
    change_profile -> docker-*,
    change_profile -> unconfined,
{{end}}{{end}}
{{end}}

    profile /bin/cat (complain) {
      /etc/ld.so.cache r,

@@ -98,10 +98,10 @@ profile /usr/bin/docker (attach_disconnected, complain) {
      /dev/null rw,
      /bin/ps mr,

{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
{{if ge .Version 209000}}
      # We don't need ptrace so we'll deny and ignore the error.
      deny ptrace (read, trace),
{{end}}{{end}}
{{end}}

      # Quiet dac_override denials
      deny capability dac_override,

@@ -119,15 +119,15 @@ profile /usr/bin/docker (attach_disconnected, complain) {
      /proc/tty/drivers r,
    }
    profile /sbin/iptables (complain) {
{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
{{if ge .Version 209000}}
      signal (receive) peer=/usr/bin/docker,
{{end}}{{end}}
{{end}}
      capability net_admin,
    }
    profile /sbin/auplink flags=(attach_disconnected, complain) {
{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
{{if ge .Version 209000}}
      signal (receive) peer=/usr/bin/docker,
{{end}}{{end}}
{{end}}
      capability sys_admin,
      capability dac_override,

@@ -146,9 +146,9 @@ profile /usr/bin/docker (attach_disconnected, complain) {
      /proc/[0-9]*/mounts rw,
    }
    profile /sbin/modprobe /bin/kmod (complain) {
{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
{{if ge .Version 209000}}
      signal (receive) peer=/usr/bin/docker,
{{end}}{{end}}
{{end}}
      capability sys_module,
      /etc/ld.so.cache r,
      /lib/** rm,

@@ -162,9 +162,9 @@ profile /usr/bin/docker (attach_disconnected, complain) {
    }
    # xz works via pipes, so we do not need access to the filesystem.
    profile /usr/bin/xz (complain) {
{{if ge .MajorVersion 2}}{{if ge .MinorVersion 9}}
{{if ge .Version 209000}}
      signal (receive) peer=/usr/bin/docker,
{{end}}{{end}}
{{end}}
      /etc/ld.so.cache r,
      /lib/** rm,
      /usr/bin/xz rm,

@@ -115,6 +115,17 @@ check_device() {
    fi
}

check_distro_userns() {
    source /etc/os-release 2>/dev/null || /bin/true
    if [[ "${ID}" =~ ^(centos|rhel)$ && "${VERSION_ID}" =~ ^7 ]]; then
        # this is a CentOS7 or RHEL7 system
        grep -q "user_namespace.enable=1" /proc/cmdline || {
            # no user namespace support enabled
            wrap_bad " (RHEL7/CentOS7" "User namespaces disabled; add 'user_namespace.enable=1' to boot command line)"
        }
    fi
}

if [ ! -e "$CONFIG" ]; then
    wrap_warning "warning: $CONFIG does not exist, searching other paths for kernel config ..."
    for tryConfig in "${possibleConfigs[@]}"; do

@@ -171,6 +182,7 @@ flags=(
    NAMESPACES {NET,PID,IPC,UTS}_NS
    DEVPTS_MULTIPLE_INSTANCES
    CGROUPS CGROUP_CPUACCT CGROUP_DEVICE CGROUP_FREEZER CGROUP_SCHED CPUSETS MEMCG
    KEYS
    MACVLAN VETH BRIDGE BRIDGE_NETFILTER
    NF_NAT_IPV4 IP_NF_FILTER IP_NF_TARGET_MASQUERADE
    NETFILTER_XT_MATCH_{ADDRTYPE,CONNTRACK}

@@ -185,10 +197,14 @@ echo
echo 'Optional Features:'
{
    check_flags USER_NS
    check_distro_userns
}
{
    check_flags SECCOMP
}
{
    check_flags CGROUP_PIDS
}
{
    check_flags MEMCG_KMEM MEMCG_SWAP MEMCG_SWAP_ENABLED
    if is_set MEMCG_SWAP && ! is_set MEMCG_SWAP_ENABLED; then

@@ -395,7 +395,9 @@ __docker_complete_isolation() {
__docker_complete_log_drivers() {
    COMPREPLY=( $( compgen -W "
        awslogs
        etwlogs
        fluentd
        gcplogs
        gelf
        journald
        json-file

@@ -409,13 +411,14 @@ __docker_complete_log_options() {
    # see docs/reference/logging/index.md
    local awslogs_options="awslogs-region awslogs-group awslogs-stream"
    local fluentd_options="env fluentd-address labels tag"
    local gcplogs_options="env gcp-log-cmd gcp-project labels"
    local gelf_options="env gelf-address labels tag"
    local journald_options="env labels tag"
    local json_file_options="env labels max-file max-size"
    local syslog_options="syslog-address syslog-tls-ca-cert syslog-tls-cert syslog-tls-key syslog-tls-skip-verify syslog-facility tag"
    local splunk_options="env labels splunk-caname splunk-capath splunk-index splunk-insecureskipverify splunk-source splunk-sourcetype splunk-token splunk-url tag"

    local all_options="$fluentd_options $gelf_options $journald_options $json_file_options $syslog_options $splunk_options"
    local all_options="$fluentd_options $gcplogs_options $gelf_options $journald_options $json_file_options $syslog_options $splunk_options"

    case $(__docker_value_of_option --log-driver) in
        '')

@@ -427,6 +430,9 @@ __docker_complete_log_options() {
        fluentd)
            COMPREPLY=( $( compgen -W "$fluentd_options" -S = -- "$cur" ) )
            ;;
        gcplogs)
            COMPREPLY=( $( compgen -W "$gcplogs_options" -S = -- "$cur" ) )
            ;;
        gelf)
            COMPREPLY=( $( compgen -W "$gelf_options" -S = -- "$cur" ) )
            ;;

@@ -515,6 +521,22 @@ __docker_complete_log_levels() {
    COMPREPLY=( $( compgen -W "debug info warn error fatal" -- "$cur" ) )
}

__docker_complete_restart() {
    case "$prev" in
        --restart)
            case "$cur" in
                on-failure:*)
                    ;;
                *)
                    COMPREPLY=( $( compgen -W "always no on-failure on-failure: unless-stopped" -- "$cur") )
                    ;;
            esac
            return
            ;;
    esac
    return 1
}

# a selection of the available signals that is most likely of interest in the
# context of docker containers.
__docker_complete_signals() {

@@ -794,7 +816,7 @@ _docker_daemon() {
            return
            ;;
    esac

    local key=$(__docker_map_key_of_current_option '--storage-opt')
    case "$key" in
        dm.@(blkdiscard|override_udev_sync_check|use_deferred_@(removal|deletion)))

@@ -1188,14 +1210,14 @@ _docker_load() {

_docker_login() {
    case "$prev" in
        --email|-e|--password|-p|--username|-u)
        --password|-p|--username|-u)
            return
            ;;
    esac

    case "$cur" in
        -*)
            COMPREPLY=( $( compgen -W "--email -e --help --password -p --username -u" -- "$cur" ) )
            COMPREPLY=( $( compgen -W "--help --password -p --username -u" -- "$cur" ) )
            ;;
    esac
}

@@ -1275,7 +1297,7 @@ _docker_network_connect() {

_docker_network_create() {
    case "$prev" in
        --aux-address|--gateway|--ip-range|--ipam-opt|--opt|-o|--subnet)
        --aux-address|--gateway|--internal|--ip-range|--ipam-opt|--ipv6|--opt|-o|--subnet)
            return
            ;;
        --ipam-driver)

@@ -1294,7 +1316,7 @@ _docker_network_create() {

    case "$cur" in
        -*)
            COMPREPLY=( $( compgen -W "--aux-address --driver -d --gateway --help --internal --ip-range --ipam-driver --ipam-opt --opt -o --subnet" -- "$cur" ) )
            COMPREPLY=( $( compgen -W "--aux-address --driver -d --gateway --help --internal --ip-range --ipam-driver --ipam-opt --ipv6 --opt -o --subnet" -- "$cur" ) )
            ;;
    esac
}

@@ -1615,6 +1637,7 @@ _docker_run() {
        --net-alias
        --oom-score-adj
        --pid
        --pids-limit
        --publish -p
        --restart
        --security-opt

@@ -1657,6 +1680,7 @@ _docker_run() {

    __docker_complete_log_driver_options && return
    __docker_complete_restart && return

    case "$prev" in
        --add-host)

@@ -1754,16 +1778,6 @@ _docker_run() {
            esac
            return
            ;;
        --restart)
            case "$cur" in
                on-failure:*)
                    ;;
                *)
                    COMPREPLY=( $( compgen -W "always no on-failure on-failure: unless-stopped" -- "$cur") )
                    ;;
            esac
            return
            ;;
        --security-opt)
            case "$cur" in
                label:*:*)

@@ -1938,6 +1952,7 @@ _docker_update() {
        --memory -m
        --memory-reservation
        --memory-swap
        --restart
    "

    local boolean_options="

@@ -1946,6 +1961,8 @@ _docker_update() {

    local all_options="$options_with_args $boolean_options"

    __docker_complete_restart && return

    case "$prev" in
        $(__docker_to_extglob "$options_with_args") )
            return

@@ -221,8 +221,7 @@ complete -c docker -A -f -n '__fish_seen_subcommand_from load' -l help -d 'Print
complete -c docker -A -f -n '__fish_seen_subcommand_from load' -s i -l input -d 'Read from a tar archive file, instead of STDIN'

# login
complete -c docker -f -n '__fish_docker_no_subcommand' -a login -d 'Register or log in to a Docker registry server'
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -s e -l email -d 'Email'
complete -c docker -f -n '__fish_docker_no_subcommand' -a login -d 'Log in to a Docker registry server'
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -s p -l password -d 'Password'
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -s u -l username -d 'Username'

@@ -290,6 +289,7 @@ complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -l help -d 'Print u
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s l -l link -d 'Remove the specified link and not the underlying container'
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s v -l volumes -d 'Remove the volumes associated with the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -a '(__fish_print_docker_containers stopped)' -d "Container"
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s f -l force -a '(__fish_print_docker_containers all)' -d "Container"

# rmi
complete -c docker -f -n '__fish_docker_no_subcommand' -a rmi -d 'Remove one or more images'

@@ -398,5 +398,3 @@ complete -c docker -f -n '__fish_docker_no_subcommand' -a version -d 'Show the D
complete -c docker -f -n '__fish_docker_no_subcommand' -a wait -d 'Block until a container stops, then print its exit code'
complete -c docker -A -f -n '__fish_seen_subcommand_from wait' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from wait' -a '(__fish_print_docker_containers running)' -d "Container"

@@ -201,6 +201,7 @@ __docker_get_log_options() {

    awslogs_options=("awslogs-region" "awslogs-group" "awslogs-stream")
    fluentd_options=("env" "fluentd-address" "labels" "tag")
    gcplogs_options=("env" "gcp-log-cmd" "gcp-project" "labels")
    gelf_options=("env" "gelf-address" "labels" "tag")
    journald_options=("env" "labels")
    json_file_options=("env" "labels" "max-file" "max-size")

@@ -209,6 +210,7 @@ __docker_get_log_options() {

    [[ $log_driver = (awslogs|all) ]] && _describe -t awslogs-options "awslogs options" awslogs_options "$@" && ret=0
    [[ $log_driver = (fluentd|all) ]] && _describe -t fluentd-options "fluentd options" fluentd_options "$@" && ret=0
    [[ $log_driver = (gcplogs|all) ]] && _describe -t gcplogs-options "gcplogs options" gcplogs_options "$@" && ret=0
    [[ $log_driver = (gelf|all) ]] && _describe -t gelf-options "gelf options" gelf_options "$@" && ret=0
    [[ $log_driver = (journald|all) ]] && _describe -t journald-options "journald options" journald_options "$@" && ret=0
    [[ $log_driver = (json-file|all) ]] && _describe -t json-file-options "json-file options" json_file_options "$@" && ret=0

@@ -325,14 +327,15 @@ __docker_network_subcommand() {
        (create)
            _arguments $(__docker_arguments) -A '-*' \
                $opts_help \
                "($help)*--aux-address[Auxiliary ipv4 or ipv6 addresses used by network driver]:key=IP: " \
                "($help)*--aux-address[Auxiliary IPv4 or IPv6 addresses used by network driver]:key=IP: " \
                "($help -d --driver)"{-d=,--driver=}"[Driver to manage the Network]:driver:(null host bridge overlay)" \
                "($help)*--gateway=[ipv4 or ipv6 Gateway for the master subnet]:IP: " \
                "($help)*--gateway=[IPv4 or IPv6 Gateway for the master subnet]:IP: " \
                "($help)--internal[Restricts external access to the network]" \
                "($help)*--ip-range=[Allocate container ip from a sub-range]:IP/mask: " \
                "($help)--ipam-driver=[IP Address Management Driver]:driver:(default)" \
                "($help)*--ipam-opt=[Set custom IPAM plugin options]:opt=value: " \
                "($help)*"{-o=,--opt=}"[Set driver specific options]:opt=value: " \
                "($help)*--ipam-opt=[Custom IPAM plugin options]:opt=value: " \
                "($help)--ipv6[Enable IPv6 networking]" \
                "($help)*"{-o=,--opt=}"[Driver specific options]:opt=value: " \
                "($help)*--subnet=[Subnet in CIDR format that represents a network segment]:IP/mask: " \
                "($help -)1:Network Name: " && ret=0
            ;;

@@ -421,9 +424,9 @@ __docker_volume_subcommand() {
        (create)
            _arguments $(__docker_arguments) \
                $opts_help \
                "($help -d --driver)"{-d=,--driver=}"[Specify volume driver name]:Driver name:(local)" \
                "($help)--name=[Specify volume name]" \
                "($help)*"{-o=,--opt=}"[Set driver specific options]:Driver option: " && ret=0
                "($help -d --driver)"{-d=,--driver=}"[Volume driver name]:Driver name:(local)" \
                "($help)--name=[Volume name]" \
                "($help)*"{-o=,--opt=}"[Driver specific options]:Driver option: " && ret=0
            ;;
        (inspect)
            _arguments $(__docker_arguments) \

@@ -483,8 +486,8 @@ __docker_subcommand() {
    opts_help=("(: -)--help[Print usage]")
    opts_build_create_run=(
        "($help)--cgroup-parent=[Parent cgroup for the container]:cgroup: "
        "($help)--isolation=[]:isolation:(default hyperv process)"
        "($help)*--shm-size=[Size of '/dev/shm'. The format is '<number><unit>'. Default is '64m'.]:shm size: "
        "($help)--isolation=[Container isolation technology]:isolation:(default hyperv process)"
        "($help)*--shm-size=[Size of '/dev/shm' (format is '<number><unit>')]:shm size: "
        "($help)*--ulimit=[ulimit options]:ulimit: "
    )
    opts_build_create_run_update=(

@@ -508,10 +511,10 @@ __docker_subcommand() {
        "($help)*--device-read-iops=[Limit the read rate (IO per second) from a device]:device:IO rate: "
        "($help)*--device-write-bps=[Limit the write rate (bytes per second) to a device]:device:IO rate: "
        "($help)*--device-write-iops=[Limit the write rate (IO per second) to a device]:device:IO rate: "
        "($help)*--dns=[Set custom DNS servers]:DNS server: "
        "($help)*--dns-opt=[Set custom DNS options]:DNS option: "
        "($help)*--dns-search=[Set custom DNS search domains]:DNS domains: "
        "($help)*"{-e=,--env=}"[Set environment variables]:environment variable: "
        "($help)*--dns=[Custom DNS servers]:DNS server: "
        "($help)*--dns-opt=[Custom DNS options]:DNS option: "
        "($help)*--dns-search=[Custom DNS search domains]:DNS domains: "
        "($help)*"{-e=,--env=}"[Environment variables]:environment variable: "
        "($help)--entrypoint=[Overwrite the default entrypoint of the image]:entry point: "
        "($help)*--env-file=[Read environment variables from a file]:environment file:_files"
        "($help)*--expose=[Expose a port from the container without publishing it]: "

@@ -522,7 +525,7 @@ __docker_subcommand() {
        "($help)--ip6=[Container IPv6 address]:IPv6: "
        "($help)--ipc=[IPC namespace to use]:IPC namespace: "
        "($help)*--link=[Add link to another container]:link:->link"
        "($help)*"{-l=,--label=}"[Set meta data on a container]:label: "
        "($help)*"{-l=,--label=}"[Container metadata]:label: "
        "($help)--log-driver=[Default driver for container logs]:Logging driver:(json-file syslog journald gelf fluentd awslogs splunk none)"
        "($help)*--log-opt=[Log driver specific options]:log driver options:__docker_log_options"
        "($help)--mac-address=[Container MAC address]:MAC address: "

@@ -531,6 +534,7 @@ __docker_subcommand() {
        "($help)*--net-alias=[Add network-scoped alias for the container]:alias: "
        "($help)--oom-kill-disable[Disable OOM Killer]"
        "($help)--oom-score-adj[Tune the host's OOM preferences for containers (accepts -1000 to 1000)]"
        "($help)--pids-limit[Tune container pids limit (set -1 for unlimited)]"
        "($help -P --publish-all)"{-P,--publish-all}"[Publish all exposed ports]"
        "($help)*"{-p=,--publish=}"[Expose a container's port to the host]:port:_ports"
        "($help)--pid=[PID namespace to use]:PID: "

@@ -548,11 +552,11 @@ __docker_subcommand() {
    )
    opts_create_run_update=(
        "($help)--blkio-weight=[Block IO (relative weight), between 10 and 1000]:Block IO weight:(10 100 500 1000)"
        "($help)--kernel-memory=[Kernel memory limit in bytes.]:Memory limit: "
        "($help)--kernel-memory=[Kernel memory limit in bytes]:Memory limit: "
        "($help)--memory-reservation=[Memory soft limit]:Memory limit: "
    )
    opts_attach_exec_run_start=(
        "($help)--detach-keys=[Specify the escape key sequence used to detach a container]:sequence:__docker_complete_detach_keys"
        "($help)--detach-keys=[Escape key sequence used to detach a container]:sequence:__docker_complete_detach_keys"
    )

    case "$words[1]" in

@@ -569,7 +573,7 @@ __docker_subcommand() {
                $opts_help \
                $opts_build_create_run \
                $opts_build_create_run_update \
                "($help)*--build-arg[Set build-time variables]:<varname>=<value>: " \
                "($help)*--build-arg[Build-time variables]:<varname>=<value>: " \
                "($help -f --file)"{-f=,--file=}"[Name of the Dockerfile]:Dockerfile:_files" \
                "($help)--force-rm[Always remove intermediate containers]" \
                "($help)--no-cache[Do not use cache when building the image]" \

@@ -592,7 +596,7 @@ __docker_subcommand() {
        (cp)
            _arguments $(__docker_arguments) \
                $opts_help \
                "($help -L --follow-link)"{-L,--follow-link}"[Always follow symbol link in SRC_PATH]" \
                "($help -L --follow-link)"{-L,--follow-link}"[Always follow symbol link]" \
                "($help -)1:container:->container" \
                "($help -)2:hostpath:_files" && ret=0
            case $state in

@@ -630,23 +634,23 @@ __docker_subcommand() {
        (daemon)
            _arguments $(__docker_arguments) \
                $opts_help \
                "($help)--api-cors-header=[Set CORS headers in the remote API]:CORS headers: " \
                "($help)*--authorization-plugin=[Set authorization plugins to load]" \
                "($help)--api-cors-header=[CORS headers in the remote API]:CORS headers: " \
                "($help)*--authorization-plugin=[Authorization plugins to load]" \
                "($help -b --bridge)"{-b=,--bridge=}"[Attach containers to a network bridge]:bridge:_net_interfaces" \
                "($help)--bip=[Specify network bridge IP]" \
                "($help)--cgroup-parent=[Set parent cgroup for all containers]:cgroup: " \
                "($help)--bip=[Network bridge IP]:IP address: " \
                "($help)--cgroup-parent=[Parent cgroup for all containers]:cgroup: " \
                "($help -D --debug)"{-D,--debug}"[Enable debug mode]" \
                "($help)--default-gateway[Container default gateway IPv4 address]:IPv4 address: " \
                "($help)--default-gateway-v6[Container default gateway IPv6 address]:IPv6 address: " \
                "($help)--cluster-store=[URL of the distributed storage backend]:Cluster Store:->cluster-store" \
                "($help)--cluster-advertise=[Address of the daemon instance to advertise]:Instance to advertise (host\:port): " \
                "($help)*--cluster-store-opt=[Set cluster options]:Cluster options:->cluster-store-options" \
                "($help)*--cluster-store-opt=[Cluster options]:Cluster options:->cluster-store-options" \
                "($help)*--dns=[DNS server to use]:DNS: " \
                "($help)*--dns-search=[DNS search domains to use]:DNS search: " \
                "($help)*--dns-opt=[DNS options to use]:DNS option: " \
                "($help)*--default-ulimit=[Set default ulimit settings for containers]:ulimit: " \
                "($help)*--default-ulimit=[Default ulimit settings for containers]:ulimit: " \
                "($help)--disable-legacy-registry[Do not contact legacy registries]" \
                "($help)*--exec-opt=[Set exec driver options]:exec driver options: " \
                "($help)*--exec-opt=[Exec driver options]:exec driver options: " \
                "($help)--exec-root=[Root of the Docker execdriver]:path:_directories" \
                "($help)--fixed-cidr=[IPv4 subnet for fixed IPs]:IPv4 subnet: " \
                "($help)--fixed-cidr-v6=[IPv6 subnet for fixed IPs]:IPv6 subnet: " \

@@ -660,17 +664,17 @@ __docker_subcommand() {
                "($help)--ip-masq[Enable IP masquerading]" \
                "($help)--iptables[Enable addition of iptables rules]" \
                "($help)--ipv6[Enable IPv6 networking]" \
                "($help -l --log-level)"{-l=,--log-level=}"[Set the logging level]:level:(debug info warn error fatal)" \
                "($help)*--label=[Set key=value labels to the daemon]:label: " \
                "($help -l --log-level)"{-l=,--log-level=}"[Logging level]:level:(debug info warn error fatal)" \
                "($help)*--label=[Key=value labels]:label: " \
                "($help)--log-driver=[Default driver for container logs]:Logging driver:(json-file syslog journald gelf fluentd awslogs splunk none)" \
                "($help)*--log-opt=[Log driver specific options]:log driver options:__docker_log_options" \
                "($help)--mtu=[Set the containers network MTU]:mtu:(0 576 1420 1500 9000)" \
                "($help)--mtu=[Network MTU]:mtu:(0 576 1420 1500 9000)" \
                "($help -p --pidfile)"{-p=,--pidfile=}"[Path to use for daemon PID file]:PID file:_files" \
                "($help)--raw-logs[Full timestamps without ANSI coloring]" \
                "($help)*--registry-mirror=[Preferred Docker registry mirror]:registry mirror: " \
                "($help -s --storage-driver)"{-s=,--storage-driver=}"[Storage driver to use]:driver:(aufs devicemapper btrfs zfs overlay)" \
                "($help)--selinux-enabled[Enable selinux support]" \
                "($help)*--storage-opt=[Set storage driver options]:storage driver options: " \
                "($help)*--storage-opt=[Storage driver options]:storage driver options: " \
                "($help)--tls[Use TLS]" \
                "($help)--tlscacert=[Trust certs signed only by this CA]:PEM file:_files -g "*.(pem|crt)"" \
                "($help)--tlscert=[Path to TLS certificate file]:PEM file:_files -g "*.(pem|crt)"" \

@@ -768,7 +772,7 @@ __docker_subcommand() {
            _arguments $(__docker_arguments) \
                $opts_help \
                "($help)*"{-c=,--change=}"[Apply Dockerfile instruction to the created image]:Dockerfile:_files" \
                "($help -m --message)"{-m=,--message=}"[Set commit message for imported image]:message: " \
                "($help -m --message)"{-m=,--message=}"[Commit message for imported image]:message: " \
                "($help -):URL:(- http:// file://)" \
                "($help -): :__docker_repositories_with_tags" && ret=0
            ;;

@@ -811,7 +815,6 @@ __docker_subcommand() {
        (login)
            _arguments $(__docker_arguments) \
                $opts_help \
                "($help -e --email)"{-e=,--email=}"[Email]:email: " \
                "($help -p --password)"{-p=,--password=}"[Password]:password: " \
                "($help -u --user)"{-u=,--user=}"[Username]:username: " \
                "($help -)1:server: " && ret=0

@@ -1048,7 +1051,7 @@ _docker() {
        "($help)--config[Location of client config files]:path:_directories" \
        "($help -D --debug)"{-D,--debug}"[Enable debug mode]" \
        "($help -H --host)"{-H=,--host=}"[tcp://host:port to bind/connect to]:host: " \
        "($help -l --log-level)"{-l=,--log-level=}"[Set the logging level]:level:(debug info warn error fatal)" \
        "($help -l --log-level)"{-l=,--log-level=}"[Logging level]:level:(debug info warn error fatal)" \
        "($help)--tls[Use TLS]" \
        "($help)--tlscacert=[Trust certs signed only by this CA]:PEM file:_files -g "*.(pem|crt)"" \
        "($help)--tlscert=[Path to TLS certificate file]:PEM file:_files -g "*.(pem|crt)"" \
Some files were not shown because too many files have changed in this diff.