Browse source

Merge pull request #19370 from tiborvass/bump_v1.10.0

Bump version to v1.10.0
Jess Frazelle 9 years ago
parent
commit
f154b9246f
100 changed files with 1954 additions and 783 deletions
  1. CHANGELOG.md (+135 -0)
  2. Dockerfile (+2 -2)
  3. Dockerfile.armhf (+1 -1)
  4. Dockerfile.ppc64le (+8 -8)
  5. Dockerfile.s390x (+2 -2)
  6. README.md (+1 -1)
  7. VERSION (+1 -1)
  8. api/client/attach.go (+6 -0)
  9. api/client/build.go (+4 -4)
  10. api/client/cli.go (+27 -0)
  11. api/client/create.go (+2 -2)
  12. api/client/exec.go (+6 -0)
  13. api/client/hijack.go (+3 -25)
  14. api/client/info.go (+5 -0)
  15. api/client/login.go (+80 -76)
  16. api/client/logout.go (+4 -3)
  17. api/client/pull.go (+1 -1)
  18. api/client/push.go (+1 -1)
  19. api/client/run.go (+6 -0)
  20. api/client/search.go (+1 -1)
  21. api/client/start.go (+6 -0)
  22. api/client/trust.go (+1 -1)
  23. api/client/update.go (+1 -1)
  24. api/client/utils.go (+58 -2)
  25. api/server/middleware.go (+1 -1)
  26. api/server/router/build/build_routes.go (+2 -1)
  27. api/server/router/local/image.go (+21 -0)
  28. api/server/router/system/system.go (+1 -1)
  29. container/container_unix.go (+63 -34)
  30. container/history.go (+10 -9)
  31. container/memory_store.go (+91 -0)
  32. container/memory_store_test.go (+106 -0)
  33. container/monitor.go (+3 -0)
  34. container/state.go (+8 -0)
  35. container/store.go (+28 -0)
  36. contrib/builder/deb/debian-jessie/Dockerfile (+1 -1)
  37. contrib/builder/deb/debian-stretch/Dockerfile (+1 -1)
  38. contrib/builder/deb/debian-wheezy/Dockerfile (+2 -1)
  39. contrib/builder/deb/generate.sh (+10 -5)
  40. contrib/builder/deb/ubuntu-precise/Dockerfile (+1 -1)
  41. contrib/builder/deb/ubuntu-trusty/Dockerfile (+1 -1)
  42. contrib/builder/deb/ubuntu-wily/Dockerfile (+1 -1)
  43. contrib/builder/rpm/centos-7/Dockerfile (+2 -1)
  44. contrib/builder/rpm/fedora-22/Dockerfile (+2 -1)
  45. contrib/builder/rpm/fedora-23/Dockerfile (+2 -1)
  46. contrib/builder/rpm/generate.sh (+45 -0)
  47. contrib/builder/rpm/opensuse-13.2/Dockerfile (+2 -1)
  48. contrib/builder/rpm/oraclelinux-6/Dockerfile (+27 -0)
  49. contrib/builder/rpm/oraclelinux-7/Dockerfile (+2 -1)
  50. contrib/completion/bash/docker (+139 -36)
  51. contrib/completion/zsh/_docker (+52 -9)
  52. contrib/init/systemd/docker.service (+1 -0)
  53. contrib/init/sysvinit-debian/docker.default (+7 -0)
  54. contrib/syntax/vim/syntax/dockerfile.vim (+1 -1)
  55. daemon/config.go (+83 -22)
  56. daemon/config_test.go (+42 -9)
  57. daemon/config_unix.go (+26 -26)
  58. daemon/config_windows.go (+2 -2)
  59. daemon/container_operations_unix.go (+36 -34)
  60. daemon/container_operations_windows.go (+1 -1)
  61. daemon/create.go (+6 -1)
  62. daemon/create_unix.go (+2 -14)
  63. daemon/create_windows.go (+1 -10)
  64. daemon/daemon.go (+82 -81)
  65. daemon/daemon_test.go (+8 -11)
  66. daemon/daemon_unix.go (+58 -28)
  67. daemon/daemon_windows.go (+4 -4)
  68. daemon/delete.go (+6 -7)
  69. daemon/delete_test.go (+1 -1)
  70. daemon/exec.go (+12 -1)
  71. daemon/exec/exec.go (+8 -2)
  72. daemon/execdriver/native/create.go (+0 -3)
  73. daemon/execdriver/native/seccomp_default.go (+11 -1)
  74. daemon/execdriver/native/tmpfs.go (+0 -56)
  75. daemon/graphdriver/aufs/aufs.go (+3 -13)
  76. daemon/graphdriver/aufs/aufs_test.go (+0 -82)
  77. daemon/graphdriver/driver_linux.go (+3 -0)
  78. daemon/image_delete.go (+20 -32)
  79. daemon/info.go (+12 -10)
  80. daemon/links_test.go (+3 -6)
  81. daemon/list.go (+21 -5)
  82. daemon/logger/copier.go (+31 -16)
  83. daemon/logger/copier_test.go (+37 -1)
  84. daemon/mounts.go (+5 -0)
  85. daemon/volumes.go (+2 -1)
  86. distribution/pull.go (+11 -19)
  87. distribution/pull_v2.go (+14 -2)
  88. distribution/push.go (+10 -2)
  89. distribution/push_v2.go (+7 -2)
  90. distribution/registry.go (+7 -0)
  91. distribution/xfer/transfer.go (+60 -15)
  92. distribution/xfer/transfer_test.go (+38 -0)
  93. docker/client.go (+5 -0)
  94. docker/common.go (+16 -11)
  95. docker/daemon.go (+16 -7)
  96. docker/daemon_test.go (+204 -2)
  97. docker/daemon_unix_test.go (+43 -0)
  98. docs/admin/ambassador_pattern_linking.md (+2 -3)
  99. docs/admin/b2d_volume_images/add_cd.png (+0 -0)
  100. docs/admin/b2d_volume_images/add_new_controller.png (+0 -0)

+ 135 - 0
CHANGELOG.md

@@ -5,6 +5,141 @@ information on the list of deprecated flags and APIs please have a look at
 https://docs.docker.com/misc/deprecated/ where target removal dates can also
 be found.
 
+## 1.10.0 (2016-02-04)
+
+**IMPORTANT**: Docker 1.10 uses a new content-addressable storage for images and layers.
+A migration is performed the first time docker is run, and can take a significant amount of time depending on the number of images present.
+Refer to this page on the wiki for more information: https://github.com/docker/docker/wiki/Engine-v1.10.0-content-addressability-migration
+We also released a cool migration utility that enables you to perform the migration before updating to reduce downtime.
+Engine 1.10 migrator can be found on Docker Hub: https://hub.docker.com/r/docker/v1.10-migrator/
+
+### Runtime
+
++ New `docker update` command that allows updating resource constraints on running containers [#15078](https://github.com/docker/docker/pull/15078)
++ Add `--tmpfs` flag to `docker run` to create a tmpfs mount in a container [#13587](https://github.com/docker/docker/pull/13587)
++ Add `--format` flag to `docker images` command [#17692](https://github.com/docker/docker/pull/17692)
++ Allow to set daemon configuration in a file and hot-reload it with the `SIGHUP` signal [#18587](https://github.com/docker/docker/pull/18587)
++ Updated docker events to include more meta-data and event types [#18888](https://github.com/docker/docker/pull/18888)  
+  This change is backward compatible in the API, but not on the CLI.
++ Add `--blkio-weight-device` flag to `docker run` [#13959](https://github.com/docker/docker/pull/13959)
++ Add `--device-read-bps` and `--device-write-bps` flags to `docker run` [#14466](https://github.com/docker/docker/pull/14466)
++ Add `--device-read-iops` and `--device-write-iops` flags to `docker run` [#15879](https://github.com/docker/docker/pull/15879)
++ Add `--oom-score-adj` flag to `docker run` [#16277](https://github.com/docker/docker/pull/16277)
++ Add `--detach-keys` flag to `attach`, `run`, `start` and `exec` commands to override the default key sequence that detaches from a container  [#15666](https://github.com/docker/docker/pull/15666)
++ Add `--shm-size` flag to `run`, `create` and `build` to set the size of `/dev/shm` [#16168](https://github.com/docker/docker/pull/16168)
++ Show the number of running, stopped, and paused containers in `docker info` [#19249](https://github.com/docker/docker/pull/19249)
++ Show the `OSType` and `Architecture` in `docker info` [#17478](https://github.com/docker/docker/pull/17478)
++ Add `--cgroup-parent` flag on `daemon` to set cgroup parent for all containers [#19062](https://github.com/docker/docker/pull/19062)
++ Add `-L` flag to docker cp to follow symlinks [#16613](https://github.com/docker/docker/pull/16613)
++ New `status=dead` filter for `docker ps` [#17908](https://github.com/docker/docker/pull/17908)
+* Change `docker run` exit codes to distinguish between runtime and application errors [#14012](https://github.com/docker/docker/pull/14012)
+* Enhance `docker events --since` and `--until` to support nanoseconds and timezones [#17495](https://github.com/docker/docker/pull/17495)
+* Add `--all`/`-a` flag to `stats` to include both running and stopped containers [#16742](https://github.com/docker/docker/pull/16742)
+* Change the default cgroup-driver to `cgroupfs` [#17704](https://github.com/docker/docker/pull/17704)
+* Emit a "tag" event when tagging an image with `build -t` [#17115](https://github.com/docker/docker/pull/17115)
+* Best effort for linked containers' start order when starting the daemon [#18208](https://github.com/docker/docker/pull/18208)
+* Add ability to add multiple tags on `build` [#15780](https://github.com/docker/docker/pull/15780)
+* Permit `OPTIONS` request against any url, thus fixing issue with CORS [#19569](https://github.com/docker/docker/pull/19569)
+- Fix the `--quiet` flag on `docker build` to actually be quiet [#17428](https://github.com/docker/docker/pull/17428)
+- Fix `docker images --filter dangling=false` to now show all non-dangling images [#19326](https://github.com/docker/docker/pull/19326)
+- Fix race condition causing autorestart turning off on restart [#17629](https://github.com/docker/docker/pull/17629)
+- Recognize GPFS filesystems [#19216](https://github.com/docker/docker/pull/19216)
+- Fix obscure bug preventing to start containers [#19751](https://github.com/docker/docker/pull/19751)
+- Forbid `exec` during container restart [#19722](https://github.com/docker/docker/pull/19722)
+- devicemapper: Increasing `--storage-opt dm.basesize` will now increase the base device size on daemon restart [#19123](https://github.com/docker/docker/pull/19123)
+
+### Security
+
++ Add `--userns-remap` flag to `daemon` to support user namespaces (previously in experimental) [#19187](https://github.com/docker/docker/pull/19187)
++ Add support for custom seccomp profiles in `--security-opt` [#17989](https://github.com/docker/docker/pull/17989)
++ Add default seccomp profile [#18780](https://github.com/docker/docker/pull/18780)
++ Add `--authorization-plugin` flag to `daemon` to customize ACLs [#15365](https://github.com/docker/docker/pull/15365)
++ Docker Content Trust now supports the ability to read and write user delegations [#18887](https://github.com/docker/docker/pull/18887)  
+  This is an optional, opt-in feature that requires the explicit use of the Notary command-line utility in order to be enabled.  
+  Enabling delegation support in a specific repository will break the ability of Docker 1.9 and 1.8 to pull from that repository, if content trust is enabled.
+* Allow SELinux to run in a container when using the BTRFS storage driver [#16452](https://github.com/docker/docker/pull/16452)
+
+### Distribution
+
+* Use content-addressable storage for images and layers [#17924](https://github.com/docker/docker/pull/17924)  
+  Note that a migration is performed the first time docker is run; it can take a significant amount of time depending on the number of images and containers present.  
+  Images no longer depend on the parent chain but contain a list of layer references.  
+  `docker load`/`docker save` tarballs now also contain content-addressable image configurations.
+  For more information: https://github.com/docker/docker/wiki/Engine-v1.10.0-content-addressability-migration  
+* Add support for the new [manifest format ("schema2")](https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-2.md) [#18785](https://github.com/docker/docker/pull/18785)
+* Lots of improvements for push and pull: performance++, retries on failed downloads, cancelling on client disconnect [#18353](https://github.com/docker/docker/pull/18353), [#18418](https://github.com/docker/docker/pull/18418), [#19109](https://github.com/docker/docker/pull/19109), [#18353](https://github.com/docker/docker/pull/18353)
+* Limit v1 protocol fallbacks [#18590](https://github.com/docker/docker/pull/18590)
+- Fix issue where docker could hang indefinitely waiting for a nonexistent process to pull an image [#19743](https://github.com/docker/docker/pull/19743)
+
+### Networking
+
++ Use DNS-based discovery instead of `/etc/hosts` [#19198](https://github.com/docker/docker/pull/19198)
++ Support for network-scoped alias using `--net-alias` on `run` and `--alias` on `network connect` [#19242](https://github.com/docker/docker/pull/19242)
++ Add `--ip` and `--ip6` on `run` and `network connect` to support custom IP addresses for a container in a network [#19001](https://github.com/docker/docker/pull/19001)
++ Add `--ipam-opt` to `network create` for passing custom IPAM options [#17316](https://github.com/docker/docker/pull/17316)
++ Add `--internal` flag to `network create` to restrict external access to and from the network [#19276](https://github.com/docker/docker/pull/19276)
++ Add `kv.path` option to `--cluster-store-opt` [#19167](https://github.com/docker/docker/pull/19167)
++ Add `discovery.heartbeat` and `discovery.ttl` options to `--cluster-store-opt` to configure discovery TTL and heartbeat timer [#18204](https://github.com/docker/docker/pull/18204)
++ Add `--format` flag to `network inspect` [#17481](https://github.com/docker/docker/pull/17481)
++ Add `--link` to `network connect` to provide a container-local alias [#19229](https://github.com/docker/docker/pull/19229)
++ Support for Capability exchange with remote IPAM plugins [#18775](https://github.com/docker/docker/pull/18775)
++ Add `--force` to `network disconnect` to force container to be disconnected from network [#19317](https://github.com/docker/docker/pull/19317)
+* Support for multi-host networking using built-in overlay driver for all engine supported kernels: 3.10+ [#18775](https://github.com/docker/docker/pull/18775)
+* `--link` is now supported on `docker run` for containers in user-defined network [#19229](https://github.com/docker/docker/pull/19229)
+* Enhance `docker network rm` to allow removing multiple networks [#17489](https://github.com/docker/docker/pull/17489)
+* Include container names in `network inspect` [#17615](https://github.com/docker/docker/pull/17615)
+* Include auto-generated subnets for user-defined networks in `network inspect` [#17316](https://github.com/docker/docker/pull/17316)
+* Add `--filter` flag to `network ls` to hide predefined networks [#17782](https://github.com/docker/docker/pull/17782)
+* Add support for network connect/disconnect to stopped containers [#18906](https://github.com/docker/docker/pull/18906)
+* Add network ID to container inspect [#19323](https://github.com/docker/docker/pull/19323)
+- Fix MTU issue where Docker would not start with two or more default routes [#18108](https://github.com/docker/docker/pull/18108)
+- Fix duplicate IP address for containers [#18106](https://github.com/docker/docker/pull/18106)
+- Fix issue preventing sometimes docker from creating the bridge network [#19338](https://github.com/docker/docker/pull/19338)
+- Do not substitute 127.0.0.1 name server when using `--net=host` [#19573](https://github.com/docker/docker/pull/19573)
+
+### Logging
+
++ New logging driver for Splunk [#16488](https://github.com/docker/docker/pull/16488)
++ Add support for syslog over TCP+TLS [#18998](https://github.com/docker/docker/pull/18998)
+* Enhance `docker logs --since` and `--until` to support nanoseconds and time [#17495](https://github.com/docker/docker/pull/17495)
+* Enhance AWS logs to auto-detect region [#16640](https://github.com/docker/docker/pull/16640)
+
+### Volumes
+
++ Add support to set the mount propagation mode for a volume [#17034](https://github.com/docker/docker/pull/17034)
+* Add `ls` and `inspect` endpoints to volume plugin API [#16534](https://github.com/docker/docker/pull/16534)  
+  Existing plugins need to make use of these new APIs to satisfy users' expectation  
+  For that, please use the new MIME type `application/vnd.docker.plugins.v1.2+json` [#19549](https://github.com/docker/docker/pull/19549)
+- Fix data not being copied to named volumes [#19175](https://github.com/docker/docker/pull/19175)
+- Fix issues preventing volume drivers from being containerized [#19500](https://github.com/docker/docker/pull/19500)
+- Fix `docker volumes ls --dangling=false` to now show all non-dangling volumes [#19671](https://github.com/docker/docker/pull/19671)
+- Do not remove named volumes on container removal [#19568](https://github.com/docker/docker/pull/19568)
+- Allow external volume drivers to host anonymous volumes [#19190](https://github.com/docker/docker/pull/19190)
+
+### Builder
+
++ Add support for `**` in `.dockerignore` to wildcard multiple levels of directories [#17090](https://github.com/docker/docker/pull/17090)
+- Fix handling of UTF-8 characters in Dockerfiles [#17055](https://github.com/docker/docker/pull/17055)
+- Fix permissions problem when reading from STDIN [#19283](https://github.com/docker/docker/pull/19283)
+
+### Client
+
++ Add support for overriding the API version to use via an `DOCKER_API_VERSION` environment-variable [#15964](https://github.com/docker/docker/pull/15964)
+- Fix a bug preventing Windows clients to log in to Docker Hub [#19891](https://github.com/docker/docker/pull/19891)
+
+### Misc
+
+* systemd: Set TasksMax in addition to LimitNPROC in systemd service file [#19391](https://github.com/docker/docker/pull/19391)
+
+### Deprecations
+
+* Remove LXC support. The LXC driver was deprecated in Docker 1.8, and has now been removed [#17700](https://github.com/docker/docker/pull/17700)
+* Remove `--exec-driver` daemon flag, because it is no longer in use [#17700](https://github.com/docker/docker/pull/17700)
+* Remove old deprecated single-dashed long CLI flags (such as `-rm`; use `--rm` instead) [#17724](https://github.com/docker/docker/pull/17724)
+* Deprecate HostConfig at API container start [#17799](https://github.com/docker/docker/pull/17799)
+* Deprecate docker packages for newly EOL'd Linux distributions: Fedora 21 and Ubuntu 15.04 (Vivid) [#18794](https://github.com/docker/docker/pull/18794), [#18809](https://github.com/docker/docker/pull/18809)
+* Deprecate `-f` flag for docker tag [#18350](https://github.com/docker/docker/pull/18350)
+
 ## 1.9.1 (2015-11-21)
 
 ### Runtime

+ 2 - 2
Dockerfile

@@ -155,7 +155,7 @@ RUN set -x \
 # both. This allows integration-cli tests to cover push/pull with both schema1
 # and schema2 manifests.
 ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
-ENV REGISTRY_COMMIT cb08de17d74bef86ce6c5abe8b240e282f5750be
+ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
 RUN set -x \
 	&& export GOPATH="$(mktemp -d)" \
 	&& git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution" \
@@ -168,7 +168,7 @@ RUN set -x \
 	&& rm -rf "$GOPATH"
 
 # Install notary server
-ENV NOTARY_VERSION docker-v1.10-3
+ENV NOTARY_VERSION docker-v1.10-5
 RUN set -x \
 	&& export GOPATH="$(mktemp -d)" \
 	&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \

+ 1 - 1
Dockerfile.armhf

@@ -145,7 +145,7 @@ RUN set -x \
 	&& rm -rf "$GOPATH"
 
 # Install notary server
-ENV NOTARY_VERSION docker-v1.10-2
+ENV NOTARY_VERSION docker-v1.10-5
 RUN set -x \
 	&& export GOPATH="$(mktemp -d)" \
 	&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \

+ 8 - 8
Dockerfile.ppc64le

@@ -116,14 +116,14 @@ RUN set -x \
 	&& rm -rf "$GOPATH"
 
 # Install notary server
-ENV NOTARY_COMMIT 8e8122eb5528f621afcd4e2854c47302f17392f7
-RUN set -x \
-	&& export GOPATH="$(mktemp -d)" \
-	&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
-	&& (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_COMMIT") \
-	&& GOPATH="$GOPATH/src/github.com/docker/notary/Godeps/_workspace:$GOPATH" \
-		go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
-	&& rm -rf "$GOPATH"
+#ENV NOTARY_VERSION docker-v1.10-5
+#RUN set -x \
+#	&& export GOPATH="$(mktemp -d)" \
+#	&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
+#	&& (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_VERSION") \
+#	&& GOPATH="$GOPATH/src/github.com/docker/notary/Godeps/_workspace:$GOPATH" \
+#		go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
+#	&& rm -rf "$GOPATH"
 
 # Get the "docker-py" source so we can run their integration tests
 ENV DOCKER_PY_COMMIT e2878cbcc3a7eef99917adc1be252800b0e41ece

+ 2 - 2
Dockerfile.s390x

@@ -116,11 +116,11 @@ RUN set -x \
 	&& rm -rf "$GOPATH"
 
 # Install notary server
-ENV NOTARY_COMMIT 8e8122eb5528f621afcd4e2854c47302f17392f7
+ENV NOTARY_VERSION docker-v1.10-5
 RUN set -x \
 	&& export GOPATH="$(mktemp -d)" \
 	&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
-	&& (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_COMMIT") \
+	&& (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_VERSION") \
 	&& GOPATH="$GOPATH/src/github.com/docker/notary/Godeps/_workspace:$GOPATH" \
 		go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
 	&& rm -rf "$GOPATH"

+ 1 - 1
README.md

@@ -167,7 +167,7 @@ Under the hood
 Under the hood, Docker is built on the following components:
 
 * The
-  [cgroups](https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt)
+  [cgroups](https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt)
   and
   [namespaces](http://man7.org/linux/man-pages/man7/namespaces.7.html)
   capabilities of the Linux kernel

+ 1 - 1
VERSION

@@ -1 +1 @@
-1.10.0-dev
+1.10.0

+ 6 - 0
api/client/attach.go

@@ -75,6 +75,12 @@ func (cli *DockerCli) CmdAttach(args ...string) error {
 		return err
 	}
 	defer resp.Close()
+	if in != nil && c.Config.Tty {
+		if err := cli.setRawTerminal(); err != nil {
+			return err
+		}
+		defer cli.restoreTerminal(in)
+	}
 
 	if err := cli.holdHijackedConnection(c.Config.Tty, in, cli.out, cli.err, resp); err != nil {
 		return err

+ 4 - 4
api/client/build.go

@@ -82,9 +82,6 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
 		err      error
 	)
 
-	_, err = exec.LookPath("git")
-	hasGit := err == nil
-
 	specifiedContext := cmd.Arg(0)
 
 	var (
@@ -105,7 +102,7 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
 	switch {
 	case specifiedContext == "-":
 		context, relDockerfile, err = getContextFromReader(cli.in, *dockerfileName)
-	case urlutil.IsGitURL(specifiedContext) && hasGit:
+	case urlutil.IsGitURL(specifiedContext):
 		tempDir, relDockerfile, err = getContextFromGitURL(specifiedContext, *dockerfileName)
 	case urlutil.IsURL(specifiedContext):
 		context, relDockerfile, err = getContextFromURL(progBuff, specifiedContext, *dockerfileName)
@@ -510,6 +507,9 @@ func getContextFromReader(r io.ReadCloser, dockerfileName string) (out io.ReadCl
 // path of the dockerfile in that context directory, and a non-nil error on
 // success.
 func getContextFromGitURL(gitURL, dockerfileName string) (absContextDir, relDockerfile string, err error) {
+	if _, err := exec.LookPath("git"); err != nil {
+		return "", "", fmt.Errorf("unable to find 'git': %v", err)
+	}
 	if absContextDir, err = gitutils.Clone(gitURL); err != nil {
 		return "", "", fmt.Errorf("unable to 'git clone' to temporary context directory: %v", err)
 	}

+ 27 - 0
api/client/cli.go

@@ -44,6 +44,8 @@ type DockerCli struct {
 	isTerminalOut bool
 	// client is the http client that performs all API operations
 	client client.APIClient
+	// state holds the terminal state
+	state *term.State
 }
 
 // Initialize calls the init function that will setup the configuration for the client
@@ -79,6 +81,31 @@ func (cli *DockerCli) ImagesFormat() string {
 	return cli.configFile.ImagesFormat
 }
 
+func (cli *DockerCli) setRawTerminal() error {
+	if cli.isTerminalIn && os.Getenv("NORAW") == "" {
+		state, err := term.SetRawTerminal(cli.inFd)
+		if err != nil {
+			return err
+		}
+		cli.state = state
+	}
+	return nil
+}
+
+func (cli *DockerCli) restoreTerminal(in io.Closer) error {
+	if cli.state != nil {
+		term.RestoreTerminal(cli.inFd, cli.state)
+	}
+	// WARNING: DO NOT REMOVE THE OS CHECK !!!
+	// For some reason this Close call blocks on darwin..
+	// As the client exists right after, simply discard the close
+	// until we find a better solution.
+	if in != nil && runtime.GOOS != "darwin" {
+		return in.Close()
+	}
+	return nil
+}
+
 // NewDockerCli returns a DockerCli instance with IO output and error streams set by in, out and err.
 // The key file, protocol (i.e. unix) and address are passed in as strings, along with the tls.Config. If the tls.Config
 // is set the client scheme will be set to https.

+ 2 - 2
api/client/create.go

@@ -40,8 +40,8 @@ func (cli *DockerCli) pullImageCustomOut(image string, out io.Writer) error {
 		return err
 	}
 
-	// Resolve the Auth config relevant for this server
-	encodedAuth, err := cli.encodeRegistryAuth(repoInfo.Index)
+	authConfig := cli.resolveAuthConfig(cli.configFile.AuthConfigs, repoInfo.Index)
+	encodedAuth, err := encodeAuthToBase64(authConfig)
 	if err != nil {
 		return err
 	}

+ 6 - 0
api/client/exec.go

@@ -87,6 +87,12 @@ func (cli *DockerCli) CmdExec(args ...string) error {
 		return err
 	}
 	defer resp.Close()
+	if in != nil && execConfig.Tty {
+		if err := cli.setRawTerminal(); err != nil {
+			return err
+		}
+		defer cli.restoreTerminal(in)
+	}
 	errCh = promise.Go(func() error {
 		return cli.holdHijackedConnection(execConfig.Tty, in, out, stderr, resp)
 	})

+ 3 - 25
api/client/hijack.go

@@ -2,41 +2,19 @@ package client
 
 
 import (
 	"io"
-	"os"
 
 	"github.com/Sirupsen/logrus"
 	"github.com/docker/docker/pkg/stdcopy"
-	"github.com/docker/docker/pkg/term"
 	"github.com/docker/engine-api/types"
 )
 
-func (cli *DockerCli) holdHijackedConnection(setRawTerminal bool, inputStream io.ReadCloser, outputStream, errorStream io.Writer, resp types.HijackedResponse) error {
-	var (
-		err      error
-		oldState *term.State
-	)
-	if inputStream != nil && setRawTerminal && cli.isTerminalIn && os.Getenv("NORAW") == "" {
-		oldState, err = term.SetRawTerminal(cli.inFd)
-		if err != nil {
-			return err
-		}
-		defer term.RestoreTerminal(cli.inFd, oldState)
-	}
-
+func (cli *DockerCli) holdHijackedConnection(tty bool, inputStream io.ReadCloser, outputStream, errorStream io.Writer, resp types.HijackedResponse) error {
+	var err error
 	receiveStdout := make(chan error, 1)
 	if outputStream != nil || errorStream != nil {
 		go func() {
-			defer func() {
-				if inputStream != nil {
-					if setRawTerminal && cli.isTerminalIn {
-						term.RestoreTerminal(cli.inFd, oldState)
-					}
-					inputStream.Close()
-				}
-			}()
-
 			// When TTY is ON, use regular copy
-			if setRawTerminal && outputStream != nil {
+			if tty && outputStream != nil {
 				_, err = io.Copy(outputStream, resp.Reader)
 			} else {
 				_, err = stdcopy.StdCopy(outputStream, errorStream, resp.Reader)

+ 5 - 0
api/client/info.go

@@ -42,6 +42,11 @@ func (cli *DockerCli) CmdInfo(args ...string) error {
 		}
 
 	}
+	if info.SystemStatus != nil {
+		for _, pair := range info.SystemStatus {
+			fmt.Fprintf(cli.out, "%s: %s\n", pair[0], pair[1])
+		}
+	}
 	ioutils.FprintfIfNotEmpty(cli.out, "Execution Driver: %s\n", info.ExecutionDriver)
 	ioutils.FprintfIfNotEmpty(cli.out, "Logging Driver: %s\n", info.LoggingDriver)
 

+ 80 - 76
api/client/login.go

@@ -11,7 +11,6 @@ import (
 	Cli "github.com/docker/docker/cli"
 	flag "github.com/docker/docker/pkg/mflag"
 	"github.com/docker/docker/pkg/term"
-	"github.com/docker/docker/registry"
 	"github.com/docker/engine-api/client"
 	"github.com/docker/engine-api/types"
 )
@@ -22,14 +21,12 @@ import (
 //
 // Usage: docker login SERVER
 func (cli *DockerCli) CmdLogin(args ...string) error {
-	cmd := Cli.Subcmd("login", []string{"[SERVER]"}, Cli.DockerCommands["login"].Description+".\nIf no server is specified \""+registry.IndexServer+"\" is the default.", true)
+	cmd := Cli.Subcmd("login", []string{"[SERVER]"}, Cli.DockerCommands["login"].Description+".\nIf no server is specified, the default is defined by the daemon.", true)
 	cmd.Require(flag.Max, 1)

-	var username, password, email string
-
-	cmd.StringVar(&username, []string{"u", "-username"}, "", "Username")
-	cmd.StringVar(&password, []string{"p", "-password"}, "", "Password")
-	cmd.StringVar(&email, []string{"e", "-email"}, "", "Email")
+	flUser := cmd.String([]string{"u", "-username"}, "", "Username")
+	flPassword := cmd.String([]string{"p", "-password"}, "", "Password")
+	flEmail := cmd.String([]string{"e", "-email"}, "", "Email")

 	cmd.ParseFlags(args, true)

@@ -38,106 +35,113 @@ func (cli *DockerCli) CmdLogin(args ...string) error {
 		cli.in = os.Stdin
 	}

-	serverAddress := registry.IndexServer
+	var serverAddress string
 	if len(cmd.Args()) > 0 {
 		serverAddress = cmd.Arg(0)
+	} else {
+		serverAddress = cli.electAuthServer()
 	}

-	promptDefault := func(prompt string, configDefault string) {
-		if configDefault == "" {
-			fmt.Fprintf(cli.out, "%s: ", prompt)
-		} else {
-			fmt.Fprintf(cli.out, "%s (%s): ", prompt, configDefault)
-		}
+	authConfig, err := cli.configureAuth(*flUser, *flPassword, *flEmail, serverAddress)
+	if err != nil {
+		return err
 	}

-	readInput := func(in io.Reader, out io.Writer) string {
-		reader := bufio.NewReader(in)
-		line, _, err := reader.ReadLine()
-		if err != nil {
-			fmt.Fprintln(out, err.Error())
-			os.Exit(1)
+	response, err := cli.client.RegistryLogin(authConfig)
+	if err != nil {
+		if client.IsErrUnauthorized(err) {
+			delete(cli.configFile.AuthConfigs, serverAddress)
+			if err2 := cli.configFile.Save(); err2 != nil {
+				fmt.Fprintf(cli.out, "WARNING: could not save config file: %v\n", err2)
+			}
 		}
-		return string(line)
+		return err
+	}
+
+	if err := cli.configFile.Save(); err != nil {
+		return fmt.Errorf("Error saving config file: %v", err)
+	}
+	fmt.Fprintf(cli.out, "WARNING: login credentials saved in %s\n", cli.configFile.Filename())
+
+	if response.Status != "" {
+		fmt.Fprintf(cli.out, "%s\n", response.Status)
 	}
+	return nil
+}
+
+func (cli *DockerCli) promptWithDefault(prompt string, configDefault string) {
+	if configDefault == "" {
+		fmt.Fprintf(cli.out, "%s: ", prompt)
+	} else {
+		fmt.Fprintf(cli.out, "%s (%s): ", prompt, configDefault)
+	}
+}

+func (cli *DockerCli) configureAuth(flUser, flPassword, flEmail, serverAddress string) (types.AuthConfig, error) {
 	authconfig, ok := cli.configFile.AuthConfigs[serverAddress]
 	if !ok {
 		authconfig = types.AuthConfig{}
 	}

-	if username == "" {
-		promptDefault("Username", authconfig.Username)
-		username = readInput(cli.in, cli.out)
-		username = strings.TrimSpace(username)
-		if username == "" {
-			username = authconfig.Username
+	if flUser == "" {
+		cli.promptWithDefault("Username", authconfig.Username)
+		flUser = readInput(cli.in, cli.out)
+		flUser = strings.TrimSpace(flUser)
+		if flUser == "" {
+			flUser = authconfig.Username
 		}
 	}
-	// Assume that a different username means they may not want to use
-	// the password or email from the config file, so prompt them
-	if username != authconfig.Username {
-		if password == "" {
-			oldState, err := term.SaveState(cli.inFd)
-			if err != nil {
-				return err
-			}
-			fmt.Fprintf(cli.out, "Password: ")
-			term.DisableEcho(cli.inFd, oldState)

-			password = readInput(cli.in, cli.out)
-			fmt.Fprint(cli.out, "\n")
+	if flPassword == "" {
+		oldState, err := term.SaveState(cli.inFd)
+		if err != nil {
+			return authconfig, err
+		}
+		fmt.Fprintf(cli.out, "Password: ")
+		term.DisableEcho(cli.inFd, oldState)

-			term.RestoreTerminal(cli.inFd, oldState)
-			if password == "" {
-				return fmt.Errorf("Error : Password Required")
-			}
+		flPassword = readInput(cli.in, cli.out)
+		fmt.Fprint(cli.out, "\n")
+
+		term.RestoreTerminal(cli.inFd, oldState)
+		if flPassword == "" {
+			return authconfig, fmt.Errorf("Error : Password Required")
 		}
+	}

-		if email == "" {
-			promptDefault("Email", authconfig.Email)
-			email = readInput(cli.in, cli.out)
-			if email == "" {
-				email = authconfig.Email
+	// Assume that a different username means they may not want to use
+	// the email from the config file, so prompt it
+	if flUser != authconfig.Username {
+		if flEmail == "" {
+			cli.promptWithDefault("Email", authconfig.Email)
+			flEmail = readInput(cli.in, cli.out)
+			if flEmail == "" {
+				flEmail = authconfig.Email
 			}
 		}
 	} else {
 		// However, if they don't override the username use the
-		// password or email from the cmd line if specified. IOW, allow
+		// email from the cmd line if specified. IOW, allow
 		// then to change/override them.  And if not specified, just
 		// use what's in the config file
-		if password == "" {
-			password = authconfig.Password
-		}
-		if email == "" {
-			email = authconfig.Email
+		if flEmail == "" {
+			flEmail = authconfig.Email
 		}
 	}
-	authconfig.Username = username
-	authconfig.Password = password
-	authconfig.Email = email
+	authconfig.Username = flUser
+	authconfig.Password = flPassword
+	authconfig.Email = flEmail
 	authconfig.ServerAddress = serverAddress
 	cli.configFile.AuthConfigs[serverAddress] = authconfig
+	return authconfig, nil
+}

-	auth := cli.configFile.AuthConfigs[serverAddress]
-	response, err := cli.client.RegistryLogin(auth)
+func readInput(in io.Reader, out io.Writer) string {
+	reader := bufio.NewReader(in)
+	line, _, err := reader.ReadLine()
 	if err != nil {
-		if client.IsErrUnauthorized(err) {
-			delete(cli.configFile.AuthConfigs, serverAddress)
-			if err2 := cli.configFile.Save(); err2 != nil {
-				fmt.Fprintf(cli.out, "WARNING: could not save config file: %v\n", err2)
-			}
-		}
-		return err
-	}
-
-	if err := cli.configFile.Save(); err != nil {
-		return fmt.Errorf("Error saving config file: %v", err)
+		fmt.Fprintln(out, err.Error())
+		os.Exit(1)
 	}
-	fmt.Fprintf(cli.out, "WARNING: login credentials saved in %s\n", cli.configFile.Filename())
-
-	if response.Status != "" {
-		fmt.Fprintf(cli.out, "%s\n", response.Status)
-	}
-	return nil
+	return string(line)
 }

+ 4 - 3
api/client/logout.go

@@ -5,7 +5,6 @@ import (

 	Cli "github.com/docker/docker/cli"
 	flag "github.com/docker/docker/pkg/mflag"
-	"github.com/docker/docker/registry"
 )

 // CmdLogout logs a user out from a Docker registry.
@@ -14,14 +13,16 @@ import (
 //
 // Usage: docker logout [SERVER]
 func (cli *DockerCli) CmdLogout(args ...string) error {
-	cmd := Cli.Subcmd("logout", []string{"[SERVER]"}, Cli.DockerCommands["logout"].Description+".\nIf no server is specified \""+registry.IndexServer+"\" is the default.", true)
+	cmd := Cli.Subcmd("logout", []string{"[SERVER]"}, Cli.DockerCommands["logout"].Description+".\nIf no server is specified, the default is defined by the daemon.", true)
 	cmd.Require(flag.Max, 1)

 	cmd.ParseFlags(args, true)

-	serverAddress := registry.IndexServer
+	var serverAddress string
 	if len(cmd.Args()) > 0 {
 		serverAddress = cmd.Arg(0)
+	} else {
+		serverAddress = cli.electAuthServer()
 	}

 	if _, ok := cli.configFile.AuthConfigs[serverAddress]; !ok {

+ 1 - 1
api/client/pull.go

@@ -54,7 +54,7 @@ func (cli *DockerCli) CmdPull(args ...string) error {
 		return err
 	}

-	authConfig := registry.ResolveAuthConfig(cli.configFile.AuthConfigs, repoInfo.Index)
+	authConfig := cli.resolveAuthConfig(cli.configFile.AuthConfigs, repoInfo.Index)
 	requestPrivilege := cli.registryAuthenticationPrivilegedFunc(repoInfo.Index, "pull")

 	if isTrusted() && !ref.HasDigest() {

+ 1 - 1
api/client/push.go

@@ -42,7 +42,7 @@ func (cli *DockerCli) CmdPush(args ...string) error {
 		return err
 	}
 	// Resolve the Auth config relevant for this server
-	authConfig := registry.ResolveAuthConfig(cli.configFile.AuthConfigs, repoInfo.Index)
+	authConfig := cli.resolveAuthConfig(cli.configFile.AuthConfigs, repoInfo.Index)

 	requestPrivilege := cli.registryAuthenticationPrivilegedFunc(repoInfo.Index, "push")
 	if isTrusted() {

+ 6 - 0
api/client/run.go

@@ -207,6 +207,12 @@ func (cli *DockerCli) CmdRun(args ...string) error {
 		if err != nil {
 			return err
 		}
+		if in != nil && config.Tty {
+			if err := cli.setRawTerminal(); err != nil {
+				return err
+			}
+			defer cli.restoreTerminal(in)
+		}
 		errCh = promise.Go(func() error {
 			return cli.holdHijackedConnection(config.Tty, in, out, stderr, resp)
 		})

+ 1 - 1
api/client/search.go

@@ -36,7 +36,7 @@ func (cli *DockerCli) CmdSearch(args ...string) error {
 		return err
 	}

-	authConfig := registry.ResolveAuthConfig(cli.configFile.AuthConfigs, indexInfo)
+	authConfig := cli.resolveAuthConfig(cli.configFile.AuthConfigs, indexInfo)
 	requestPrivilege := cli.registryAuthenticationPrivilegedFunc(indexInfo, "search")

 	encodedAuth, err := encodeAuthToBase64(authConfig)

+ 6 - 0
api/client/start.go

@@ -96,6 +96,12 @@ func (cli *DockerCli) CmdStart(args ...string) error {
 			return err
 		}
 		defer resp.Close()
+		if in != nil && c.Config.Tty {
+			if err := cli.setRawTerminal(); err != nil {
+				return err
+			}
+			defer cli.restoreTerminal(in)
+		}

 		cErr := promise.Go(func() error {
 			return cli.holdHijackedConnection(c.Config.Tty, in, cli.out, cli.err, resp)

+ 1 - 1
api/client/trust.go

@@ -234,7 +234,7 @@ func (cli *DockerCli) trustedReference(ref reference.NamedTagged) (reference.Can
 	}

 	// Resolve the Auth config relevant for this server
-	authConfig := registry.ResolveAuthConfig(cli.configFile.AuthConfigs, repoInfo.Index)
+	authConfig := cli.resolveAuthConfig(cli.configFile.AuthConfigs, repoInfo.Index)

 	notaryRepo, err := cli.getNotaryRepository(repoInfo, authConfig)
 	if err != nil {

+ 1 - 1
api/client/update.go

@@ -23,7 +23,7 @@ func (cli *DockerCli) CmdUpdate(args ...string) error {
 	flCPUShares := cmd.Int64([]string{"#c", "-cpu-shares"}, 0, "CPU shares (relative weight)")
 	flMemoryString := cmd.String([]string{"m", "-memory"}, "", "Memory limit")
 	flMemoryReservation := cmd.String([]string{"-memory-reservation"}, "", "Memory soft limit")
-	flMemorySwap := cmd.String([]string{"-memory-swap"}, "", "Total memory (memory + swap), '-1' to disable swap")
+	flMemorySwap := cmd.String([]string{"-memory-swap"}, "", "Swap limit equal to memory plus swap: '-1' to enable unlimited swap")
 	flKernelMemory := cmd.String([]string{"-kernel-memory"}, "", "Kernel memory limit")

 	cmd.Require(flag.Min, 1)

+ 58 - 2
api/client/utils.go

@@ -7,6 +7,7 @@ import (
 	"os"
 	gosignal "os/signal"
 	"runtime"
+	"strings"
 	"time"

 	"github.com/Sirupsen/logrus"
@@ -18,6 +19,20 @@ import (
 	registrytypes "github.com/docker/engine-api/types/registry"
 )

+func (cli *DockerCli) electAuthServer() string {
+	// The daemon `/info` endpoint informs us of the default registry being
+	// used. This is essential in cross-platform environments, where for
+	// example a Linux client might be interacting with a Windows daemon, hence
+	// the default registry URL might be Windows specific.
+	serverAddress := registry.IndexServer
+	if info, err := cli.client.Info(); err != nil {
+		fmt.Fprintf(cli.out, "Warning: failed to get default registry endpoint from daemon (%v). Using system default: %s\n", err, serverAddress)
+	} else {
+		serverAddress = info.IndexServerAddress
+	}
+	return serverAddress
+}
+
 // encodeAuthToBase64 serializes the auth configuration as JSON base64 payload
 func encodeAuthToBase64(authConfig types.AuthConfig) (string, error) {
 	buf, err := json.Marshal(authConfig)
@@ -35,10 +50,12 @@ func (cli *DockerCli) encodeRegistryAuth(index *registrytypes.IndexInfo) (string
 func (cli *DockerCli) registryAuthenticationPrivilegedFunc(index *registrytypes.IndexInfo, cmdName string) client.RequestPrivilegeFunc {
 	return func() (string, error) {
 		fmt.Fprintf(cli.out, "\nPlease login prior to %s:\n", cmdName)
-		if err := cli.CmdLogin(registry.GetAuthConfigKey(index)); err != nil {
+		indexServer := registry.GetAuthConfigKey(index)
+		authConfig, err := cli.configureAuth("", "", "", indexServer)
+		if err != nil {
 			return "", err
 		}
-		return cli.encodeRegistryAuth(index)
+		return encodeAuthToBase64(authConfig)
 	}
 }

@@ -138,3 +155,42 @@ func (cli *DockerCli) getTtySize() (int, int) {
 	}
 	return int(ws.Height), int(ws.Width)
 }
+
+// resolveAuthConfig is like registry.ResolveAuthConfig, but if using the
+// default index, it uses the default index name for the daemon's platform,
+// not the client's platform.
+func (cli *DockerCli) resolveAuthConfig(authConfigs map[string]types.AuthConfig, index *registrytypes.IndexInfo) types.AuthConfig {
+	configKey := index.Name
+	if index.Official {
+		configKey = cli.electAuthServer()
+	}
+
+	// First try the happy case
+	if c, found := authConfigs[configKey]; found || index.Official {
+		return c
+	}
+
+	convertToHostname := func(url string) string {
+		stripped := url
+		if strings.HasPrefix(url, "http://") {
+			stripped = strings.Replace(url, "http://", "", 1)
+		} else if strings.HasPrefix(url, "https://") {
+			stripped = strings.Replace(url, "https://", "", 1)
+		}
+
+		nameParts := strings.SplitN(stripped, "/", 2)
+
+		return nameParts[0]
+	}
+
+	// Maybe they have a legacy config file, we will iterate the keys converting
+	// them to the new format and testing
+	for registry, ac := range authConfigs {
+		if configKey == convertToHostname(registry) {
+			return ac
+		}
+	}
+
+	// When all else fails, return an empty auth config
+	return types.AuthConfig{}
+}

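The `convertToHostname` closure inside the new `resolveAuthConfig` is self-contained string handling: legacy config files key auth entries by full URL, so the scheme and any path are stripped before comparing against the index name. Extracted into a runnable stdlib-only sketch, with identifiers copied from the hunk above:

```go
package main

import (
	"fmt"
	"strings"
)

// convertToHostname strips an http/https scheme and any path component,
// leaving just the host (and port, if present) for comparison.
func convertToHostname(url string) string {
	stripped := url
	if strings.HasPrefix(url, "http://") {
		stripped = strings.Replace(url, "http://", "", 1)
	} else if strings.HasPrefix(url, "https://") {
		stripped = strings.Replace(url, "https://", "", 1)
	}
	// Keep only what precedes the first slash.
	return strings.SplitN(stripped, "/", 2)[0]
}

func main() {
	for _, u := range []string{
		"https://index.docker.io/v1/",
		"http://registry.example.com:5000/v2/",
		"registry.example.com",
	} {
		fmt.Printf("%s -> %s\n", u, convertToHostname(u))
	}
}
```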
+ 1 - 1
api/server/middleware.go

@@ -147,7 +147,7 @@ func versionMiddleware(handler httputils.APIFunc) httputils.APIFunc {
 			return errors.ErrorCodeNewerClientVersion.WithArgs(apiVersion, api.DefaultVersion)
 		}
 		if apiVersion.LessThan(api.MinVersion) {
-			return errors.ErrorCodeOldClientVersion.WithArgs(apiVersion, api.DefaultVersion)
+			return errors.ErrorCodeOldClientVersion.WithArgs(apiVersion, api.MinVersion)
 		}

 		w.Header().Set("Server", "Docker/"+dockerversion.Version+" ("+runtime.GOOS+")")

+ 2 - 1
api/server/router/build/build_routes.go

@@ -241,10 +241,11 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
 	if closeNotifier, ok := w.(http.CloseNotifier); ok {
 		finished := make(chan struct{})
 		defer close(finished)
+		clientGone := closeNotifier.CloseNotify()
 		go func() {
 			select {
 			case <-finished:
-			case <-closeNotifier.CloseNotify():
+			case <-clientGone:
 				logrus.Infof("Client disconnected, cancelling job: build")
 				b.Cancel()
 			}

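The change above only hoists the `CloseNotify()` call out of the goroutine: the channel is obtained once on the request goroutine, and the spawned goroutine merely receives from it. The same select shape, sketched with plain channels standing in for the notifier (names here are illustrative):

```go
package main

import "fmt"

// watch blocks until either the request finishes or the client goes
// away, mirroring the select in postBuild's cancellation goroutine.
func watch(finished, clientGone <-chan struct{}) string {
	select {
	case <-finished:
		return "finished"
	case <-clientGone:
		return "client disconnected, cancelling job: build"
	}
}

func main() {
	finished := make(chan struct{})
	clientGone := make(chan struct{})
	close(clientGone) // simulate the client disconnecting
	fmt.Println(watch(finished, clientGone))
}
```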
+ 21 - 0
api/server/router/local/image.go

@@ -7,9 +7,11 @@ import (
 	"fmt"
 	"io"
 	"net/http"
+	"net/url"
 	"strings"

 	"github.com/docker/distribution/digest"
+	"github.com/docker/distribution/registry/api/errcode"
 	"github.com/docker/docker/api/server/httputils"
 	"github.com/docker/docker/builder/dockerfile"
 	derr "github.com/docker/docker/errors"
@@ -137,6 +139,12 @@ func (s *router) postImagesCreate(ctx context.Context, w http.ResponseWriter, r
 				err = s.daemon.PullImage(ref, metaHeaders, authConfig, output)
 			}
 		}
+		// Check the error from pulling an image to make sure the request
+		// was authorized. Modify the status if the request was
+		// unauthorized to respond with 401 rather than 500.
+		if err != nil && isAuthorizedError(err) {
+			err = errcode.ErrorCodeUnauthorized.WithMessage(fmt.Sprintf("Authentication is required: %s", err))
+		}
 	} else { //import
 		var newRef reference.Named
 		if repo != "" {
@@ -373,3 +381,16 @@ func (s *router) getImagesSearch(ctx context.Context, w http.ResponseWriter, r *
 	}
 	return httputils.WriteJSON(w, http.StatusOK, query.Results)
 }
+
+func isAuthorizedError(err error) bool {
+	if urlError, ok := err.(*url.Error); ok {
+		err = urlError.Err
+	}
+
+	if dError, ok := err.(errcode.Error); ok {
+		if dError.ErrorCode() == errcode.ErrorCodeUnauthorized {
+			return true
+		}
+	}
+	return false
+}

+ 1 - 1
api/server/router/system/system.go

@@ -20,7 +20,7 @@ func NewRouter(b Backend) router.Router {
 	}

 	r.routes = []router.Route{
-		local.NewOptionsRoute("/", optionsHandler),
+		local.NewOptionsRoute("/{anyroute:.*}", optionsHandler),
 		local.NewGetRoute("/_ping", pingHandler),
 		local.NewGetRoute("/events", r.getEvents),
 		local.NewGetRoute("/info", r.getInfo),

+ 63 - 34
container/container_unix.go

@@ -21,7 +21,7 @@ import (
 	runconfigopts "github.com/docker/docker/runconfig/opts"
 	"github.com/docker/docker/utils"
 	"github.com/docker/docker/volume"
-	"github.com/docker/engine-api/types/container"
+	containertypes "github.com/docker/engine-api/types/container"
 	"github.com/docker/engine-api/types/network"
 	"github.com/docker/go-connections/nat"
 	"github.com/docker/libnetwork"
@@ -129,18 +129,26 @@ func (container *Container) buildPortMapInfo(ep libnetwork.Endpoint) error {
 		return derr.ErrorCodeEmptyNetwork
 	}

+	if len(networkSettings.Ports) == 0 {
+		pm, err := getEndpointPortMapInfo(ep)
+		if err != nil {
+			return err
+		}
+		networkSettings.Ports = pm
+	}
+	return nil
+}
+
+func getEndpointPortMapInfo(ep libnetwork.Endpoint) (nat.PortMap, error) {
+	pm := nat.PortMap{}
 	driverInfo, err := ep.DriverInfo()
 	if err != nil {
-		return err
+		return pm, err
 	}

 	if driverInfo == nil {
 		// It is not an error for epInfo to be nil
-		return nil
-	}
-
-	if networkSettings.Ports == nil {
-		networkSettings.Ports = nat.PortMap{}
+		return pm, nil
 	}

 	if expData, ok := driverInfo[netlabel.ExposedPorts]; ok {
@@ -148,30 +156,45 @@ func (container *Container) buildPortMapInfo(ep libnetwork.Endpoint) error {
 			for _, tp := range exposedPorts {
 				natPort, err := nat.NewPort(tp.Proto.String(), strconv.Itoa(int(tp.Port)))
 				if err != nil {
-					return derr.ErrorCodeParsingPort.WithArgs(tp.Port, err)
+					return pm, derr.ErrorCodeParsingPort.WithArgs(tp.Port, err)
 				}
-				networkSettings.Ports[natPort] = nil
+				pm[natPort] = nil
 			}
 		}
 	}

 	mapData, ok := driverInfo[netlabel.PortMap]
 	if !ok {
-		return nil
+		return pm, nil
 	}

 	if portMapping, ok := mapData.([]types.PortBinding); ok {
 		for _, pp := range portMapping {
 			natPort, err := nat.NewPort(pp.Proto.String(), strconv.Itoa(int(pp.Port)))
 			if err != nil {
-				return err
+				return pm, err
 			}
 			natBndg := nat.PortBinding{HostIP: pp.HostIP.String(), HostPort: strconv.Itoa(int(pp.HostPort))}
-			networkSettings.Ports[natPort] = append(networkSettings.Ports[natPort], natBndg)
+			pm[natPort] = append(pm[natPort], natBndg)
 		}
 	}

-	return nil
+	return pm, nil
+}
+
+func getSandboxPortMapInfo(sb libnetwork.Sandbox) nat.PortMap {
+	pm := nat.PortMap{}
+	if sb == nil {
+		return pm
+	}
+
+	for _, ep := range sb.Endpoints() {
+		pm, _ = getEndpointPortMapInfo(ep)
+		if len(pm) > 0 {
+			break
+		}
+	}
+	return pm
 }

 // BuildEndpointInfo sets endpoint-related fields on container.NetworkSettings based on the provided network and endpoint.
@@ -265,7 +288,7 @@ func (container *Container) BuildJoinOptions(n libnetwork.Network) ([]libnetwork
 }

 // BuildCreateEndpointOptions builds endpoint options from a given network.
-func (container *Container) BuildCreateEndpointOptions(n libnetwork.Network) ([]libnetwork.EndpointOption, error) {
+func (container *Container) BuildCreateEndpointOptions(n libnetwork.Network, epConfig *network.EndpointSettings, sb libnetwork.Sandbox) ([]libnetwork.EndpointOption, error) {
 	var (
 		portSpecs     = make(nat.PortSet)
 		bindings      = make(nat.PortMap)
@@ -278,7 +301,7 @@ func (container *Container) BuildCreateEndpointOptions(n libnetwork.Network) ([]
 		createOptions = append(createOptions, libnetwork.CreateOptionAnonymous())
 	}

-	if epConfig, ok := container.NetworkSettings.Networks[n.Name()]; ok {
+	if epConfig != nil {
 		ipam := epConfig.IPAMConfig
 		if ipam != nil && (ipam.IPv4Address != "" || ipam.IPv6Address != "") {
 			createOptions = append(createOptions,
@@ -290,14 +313,33 @@ func (container *Container) BuildCreateEndpointOptions(n libnetwork.Network) ([]
 		}
 	}

-	if !container.HostConfig.NetworkMode.IsUserDefined() {
+	if !containertypes.NetworkMode(n.Name()).IsUserDefined() {
 		createOptions = append(createOptions, libnetwork.CreateOptionDisableResolution())
 	}

-	// Other configs are applicable only for the endpoint in the network
+	// configs that are applicable only for the endpoint in the network
 	// to which container was connected to on docker run.
-	if n.Name() != container.HostConfig.NetworkMode.NetworkName() &&
-		!(n.Name() == "bridge" && container.HostConfig.NetworkMode.IsDefault()) {
+	// Ideally all these network-specific endpoint configurations must be moved under
+	// container.NetworkSettings.Networks[n.Name()]
+	if n.Name() == container.HostConfig.NetworkMode.NetworkName() ||
+		(n.Name() == "bridge" && container.HostConfig.NetworkMode.IsDefault()) {
+		if container.Config.MacAddress != "" {
+			mac, err := net.ParseMAC(container.Config.MacAddress)
+			if err != nil {
+				return nil, err
+			}
+
+			genericOption := options.Generic{
+				netlabel.MacAddress: mac,
+			}
+
+			createOptions = append(createOptions, libnetwork.EndpointOptionGeneric(genericOption))
+		}
+	}
+
+	// Port-mapping rules belong to the container & applicable only to non-internal networks
+	portmaps := getSandboxPortMapInfo(sb)
+	if n.Info().Internal() || len(portmaps) > 0 {
 		return createOptions, nil
 	}

@@ -357,19 +399,6 @@ func (container *Container) BuildCreateEndpointOptions(n libnetwork.Network) ([]
 		libnetwork.CreateOptionPortMapping(pbList),
 		libnetwork.CreateOptionExposedPorts(exposeList))

-	if container.Config.MacAddress != "" {
-		mac, err := net.ParseMAC(container.Config.MacAddress)
-		if err != nil {
-			return nil, err
-		}
-
-		genericOption := options.Generic{
-			netlabel.MacAddress: mac,
-		}
-
-		createOptions = append(createOptions, libnetwork.EndpointOptionGeneric(genericOption))
-	}
-
 	return createOptions, nil
 }

@@ -577,7 +606,7 @@ func (container *Container) IpcMounts() []execdriver.Mount {
 	return mounts
 }

-func updateCommand(c *execdriver.Command, resources container.Resources) {
+func updateCommand(c *execdriver.Command, resources containertypes.Resources) {
 	c.Resources.BlkioWeight = resources.BlkioWeight
 	c.Resources.CPUShares = resources.CPUShares
 	c.Resources.CPUPeriod = resources.CPUPeriod
@@ -591,7 +620,7 @@ func updateCommand(c *execdriver.Command, resources container.Resources) {
 }

 // UpdateContainer updates resources of a container.
-func (container *Container) UpdateContainer(hostConfig *container.HostConfig) error {
+func (container *Container) UpdateContainer(hostConfig *containertypes.HostConfig) error {
 	container.Lock()

 	resources := hostConfig.Resources

+ 10 - 9
daemon/history.go → container/history.go

@@ -1,34 +1,35 @@
-package daemon
+package container

-import (
-	"sort"
-
-	"github.com/docker/docker/container"
-)
+import "sort"

 // History is a convenience type for storing a list of containers,
-// ordered by creation date.
-type History []*container.Container
+// sorted by creation date in descending order.
+type History []*Container

+// Len returns the number of containers in the history.
 func (history *History) Len() int {
 	return len(*history)
 }

+// Less compares two containers and returns true if the second one
+// was created before the first one.
 func (history *History) Less(i, j int) bool {
 	containers := *history
 	return containers[j].Created.Before(containers[i].Created)
 }

+// Swap switches containers i and j positions in the history.
 func (history *History) Swap(i, j int) {
 	containers := *history
 	containers[i], containers[j] = containers[j], containers[i]
 }

 // Add the given container to history.
-func (history *History) Add(container *container.Container) {
+func (history *History) Add(container *Container) {
 	*history = append(*history, container)
 }

+// sort orders the history by creation date in descending order.
 func (history *History) sort() {
 	sort.Sort(history)
 }

+ 91 - 0
container/memory_store.go

@@ -0,0 +1,91 @@
+package container
+
+import "sync"
+
+// memoryStore implements a Store in memory.
+type memoryStore struct {
+	s map[string]*Container
+	sync.Mutex
+}
+
+// NewMemoryStore initializes a new memory store.
+func NewMemoryStore() Store {
+	return &memoryStore{
+		s: make(map[string]*Container),
+	}
+}
+
+// Add appends a new container to the memory store.
+// It overrides the id if it existed before.
+func (c *memoryStore) Add(id string, cont *Container) {
+	c.Lock()
+	c.s[id] = cont
+	c.Unlock()
+}
+
+// Get returns a container from the store by id.
+func (c *memoryStore) Get(id string) *Container {
+	c.Lock()
+	res := c.s[id]
+	c.Unlock()
+	return res
+}
+
+// Delete removes a container from the store by id.
+func (c *memoryStore) Delete(id string) {
+	c.Lock()
+	delete(c.s, id)
+	c.Unlock()
+}
+
+// List returns a sorted list of containers from the store.
+// The containers are ordered by creation date.
+func (c *memoryStore) List() []*Container {
+	containers := new(History)
+	c.Lock()
+	for _, cont := range c.s {
+		containers.Add(cont)
+	}
+	c.Unlock()
+	containers.sort()
+	return *containers
+}
+
+// Size returns the number of containers in the store.
+func (c *memoryStore) Size() int {
+	c.Lock()
+	defer c.Unlock()
+	return len(c.s)
+}
+
+// First returns the first container found in the store by a given filter.
+func (c *memoryStore) First(filter StoreFilter) *Container {
+	c.Lock()
+	defer c.Unlock()
+	for _, cont := range c.s {
+		if filter(cont) {
+			return cont
+		}
+	}
+	return nil
+}
+
+// ApplyAll calls the reducer function with every container in the store.
+// The reducers run concurrently in the memory store; ApplyAll blocks until all of them finish.
+func (c *memoryStore) ApplyAll(apply StoreReducer) {
+	c.Lock()
+	defer c.Unlock()
+
+	wg := new(sync.WaitGroup)
+	for _, cont := range c.s {
+		wg.Add(1)
+		go func(container *Container) {
+			apply(container)
+			wg.Done()
+		}(cont)
+	}
+
+	wg.Wait()
+}
+
+var _ Store = &memoryStore{}

+ 106 - 0
container/memory_store_test.go

@@ -0,0 +1,106 @@
+package container
+
+import (
+	"testing"
+	"time"
+)
+
+func TestNewMemoryStore(t *testing.T) {
+	s := NewMemoryStore()
+	m, ok := s.(*memoryStore)
+	if !ok {
+		t.Fatalf("store is not a memory store %v", s)
+	}
+	if m.s == nil {
+		t.Fatal("expected store map to not be nil")
+	}
+}
+
+func TestAddContainers(t *testing.T) {
+	s := NewMemoryStore()
+	s.Add("id", NewBaseContainer("id", "root"))
+	if s.Size() != 1 {
+		t.Fatalf("expected store size 1, got %v", s.Size())
+	}
+}
+
+func TestGetContainer(t *testing.T) {
+	s := NewMemoryStore()
+	s.Add("id", NewBaseContainer("id", "root"))
+	c := s.Get("id")
+	if c == nil {
+		t.Fatal("expected container to not be nil")
+	}
+}
+
+func TestDeleteContainer(t *testing.T) {
+	s := NewMemoryStore()
+	s.Add("id", NewBaseContainer("id", "root"))
+	s.Delete("id")
+	if c := s.Get("id"); c != nil {
+		t.Fatalf("expected container to be nil after removal, got %v", c)
+	}
+
+	if s.Size() != 0 {
+		t.Fatalf("expected store size to be 0, got %v", s.Size())
+	}
+}
+
+func TestListContainers(t *testing.T) {
+	s := NewMemoryStore()
+
+	cont := NewBaseContainer("id", "root")
+	cont.Created = time.Now()
+	cont2 := NewBaseContainer("id2", "root")
+	cont2.Created = time.Now().Add(24 * time.Hour)
+
+	s.Add("id", cont)
+	s.Add("id2", cont2)
+
+	list := s.List()
+	if len(list) != 2 {
+		t.Fatalf("expected list size 2, got %v", len(list))
+	}
+	if list[0].ID != "id2" {
+		t.Fatalf("expected newer container to be first, got %v", list[0].ID)
+	}
+}
+
+func TestFirstContainer(t *testing.T) {
+	s := NewMemoryStore()
+
+	s.Add("id", NewBaseContainer("id", "root"))
+	s.Add("id2", NewBaseContainer("id2", "root"))
+
+	first := s.First(func(cont *Container) bool {
+		return cont.ID == "id2"
+	})
+
+	if first == nil {
+		t.Fatal("expected container to not be nil")
+	}
+	if first.ID != "id2" {
+		t.Fatalf("expected id2, got %v", first)
+	}
+}
+
+func TestApplyAllContainer(t *testing.T) {
+	s := NewMemoryStore()
+
+	s.Add("id", NewBaseContainer("id", "root"))
+	s.Add("id2", NewBaseContainer("id2", "root"))
+
+	s.ApplyAll(func(cont *Container) {
+		if cont.ID == "id2" {
+			cont.ID = "newID"
+		}
+	})
+
+	cont := s.Get("id2")
+	if cont == nil {
+		t.Fatal("expected container to not be nil")
+	}
+	if cont.ID != "newID" {
+		t.Fatalf("expected newID, got %v", cont)
+	}
+}

+ 3 - 0
container/monitor.go

@@ -369,6 +369,9 @@ func (m *containerMonitor) resetContainer(lock bool) {
 			select {
 			case <-time.After(loggerCloseTimeout):
 				logrus.Warnf("Logger didn't exit in time: logs may be truncated")
+				container.LogCopier.Close()
+				// always wait for the LogCopier to finish before closing
+				<-exit
 			case <-exit:
 			}
 		}

+ 8 - 0
container/state.go

@@ -247,6 +247,14 @@ func (s *State) IsPaused() bool {
 	return res
 }
 
+// IsRestarting returns whether the container is restarting or not.
+func (s *State) IsRestarting() bool {
+	s.Lock()
+	res := s.Restarting
+	s.Unlock()
+	return res
+}
+
 // SetRemovalInProgress sets the container state as being removed.
 func (s *State) SetRemovalInProgress() error {
 	s.Lock()

+ 28 - 0
container/store.go

@@ -0,0 +1,28 @@
+package container
+
+// StoreFilter defines a function to filter
+// containers in the store.
+type StoreFilter func(*Container) bool
+
+// StoreReducer defines a function to
+// manipulate containers in the store.
+type StoreReducer func(*Container)
+
+// Store defines an interface that
+// any container store must implement.
+type Store interface {
+	// Add appends a new container to the store.
+	Add(string, *Container)
+	// Get returns a container from the store by the identifier it was stored with.
+	Get(string) *Container
+	// Delete removes a container from the store by the identifier it was stored with.
+	Delete(string)
+	// List returns a list of containers from the store.
+	List() []*Container
+	// Size returns the number of containers in the store.
+	Size() int
+	// First returns the first container found in the store by a given filter.
+	First(StoreFilter) *Container
+	// ApplyAll calls the reducer function with every container in the store.
+	ApplyAll(StoreReducer)
+}

+ 1 - 1
contrib/builder/deb/debian-jessie/Dockerfile

@@ -4,7 +4,7 @@
 
 FROM debian:jessie
 
-RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libsqlite3-dev  libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev  libsqlite3-dev pkg-config libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
 
 ENV GO_VERSION 1.5.3
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local

+ 1 - 1
contrib/builder/deb/debian-stretch/Dockerfile

@@ -4,7 +4,7 @@
 
 FROM debian:stretch
 
-RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libsqlite3-dev libseccomp-dev libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev libsqlite3-dev pkg-config libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
 
 ENV GO_VERSION 1.5.3
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local

+ 2 - 1
contrib/builder/deb/debian-wheezy/Dockerfile

@@ -4,7 +4,8 @@
 
 FROM debian:wheezy-backports
 
-RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools/wheezy-backports build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libsqlite3-dev  libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y -t wheezy-backports btrfs-tools --no-install-recommends && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y apparmor bash-completion  build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev  libsqlite3-dev pkg-config  --no-install-recommends && rm -rf /var/lib/apt/lists/*
 
 ENV GO_VERSION 1.5.3
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local

+ 10 - 5
contrib/builder/deb/generate.sh

@@ -57,12 +57,13 @@ for version in "${versions[@]}"; do
 		libapparmor-dev # for "sys/apparmor.h"
 		libdevmapper-dev # for "libdevmapper.h"
 		libltdl-dev # for pkcs11 "ltdl.h"
-		libsqlite3-dev # for "sqlite3.h"
 		libseccomp-dev  # for "seccomp.h" & "libseccomp.so"
+		libsqlite3-dev # for "sqlite3.h"
+		pkg-config # for detecting things like libsystemd-journal dynamically
 	)
 	# packaging for "sd-journal.h" and libraries varies
 	case "$suite" in
-		precise) ;;
+		precise|wheezy) ;;
 		sid|stretch|wily) packages+=( libsystemd-dev );;
 		*) packages+=( libsystemd-journal-dev );;
 	esac
@@ -96,9 +97,13 @@ for version in "${versions[@]}"; do
 	fi
 
 	if [ "$suite" = 'wheezy' ]; then
-		# pull btrfs-toold from backports
-		backports="/$suite-backports"
-		packages=( "${packages[@]/btrfs-tools/btrfs-tools$backports}" )
+		# pull a couple packages from backports explicitly
+		# (build failures otherwise)
+		backportsPackages=( btrfs-tools libsystemd-journal-dev )
+		for pkg in "${backportsPackages[@]}"; do
+			packages=( "${packages[@]/$pkg}" )
+		done
+		echo "RUN apt-get update && apt-get install -y -t $suite-backports ${backportsPackages[*]} --no-install-recommends && rm -rf /var/lib/apt/lists/*" >> "$version/Dockerfile"
 	fi
 
 	echo "RUN apt-get update && apt-get install -y ${packages[*]} --no-install-recommends && rm -rf /var/lib/apt/lists/*" >> "$version/Dockerfile"

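The backports handling above leans on bash's `${array[@]/pattern}` expansion, which blanks matching elements rather than removing them; the emptied slots then collapse into the doubled spaces visible in the generated `RUN` lines. A minimal sketch of that behaviour (the package names here are just sample data):

```shell
#!/bin/bash
packages=( btrfs-tools curl libsystemd-journal-dev )
backportsPackages=( btrfs-tools libsystemd-journal-dev )

for pkg in "${backportsPackages[@]}"; do
	# pattern substitution replaces the matching text with nothing;
	# the (now empty) array element itself is kept
	packages=( "${packages[@]/$pkg}" )
done

echo "count: ${#packages[@]}"   # prints: count: 3
echo "joined: [${packages[*]}]" # prints: joined: [ curl ]
```

To actually drop the elements, the loop would have to `unset` them by index or rebuild the array; the extra spaces are harmless to `apt-get`, so the script keeps the simpler form.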
+ 1 - 1
contrib/builder/deb/ubuntu-precise/Dockerfile

@@ -4,7 +4,7 @@
 
 FROM ubuntu:precise
 
-RUN apt-get update && apt-get install -y apparmor bash-completion  build-essential curl ca-certificates debhelper dh-apparmor  git libapparmor-dev  libltdl-dev libsqlite3-dev  --no-install-recommends && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y apparmor bash-completion  build-essential curl ca-certificates debhelper dh-apparmor  git libapparmor-dev  libltdl-dev  libsqlite3-dev pkg-config --no-install-recommends && rm -rf /var/lib/apt/lists/*
 
 ENV GO_VERSION 1.5.3
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local

+ 1 - 1
contrib/builder/deb/ubuntu-trusty/Dockerfile

@@ -4,7 +4,7 @@
 
 FROM ubuntu:trusty
 
-RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libsqlite3-dev  libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev  libsqlite3-dev pkg-config libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
 
 ENV GO_VERSION 1.5.3
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local

+ 1 - 1
contrib/builder/deb/ubuntu-wily/Dockerfile

@@ -4,7 +4,7 @@
 
 FROM ubuntu:wily
 
-RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libsqlite3-dev libseccomp-dev libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev libsqlite3-dev pkg-config libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
 
 ENV GO_VERSION 1.5.3
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local

+ 2 - 1
contrib/builder/rpm/centos-7/Dockerfile

@@ -6,7 +6,7 @@ FROM centos:7
 
 RUN yum groupinstall -y "Development Tools"
 RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
-RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static  libselinux-devel libtool-ltdl-devel selinux-policy selinux-policy-devel sqlite-devel tar
+RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static  libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar
 
 ENV GO_VERSION 1.5.3
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
@@ -15,3 +15,4 @@ ENV PATH $PATH:/usr/local/go/bin
 ENV AUTO_GOPATH 1
 
 ENV DOCKER_BUILDTAGS selinux
+

+ 2 - 1
contrib/builder/rpm/fedora-22/Dockerfile

@@ -5,7 +5,7 @@
 FROM fedora:22
 
 RUN dnf install -y @development-tools fedora-packager
-RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel selinux-policy selinux-policy-devel sqlite-devel tar
+RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar
 
 ENV SECCOMP_VERSION v2.2.3
 RUN buildDeps=' \
@@ -35,3 +35,4 @@ ENV PATH $PATH:/usr/local/go/bin
 ENV AUTO_GOPATH 1
 
 ENV DOCKER_BUILDTAGS seccomp selinux
+

+ 2 - 1
contrib/builder/rpm/fedora-23/Dockerfile

@@ -5,7 +5,7 @@
 FROM fedora:23
 
 RUN dnf install -y @development-tools fedora-packager
-RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel selinux-policy selinux-policy-devel sqlite-devel tar
+RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar
 
 ENV SECCOMP_VERSION v2.2.3
 RUN buildDeps=' \
@@ -35,3 +35,4 @@ ENV PATH $PATH:/usr/local/go/bin
 ENV AUTO_GOPATH 1
 
 ENV DOCKER_BUILDTAGS seccomp selinux
+

+ 45 - 0
contrib/builder/rpm/generate.sh

@@ -51,6 +51,7 @@ for version in "${versions[@]}"; do
 			;;
 		oraclelinux:*)
 			# get "Development Tools" packages and dependencies
+			# we also need yum-utils for yum-config-manager to pull the latest repo file
 			echo 'RUN yum groupinstall -y "Development Tools"' >> "$version/Dockerfile"
 			;;
 		opensuse:*)
@@ -70,9 +71,11 @@ for version in "${versions[@]}"; do
 		libseccomp-devel # for "seccomp.h" & "libseccomp.so"
 		libselinux-devel # for "libselinux.so"
 		libtool-ltdl-devel # for pkcs11 "ltdl.h"
+		pkgconfig # for the pkg-config command
 		selinux-policy
 		selinux-policy-devel
 		sqlite-devel # for "sqlite3.h"
+		systemd-devel # for "sd-journal.h" and libraries
 		tar # older versions of dev-tools do not have tar
 	)
 
@@ -83,6 +86,13 @@ for version in "${versions[@]}"; do
 			;;
 	esac
 
+	case "$from" in
+		oraclelinux:6)
+			# doesn't use systemd, doesn't have a devel package for it
+			packages=( "${packages[@]/systemd-devel}" )
+			;;
+	esac
+
 	# opensuse & oraclelinx:6 do not have the right libseccomp libs
 	# centos:7 and oraclelinux:7 have a libseccomp < 2.2.1 :(
 	case "$from" in
@@ -97,6 +107,11 @@ for version in "${versions[@]}"; do
 	case "$from" in
 		opensuse:*)
 			packages=( "${packages[@]/btrfs-progs-devel/libbtrfs-devel}" )
+			packages=( "${packages[@]/pkgconfig/pkg-config}" )
+			if [[ "$from" == "opensuse:13."* ]]; then
+				packages+=( systemd-rpm-macros )
+			fi
+
 			# use zypper
 			echo "RUN zypper --non-interactive install ${packages[*]}" >> "$version/Dockerfile"
 			;;
@@ -140,6 +155,18 @@ for version in "${versions[@]}"; do
 		*) ;;
 	esac
 
+	case "$from" in
+		oraclelinux:6)
+			# We need a known version of the kernel-uek-devel headers to set CGO_CPPFLAGS, so grab the UEKR4 GA version
+			# This requires using yum-config-manager from yum-utils to enable the UEKR4 yum repo
+			echo "RUN yum install -y yum-utils && curl -o /etc/yum.repos.d/public-yum-ol6.repo http://yum.oracle.com/public-yum-ol6.repo && yum-config-manager -q --enable ol6_UEKR4"  >> "$version/Dockerfile"
+			echo "RUN yum install -y kernel-uek-devel-4.1.12-32.el6uek"  >> "$version/Dockerfile"
+			echo >> "$version/Dockerfile"
+			;;
+		*) ;;
+	esac
+
+
 	awk '$1 == "ENV" && $2 == "GO_VERSION" { print; exit }' ../../../Dockerfile >> "$version/Dockerfile"
 	echo 'RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local' >> "$version/Dockerfile"
 	echo 'ENV PATH $PATH:/usr/local/go/bin' >> "$version/Dockerfile"
@@ -154,4 +181,22 @@ for version in "${versions[@]}"; do
 	buildTags=$( echo "selinux $extraBuildTags" | xargs -n1 | sort -n | tr '\n' ' ' | sed -e 's/[[:space:]]*$//' )
 
 	echo "ENV DOCKER_BUILDTAGS $buildTags" >> "$version/Dockerfile"
+	echo >> "$version/Dockerfile"
+
+	case "$from" in
+                oraclelinux:6)
+                        # We need to set the CGO_CPPFLAGS environment to use the updated UEKR4 headers with all the userns stuff.
+                        # The ordering is very important and should not be changed.
+                        echo 'ENV CGO_CPPFLAGS -D__EXPORTED_HEADERS__ \'  >> "$version/Dockerfile"
+                        echo '                 -I/usr/src/kernels/4.1.12-32.el6uek.x86_64/arch/x86/include/generated/uapi \'  >> "$version/Dockerfile"
+                        echo '                 -I/usr/src/kernels/4.1.12-32.el6uek.x86_64/arch/x86/include/uapi \'  >> "$version/Dockerfile"
+                        echo '                 -I/usr/src/kernels/4.1.12-32.el6uek.x86_64/include/generated/uapi \'  >> "$version/Dockerfile"
+                        echo '                 -I/usr/src/kernels/4.1.12-32.el6uek.x86_64/include/uapi \'  >> "$version/Dockerfile"
+                        echo '                 -I/usr/src/kernels/4.1.12-32.el6uek.x86_64/include'  >> "$version/Dockerfile"
+                        echo >> "$version/Dockerfile"
+                        ;;
+                *) ;;
+        esac
+
+
 done

+ 2 - 1
contrib/builder/rpm/opensuse-13.2/Dockerfile

@@ -5,7 +5,7 @@
 FROM opensuse:13.2
 
 RUN zypper --non-interactive install ca-certificates* curl gzip rpm-build
-RUN zypper --non-interactive install libbtrfs-devel device-mapper-devel glibc-static  libselinux-devel libtool-ltdl-devel selinux-policy selinux-policy-devel sqlite-devel tar
+RUN zypper --non-interactive install libbtrfs-devel device-mapper-devel glibc-static  libselinux-devel libtool-ltdl-devel pkg-config selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar systemd-rpm-macros
 
 ENV GO_VERSION 1.5.3
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
@@ -14,3 +14,4 @@ ENV PATH $PATH:/usr/local/go/bin
 ENV AUTO_GOPATH 1
 
 ENV DOCKER_BUILDTAGS selinux
+

+ 27 - 0
contrib/builder/rpm/oraclelinux-6/Dockerfile

@@ -0,0 +1,27 @@
+#
+# THIS FILE IS AUTOGENERATED; SEE "contrib/builder/rpm/amd64/generate.sh"!
+#
+
+FROM oraclelinux:6
+
+RUN yum groupinstall -y "Development Tools"
+RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static  libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel  tar
+
+RUN yum install -y yum-utils && curl -o /etc/yum.repos.d/public-yum-ol6.repo http://yum.oracle.com/public-yum-ol6.repo && yum-config-manager -q --enable ol6_UEKR4
+RUN yum install -y kernel-uek-devel-4.1.12-32.el6uek
+
+ENV GO_VERSION 1.5.3
+RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
+ENV PATH $PATH:/usr/local/go/bin
+
+ENV AUTO_GOPATH 1
+
+ENV DOCKER_BUILDTAGS selinux
+
+ENV CGO_CPPFLAGS -D__EXPORTED_HEADERS__ \
+                 -I/usr/src/kernels/4.1.12-32.el6uek.x86_64/arch/x86/include/generated/uapi \
+                 -I/usr/src/kernels/4.1.12-32.el6uek.x86_64/arch/x86/include/uapi \
+                 -I/usr/src/kernels/4.1.12-32.el6uek.x86_64/include/generated/uapi \
+                 -I/usr/src/kernels/4.1.12-32.el6uek.x86_64/include/uapi \
+                 -I/usr/src/kernels/4.1.12-32.el6uek.x86_64/include
+

+ 2 - 1
contrib/builder/rpm/oraclelinux-7/Dockerfile

@@ -5,7 +5,7 @@
 FROM oraclelinux:7
 
 RUN yum groupinstall -y "Development Tools"
-RUN yum install -y --enablerepo=ol7_optional_latest btrfs-progs-devel device-mapper-devel glibc-static  libselinux-devel libtool-ltdl-devel selinux-policy selinux-policy-devel sqlite-devel tar
+RUN yum install -y --enablerepo=ol7_optional_latest btrfs-progs-devel device-mapper-devel glibc-static  libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar
 
 ENV GO_VERSION 1.5.3
 RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
@@ -14,3 +14,4 @@ ENV PATH $PATH:/usr/local/go/bin
 ENV AUTO_GOPATH 1
 
 ENV DOCKER_BUILDTAGS selinux
+

+ 139 - 36
contrib/completion/bash/docker

@@ -220,6 +220,32 @@ __docker_pos_first_nonflag() {
 	echo $counter
 }
 
+# If we are currently completing the value of a map option (key=value)
+# which matches the extglob given as an argument, returns key.
+# This function is needed for key-specific completions.
+# TODO use this in all "${words[$cword-2]}$prev=" occurrences
+__docker_map_key_of_current_option() {
+	local glob="$1"
+
+	local key glob_pos
+	if [ "$cur" = "=" ] ; then        # key= case
+		key="$prev"
+		glob_pos=$((cword - 2))
+	elif [[ $cur == *=* ]] ; then     # key=value case (OSX)
+		key=${cur%=*}
+		glob_pos=$((cword - 1))
+	elif [ "$prev" = "=" ] ; then
+		key=${words[$cword - 2]}  # key=value case
+		glob_pos=$((cword - 3))
+	else
+		return
+	fi
+
+	[ "${words[$glob_pos]}" = "=" ] && ((glob_pos--))  # --option=key=value syntax
+
+	[[ ${words[$glob_pos]} == @($glob) ]] && echo "$key"
+}
+
 # Returns the value of the first option matching option_glob.
 # Valid values for option_glob are option names like '--log-level' and
 # globs like '--log-level|-l'
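The new `__docker_map_key_of_current_option` helper exists because readline splits `--filter container=foo` into separate words at `=`, so with the cursor on the value, `$prev` is `=` and the key sits two words back. A standalone approximation covering just that case (the variable setup below fakes what bash-completion normally provides):

```shell
#!/bin/bash
# fake the completion state for: docker events --filter container=foo<TAB>
# readline splits the command line into these words; cword indexes
# the word under the cursor
words=( docker events --filter container = foo )
cword=5
cur=${words[cword]}
prev=${words[cword-1]}

shopt -s extglob  # needed for the @(...) pattern below

# simplified version of __docker_map_key_of_current_option:
# if the option word two places back matches the glob, print the key
map_key_of_current_option() {
	local glob="$1" key glob_pos
	if [ "$prev" = "=" ]; then
		key=${words[cword-2]}
		glob_pos=$((cword - 3))
	else
		return
	fi
	# --option=key=value syntax puts an extra "=" word in between
	[ "${words[glob_pos]}" = "=" ] && ((glob_pos--))
	[[ ${words[glob_pos]} == @($glob) ]] && echo "$key"
}

map_key_of_current_option '-f|--filter'  # prints: container
```

The real helper also handles the `key=` and OSX `key=value` word splits; this sketch only shows the common `prev = "="` branch.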
@@ -383,7 +409,7 @@ __docker_complete_log_options() {
 	local gelf_options="env gelf-address labels tag"
 	local journald_options="env labels"
 	local json_file_options="env labels max-file max-size"
-	local syslog_options="syslog-address syslog-facility tag"
+	local syslog_options="syslog-address syslog-tls-ca-cert syslog-tls-cert syslog-tls-key syslog-tls-skip-verify syslog-facility tag"
 	local splunk_options="env labels splunk-caname splunk-capath splunk-index splunk-insecureskipverify splunk-source splunk-sourcetype splunk-token splunk-url tag"
 
 	local all_options="$fluentd_options $gelf_options $journald_options $json_file_options $syslog_options $splunk_options"
@@ -431,8 +457,9 @@ __docker_complete_log_driver_options() {
 			return
 			;;
 		*syslog-address=*)
-			COMPREPLY=( $( compgen -W "tcp udp unix" -S "://" -- "${cur#=}" ) )
+			COMPREPLY=( $( compgen -W "tcp:// tcp+tls:// udp:// unix://" -- "${cur#=}" ) )
 			__docker_nospace
+			__ltrim_colon_completions "${cur}"
 			return
 			;;
 		*syslog-facility=*)
@@ -460,15 +487,23 @@ __docker_complete_log_driver_options() {
 			" -- "${cur#=}" ) )
 			" -- "${cur#=}" ) )
 			return
 			return
 			;;
 			;;
+		*syslog-tls-@(ca-cert|cert|key)=*)
+			_filedir
+			return
+			;;
+		*syslog-tls-skip-verify=*)
+			COMPREPLY=( $( compgen -W "true" -- "${cur#=}" ) )
+			return
+			;;
 		*splunk-url=*)
 		*splunk-url=*)
 			COMPREPLY=( $( compgen -W "http:// https://" -- "${cur#=}" ) )
 			COMPREPLY=( $( compgen -W "http:// https://" -- "${cur#=}" ) )
-			compopt -o nospace
+			__docker_nospace
 			__ltrim_colon_completions "${cur}"
 			__ltrim_colon_completions "${cur}"
 			return
 			return
 			;;
 			;;
 		*splunk-insecureskipverify=*)
 		*splunk-insecureskipverify=*)
 			COMPREPLY=( $( compgen -W "true false" -- "${cur#=}" ) )
 			COMPREPLY=( $( compgen -W "true false" -- "${cur#=}" ) )
-			compopt -o nospace
+			__docker_nospace
 			return
 			return
 			;;
 			;;
 	esac
 	esac
@@ -644,7 +679,7 @@ _docker_commit() {
 _docker_cp() {
 	case "$cur" in
 		-*)
-			COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
+			COMPREPLY=( $( compgen -W "--follow-link -L --help" -- "$cur" ) )
 			;;
 		*)
 			local counter=$(__docker_pos_first_nonflag)
@@ -735,6 +770,7 @@ _docker_daemon() {
 		--registry-mirror
 		--storage-driver -s
 		--storage-opt
+		--userns-remap
 	"
 
 	case "$prev" in
@@ -748,7 +784,7 @@ _docker_daemon() {
 			return
 			;;
 		--cluster-store-opt)
-			COMPREPLY=( $( compgen -W "kv.cacertfile kv.certfile kv.keyfile" -S = -- "$cur" ) )
+			COMPREPLY=( $( compgen -W "discovery.heartbeat discovery.ttl kv.cacertfile kv.certfile kv.keyfile kv.path" -S = -- "$cur" ) )
 			__docker_nospace
 			return
 			;;
@@ -810,6 +846,15 @@ _docker_daemon() {
 			__docker_complete_log_options
 			return
 			;;
+		--userns-remap)
+			if [[ $cur == *:* ]] ; then
+				COMPREPLY=( $(compgen -g -- "${cur#*:}") )
+			else
+				COMPREPLY=( $(compgen -u -S : -- "$cur") )
+				__docker_nospace
+			fi
+			return
+			;;
 		$(__docker_to_extglob "$options_with_args") )
 			return
 			;;
@@ -860,37 +905,30 @@ _docker_diff() {
 }
 
 _docker_events() {
-	case "$prev" in
-		--filter|-f)
-			COMPREPLY=( $( compgen -S = -W "container event image" -- "$cur" ) )
-			__docker_nospace
-			return
-			;;
-		--since|--until)
-			return
-			;;
-	esac
-
-	case "${words[$cword-2]}$prev=" in
-		*container=*)
-			cur="${cur#=}"
+	local filter=$(__docker_map_key_of_current_option '-f|--filter')
+	case "$filter" in
+		container)
+			cur="${cur##*=}"
 			__docker_complete_containers_all
 			return
 			;;
-		*event=*)
+		event)
 			COMPREPLY=( $( compgen -W "
 				attach
 				commit
+				connect
 				copy
 				create
 				delete
 				destroy
 				die
+				disconnect
 				exec_create
 				exec_start
 				export
 				import
 				kill
+				mount
 				oom
 				pause
 				pull
@@ -902,16 +940,43 @@ _docker_events() {
 				stop
 				tag
 				top
+				unmount
 				unpause
 				untag
-			" -- "${cur#=}" ) )
+				update
+			" -- "${cur##*=}" ) )
 			return
 			;;
-		*image=*)
-			cur="${cur#=}"
+		image)
+			cur="${cur##*=}"
 			__docker_complete_images
 			return
 			;;
+		network)
+			cur="${cur##*=}"
+			__docker_complete_networks
+			return
+			;;
+		type)
+			COMPREPLY=( $( compgen -W "container image network volume" -- "${cur##*=}" ) )
+			return
+			;;
+		volume)
+			cur="${cur##*=}"
+			__docker_complete_volumes
+			return
+			;;
+	esac
+
+	case "$prev" in
+		--filter|-f)
+			COMPREPLY=( $( compgen -S = -W "container event image label network type volume" -- "$cur" ) )
+			__docker_nospace
+			return
+			;;
+		--since|--until)
+			return
+			;;
 	esac
 
 	case "$cur" in
@@ -978,10 +1043,8 @@ _docker_history() {
 _docker_images() {
 	case "$prev" in
 		--filter|-f)
-			COMPREPLY=( $( compgen -W "dangling=true label=" -- "$cur" ) )
-			if [ "$COMPREPLY" = "label=" ]; then
-				__docker_nospace
-			fi
+			COMPREPLY=( $( compgen -S = -W "dangling label" -- "$cur" ) )
+			__docker_nospace
 			return
 			;;
                 --format)
@@ -1153,12 +1216,41 @@ _docker_logs() {
 }
 
 _docker_network_connect() {
+	local options_with_args="
+		--alias
+		--ip
+		--ip6
+		--link
+	"
+
+	local boolean_options="
+		--help
+	"
+
+	case "$prev" in
+		--link)
+			case "$cur" in
+				*:*)
+					;;
+				*)
+					__docker_complete_containers_running
+					COMPREPLY=( $( compgen -W "${COMPREPLY[*]}" -S ':' ) )
+					__docker_nospace
+					;;
+			esac
+			return
+			;;
+		$(__docker_to_extglob "$options_with_args") )
+			return
+			;;
+	esac
+
 	case "$cur" in
 		-*)
-			COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
+			COMPREPLY=( $( compgen -W "$boolean_options $options_with_args" -- "$cur" ) )
 			;;
 		*)
-			local counter=$(__docker_pos_first_nonflag)
+			local counter=$( __docker_pos_first_nonflag $( __docker_to_alternatives "$options_with_args" ) )
 			if [ $cword -eq $counter ]; then
 				__docker_complete_networks
 			elif [ $cword -eq $(($counter + 1)) ]; then
@@ -1170,7 +1262,7 @@ _docker_network_connect() {
 
 _docker_network_create() {
 	case "$prev" in
-		--aux-address|--gateway|--ip-range|--opt|-o|--subnet)
+		--aux-address|--gateway|--ip-range|--ipam-opt|--opt|-o|--subnet)
 			return
 			;;
 		--ipam-driver)
@@ -1189,7 +1281,7 @@ _docker_network_create() {
 
 	case "$cur" in
 		-*)
-			COMPREPLY=( $( compgen -W "--aux-address --driver -d --gateway --help --ip-range --ipam-driver --opt -o --subnet" -- "$cur" ) )
+			COMPREPLY=( $( compgen -W "--aux-address --driver -d --gateway --help --internal --ip-range --ipam-driver --ipam-opt --opt -o --subnet" -- "$cur" ) )
 			;;
 	esac
 }
@@ -1350,7 +1442,7 @@ _docker_ps() {
 			return
 			;;
 		*status=*)
-			COMPREPLY=( $( compgen -W "exited paused restarting running" -- "${cur#=}" ) )
+			COMPREPLY=( $( compgen -W "created dead exited paused restarting running" -- "${cur#=}" ) )
 			return
 			;;
 	esac
@@ -1365,7 +1457,7 @@ _docker_ps() {
 _docker_pull() {
 	case "$cur" in
 		-*)
-			COMPREPLY=( $( compgen -W "--all-tags -a --help" -- "$cur" ) )
+			COMPREPLY=( $( compgen -W "--all-tags -a --disable-content-trust=false --help" -- "$cur" ) )
 			;;
 		*)
 			local counter=$(__docker_pos_first_nonflag)
@@ -1387,7 +1479,7 @@ _docker_pull() {
 _docker_push() {
 	case "$cur" in
 		-*)
-			COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
+			COMPREPLY=( $( compgen -W "--disable-content-trust=false --help" -- "$cur" ) )
 			;;
 		*)
 			local counter=$(__docker_pos_first_nonflag)
@@ -1488,6 +1580,8 @@ _docker_run() {
 		--expose
 		--group-add
 		--hostname -h
+		--ip
+		--ip6
 		--ipc
 		--isolation
 		--kernel-memory
@@ -1503,6 +1597,7 @@ _docker_run() {
 		--memory-reservation
 		--name
 		--net
+		--net-alias
 		--oom-score-adj
 		--pid
 		--publish -p
@@ -1903,7 +1998,15 @@ _docker_volume_inspect() {
 _docker_volume_ls() {
 	case "$prev" in
 		--filter|-f)
-			COMPREPLY=( $( compgen -W "dangling=true" -- "$cur" ) )
+			COMPREPLY=( $( compgen -S = -W "dangling" -- "$cur" ) )
+			__docker_nospace
+			return
+			;;
+	esac
+
+	case "${words[$cword-2]}$prev=" in
+		*dangling=*)
+			COMPREPLY=( $( compgen -W "true false" -- "${cur#=}" ) )
 			return
 			;;
 	esac

+ 52 - 9
contrib/completion/zsh/_docker

@@ -204,7 +204,7 @@ __docker_get_log_options() {
     gelf_options=("env" "gelf-address" "labels" "tag")
     journald_options=("env" "labels")
     json_file_options=("env" "labels" "max-file" "max-size")
-    syslog_options=("syslog-address" "syslog-facility" "tag")
+    syslog_options=("syslog-address" "syslog-tls-ca-cert" "syslog-tls-cert" "syslog-tls-key" "syslog-tls-skip-verify" "syslog-facility" "tag")
     splunk_options=("env" "labels" "splunk-caname" "splunk-capath" "splunk-index" "splunk-insecureskipverify" "splunk-source" "splunk-sourcetype" "splunk-token" "splunk-url" "tag")
 
     [[ $log_driver = (awslogs|all) ]] && _describe -t awslogs-options "awslogs options" awslogs_options "$@" && ret=0
@@ -231,6 +231,17 @@ __docker_log_options() {
     return ret
 }
 
+__docker_complete_detach_keys() {
+    [[ $PREFIX = -* ]] && return 1
+    integer ret=1
+
+    compset -P "*,"
+    keys=(${:-{a-z}})
+    ctrl_keys=(${:-ctrl-{{a-z},{@,'[','\\','^',']',_}}})
+    _describe -t detach_keys "[a-z]" keys -qS "," && ret=0
+    _describe -t detach_keys-ctrl "'ctrl-' + 'a-z @ [ \\\\ ] ^ _'" ctrl_keys -qS "," && ret=0
+}
+
 __docker_networks() {
     [[ $PREFIX = -* ]] && return 1
     integer ret=1
@@ -291,24 +302,46 @@ __docker_network_subcommand() {
     opts_help=("(: -)--help[Print usage]")
 
     case "$words[1]" in
-        (connect|disconnect)
+        (connect)
             _arguments $(__docker_arguments) \
                 $opts_help \
+                "($help)*--alias=[Add network-scoped alias for the container]:alias: " \
+                "($help)--ip=[Container IPv4 address]:IPv4: " \
+                "($help)--ip6=[Container IPv6 address]:IPv6: " \
+                "($help)*--link=[Add a link to another container]:link:->link" \
                 "($help -)1:network:__docker_networks" \
-                "($help -)2:containers:__docker_runningcontainers" && ret=0
+                "($help -)2:containers:__docker_containers" && ret=0
+
+            case $state in
+                (link)
+                    if compset -P "*:"; then
+                        _wanted alias expl "Alias" compadd -E "" && ret=0
+                    else
+                        __docker_runningcontainers -qS ":" && ret=0
+                    fi
+                    ;;
+            esac
             ;;
         (create)
             _arguments $(__docker_arguments) -A '-*' \
                 $opts_help \
+                "($help)*--aux-address[Auxiliary ipv4 or ipv6 addresses used by network driver]:key=IP: " \
                 "($help -d --driver)"{-d=,--driver=}"[Driver to manage the Network]:driver:(null host bridge overlay)" \
+                "($help)*--gateway=[ipv4 or ipv6 Gateway for the master subnet]:IP: " \
+                "($help)--internal[Restricts external access to the network]" \
+                "($help)*--ip-range=[Allocate container ip from a sub-range]:IP/mask: " \
                 "($help)--ipam-driver=[IP Address Management Driver]:driver:(default)" \
+                "($help)*--ipam-opt=[Set custom IPAM plugin options]:opt=value: " \
+                "($help)*"{-o=,--opt=}"[Set driver specific options]:opt=value: " \
                 "($help)*--subnet=[Subnet in CIDR format that represents a network segment]:IP/mask: " \
-                "($help)*--ip-range=[Allocate container ip from a sub-range]:IP/mask: " \
-                "($help)*--gateway=[ipv4 or ipv6 Gateway for the master subnet]:IP: " \
-                "($help)*--aux-address[Auxiliary ipv4 or ipv6 addresses used by network driver]:key=IP: " \
-                "($help)*"{-o=,--opt=}"[Set driver specific options]:key=value: " \
                 "($help -)1:Network Name: " && ret=0
             ;;
+        (disconnect)
+            _arguments $(__docker_arguments) \
+                $opts_help \
+                "($help -)1:network:__docker_networks" \
+                "($help -)2:containers:__docker_containers" && ret=0
+            ;;
         (inspect)
             _arguments $(__docker_arguments) \
                 $opts_help \
@@ -485,6 +518,8 @@ __docker_subcommand() {
         "($help)*--group-add=[Add additional groups to run as]:group:_groups"
         "($help -h --hostname)"{-h=,--hostname=}"[Container host name]:hostname:_hosts"
         "($help -i --interactive)"{-i,--interactive}"[Keep stdin open even if not attached]"
+        "($help)--ip=[Container IPv4 address]:IPv4: "
+        "($help)--ip6=[Container IPv6 address]:IPv6: "
         "($help)--ipc=[IPC namespace to use]:IPC namespace: "
         "($help)*--link=[Add link to another container]:link:->link"
         "($help)*"{-l=,--label=}"[Set meta data on a container]:label: "
@@ -493,6 +528,7 @@ __docker_subcommand() {
         "($help)--mac-address=[Container MAC address]:MAC address: "
         "($help)--name=[Container name]:name: "
         "($help)--net=[Connect a container to a network]:network mode:(bridge none container host)"
+        "($help)*--net-alias=[Add network-scoped alias for the container]:alias: "
         "($help)--oom-kill-disable[Disable OOM Killer]"
         "($help)--oom-score-adj[Tune the host's OOM preferences for containers (accepts -1000 to 1000)]"
         "($help -P --publish-all)"{-P,--publish-all}"[Publish all exposed ports]"
@@ -515,11 +551,15 @@ __docker_subcommand() {
         "($help)--kernel-memory=[Kernel memory limit in bytes.]:Memory limit: "
         "($help)--memory-reservation=[Memory soft limit]:Memory limit: "
     )
+    opts_attach_exec_run_start=(
+        "($help)--detach-keys=[Specify the escape key sequence used to detach a container]:sequence:__docker_complete_detach_keys"
+    )
 
     case "$words[1]" in
         (attach)
             _arguments $(__docker_arguments) \
                 $opts_help \
+                $opts_attach_exec_run_start \
                 "($help)--no-stdin[Do not attach stdin]" \
                 "($help)--sig-proxy[Proxy all received signals to the process (non-TTY mode only)]" \
                 "($help -):containers:__docker_runningcontainers" && ret=0
@@ -552,6 +592,7 @@ __docker_subcommand() {
         (cp)
             _arguments $(__docker_arguments) \
                 $opts_help \
+                "($help -L --follow-link)"{-L,--follow-link}"[Always follow symbol link in SRC_PATH]" \
                 "($help -)1:container:->container" \
                 "($help -)2:hostpath:_files" && ret=0
             case $state in
@@ -650,7 +691,7 @@ __docker_subcommand() {
                     if compset -P '*='; then
                         _files && ret=0
                     else
-                        opts=('kv.cacertfile' 'kv.certfile' 'kv.keyfile')
+                        opts=('discovery.heartbeat' 'discovery.ttl' 'kv.cacertfile' 'kv.certfile' 'kv.keyfile' 'kv.path')
                         _describe -t cluster-store-opts "Cluster Store Options" opts -qS "=" && ret=0
                     fi
                     ;;
@@ -680,6 +721,7 @@ __docker_subcommand() {
             local state
             _arguments $(__docker_arguments) \
                 $opts_help \
+                $opts_attach_exec_run_start \
                 "($help -d --detach)"{-d,--detach}"[Detached mode: leave the container running in the background]" \
                 "($help -i --interactive)"{-i,--interactive}"[Keep stdin open even if not attached]" \
                 "($help)--privileged[Give extended Linux capabilities to the command]" \
@@ -874,6 +916,7 @@ __docker_subcommand() {
                 $opts_build_create_run_update \
                 $opts_create_run \
                 $opts_create_run_update \
+                $opts_attach_exec_run_start \
                 "($help -d --detach)"{-d,--detach}"[Detached mode: leave the container running in the background]" \
                 "($help)--rm[Remove intermediate containers when it exits]" \
                 "($help)--sig-proxy[Proxy all received signals to the process (non-TTY mode only)]" \
@@ -910,6 +953,7 @@ __docker_subcommand() {
         (start)
             _arguments $(__docker_arguments) \
                 $opts_help \
+                $opts_attach_exec_run_start \
                 "($help -a --attach)"{-a,--attach}"[Attach container's stdout/stderr and forward all signals]" \
                 "($help -i --interactive)"{-i,--interactive}"[Attach container's stding]" \
                 "($help -)*:containers:__docker_stoppedcontainers" && ret=0
@@ -924,7 +968,6 @@ __docker_subcommand() {
         (tag)
             _arguments $(__docker_arguments) \
                 $opts_help \
-                "($help -f --force)"{-f,--force}"[force]"\
                 "($help -):source:__docker_images"\
                 "($help -):destination:__docker_repositories_with_tags" && ret=0
             ;;

+ 1 - 0
contrib/init/systemd/docker.service

@@ -11,6 +11,7 @@ MountFlags=slave
 LimitNOFILE=1048576
 LimitNPROC=1048576
 LimitCORE=infinity
+TasksMax=1048576
 TimeoutStartSec=0
 
 [Install]

+ 7 - 0
contrib/init/sysvinit-debian/docker.default

@@ -1,5 +1,12 @@
 # Docker Upstart and SysVinit configuration file
 
+#
+# THIS FILE DOES NOT APPLY TO SYSTEMD
+#
+#   Please see the documentation for "systemd drop-ins":
+#   https://docs.docker.com/engine/articles/systemd/
+#
+
 # Customize location of Docker binary (especially for development testing).
 #DOCKER="/usr/local/bin/docker"
 

+ 1 - 1
contrib/syntax/vim/syntax/dockerfile.vim

@@ -11,7 +11,7 @@ let b:current_syntax = "dockerfile"
 
 syntax case ignore
 
-syntax match dockerfileKeyword /\v^\s*(ONBUILD\s+)?(ADD|CMD|ENTRYPOINT|ENV|EXPOSE|FROM|MAINTAINER|RUN|USER|LABEL|VOLUME|WORKDIR|COPY|STOPSIGNAL)\s/
+syntax match dockerfileKeyword /\v^\s*(ONBUILD\s+)?(ADD|CMD|ENTRYPOINT|ENV|EXPOSE|FROM|MAINTAINER|RUN|USER|LABEL|VOLUME|WORKDIR|COPY|STOPSIGNAL|ARG)\s/
 highlight link dockerfileKeyword Keyword
 
 syntax region dockerfileString start=/\v"/ skip=/\v\\./ end=/\v"/

+ 83 - 22
daemon/config.go

@@ -21,6 +21,15 @@ const (
 	disableNetworkBridge = "none"
 )
 
+// flatOptions contains configuration keys
+// that MUST NOT be parsed as deep structures.
+// Use this to differentiate these options
+// with others like the ones in CommonTLSOptions.
+var flatOptions = map[string]bool{
+	"cluster-store-opts": true,
+	"log-opts":           true,
+}
+
 // LogConfig represents the default log configuration.
 // It includes json tags to deserialize configuration from a file
 // using the same names that the flags in the command line uses.
@@ -45,7 +54,6 @@ type CommonTLSOptions struct {
 type CommonConfig struct {
 	AuthorizationPlugins []string            `json:"authorization-plugins,omitempty"` // AuthorizationPlugins holds list of authorization plugins
 	AutoRestart          bool                `json:"-"`
-	Bridge               bridgeConfig        `json:"-"` // Bridge holds bridge network specific configuration.
 	Context              map[string][]string `json:"-"`
 	DisableBridge        bool                `json:"-"`
 	DNS                  []string            `json:"dns,omitempty"`
@@ -56,7 +64,6 @@ type CommonConfig struct {
 	GraphDriver          string              `json:"storage-driver,omitempty"`
 	GraphOptions         []string            `json:"storage-opts,omitempty"`
 	Labels               []string            `json:"labels,omitempty"`
-	LogConfig            LogConfig           `json:"log-config,omitempty"`
 	Mtu                  int                 `json:"mtu,omitempty"`
 	Pidfile              string              `json:"pidfile,omitempty"`
 	Root                 string              `json:"graph,omitempty"`
@@ -76,14 +83,20 @@ type CommonConfig struct {
 	// reachable by other hosts.
 	ClusterAdvertise string `json:"cluster-advertise,omitempty"`
 
-	Debug      bool             `json:"debug,omitempty"`
-	Hosts      []string         `json:"hosts,omitempty"`
-	LogLevel   string           `json:"log-level,omitempty"`
-	TLS        bool             `json:"tls,omitempty"`
-	TLSVerify  bool             `json:"tls-verify,omitempty"`
-	TLSOptions CommonTLSOptions `json:"tls-opts,omitempty"`
+	Debug     bool     `json:"debug,omitempty"`
+	Hosts     []string `json:"hosts,omitempty"`
+	LogLevel  string   `json:"log-level,omitempty"`
+	TLS       bool     `json:"tls,omitempty"`
+	TLSVerify bool     `json:"tlsverify,omitempty"`
+
+	// Embedded structs that allow config
+	// deserialization without the full struct.
+	CommonTLSOptions
+	LogConfig
+	bridgeConfig // bridgeConfig holds bridge network specific configuration.
 
 	reloadLock sync.Mutex
+	valuesSet  map[string]interface{}
 }
 
 // InstallCommonFlags adds command-line options to the top-level flag parser for
@@ -112,6 +125,16 @@ func (config *Config) InstallCommonFlags(cmd *flag.FlagSet, usageFn func(string)
 	cmd.Var(opts.NewNamedMapOpts("cluster-store-opts", config.ClusterOpts, nil), []string{"-cluster-store-opt"}, usageFn("Set cluster store options"))
 }
 
+// IsValueSet returns true if a configuration value
+// was explicitly set in the configuration file.
+func (config *Config) IsValueSet(name string) bool {
+	if config.valuesSet == nil {
+		return false
+	}
+	_, ok := config.valuesSet[name]
+	return ok
+}
+
 func parseClusterAdvertiseSettings(clusterStore, clusterAdvertise string) (string, error) {
 	if clusterAdvertise == "" {
 		return "", errDiscoveryDisabled
@@ -165,6 +188,7 @@ func getConflictFreeConfiguration(configFile string, flags *flag.FlagSet) (*Conf
 		return nil, err
 	}
 
+	var config Config
 	var reader io.Reader
 	if flags != nil {
 		var jsonConfig map[string]interface{}
@@ -173,41 +197,78 @@ func getConflictFreeConfiguration(configFile string, flags *flag.FlagSet) (*Conf
 			return nil, err
 		}
 
-		if err := findConfigurationConflicts(jsonConfig, flags); err != nil {
+		configSet := configValuesSet(jsonConfig)
+
+		if err := findConfigurationConflicts(configSet, flags); err != nil {
 			return nil, err
 		}
+
+		config.valuesSet = configSet
 	}
 
-	var config Config
 	reader = bytes.NewReader(b)
 	err = json.NewDecoder(reader).Decode(&config)
 	return &config, err
 }
 
-// findConfigurationConflicts iterates over the provided flags searching for
-// duplicated configurations. It returns an error with all the conflicts if
-// it finds any.
-func findConfigurationConflicts(config map[string]interface{}, flags *flag.FlagSet) error {
-	var conflicts []string
+// configValuesSet returns the configuration values explicitly set in the file.
+func configValuesSet(config map[string]interface{}) map[string]interface{} {
 	flatten := make(map[string]interface{})
 	for k, v := range config {
-		if m, ok := v.(map[string]interface{}); ok {
+		if m, isMap := v.(map[string]interface{}); isMap && !flatOptions[k] {
 			for km, vm := range m {
 				flatten[km] = vm
 			}
-		} else {
-			flatten[k] = v
+			continue
+		}
+
+		flatten[k] = v
+	}
+	return flatten
+}
+
+// findConfigurationConflicts iterates over the provided flags searching for
+// duplicated configurations and unknown keys. It returns an error with all the conflicts if
+// it finds any.
+func findConfigurationConflicts(config map[string]interface{}, flags *flag.FlagSet) error {
+	// 1. Search keys from the file that we don't recognize as flags.
+	unknownKeys := make(map[string]interface{})
+	for key, value := range config {
+		flagName := "-" + key
+		if flag := flags.Lookup(flagName); flag == nil {
+			unknownKeys[key] = value
+		}
+	}
+
+	// 2. Discard values that implement NamedOption.
+	// Their configuration name differs from their flag name, like `labels` and `label`.
+	unknownNamedConflicts := func(f *flag.Flag) {
+		if namedOption, ok := f.Value.(opts.NamedOption); ok {
+			if _, valid := unknownKeys[namedOption.Name()]; valid {
+				delete(unknownKeys, namedOption.Name())
+			}
 		}
 		}
 	}
 	}
+	flags.VisitAll(unknownNamedConflicts)
 
 
+	if len(unknownKeys) > 0 {
+		var unknown []string
+		for key := range unknownKeys {
+			unknown = append(unknown, key)
+		}
+		return fmt.Errorf("the following directives don't match any configuration option: %s", strings.Join(unknown, ", "))
+	}
+
+	var conflicts []string
 	printConflict := func(name string, flagValue, fileValue interface{}) string {
 		return fmt.Sprintf("%s: (from flag: %v, from file: %v)", name, flagValue, fileValue)
 	}
 
-	collectConflicts := func(f *flag.Flag) {
+	// 3. Search keys that are present as a flag and as a file option.
+	duplicatedConflicts := func(f *flag.Flag) {
 		// search option name in the json configuration payload if the value is a named option
 		if namedOption, ok := f.Value.(opts.NamedOption); ok {
-			if optsValue, ok := flatten[namedOption.Name()]; ok {
+			if optsValue, ok := config[namedOption.Name()]; ok {
 				conflicts = append(conflicts, printConflict(namedOption.Name(), f.Value.String(), optsValue))
 			}
 		} else {
@@ -215,7 +276,7 @@ func findConfigurationConflicts(config map[string]interface{}, flags *flag.FlagS
 			for _, name := range f.Names {
 				name = strings.TrimLeft(name, "-")
 
-				if value, ok := flatten[name]; ok {
+				if value, ok := config[name]; ok {
 					conflicts = append(conflicts, printConflict(name, f.Value.String(), value))
 					break
 				}
@@ -223,7 +284,7 @@ func findConfigurationConflicts(config map[string]interface{}, flags *flag.FlagS
 		}
 	}
 
-	flags.Visit(collectConflicts)
+	flags.Visit(duplicatedConflicts)
 
 	if len(conflicts) > 0 {
 		return fmt.Errorf("the following directives are specified both as a flag and in the configuration file: %s", strings.Join(conflicts, ", "))
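The daemon/config.go hunks above add a `flatOptions` set so that keys like `log-opts` are kept as nested maps while struct-style options (e.g. TLS settings) are flattened to top-level keys before conflict detection. A self-contained sketch of that flattening step — `flatOptions` and `configValuesSet` mirror the code in the diff, while the sample configuration values below are hypothetical:

```go
package main

import "fmt"

// flatOptions lists configuration keys that must NOT be parsed as
// deep structures (taken from the diff above).
var flatOptions = map[string]bool{
	"cluster-store-opts": true,
	"log-opts":           true,
}

// configValuesSet flattens nested config maps one level deep, except
// for keys listed in flatOptions, matching the daemon/config.go logic.
func configValuesSet(config map[string]interface{}) map[string]interface{} {
	flatten := make(map[string]interface{})
	for k, v := range config {
		if m, isMap := v.(map[string]interface{}); isMap && !flatOptions[k] {
			for km, vm := range m {
				flatten[km] = vm
			}
			continue
		}
		flatten[k] = v
	}
	return flatten
}

func main() {
	// Hypothetical daemon.json contents for illustration.
	cfg := map[string]interface{}{
		"tls-opts": map[string]interface{}{"tlscacert": "/certs/ca.pem"},
		"log-opts": map[string]interface{}{"max-size": "10m"},
		"debug":    true,
	}
	flat := configValuesSet(cfg)
	fmt.Println(flat["tlscacert"]) // nested struct option surfaces at the top level
	fmt.Println(flat["log-opts"])  // flat option is kept as a map
	fmt.Println(flat["debug"])     // scalar values are copied unchanged
}
```

The flattened keys are what `findConfigurationConflicts` compares against flag names, which is why only one level of nesting is unfolded.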

+ 42 - 9
daemon/config_test.go

@@ -89,21 +89,16 @@ func TestFindConfigurationConflicts(t *testing.T) {
 	config := map[string]interface{}{"authorization-plugins": "foobar"}
 	flags := mflag.NewFlagSet("test", mflag.ContinueOnError)
 
-	err := findConfigurationConflicts(config, flags)
-	if err != nil {
-		t.Fatal(err)
-	}
-
-	flags.String([]string{"authorization-plugins"}, "", "")
-	if err := flags.Set("authorization-plugins", "asdf"); err != nil {
+	flags.String([]string{"-authorization-plugins"}, "", "")
+	if err := flags.Set("-authorization-plugins", "asdf"); err != nil {
 		t.Fatal(err)
 	}
 
-	err = findConfigurationConflicts(config, flags)
+	err := findConfigurationConflicts(config, flags)
 	if err == nil {
 		t.Fatal("expected error, got nil")
 	}
-	if !strings.Contains(err.Error(), "authorization-plugins") {
+	if !strings.Contains(err.Error(), "authorization-plugins: (from flag: asdf, from file: foobar)") {
 		t.Fatalf("expected authorization-plugins conflict, got %v", err)
 	}
 }
@@ -175,3 +170,41 @@ func TestDaemonConfigurationMergeConflictsWithInnerStructs(t *testing.T) {
 		t.Fatalf("expected tlscacert conflict, got %v", err)
 	}
 }
+
+func TestFindConfigurationConflictsWithUnknownKeys(t *testing.T) {
+	config := map[string]interface{}{"tls-verify": "true"}
+	flags := mflag.NewFlagSet("test", mflag.ContinueOnError)
+
+	flags.Bool([]string{"-tlsverify"}, false, "")
+	err := findConfigurationConflicts(config, flags)
+	if err == nil {
+		t.Fatal("expected error, got nil")
+	}
+	if !strings.Contains(err.Error(), "the following directives don't match any configuration option: tls-verify") {
+		t.Fatalf("expected tls-verify conflict, got %v", err)
+	}
+}
+
+func TestFindConfigurationConflictsWithMergedValues(t *testing.T) {
+	var hosts []string
+	config := map[string]interface{}{"hosts": "tcp://127.0.0.1:2345"}
+	base := mflag.NewFlagSet("base", mflag.ContinueOnError)
+	base.Var(opts.NewNamedListOptsRef("hosts", &hosts, nil), []string{"H", "-host"}, "")
+
+	flags := mflag.NewFlagSet("test", mflag.ContinueOnError)
+	mflag.Merge(flags, base)
+
+	err := findConfigurationConflicts(config, flags)
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	flags.Set("-host", "unix:///var/run/docker.sock")
+	err = findConfigurationConflicts(config, flags)
+	if err == nil {
+		t.Fatal("expected error, got nil")
+	}
+	if !strings.Contains(err.Error(), "hosts: (from flag: [unix:///var/run/docker.sock], from file: tcp://127.0.0.1:2345)") {
+		t.Fatalf("expected hosts conflict, got %v", err)
+	}
+}

+ 26 - 26
daemon/config_unix.go

@@ -37,19 +37,19 @@ type Config struct {
 // bridgeConfig stores all the bridge driver specific
 // configuration.
 type bridgeConfig struct {
-	EnableIPv6                  bool
-	EnableIPTables              bool
-	EnableIPForward             bool
-	EnableIPMasq                bool
-	EnableUserlandProxy         bool
-	DefaultIP                   net.IP
-	Iface                       string
-	IP                          string
-	FixedCIDR                   string
-	FixedCIDRv6                 string
-	DefaultGatewayIPv4          net.IP
-	DefaultGatewayIPv6          net.IP
-	InterContainerCommunication bool
+	EnableIPv6                  bool   `json:"ipv6,omitempty"`
+	EnableIPTables              bool   `json:"iptables,omitempty"`
+	EnableIPForward             bool   `json:"ip-forward,omitempty"`
+	EnableIPMasq                bool   `json:"ip-mask,omitempty"`
+	EnableUserlandProxy         bool   `json:"userland-proxy,omitempty"`
+	DefaultIP                   net.IP `json:"ip,omitempty"`
+	Iface                       string `json:"bridge,omitempty"`
+	IP                          string `json:"bip,omitempty"`
+	FixedCIDR                   string `json:"fixed-cidr,omitempty"`
+	FixedCIDRv6                 string `json:"fixed-cidr-v6,omitempty"`
+	DefaultGatewayIPv4          net.IP `json:"default-gateway,omitempty"`
+	DefaultGatewayIPv6          net.IP `json:"default-gateway-v6,omitempty"`
+	InterContainerCommunication bool   `json:"icc,omitempty"`
 }
 
 // InstallFlags adds command-line options to the top-level flag parser for
@@ -65,19 +65,19 @@ func (config *Config) InstallFlags(cmd *flag.FlagSet, usageFn func(string) strin
 	cmd.StringVar(&config.SocketGroup, []string{"G", "-group"}, "docker", usageFn("Group for the unix socket"))
 	config.Ulimits = make(map[string]*units.Ulimit)
 	cmd.Var(runconfigopts.NewUlimitOpt(&config.Ulimits), []string{"-default-ulimit"}, usageFn("Set default ulimits for containers"))
-	cmd.BoolVar(&config.Bridge.EnableIPTables, []string{"#iptables", "-iptables"}, true, usageFn("Enable addition of iptables rules"))
-	cmd.BoolVar(&config.Bridge.EnableIPForward, []string{"#ip-forward", "-ip-forward"}, true, usageFn("Enable net.ipv4.ip_forward"))
-	cmd.BoolVar(&config.Bridge.EnableIPMasq, []string{"-ip-masq"}, true, usageFn("Enable IP masquerading"))
-	cmd.BoolVar(&config.Bridge.EnableIPv6, []string{"-ipv6"}, false, usageFn("Enable IPv6 networking"))
-	cmd.StringVar(&config.Bridge.IP, []string{"#bip", "-bip"}, "", usageFn("Specify network bridge IP"))
-	cmd.StringVar(&config.Bridge.Iface, []string{"b", "-bridge"}, "", usageFn("Attach containers to a network bridge"))
-	cmd.StringVar(&config.Bridge.FixedCIDR, []string{"-fixed-cidr"}, "", usageFn("IPv4 subnet for fixed IPs"))
-	cmd.StringVar(&config.Bridge.FixedCIDRv6, []string{"-fixed-cidr-v6"}, "", usageFn("IPv6 subnet for fixed IPs"))
-	cmd.Var(opts.NewIPOpt(&config.Bridge.DefaultGatewayIPv4, ""), []string{"-default-gateway"}, usageFn("Container default gateway IPv4 address"))
-	cmd.Var(opts.NewIPOpt(&config.Bridge.DefaultGatewayIPv6, ""), []string{"-default-gateway-v6"}, usageFn("Container default gateway IPv6 address"))
-	cmd.BoolVar(&config.Bridge.InterContainerCommunication, []string{"#icc", "-icc"}, true, usageFn("Enable inter-container communication"))
-	cmd.Var(opts.NewIPOpt(&config.Bridge.DefaultIP, "0.0.0.0"), []string{"#ip", "-ip"}, usageFn("Default IP when binding container ports"))
-	cmd.BoolVar(&config.Bridge.EnableUserlandProxy, []string{"-userland-proxy"}, true, usageFn("Use userland proxy for loopback traffic"))
+	cmd.BoolVar(&config.bridgeConfig.EnableIPTables, []string{"#iptables", "-iptables"}, true, usageFn("Enable addition of iptables rules"))
+	cmd.BoolVar(&config.bridgeConfig.EnableIPForward, []string{"#ip-forward", "-ip-forward"}, true, usageFn("Enable net.ipv4.ip_forward"))
+	cmd.BoolVar(&config.bridgeConfig.EnableIPMasq, []string{"-ip-masq"}, true, usageFn("Enable IP masquerading"))
+	cmd.BoolVar(&config.bridgeConfig.EnableIPv6, []string{"-ipv6"}, false, usageFn("Enable IPv6 networking"))
+	cmd.StringVar(&config.bridgeConfig.IP, []string{"#bip", "-bip"}, "", usageFn("Specify network bridge IP"))
+	cmd.StringVar(&config.bridgeConfig.Iface, []string{"b", "-bridge"}, "", usageFn("Attach containers to a network bridge"))
+	cmd.StringVar(&config.bridgeConfig.FixedCIDR, []string{"-fixed-cidr"}, "", usageFn("IPv4 subnet for fixed IPs"))
+	cmd.StringVar(&config.bridgeConfig.FixedCIDRv6, []string{"-fixed-cidr-v6"}, "", usageFn("IPv6 subnet for fixed IPs"))
+	cmd.Var(opts.NewIPOpt(&config.bridgeConfig.DefaultGatewayIPv4, ""), []string{"-default-gateway"}, usageFn("Container default gateway IPv4 address"))
+	cmd.Var(opts.NewIPOpt(&config.bridgeConfig.DefaultGatewayIPv6, ""), []string{"-default-gateway-v6"}, usageFn("Container default gateway IPv6 address"))
+	cmd.BoolVar(&config.bridgeConfig.InterContainerCommunication, []string{"#icc", "-icc"}, true, usageFn("Enable inter-container communication"))
+	cmd.Var(opts.NewIPOpt(&config.bridgeConfig.DefaultIP, "0.0.0.0"), []string{"#ip", "-ip"}, usageFn("Default IP when binding container ports"))
+	cmd.BoolVar(&config.bridgeConfig.EnableUserlandProxy, []string{"-userland-proxy"}, true, usageFn("Use userland proxy for loopback traffic"))
 	cmd.BoolVar(&config.EnableCors, []string{"#api-enable-cors", "#-api-enable-cors"}, false, usageFn("Enable CORS headers in the remote API, this is deprecated by --api-cors-header"))
 	cmd.StringVar(&config.CorsHeaders, []string{"-api-cors-header"}, "", usageFn("Set CORS headers in the remote API"))
 	cmd.StringVar(&config.CgroupParent, []string{"-cgroup-parent"}, "", usageFn("Set parent cgroup for all containers"))

+ 2 - 2
daemon/config_windows.go

@@ -15,7 +15,7 @@ var (
 // bridgeConfig stores all the bridge driver specific
 // configuration.
 type bridgeConfig struct {
-	VirtualSwitchName string
+	VirtualSwitchName string `json:"bridge,omitempty"`
 }
 
 // Config defines the configuration of a docker daemon.
@@ -37,5 +37,5 @@ func (config *Config) InstallFlags(cmd *flag.FlagSet, usageFn func(string) strin
 	config.InstallCommonFlags(cmd, usageFn)
 
 	// Then platform-specific install flags.
-	cmd.StringVar(&config.Bridge.VirtualSwitchName, []string{"b", "-bridge"}, "", "Attach containers to a virtual switch")
+	cmd.StringVar(&config.bridgeConfig.VirtualSwitchName, []string{"b", "-bridge"}, "", "Attach containers to a virtual switch")
 }

+ 36 - 34
daemon/container_operations_unix.go

@@ -21,7 +21,6 @@ import (
 	"github.com/docker/docker/pkg/fileutils"
 	"github.com/docker/docker/pkg/idtools"
 	"github.com/docker/docker/pkg/mount"
-	"github.com/docker/docker/pkg/parsers"
 	"github.com/docker/docker/pkg/stringid"
 	"github.com/docker/docker/runconfig"
 	containertypes "github.com/docker/engine-api/types/container"
@@ -209,10 +208,12 @@ func (daemon *Daemon) populateCommand(c *container.Container, env []string) erro
 		BlkioThrottleWriteBpsDevice:  writeBpsDevice,
 		BlkioThrottleReadIOpsDevice:  readIOpsDevice,
 		BlkioThrottleWriteIOpsDevice: writeIOpsDevice,
-		OomKillDisable:               *c.HostConfig.OomKillDisable,
 		MemorySwappiness:             -1,
 	}
 
+	if c.HostConfig.OomKillDisable != nil {
+		resources.OomKillDisable = *c.HostConfig.OomKillDisable
+	}
 	if c.HostConfig.MemorySwappiness != nil {
 		resources.MemorySwappiness = *c.HostConfig.MemorySwappiness
 	}
@@ -249,16 +250,8 @@ func (daemon *Daemon) populateCommand(c *container.Container, env []string) erro
 	defaultCgroupParent := "/docker"
 	if daemon.configStore.CgroupParent != "" {
 		defaultCgroupParent = daemon.configStore.CgroupParent
-	} else {
-		for _, option := range daemon.configStore.ExecOptions {
-			key, val, err := parsers.ParseKeyValueOpt(option)
-			if err != nil || !strings.EqualFold(key, "native.cgroupdriver") {
-				continue
-			}
-			if val == "systemd" {
-				defaultCgroupParent = "system.slice"
-			}
-		}
+	} else if daemon.usingSystemd() {
+		defaultCgroupParent = "system.slice"
 	}
 	c.Command = &execdriver.Command{
 		CommonCommand: execdriver.CommonCommand{
@@ -513,7 +506,7 @@ func (daemon *Daemon) updateEndpointNetworkSettings(container *container.Contain
 	}
 
 	if container.HostConfig.NetworkMode == containertypes.NetworkMode("bridge") {
-		container.NetworkSettings.Bridge = daemon.configStore.Bridge.Iface
+		container.NetworkSettings.Bridge = daemon.configStore.bridgeConfig.Iface
 	}
 
 	return nil
@@ -658,6 +651,9 @@ func hasUserDefinedIPAddress(epConfig *networktypes.EndpointSettings) bool {
 
 // User specified ip address is acceptable only for networks with user specified subnets.
 func validateNetworkingConfig(n libnetwork.Network, epConfig *networktypes.EndpointSettings) error {
+	if n == nil || epConfig == nil {
+		return nil
+	}
 	if !hasUserDefinedIPAddress(epConfig) {
 		return nil
 	}
@@ -704,7 +700,7 @@ func cleanOperationalData(es *networktypes.EndpointSettings) {
 	es.MacAddress = ""
 }
 
-func (daemon *Daemon) updateNetworkConfig(container *container.Container, idOrName string, updateSettings bool) (libnetwork.Network, error) {
+func (daemon *Daemon) updateNetworkConfig(container *container.Container, idOrName string, endpointConfig *networktypes.EndpointSettings, updateSettings bool) (libnetwork.Network, error) {
 	if container.HostConfig.NetworkMode.IsContainer() {
 		return nil, runconfig.ErrConflictSharedNetwork
 	}
@@ -715,11 +711,24 @@ func (daemon *Daemon) updateNetworkConfig(container *container.Container, idOrNa
 		return nil, nil
 	}
 
+	if !containertypes.NetworkMode(idOrName).IsUserDefined() {
+		if hasUserDefinedIPAddress(endpointConfig) {
+			return nil, runconfig.ErrUnsupportedNetworkAndIP
+		}
+		if endpointConfig != nil && len(endpointConfig.Aliases) > 0 {
+			return nil, runconfig.ErrUnsupportedNetworkAndAlias
+		}
+	}
+
 	n, err := daemon.FindNetwork(idOrName)
 	if err != nil {
 		return nil, err
 	}
 
+	if err := validateNetworkingConfig(n, endpointConfig); err != nil {
+		return nil, err
+	}
+
 	if updateSettings {
 		if err := daemon.updateNetworkSettings(container, n); err != nil {
 			return nil, err
@@ -734,9 +743,12 @@ func (daemon *Daemon) ConnectToNetwork(container *container.Container, idOrName
 		if container.RemovalInProgress || container.Dead {
 			return derr.ErrorCodeRemovalContainer.WithArgs(container.ID)
 		}
-		if _, err := daemon.updateNetworkConfig(container, idOrName, true); err != nil {
+		if _, err := daemon.updateNetworkConfig(container, idOrName, endpointConfig, true); err != nil {
 			return err
 		}
+		if endpointConfig != nil {
+			container.NetworkSettings.Networks[idOrName] = endpointConfig
+		}
 	} else {
 		if err := daemon.connectToNetwork(container, idOrName, endpointConfig, true); err != nil {
 			return err
@@ -749,7 +761,7 @@ func (daemon *Daemon) ConnectToNetwork(container *container.Container, idOrName
 }
 
 func (daemon *Daemon) connectToNetwork(container *container.Container, idOrName string, endpointConfig *networktypes.EndpointSettings, updateSettings bool) (err error) {
-	n, err := daemon.updateNetworkConfig(container, idOrName, updateSettings)
+	n, err := daemon.updateNetworkConfig(container, idOrName, endpointConfig, updateSettings)
 	if err != nil {
 		return err
 	}
@@ -757,25 +769,10 @@ func (daemon *Daemon) connectToNetwork(container *container.Container, idOrName
 		return nil
 	}
 
-	if !containertypes.NetworkMode(idOrName).IsUserDefined() && hasUserDefinedIPAddress(endpointConfig) {
-		return runconfig.ErrUnsupportedNetworkAndIP
-	}
-
-	if !containertypes.NetworkMode(idOrName).IsUserDefined() && len(endpointConfig.Aliases) > 0 {
-		return runconfig.ErrUnsupportedNetworkAndAlias
-	}
-
 	controller := daemon.netController
 
-	if err := validateNetworkingConfig(n, endpointConfig); err != nil {
-		return err
-	}
-
-	if endpointConfig != nil {
-		container.NetworkSettings.Networks[n.Name()] = endpointConfig
-	}
-
-	createOptions, err := container.BuildCreateEndpointOptions(n)
+	sb := daemon.getNetworkSandbox(container)
+	createOptions, err := container.BuildCreateEndpointOptions(n, endpointConfig, sb)
 	if err != nil {
 		return err
 	}
@@ -793,11 +790,14 @@ func (daemon *Daemon) connectToNetwork(container *container.Container, idOrName
 		}
 	}()
 
+	if endpointConfig != nil {
+		container.NetworkSettings.Networks[n.Name()] = endpointConfig
+	}
+
 	if err := daemon.updateEndpointNetworkSettings(container, n, ep); err != nil {
 		return err
 	}
 
-	sb := daemon.getNetworkSandbox(container)
 	if sb == nil {
 		options, err := daemon.buildSandboxOptions(container, n)
 		if err != nil {
@@ -1000,6 +1000,8 @@ func (daemon *Daemon) releaseNetwork(container *container.Container) {
 
 	sid := container.NetworkSettings.SandboxID
 	settings := container.NetworkSettings.Networks
+	container.NetworkSettings.Ports = nil
+
 	if sid == "" || len(settings) == 0 {
 		return
 	}

+ 1 - 1
daemon/container_operations_windows.go

@@ -54,7 +54,7 @@ func (daemon *Daemon) populateCommand(c *container.Container, env []string) erro
 		if !c.Config.NetworkDisabled {
 			en.Interface = &execdriver.NetworkInterface{
 				MacAddress:   c.Config.MacAddress,
-				Bridge:       daemon.configStore.Bridge.VirtualSwitchName,
+				Bridge:       daemon.configStore.bridgeConfig.VirtualSwitchName,
 				PortBindings: c.HostConfig.PortBindings,
 
 				// TODO Windows. Include IPAddress. There already is a

+ 6 - 1
daemon/create.go

@@ -26,6 +26,11 @@ func (daemon *Daemon) ContainerCreate(params types.ContainerCreateConfig) (types
 		return types.ContainerCreateResponse{Warnings: warnings}, err
 	}
 
+	err = daemon.verifyNetworkingConfig(params.NetworkingConfig)
+	if err != nil {
+		return types.ContainerCreateResponse{}, err
+	}
+
 	if params.HostConfig == nil {
 		params.HostConfig = &containertypes.HostConfig{}
 	}
@@ -105,7 +110,7 @@ func (daemon *Daemon) create(params types.ContainerCreateConfig) (retC *containe
 		}
 	}()
 
-	if err := daemon.createContainerPlatformSpecificSettings(container, params.Config, params.HostConfig, img); err != nil {
+	if err := daemon.createContainerPlatformSpecificSettings(container, params.Config, params.HostConfig); err != nil {
 		return nil, err
 	}
 

+ 2 - 14
daemon/create_unix.go

@@ -9,15 +9,13 @@ import (
 	"github.com/Sirupsen/logrus"
 	"github.com/docker/docker/container"
 	derr "github.com/docker/docker/errors"
-	"github.com/docker/docker/image"
 	"github.com/docker/docker/pkg/stringid"
-	"github.com/docker/docker/volume"
 	containertypes "github.com/docker/engine-api/types/container"
 	"github.com/opencontainers/runc/libcontainer/label"
 )
 
 // createContainerPlatformSpecificSettings performs platform specific container create functionality
-func (daemon *Daemon) createContainerPlatformSpecificSettings(container *container.Container, config *containertypes.Config, hostConfig *containertypes.HostConfig, img *image.Image) error {
+func (daemon *Daemon) createContainerPlatformSpecificSettings(container *container.Container, config *containertypes.Config, hostConfig *containertypes.HostConfig) error {
 	if err := daemon.Mount(container); err != nil {
 		return err
 	}
@@ -46,17 +44,7 @@ func (daemon *Daemon) createContainerPlatformSpecificSettings(container *contain
 			return derr.ErrorCodeMountOverFile.WithArgs(path)
 		}
 
-		volumeDriver := hostConfig.VolumeDriver
-		if destination != "" && img != nil {
-			if _, ok := img.ContainerConfig.Volumes[destination]; ok {
-				// check for whether bind is not specified and then set to local
-				if _, ok := container.MountPoints[destination]; !ok {
-					volumeDriver = volume.DefaultDriverName
-				}
-			}
-		}
-
-		v, err := daemon.volumes.CreateWithRef(name, volumeDriver, container.ID, nil)
+		v, err := daemon.volumes.CreateWithRef(name, hostConfig.VolumeDriver, container.ID, nil)
 		if err != nil {
 			return err
 		}

+ 1 - 10
daemon/create_windows.go

@@ -4,14 +4,13 @@ import (
 	"fmt"
 
 	"github.com/docker/docker/container"
-	"github.com/docker/docker/image"
 	"github.com/docker/docker/pkg/stringid"
 	"github.com/docker/docker/volume"
 	containertypes "github.com/docker/engine-api/types/container"
 )
 
 // createContainerPlatformSpecificSettings performs platform specific container create functionality
-func (daemon *Daemon) createContainerPlatformSpecificSettings(container *container.Container, config *containertypes.Config, hostConfig *containertypes.HostConfig, img *image.Image) error {
+func (daemon *Daemon) createContainerPlatformSpecificSettings(container *container.Container, config *containertypes.Config, hostConfig *containertypes.HostConfig) error {
 	for spec := range config.Volumes {
 
 		mp, err := volume.ParseMountSpec(spec, hostConfig.VolumeDriver)
@@ -31,14 +30,6 @@ func (daemon *Daemon) createContainerPlatformSpecificSettings(container *contain
 		}
 
 		volumeDriver := hostConfig.VolumeDriver
-		if mp.Destination != "" && img != nil {
-			if _, ok := img.ContainerConfig.Volumes[mp.Destination]; ok {
-				// check for whether bind is not specified and then set to local
-				if _, ok := container.MountPoints[mp.Destination]; !ok {
-					volumeDriver = volume.DefaultDriverName
-				}
-			}
-		}
 
 		// Create the volume in the volume driver. If it doesn't exist,
 		// a new one will be created.

+ 82 - 81
daemon/daemon.go

@@ -31,6 +31,7 @@ import (
 	containertypes "github.com/docker/engine-api/types/container"
 	eventtypes "github.com/docker/engine-api/types/events"
 	"github.com/docker/engine-api/types/filters"
+	networktypes "github.com/docker/engine-api/types/network"
 	registrytypes "github.com/docker/engine-api/types/registry"
 	"github.com/docker/engine-api/types/strslice"
 	// register graph drivers
@@ -99,46 +100,11 @@ func (e ErrImageDoesNotExist) Error() string {
 	return fmt.Sprintf("no such id: %s", e.RefOrID)
 }
 
-type contStore struct {
-	s map[string]*container.Container
-	sync.Mutex
-}
-
-func (c *contStore) Add(id string, cont *container.Container) {
-	c.Lock()
-	c.s[id] = cont
-	c.Unlock()
-}
-
-func (c *contStore) Get(id string) *container.Container {
-	c.Lock()
-	res := c.s[id]
-	c.Unlock()
-	return res
-}
-
-func (c *contStore) Delete(id string) {
-	c.Lock()
-	delete(c.s, id)
-	c.Unlock()
-}
-
-func (c *contStore) List() []*container.Container {
-	containers := new(History)
-	c.Lock()
-	for _, cont := range c.s {
-		containers.Add(cont)
-	}
-	c.Unlock()
-	containers.sort()
-	return *containers
-}
-
 // Daemon holds information about the Docker daemon.
 type Daemon struct {
 	ID                        string
 	repository                string
-	containers                *contStore
+	containers                container.Store
 	execCommands              *exec.Store
 	referenceStore            reference.Store
 	downloadManager           *xfer.LayerDownloadManager
@@ -282,10 +248,6 @@ func (daemon *Daemon) Register(container *container.Container) error {
 		}
 	}
 
-	if err := daemon.prepareMountPoints(container); err != nil {
-		return err
-	}
-
 	return nil
 }
 
@@ -408,6 +370,23 @@ func (daemon *Daemon) restore() error {
 	}
 	group.Wait()
 
+	// any containers that were started above would already have had this done,
+	// however we need to now prepare the mountpoints for the rest of the containers as well.
+	// This shouldn't cause any issue running on the containers that already had this run.
+	// This must be run after any containers with a restart policy so that containerized plugins
+	// can have a chance to be running before we try to initialize them.
+	for _, c := range containers {
+		group.Add(1)
+		go func(c *container.Container) {
+			defer group.Done()
+			if err := daemon.prepareMountPoints(c); err != nil {
+				logrus.Error(err)
+			}
+		}(c)
+	}
+
+	group.Wait()
+
 	if !debug {
 		if logrus.GetLevel() == logrus.InfoLevel {
 			fmt.Println()
@@ -601,6 +580,10 @@ func (daemon *Daemon) parents(c *container.Container) map[string]*container.Cont
 func (daemon *Daemon) registerLink(parent, child *container.Container, alias string) error {
 	fullName := path.Join(parent.Name, alias)
 	if err := daemon.nameIndex.Reserve(fullName, child.ID); err != nil {
+		if err == registrar.ErrNameReserved {
+			logrus.Warnf("error registering link for %s, to %s, as alias %s, ignoring: %v", parent.ID, child.ID, alias, err)
+			return nil
+		}
 		return err
 	}
 	daemon.linkIndex.link(parent, child, fullName)
@@ -612,8 +595,8 @@ func (daemon *Daemon) registerLink(parent, child *container.Container, alias str
 func NewDaemon(config *Config, registryService *registry.Service) (daemon *Daemon, err error) {
 	setDefaultMtu(config)
 
-	// Ensure we have compatible configuration options
-	if err := checkConfigOptions(config); err != nil {
+	// Ensure we have compatible and valid configuration options
+	if err := verifyDaemonSettings(config); err != nil {
 		return nil, err
 	}
 
@@ -794,7 +777,7 @@ func NewDaemon(config *Config, registryService *registry.Service) (daemon *Daemo
 
 	d.ID = trustKey.PublicKey().KeyID()
 	d.repository = daemonRepo
-	d.containers = &contStore{s: make(map[string]*container.Container)}
+	d.containers = container.NewMemoryStore()
 	d.execCommands = exec.NewStore()
 	d.referenceStore = referenceStore
 	d.distributionMetadataStore = distributionMetadataStore
@@ -873,24 +856,18 @@ func (daemon *Daemon) shutdownContainer(c *container.Container) error {
 func (daemon *Daemon) Shutdown() error {
 	daemon.shutdown = true
 	if daemon.containers != nil {
-		group := sync.WaitGroup{}
 		logrus.Debug("starting clean shutdown of all containers...")
-		for _, cont := range daemon.List() {
-			if !cont.IsRunning() {
-				continue
+		daemon.containers.ApplyAll(func(c *container.Container) {
+			if !c.IsRunning() {
+				return
 			}
-			logrus.Debugf("stopping %s", cont.ID)
-			group.Add(1)
-			go func(c *container.Container) {
-				defer group.Done()
-				if err := daemon.shutdownContainer(c); err != nil {
-					logrus.Errorf("Stop container error: %v", err)
-					return
-				}
-				logrus.Debugf("container stopped %s", c.ID)
-			}(cont)
-		}
-		group.Wait()
+			logrus.Debugf("stopping %s", c.ID)
+			if err := daemon.shutdownContainer(c); err != nil {
+				logrus.Errorf("Stop container error: %v", err)
+				return
+			}
+			logrus.Debugf("container stopped %s", c.ID)
+		})
 	}
 
 	// trigger libnetwork Stop only if it's initialized
@@ -1252,6 +1229,9 @@ func (daemon *Daemon) ImageHistory(name string) ([]*types.ImageHistory, error) {
 func (daemon *Daemon) GetImageID(refOrID string) (image.ID, error) {
 	// Treat as an ID
 	if id, err := digest.ParseDigest(refOrID); err == nil {
+		if _, err := daemon.imageStore.Get(image.ID(id)); err != nil {
+			return "", ErrImageDoesNotExist{refOrID}
+		}
 		return image.ID(id), nil
 	}
 
@@ -1314,35 +1294,45 @@ func (daemon *Daemon) GetRemappedUIDGID() (int, int) {
 	return uid, gid
 }
 
-// ImageGetCached returns the earliest created image that is a child
+// ImageGetCached returns the most recent created image that is a child
 // of the image with imgID, that had the same config when it was
 // created. nil is returned if a child cannot be found. An error is
 // returned if the parent image cannot be found.
 func (daemon *Daemon) ImageGetCached(imgID image.ID, config *containertypes.Config) (*image.Image, error) {
-	// Retrieve all images
-	imgs := daemon.Map()
+	// Loop on the children of the given image and check the config
+	getMatch := func(siblings []image.ID) (*image.Image, error) {
+		var match *image.Image
+		for _, id := range siblings {
+			img, err := daemon.imageStore.Get(id)
+			if err != nil {
+				return nil, fmt.Errorf("unable to find image %q", id)
+			}
 
-	var siblings []image.ID
-	for id, img := range imgs {
-		if img.Parent == imgID {
-			siblings = append(siblings, id)
+			if runconfig.Compare(&img.ContainerConfig, config) {
+				// check for the most up to date match
+				if match == nil || match.Created.Before(img.Created) {
+					match = img
+				}
+			}
 		}
+		return match, nil
 	}
 
-	// Loop on the children of the given image and check the config
-	var match *image.Image
-	for _, id := range siblings {
-		img, ok := imgs[id]
-		if !ok {
-			return nil, fmt.Errorf("unable to find image %q", id)
-		}
-		if runconfig.Compare(&img.ContainerConfig, config) {
-			if match == nil || match.Created.Before(img.Created) {
-				match = img
+	// In this case, this is `FROM scratch`, which isn't an actual image.
+	if imgID == "" {
+		images := daemon.imageStore.Map()
+		var siblings []image.ID
+		for id, img := range images {
+			if img.Parent == imgID {
+				siblings = append(siblings, id)
 			}
 			}
 		}
 		}
+		return getMatch(siblings)
 	}
 	}
-	return match, nil
+
+	// find match from child images
+	siblings := daemon.imageStore.Children(imgID)
+	return getMatch(siblings)
 }
 }
 
 
 // tempDir returns the default directory to use for temporary files.
 // tempDir returns the default directory to use for temporary files.
@@ -1440,6 +1430,18 @@ func (daemon *Daemon) verifyContainerSettings(hostConfig *containertypes.HostCon
 	return verifyPlatformContainerSettings(daemon, hostConfig, config)
 }

+// Checks if the client set configurations for more than one network while creating a container
+func (daemon *Daemon) verifyNetworkingConfig(nwConfig *networktypes.NetworkingConfig) error {
+	if nwConfig == nil || len(nwConfig.EndpointsConfig) <= 1 {
+		return nil
+	}
+	l := make([]string, 0, len(nwConfig.EndpointsConfig))
+	for k := range nwConfig.EndpointsConfig {
+		l = append(l, k)
+	}
+	return derr.ErrorCodeMultipleNetworkConnect.WithArgs(fmt.Sprintf("%v", l))
+}
+
 func configureVolumes(config *Config, rootUID, rootGID int) (*store.VolumeStore, error) {
 	volumesDriver, err := local.New(config.Root, rootUID, rootGID)
 	if err != nil {
@@ -1536,13 +1538,12 @@ func (daemon *Daemon) initDiscovery(config *Config) error {
 // daemon according to those changes.
 // This are the settings that Reload changes:
 // - Daemon labels.
-// - Cluster discovery (reconfigure and restart).
 func (daemon *Daemon) Reload(config *Config) error {
 	daemon.configStore.reloadLock.Lock()
-	defer daemon.configStore.reloadLock.Unlock()
-
 	daemon.configStore.Labels = config.Labels
-	return daemon.reloadClusterDiscovery(config)
+	daemon.configStore.reloadLock.Unlock()
+
+	return nil
 }

 func (daemon *Daemon) reloadClusterDiscovery(config *Config) error {

+ 8 - 11
daemon/daemon_test.go

@@ -61,15 +61,12 @@ func TestGetContainer(t *testing.T) {
 		},
 	}

-	store := &contStore{
-		s: map[string]*container.Container{
-			c1.ID: c1,
-			c2.ID: c2,
-			c3.ID: c3,
-			c4.ID: c4,
-			c5.ID: c5,
-		},
-	}
+	store := container.NewMemoryStore()
+	store.Add(c1.ID, c1)
+	store.Add(c2.ID, c2)
+	store.Add(c3.ID, c3)
+	store.Add(c4.ID, c4)
+	store.Add(c5.ID, c5)

 	index := truncindex.NewTruncIndex([]string{})
 	index.Add(c1.ID)
@@ -440,7 +437,7 @@ func TestDaemonDiscoveryReload(t *testing.T) {
 		&discovery.Entry{Host: "127.0.0.1", Port: "5555"},
 	}

-	if err := daemon.Reload(newConfig); err != nil {
+	if err := daemon.reloadClusterDiscovery(newConfig); err != nil {
 		t.Fatal(err)
 	}
 	ch, errCh = daemon.discoveryWatcher.Watch(stopCh)
@@ -472,7 +469,7 @@ func TestDaemonDiscoveryReloadFromEmptyDiscovery(t *testing.T) {
 		&discovery.Entry{Host: "127.0.0.1", Port: "5555"},
 	}

-	if err := daemon.Reload(newConfig); err != nil {
+	if err := daemon.reloadClusterDiscovery(newConfig); err != nil {
 		t.Fatal(err)
 	}
 	stopCh := make(chan struct{})

+ 58 - 28
daemon/daemon_unix.go

@@ -18,6 +18,7 @@ import (
 	"github.com/docker/docker/image"
 	"github.com/docker/docker/layer"
 	"github.com/docker/docker/pkg/idtools"
+	"github.com/docker/docker/pkg/parsers"
 	"github.com/docker/docker/pkg/parsers/kernel"
 	"github.com/docker/docker/pkg/sysinfo"
 	"github.com/docker/docker/reference"
@@ -361,6 +362,24 @@ func verifyContainerResources(resources *containertypes.Resources) ([]string, er
 	return warnings, nil
 }

+func usingSystemd(config *Config) bool {
+	for _, option := range config.ExecOptions {
+		key, val, err := parsers.ParseKeyValueOpt(option)
+		if err != nil || !strings.EqualFold(key, "native.cgroupdriver") {
+			continue
+		}
+		if val == "systemd" {
+			return true
+		}
+	}
+
+	return false
+}
+
+func (daemon *Daemon) usingSystemd() bool {
+	return usingSystemd(daemon.configStore)
+}
+
 // verifyPlatformContainerSettings performs platform-specific validation of the
 // hostconfig and config structures.
 func verifyPlatformContainerSettings(daemon *Daemon, hostConfig *containertypes.HostConfig, config *containertypes.Config) ([]string, error) {
@@ -407,20 +426,31 @@ func verifyPlatformContainerSettings(daemon *Daemon, hostConfig *containertypes.
 			return warnings, fmt.Errorf("Cannot use the --read-only option when user namespaces are enabled.")
 		}
 	}
+	if hostConfig.CgroupParent != "" && daemon.usingSystemd() {
+		// CgroupParent for systemd cgroup should be named as "xxx.slice"
+		if len(hostConfig.CgroupParent) <= 6 || !strings.HasSuffix(hostConfig.CgroupParent, ".slice") {
+			return warnings, fmt.Errorf("cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"")
+		}
+	}
 	return warnings, nil
 }

-// checkConfigOptions checks for mutually incompatible config options
-func checkConfigOptions(config *Config) error {
+// verifyDaemonSettings performs validation of daemon config struct
+func verifyDaemonSettings(config *Config) error {
 	// Check for mutually incompatible config options
-	if config.Bridge.Iface != "" && config.Bridge.IP != "" {
+	if config.bridgeConfig.Iface != "" && config.bridgeConfig.IP != "" {
 		return fmt.Errorf("You specified -b & --bip, mutually exclusive options. Please specify only one.")
 	}
-	if !config.Bridge.EnableIPTables && !config.Bridge.InterContainerCommunication {
+	if !config.bridgeConfig.EnableIPTables && !config.bridgeConfig.InterContainerCommunication {
 		return fmt.Errorf("You specified --iptables=false with --icc=false. ICC=false uses iptables to function. Please set --icc or --iptables to true.")
 	}
-	if !config.Bridge.EnableIPTables && config.Bridge.EnableIPMasq {
-		config.Bridge.EnableIPMasq = false
+	if !config.bridgeConfig.EnableIPTables && config.bridgeConfig.EnableIPMasq {
+		config.bridgeConfig.EnableIPMasq = false
+	}
+	if config.CgroupParent != "" && usingSystemd(config) {
+		if len(config.CgroupParent) <= 6 || !strings.HasSuffix(config.CgroupParent, ".slice") {
+			return fmt.Errorf("cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\"")
+		}
 	}
 	return nil
 }
@@ -452,7 +482,7 @@ func configureKernelSecuritySupport(config *Config, driverName string) error {
 }

 func isBridgeNetworkDisabled(config *Config) bool {
-	return config.Bridge.Iface == disableNetworkBridge
+	return config.bridgeConfig.Iface == disableNetworkBridge
 }

 func (daemon *Daemon) networkOptions(dconfig *Config) ([]nwconfig.Option, error) {
@@ -526,9 +556,9 @@ func (daemon *Daemon) initNetworkController(config *Config) (libnetwork.NetworkC

 func driverOptions(config *Config) []nwconfig.Option {
 	bridgeConfig := options.Generic{
-		"EnableIPForwarding":  config.Bridge.EnableIPForward,
-		"EnableIPTables":      config.Bridge.EnableIPTables,
-		"EnableUserlandProxy": config.Bridge.EnableUserlandProxy}
+		"EnableIPForwarding":  config.bridgeConfig.EnableIPForward,
+		"EnableIPTables":      config.bridgeConfig.EnableIPTables,
+		"EnableUserlandProxy": config.bridgeConfig.EnableUserlandProxy}
 	bridgeOption := options.Generic{netlabel.GenericData: bridgeConfig}

 	dOptions := []nwconfig.Option{}
@@ -544,20 +574,20 @@ func initBridgeDriver(controller libnetwork.NetworkController, config *Config) e
 	}

 	bridgeName := bridge.DefaultBridgeName
-	if config.Bridge.Iface != "" {
-		bridgeName = config.Bridge.Iface
+	if config.bridgeConfig.Iface != "" {
+		bridgeName = config.bridgeConfig.Iface
 	}
 	netOption := map[string]string{
 		bridge.BridgeName:         bridgeName,
 		bridge.DefaultBridge:      strconv.FormatBool(true),
 		netlabel.DriverMTU:        strconv.Itoa(config.Mtu),
-		bridge.EnableIPMasquerade: strconv.FormatBool(config.Bridge.EnableIPMasq),
-		bridge.EnableICC:          strconv.FormatBool(config.Bridge.InterContainerCommunication),
+		bridge.EnableIPMasquerade: strconv.FormatBool(config.bridgeConfig.EnableIPMasq),
+		bridge.EnableICC:          strconv.FormatBool(config.bridgeConfig.InterContainerCommunication),
 	}

 	// --ip processing
-	if config.Bridge.DefaultIP != nil {
-		netOption[bridge.DefaultBindingIP] = config.Bridge.DefaultIP.String()
+	if config.bridgeConfig.DefaultIP != nil {
+		netOption[bridge.DefaultBindingIP] = config.bridgeConfig.DefaultIP.String()
 	}

 	var (
@@ -576,9 +606,9 @@ func initBridgeDriver(controller libnetwork.NetworkController, config *Config) e
 		}
 	}

-	if config.Bridge.IP != "" {
-		ipamV4Conf.PreferredPool = config.Bridge.IP
-		ip, _, err := net.ParseCIDR(config.Bridge.IP)
+	if config.bridgeConfig.IP != "" {
+		ipamV4Conf.PreferredPool = config.bridgeConfig.IP
+		ip, _, err := net.ParseCIDR(config.bridgeConfig.IP)
 		if err != nil {
 			return err
 		}
@@ -587,8 +617,8 @@ func initBridgeDriver(controller libnetwork.NetworkController, config *Config) e
 		logrus.Infof("Default bridge (%s) is assigned with an IP address %s. Daemon option --bip can be used to set a preferred IP address", bridgeName, ipamV4Conf.PreferredPool)
 	}

-	if config.Bridge.FixedCIDR != "" {
-		_, fCIDR, err := net.ParseCIDR(config.Bridge.FixedCIDR)
+	if config.bridgeConfig.FixedCIDR != "" {
+		_, fCIDR, err := net.ParseCIDR(config.bridgeConfig.FixedCIDR)
 		if err != nil {
 			return err
 		}
@@ -596,13 +626,13 @@ func initBridgeDriver(controller libnetwork.NetworkController, config *Config) e
 		ipamV4Conf.SubPool = fCIDR.String()
 	}

-	if config.Bridge.DefaultGatewayIPv4 != nil {
-		ipamV4Conf.AuxAddresses["DefaultGatewayIPv4"] = config.Bridge.DefaultGatewayIPv4.String()
+	if config.bridgeConfig.DefaultGatewayIPv4 != nil {
+		ipamV4Conf.AuxAddresses["DefaultGatewayIPv4"] = config.bridgeConfig.DefaultGatewayIPv4.String()
 	}

 	var deferIPv6Alloc bool
-	if config.Bridge.FixedCIDRv6 != "" {
-		_, fCIDRv6, err := net.ParseCIDR(config.Bridge.FixedCIDRv6)
+	if config.bridgeConfig.FixedCIDRv6 != "" {
+		_, fCIDRv6, err := net.ParseCIDR(config.bridgeConfig.FixedCIDRv6)
 		if err != nil {
 			return err
 		}
@@ -632,11 +662,11 @@ func initBridgeDriver(controller libnetwork.NetworkController, config *Config) e
 		}
 	}

-	if config.Bridge.DefaultGatewayIPv6 != nil {
+	if config.bridgeConfig.DefaultGatewayIPv6 != nil {
 		if ipamV6Conf == nil {
 			ipamV6Conf = &libnetwork.IpamConf{AuxAddresses: make(map[string]string)}
 		}
-		ipamV6Conf.AuxAddresses["DefaultGatewayIPv6"] = config.Bridge.DefaultGatewayIPv6.String()
+		ipamV6Conf.AuxAddresses["DefaultGatewayIPv6"] = config.bridgeConfig.DefaultGatewayIPv6.String()
 	}

 	v4Conf := []*libnetwork.IpamConf{ipamV4Conf}
@@ -648,7 +678,7 @@ func initBridgeDriver(controller libnetwork.NetworkController, config *Config) e
 	_, err = controller.NewNetwork("bridge", "bridge",
 		libnetwork.NetworkOptionGeneric(options.Generic{
 			netlabel.GenericData: netOption,
-			netlabel.EnableIPv6:  config.Bridge.EnableIPv6,
+			netlabel.EnableIPv6:  config.bridgeConfig.EnableIPv6,
 		}),
 		libnetwork.NetworkOptionIpam("default", "", v4Conf, v6Conf, nil),
 		libnetwork.NetworkOptionDeferIPv6Alloc(deferIPv6Alloc))

+ 4 - 4
daemon/daemon_windows.go

@@ -88,8 +88,8 @@ func verifyPlatformContainerSettings(daemon *Daemon, hostConfig *containertypes.
 	return nil, nil
 }

-// checkConfigOptions checks for mutually incompatible config options
-func checkConfigOptions(config *Config) error {
+// verifyDaemonSettings performs validation of daemon config struct
+func verifyDaemonSettings(config *Config) error {
 	return nil
 }

@@ -121,8 +121,8 @@ func isBridgeNetworkDisabled(config *Config) bool {

 func (daemon *Daemon) initNetworkController(config *Config) (libnetwork.NetworkController, error) {
 	// Set the name of the virtual switch if not specified by -b on daemon start
-	if config.Bridge.VirtualSwitchName == "" {
-		config.Bridge.VirtualSwitchName = defaultVirtualSwitch
+	if config.bridgeConfig.VirtualSwitchName == "" {
+		config.bridgeConfig.VirtualSwitchName = defaultVirtualSwitch
 	}
 	return nil, nil
 }

+ 6 - 7
daemon/delete.go

@@ -43,15 +43,14 @@ func (daemon *Daemon) ContainerRm(name string, config *types.ContainerRmConfig)
 		return daemon.rmLink(container, name)
 	}

-	if err := daemon.cleanupContainer(container, config.ForceRemove); err != nil {
-		return err
-	}
-
-	if err := daemon.removeMountPoints(container, config.RemoveVolume); err != nil {
-		logrus.Error(err)
+	err = daemon.cleanupContainer(container, config.ForceRemove)
+	if err == nil || config.ForceRemove {
+		if e := daemon.removeMountPoints(container, config.RemoveVolume); e != nil {
+			logrus.Error(e)
+		}
 	}

-	return nil
+	return err
}

 func (daemon *Daemon) rmLink(container *container.Container, name string) error {

+ 1 - 1
daemon/delete_test.go

@@ -20,7 +20,7 @@ func TestContainerDoubleDelete(t *testing.T) {
 		repository: tmp,
 		root:       tmp,
 	}
-	daemon.containers = &contStore{s: make(map[string]*container.Container)}
+	daemon.containers = container.NewMemoryStore()

 	container := &container.Container{
 		CommonContainer: container.CommonContainer{

+ 12 - 1
daemon/exec.go

@@ -52,6 +52,9 @@ func (d *Daemon) getExecConfig(name string) (*exec.Config, error) {
 			if container.IsPaused() {
 				return nil, derr.ErrorCodeExecPaused.WithArgs(container.ID)
 			}
+			if container.IsRestarting() {
+				return nil, derr.ErrorCodeExecRestarting.WithArgs(container.ID)
+			}
 			return ec, nil
 		}
 	}
@@ -76,6 +79,9 @@ func (d *Daemon) getActiveContainer(name string) (*container.Container, error) {
 	if container.IsPaused() {
 		return nil, derr.ErrorCodeExecPaused.WithArgs(name)
 	}
+	if container.IsRestarting() {
+		return nil, derr.ErrorCodeExecRestarting.WithArgs(name)
+	}
 	return container, nil
 }

@@ -135,6 +141,11 @@ func (d *Daemon) ContainerExecStart(name string, stdin io.ReadCloser, stdout io.
 	}

 	ec.Lock()
+	if ec.ExitCode != nil {
+		ec.Unlock()
+		return derr.ErrorCodeExecExited.WithArgs(ec.ID)
+	}
+
 	if ec.Running {
 		ec.Unlock()
 		return derr.ErrorCodeExecRunning.WithArgs(ec.ID)
@@ -214,7 +225,7 @@ func (d *Daemon) Exec(c *container.Container, execConfig *exec.Config, pipes *ex
 		exitStatus = 128
 	}

-	execConfig.ExitCode = exitStatus
+	execConfig.ExitCode = &exitStatus
 	execConfig.Running = false

 	return exitStatus, err

+ 8 - 2
daemon/exec/exec.go

@@ -18,7 +18,7 @@ type Config struct {
 	*runconfig.StreamConfig
 	ID            string
 	Running       bool
-	ExitCode      int
+	ExitCode      *int
 	ProcessConfig *execdriver.ProcessConfig
 	OpenStdin     bool
 	OpenStderr    bool
@@ -53,7 +53,13 @@ func NewStore() *Store {

 // Commands returns the exec configurations in the store.
 func (e *Store) Commands() map[string]*Config {
-	return e.commands
+	e.RLock()
+	commands := make(map[string]*Config, len(e.commands))
+	for id, config := range e.commands {
+		commands[id] = config
+	}
+	e.RUnlock()
+	return commands
 }

 // Add adds a new exec configuration to the store.

+ 0 - 3
daemon/execdriver/native/create.go

@@ -436,7 +436,6 @@ func (d *Driver) setupMounts(container *configs.Config, c *execdriver.Command) e
 				flags = syscall.MS_NOEXEC | syscall.MS_NOSUID | syscall.MS_NODEV
 				err   error
 			)
-			fulldest := filepath.Join(c.Rootfs, m.Destination)
 			if m.Data != "" {
 				flags, data, err = mount.ParseTmpfsOptions(m.Data)
 				if err != nil {
@@ -449,8 +448,6 @@ func (d *Driver) setupMounts(container *configs.Config, c *execdriver.Command) e
 				Data:             data,
 				Device:           "tmpfs",
 				Flags:            flags,
-				PremountCmds:     genTmpfsPremountCmd(c.TmpDir, fulldest, m.Destination),
-				PostmountCmds:    genTmpfsPostmountCmd(c.TmpDir, fulldest, m.Destination),
 				PropagationFlags: []int{mountPropagationMap[volume.DefaultPropagationMode]},
 			})
 			continue

+ 11 - 1
daemon/execdriver/native/seccomp_default.go

@@ -17,7 +17,7 @@ func arches() []string {
 	var a = native.String()
 	switch a {
 	case "amd64":
-		return []string{"amd64", "x86"}
+		return []string{"amd64", "x86", "x32"}
 	case "arm64":
 		return []string{"arm64", "arm"}
 	case "mips64":
@@ -944,6 +944,11 @@ var defaultSeccompProfile = &configs.Seccomp{
 			Action: configs.Allow,
 			Args:   []*configs.Arg{},
 		},
+		{
+			Name:   "recv",
+			Action: configs.Allow,
+			Args:   []*configs.Arg{},
+		},
 		{
 			Name:   "recvfrom",
 			Action: configs.Allow,
@@ -1119,6 +1124,11 @@ var defaultSeccompProfile = &configs.Seccomp{
 			Action: configs.Allow,
 			Args:   []*configs.Arg{},
 		},
+		{
+			Name:   "send",
+			Action: configs.Allow,
+			Args:   []*configs.Arg{},
+		},
 		{
 			Name:   "sendfile",
 			Action: configs.Allow,

+ 0 - 56
daemon/execdriver/native/tmpfs.go

@@ -1,56 +0,0 @@
-package native
-
-import (
-	"fmt"
-	"os"
-	"os/exec"
-	"strings"
-
-	"github.com/Sirupsen/logrus"
-	"github.com/opencontainers/runc/libcontainer/configs"
-)
-
-func genTmpfsPremountCmd(tmpDir string, fullDest string, dest string) []configs.Command {
-	var premount []configs.Command
-	tarPath, err := exec.LookPath("tar")
-	if err != nil {
-		logrus.Warn("tar command is not available for tmpfs mount: %s", err)
-		return premount
-	}
-	if _, err = exec.LookPath("rm"); err != nil {
-		logrus.Warn("rm command is not available for tmpfs mount: %s", err)
-		return premount
-	}
-	tarFile := fmt.Sprintf("%s/%s.tar", tmpDir, strings.Replace(dest, "/", "_", -1))
-	if _, err := os.Stat(fullDest); err == nil {
-		premount = append(premount, configs.Command{
-			Path: tarPath,
-			Args: []string{"-cf", tarFile, "-C", fullDest, "."},
-		})
-	}
-	return premount
-}
-
-func genTmpfsPostmountCmd(tmpDir string, fullDest string, dest string) []configs.Command {
-	var postmount []configs.Command
-	tarPath, err := exec.LookPath("tar")
-	if err != nil {
-		return postmount
-	}
-	rmPath, err := exec.LookPath("rm")
-	if err != nil {
-		return postmount
-	}
-	if _, err := os.Stat(fullDest); os.IsNotExist(err) {
-		return postmount
-	}
-	tarFile := fmt.Sprintf("%s/%s.tar", tmpDir, strings.Replace(dest, "/", "_", -1))
-	postmount = append(postmount, configs.Command{
-		Path: tarPath,
-		Args: []string{"-xf", tarFile, "-C", fullDest, "."},
-	})
-	return append(postmount, configs.Command{
-		Path: rmPath,
-		Args: []string{"-f", tarFile},
-	})
-}

+ 3 - 13
daemon/graphdriver/aufs/aufs.go

@@ -374,20 +374,10 @@ func (a *Driver) DiffPath(id string) (string, func() error, error) {
 }

 func (a *Driver) applyDiff(id string, diff archive.Reader) error {
-	dir := path.Join(a.rootPath(), "diff", id)
-	if err := chrootarchive.UntarUncompressed(diff, dir, &archive.TarOptions{
+	return chrootarchive.UntarUncompressed(diff, path.Join(a.rootPath(), "diff", id), &archive.TarOptions{
 		UIDMaps: a.uidMaps,
 		GIDMaps: a.gidMaps,
-	}); err != nil {
-		return err
-	}
-
-	// show invalid whiteouts warning.
-	files, err := ioutil.ReadDir(path.Join(dir, archive.WhiteoutLinkDir))
-	if err == nil && len(files) > 0 {
-		logrus.Warnf("Archive contains aufs hardlink references that are not supported.")
-	}
-	return nil
+	})
 }

 // DiffSize calculates the changes between the specified id
@@ -517,7 +507,7 @@ func (a *Driver) aufsMount(ro []string, rw, target, mountLabel string) (err erro
 		}

 		if firstMount {
-			opts := "dio,noplink,xino=/dev/shm/aufs.xino"
+			opts := "dio,xino=/dev/shm/aufs.xino"
 			if useDirperm() {
 				opts += ",dirperm1"
 			}

+ 0 - 82
daemon/graphdriver/aufs/aufs_test.go

@@ -638,88 +638,6 @@ func TestApplyDiff(t *testing.T) {
 	}
 }

-func TestHardlinks(t *testing.T) {
-	// Copy 2 layers that have linked files to new layers and check if hardlink are preserved
-	d := newDriver(t)
-	defer os.RemoveAll(tmp)
-	defer d.Cleanup()
-
-	origFile := "test_file"
-	linkedFile := "linked_file"
-
-	if err := d.Create("source-1", "", ""); err != nil {
-		t.Fatal(err)
-	}
-
-	mountPath, err := d.Get("source-1", "")
-	if err != nil {
-		t.Fatal(err)
-	}
-
-	f, err := os.Create(path.Join(mountPath, origFile))
-	if err != nil {
-		t.Fatal(err)
-	}
-	f.Close()
-
-	layerTar1, err := d.Diff("source-1", "")
-	if err != nil {
-		t.Fatal(err)
-	}
-
-	if err := d.Create("source-2", "source-1", ""); err != nil {
-		t.Fatal(err)
-	}
-
-	mountPath, err = d.Get("source-2", "")
-	if err != nil {
-		t.Fatal(err)
-	}
-
-	if err := os.Link(path.Join(mountPath, origFile), path.Join(mountPath, linkedFile)); err != nil {
-		t.Fatal(err)
-	}
-
-	layerTar2, err := d.Diff("source-2", "source-1")
-	if err != nil {
-		t.Fatal(err)
-	}
-
-	if err := d.Create("target-1", "", ""); err != nil {
-		t.Fatal(err)
-	}
-
-	if _, err := d.ApplyDiff("target-1", "", layerTar1); err != nil {
-		t.Fatal(err)
-	}
-
-	if err := d.Create("target-2", "target-1", ""); err != nil {
-		t.Fatal(err)
-	}
-
-	if _, err := d.ApplyDiff("target-2", "target-1", layerTar2); err != nil {
-		t.Fatal(err)
-	}
-
-	mountPath, err = d.Get("target-2", "")
-	if err != nil {
-		t.Fatal(err)
-	}
-
-	fi1, err := os.Lstat(path.Join(mountPath, origFile))
-	if err != nil {
-		t.Fatal(err)
-	}
-	fi2, err := os.Lstat(path.Join(mountPath, linkedFile))
-	if err != nil {
-		t.Fatal(err)
-	}
-
-	if !os.SameFile(fi1, fi2) {
-		t.Fatal("Target files are not linked")
-	}
-}
-
 func hash(c string) string {
 	h := sha256.New()
 	fmt.Fprint(h, c)

+ 3 - 0
daemon/graphdriver/driver_linux.go

@@ -18,6 +18,8 @@ const (
 	FsMagicExtfs = FsMagic(0x0000EF53)
 	// FsMagicF2fs filesystem id for F2fs
 	FsMagicF2fs = FsMagic(0xF2F52010)
+	// FsMagicGPFS filesystem id for GPFS
+	FsMagicGPFS = FsMagic(0x47504653)
 	// FsMagicJffs2Fs filesystem if for Jffs2Fs
 	FsMagicJffs2Fs = FsMagic(0x000072b6)
 	// FsMagicJfs filesystem id for Jfs
@@ -60,6 +62,7 @@ var (
 		FsMagicCramfs:      "cramfs",
 		FsMagicExtfs:       "extfs",
 		FsMagicF2fs:        "f2fs",
+		FsMagicGPFS:        "gpfs",
 		FsMagicJffs2Fs:     "jffs2",
 		FsMagicJfs:         "jfs",
 		FsMagicNfsFs:       "nfs",

+ 20 - 32
daemon/image_delete.go

@@ -179,13 +179,9 @@ func isImageIDPrefix(imageID, possiblePrefix string) bool {
 // getContainerUsingImage returns a container that was created using the given
 // imageID. Returns nil if there is no such container.
 func (daemon *Daemon) getContainerUsingImage(imageID image.ID) *container.Container {
-	for _, container := range daemon.List() {
-		if container.ImageID == imageID {
-			return container
-		}
-	}
-
-	return nil
+	return daemon.containers.First(func(c *container.Container) bool {
+		return c.ImageID == imageID
+	})
 }

 // removeImageRef attempts to parse and remove the given image reference from
@@ -328,19 +324,15 @@ func (daemon *Daemon) checkImageDeleteConflict(imgID image.ID, mask conflictType

 	if mask&conflictRunningContainer != 0 {
 		// Check if any running container is using the image.
-		for _, container := range daemon.List() {
-			if !container.IsRunning() {
-				// Skip this until we check for soft conflicts later.
-				continue
-			}
-
-			if container.ImageID == imgID {
-				return &imageDeleteConflict{
-					imgID:   imgID,
-					hard:    true,
-					used:    true,
-					message: fmt.Sprintf("image is being used by running container %s", stringid.TruncateID(container.ID)),
-				}
+		running := func(c *container.Container) bool {
+			return c.IsRunning() && c.ImageID == imgID
+		}
+		if container := daemon.containers.First(running); container != nil {
+			return &imageDeleteConflict{
+				imgID:   imgID,
+				hard:    true,
+				used:    true,
+				message: fmt.Sprintf("image is being used by running container %s", stringid.TruncateID(container.ID)),
 			}
 		}
 	}
@@ -355,18 +347,14 @@ func (daemon *Daemon) checkImageDeleteConflict(imgID image.ID, mask conflictType

 	if mask&conflictStoppedContainer != 0 {
 		// Check if any stopped containers reference this image.
-		for _, container := range daemon.List() {
-			if container.IsRunning() {
-				// Skip this as it was checked above in hard conflict conditions.
-				continue
-			}
-
-			if container.ImageID == imgID {
-				return &imageDeleteConflict{
-					imgID:   imgID,
-					used:    true,
-					message: fmt.Sprintf("image is being used by stopped container %s", stringid.TruncateID(container.ID)),
-				}
+		stopped := func(c *container.Container) bool {
+			return !c.IsRunning() && c.ImageID == imgID
+		}
+		if container := daemon.containers.First(stopped); container != nil {
+			return &imageDeleteConflict{
+				imgID:   imgID,
+				used:    true,
+				message: fmt.Sprintf("image is being used by stopped container %s", stringid.TruncateID(container.ID)),
 			}
 		}
 	}

+ 12 - 10
daemon/info.go

@@ -4,9 +4,11 @@ import (
 	"os"
 	"runtime"
 	"strings"
+	"sync/atomic"
 	"time"

 	"github.com/Sirupsen/logrus"
+	"github.com/docker/docker/container"
 	"github.com/docker/docker/dockerversion"
 	"github.com/docker/docker/pkg/fileutils"
 	"github.com/docker/docker/pkg/parsers/kernel"
@@ -54,24 +56,24 @@ func (daemon *Daemon) SystemInfo() (*types.Info, error) {
 	initPath := utils.DockerInitPath("")
 	sysInfo := sysinfo.New(true)

-	var cRunning, cPaused, cStopped int
-	for _, c := range daemon.List() {
+	var cRunning, cPaused, cStopped int32
+	daemon.containers.ApplyAll(func(c *container.Container) {
 		switch c.StateString() {
 		case "paused":
-			cPaused++
+			atomic.AddInt32(&cPaused, 1)
 		case "running":
-			cRunning++
+			atomic.AddInt32(&cRunning, 1)
 		default:
-			cStopped++
+			atomic.AddInt32(&cStopped, 1)
 		}
-	}
+	})

 	v := &types.Info{
 		ID:                 daemon.ID,
-		Containers:         len(daemon.List()),
-		ContainersRunning:  cRunning,
-		ContainersPaused:   cPaused,
-		ContainersStopped:  cStopped,
+		Containers:         int(cRunning + cPaused + cStopped),
+		ContainersRunning:  int(cRunning),
+		ContainersPaused:   int(cPaused),
+		ContainersStopped:  int(cStopped),
 		Images:             len(daemon.imageStore.Map()),
 		Driver:             daemon.GraphDriverName(),
 		DriverStatus:       daemon.layerStore.DriverStatus(),

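The counters switch from plain `int` to `sync/atomic` adds because `ApplyAll` may invoke its callback from multiple goroutines. A small sketch of why that matters (hypothetical `applyAll`/`countStates` helpers, not the real store API):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// applyAll runs fn for every state concurrently, mirroring how the
// store's ApplyAll may invoke its callback from multiple goroutines.
func applyAll(states []string, fn func(string)) {
	var wg sync.WaitGroup
	for _, s := range states {
		wg.Add(1)
		go func(s string) {
			defer wg.Done()
			fn(s)
		}(s)
	}
	wg.Wait()
}

// countStates tallies container states with atomic adds so the
// concurrent callbacks never race on the shared counters.
func countStates(states []string) (running, paused, stopped int32) {
	applyAll(states, func(s string) {
		switch s {
		case "running":
			atomic.AddInt32(&running, 1)
		case "paused":
			atomic.AddInt32(&paused, 1)
		default:
			atomic.AddInt32(&stopped, 1)
		}
	})
	return
}

func main() {
	r, p, s := countStates([]string{"running", "running", "paused", "exited"})
	fmt.Printf("containers=%d running=%d paused=%d stopped=%d\n", r+p+s, r, p, s)
}
```

With plain `++` the same code would be a data race; the atomic adds keep the totals consistent no matter how the callbacks interleave.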
+ 3 - 6
daemon/links_test.go

@@ -39,12 +39,9 @@ func TestMigrateLegacySqliteLinks(t *testing.T) {
 		},
 	}

-	store := &contStore{
-		s: map[string]*container.Container{
-			c1.ID: c1,
-			c2.ID: c2,
-		},
-	}
+	store := container.NewMemoryStore()
+	store.Add(c1.ID, c1)
+	store.Add(c2.ID, c2)

 	d := &Daemon{root: tmpDir, containers: store}
 	db, err := graphdb.NewSqliteConn(filepath.Join(d.root, "linkgraph.db"))

+ 21 - 5
daemon/list.go

@@ -15,6 +15,10 @@ import (
 	"github.com/docker/go-connections/nat"
 )
 
 
+var acceptedVolumeFilterTags = map[string]bool{
+	"dangling": true,
+}
+
 // iterationAction represents possible outcomes happening during the container iteration.
 type iterationAction int

@@ -410,21 +414,33 @@ func (daemon *Daemon) transformContainer(container *container.Container, ctx *li
 // Volumes lists known volumes, using the filter to restrict the range
 // of volumes returned.
 func (daemon *Daemon) Volumes(filter string) ([]*types.Volume, []string, error) {
-	var volumesOut []*types.Volume
+	var (
+		volumesOut   []*types.Volume
+		danglingOnly = false
+	)
 	volFilters, err := filters.FromParam(filter)
 	if err != nil {
 		return nil, nil, err
 	}

-	filterUsed := volFilters.Include("dangling") &&
-		(volFilters.ExactMatch("dangling", "true") || volFilters.ExactMatch("dangling", "1"))
+	if err := volFilters.Validate(acceptedVolumeFilterTags); err != nil {
+		return nil, nil, err
+	}
+
+	if volFilters.Include("dangling") {
+		if volFilters.ExactMatch("dangling", "true") || volFilters.ExactMatch("dangling", "1") {
+			danglingOnly = true
+		} else if !volFilters.ExactMatch("dangling", "false") && !volFilters.ExactMatch("dangling", "0") {
+			return nil, nil, fmt.Errorf("Invalid filter 'dangling=%s'", volFilters.Get("dangling"))
+		}
+	}

 	volumes, warnings, err := daemon.volumes.List()
 	if err != nil {
 		return nil, nil, err
 	}
-	if filterUsed {
-		volumes = daemon.volumes.FilterByUsed(volumes)
+	if volFilters.Include("dangling") {
+		volumes = daemon.volumes.FilterByUsed(volumes, !danglingOnly)
 	}
 	for _, v := range volumes {
 		volumesOut = append(volumesOut, volumeToAPIType(v))

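The new validation accepts `dangling=true|1` and `dangling=false|0` and rejects everything else instead of silently treating it as false. A simplified stand-in for that parse step (hypothetical `parseDangling` helper, not the real `filters` package):

```go
package main

import (
	"fmt"
	"strings"
)

// parseDangling mimics the validation above: "true"/"1" selects only
// dangling volumes, "false"/"0" selects only used ones, and anything
// else is rejected with an error.
func parseDangling(value string) (danglingOnly bool, err error) {
	switch strings.ToLower(value) {
	case "true", "1":
		return true, nil
	case "false", "0":
		return false, nil
	default:
		return false, fmt.Errorf("Invalid filter 'dangling=%s'", value)
	}
}

func main() {
	for _, v := range []string{"1", "false", "maybe"} {
		d, err := parseDangling(v)
		fmt.Println(v, d, err)
	}
}
```

Rejecting unknown values up front is what lets `docker volume ls -f dangling=foo` fail with a clear message rather than returning a misleading result.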
+ 31 - 16
daemon/logger/copier.go

@@ -20,14 +20,16 @@ type Copier struct {
 	srcs     map[string]io.Reader
 	dst      Logger
 	copyJobs sync.WaitGroup
+	closed   chan struct{}
 }

 // NewCopier creates a new Copier
 func NewCopier(cid string, srcs map[string]io.Reader, dst Logger) *Copier {
 	return &Copier{
-		cid:  cid,
-		srcs: srcs,
-		dst:  dst,
+		cid:    cid,
+		srcs:   srcs,
+		dst:    dst,
+		closed: make(chan struct{}),
 	}
 }

@@ -44,24 +46,28 @@ func (c *Copier) copySrc(name string, src io.Reader) {
 	reader := bufio.NewReader(src)

 	for {
-		line, err := reader.ReadBytes('\n')
-		line = bytes.TrimSuffix(line, []byte{'\n'})
+		select {
+		case <-c.closed:
+			return
+		default:
+			line, err := reader.ReadBytes('\n')
+			line = bytes.TrimSuffix(line, []byte{'\n'})

-		// ReadBytes can return full or partial output even when it failed.
-		// e.g. it can return a full entry and EOF.
-		if err == nil || len(line) > 0 {
-			if logErr := c.dst.Log(&Message{ContainerID: c.cid, Line: line, Source: name, Timestamp: time.Now().UTC()}); logErr != nil {
-				logrus.Errorf("Failed to log msg %q for logger %s: %s", line, c.dst.Name(), logErr)
+			// ReadBytes can return full or partial output even when it failed.
+			// e.g. it can return a full entry and EOF.
+			if err == nil || len(line) > 0 {
+				if logErr := c.dst.Log(&Message{ContainerID: c.cid, Line: line, Source: name, Timestamp: time.Now().UTC()}); logErr != nil {
+					logrus.Errorf("Failed to log msg %q for logger %s: %s", line, c.dst.Name(), logErr)
+				}
 			}
-		}

-		if err != nil {
-			if err != io.EOF {
-				logrus.Errorf("Error scanning log stream: %s", err)
+			if err != nil {
+				if err != io.EOF {
+					logrus.Errorf("Error scanning log stream: %s", err)
+				}
+				return
 			}
-			return
 		}
-
 	}
 }

@@ -69,3 +75,12 @@ func (c *Copier) copySrc(name string, src io.Reader) {
 func (c *Copier) Wait() {
 func (c *Copier) Wait() {
 	c.copyJobs.Wait()
 	c.copyJobs.Wait()
 }
 }
+
+// Close closes the copier
+func (c *Copier) Close() {
+	select {
+	case <-c.closed:
+	default:
+		close(c.closed)
+	}
+}

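The new `Close` above uses the select-with-default idiom so a second call is a harmless no-op rather than a panic on an already-closed channel. A self-contained sketch of that pattern (hypothetical `closer` type):

```go
package main

import "fmt"

// closer shows the Close pattern used above: a channel doubles as a
// cancellation signal, and the select makes Close safe to call twice.
type closer struct {
	closed chan struct{}
}

func newCloser() *closer { return &closer{closed: make(chan struct{})} }

func (c *closer) Close() {
	select {
	case <-c.closed: // already closed; closing again would panic
	default:
		close(c.closed)
	}
}

// IsClosed reports whether Close has been called.
func (c *closer) IsClosed() bool {
	select {
	case <-c.closed:
		return true
	default:
		return false
	}
}

func main() {
	c := newCloser()
	c.Close()
	c.Close() // second call is a no-op instead of panicking
	fmt.Println("closed:", c.IsClosed())
}
```

Worker goroutines like `copySrc` then poll the same channel in their loop's `select` to exit promptly once `Close` fires.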
+ 37 - 1
daemon/logger/copier_test.go

@@ -10,9 +10,15 @@ import (

 type TestLoggerJSON struct {
 	*json.Encoder
+	delay time.Duration
 }

-func (l *TestLoggerJSON) Log(m *Message) error { return l.Encode(m) }
+func (l *TestLoggerJSON) Log(m *Message) error {
+	if l.delay > 0 {
+		time.Sleep(l.delay)
+	}
+	return l.Encode(m)
+}

 func (l *TestLoggerJSON) Close() error { return nil }

@@ -94,3 +100,33 @@ func TestCopier(t *testing.T) {
 		}
 	}
 }
+
+func TestCopierSlow(t *testing.T) {
+	stdoutLine := "Line that thinks that it is log line from docker stdout"
+	var stdout bytes.Buffer
+	for i := 0; i < 30; i++ {
+		if _, err := stdout.WriteString(stdoutLine + "\n"); err != nil {
+			t.Fatal(err)
+		}
+	}
+
+	var jsonBuf bytes.Buffer
+	jsonLog := &TestLoggerJSON{Encoder: json.NewEncoder(&jsonBuf), delay: 100 * time.Millisecond}
+
+	cid := "a7317399f3f857173c6179d44823594f8294678dea9999662e5c625b5a1c7657"
+	c := NewCopier(cid, map[string]io.Reader{"stdout": &stdout}, jsonLog)
+	c.Run()
+	wait := make(chan struct{})
+	go func() {
+		c.Wait()
+		close(wait)
+	}()
+	<-time.After(150 * time.Millisecond)
+	c.Close()
+	select {
+	case <-time.After(200 * time.Millisecond):
+		t.Fatalf("failed to exit in time after the copier is closed")
+	case <-wait:
+	}
+}

+ 5 - 0
daemon/mounts.go

@@ -25,6 +25,11 @@ func (daemon *Daemon) removeMountPoints(container *container.Container, rm bool)
 		}
 		daemon.volumes.Dereference(m.Volume, container.ID)
 		if rm {
+			// Do not remove named mountpoints
+			// these are mountpoints specified like `docker run -v <name>:/foo`
+			if m.Named {
+				continue
+			}
 			err := daemon.volumes.Remove(m.Volume)
 			// Ignore volume in use errors because having this
 			// volume being referenced by other container is

+ 2 - 1
daemon/volumes.go

@@ -90,6 +90,7 @@ func (daemon *Daemon) registerMountPoints(container *container.Container, hostCo
 				Driver:      m.Driver,
 				Destination: m.Destination,
 				Propagation: m.Propagation,
+				Named:       m.Named,
 			}

 			if len(cp.Source) == 0 {
@@ -126,6 +127,7 @@ func (daemon *Daemon) registerMountPoints(container *container.Container, hostCo
 			bind.Source = v.Path()
 			// bind.Name is an already existing volume, we need to use that here
 			bind.Driver = v.DriverName()
+			bind.Named = true
 			bind = setBindModeIfNull(bind)
 		}
 		if label.RelabelNeeded(bind.Mode) {
@@ -159,7 +161,6 @@ func (daemon *Daemon) registerMountPoints(container *container.Container, hostCo
 func (daemon *Daemon) lazyInitializeVolume(containerID string, m *volume.MountPoint) error {
 	if len(m.Driver) > 0 && m.Volume == nil {
 		v, err := daemon.volumes.GetWithRef(m.Name, m.Driver, containerID)
-
 		if err != nil {
 			return err
 		}

+ 11 - 19
distribution/pull.go

@@ -3,7 +3,6 @@ package distribution
 import (
 	"fmt"
 	"os"
-	"strings"

 	"github.com/Sirupsen/logrus"
 	"github.com/docker/docker/api"
@@ -97,13 +96,12 @@ func Pull(ctx context.Context, ref reference.Named, imagePullConfig *ImagePullCo
 	}

 	var (
-		// use a slice to append the error strings and return a joined string to caller
-		errors []string
+		lastErr error

 		// discardNoSupportErrors is used to track whether an endpoint encountered an error of type registry.ErrNoSupport
-		// By default it is false, which means that if a ErrNoSupport error is encountered, it will be saved in errors.
+		// By default it is false, which means that if a ErrNoSupport error is encountered, it will be saved in lastErr.
 		// As soon as another kind of error is encountered, discardNoSupportErrors is set to true, avoiding the saving of
-		// any subsequent ErrNoSupport errors in errors.
+		// any subsequent ErrNoSupport errors in lastErr.
 		// It's needed for pull-by-digest on v1 endpoints: if there are only v1 endpoints configured, the error should be
 		// returned and displayed, but if there was a v2 endpoint which supports pull-by-digest, then the last relevant
 		// error is the ones from v2 endpoints not v1.
@@ -123,7 +121,7 @@ func Pull(ctx context.Context, ref reference.Named, imagePullConfig *ImagePullCo

 		puller, err := newPuller(endpoint, repoInfo, imagePullConfig)
 		if err != nil {
-			errors = append(errors, err.Error())
+			lastErr = err
 			continue
 		}
 		if err := puller.Pull(ctx, ref); err != nil {
@@ -144,34 +142,28 @@ func Pull(ctx context.Context, ref reference.Named, imagePullConfig *ImagePullCo
 					// Because we found an error that's not ErrNoSupport, discard all subsequent ErrNoSupport errors.
 					discardNoSupportErrors = true
 					// append subsequent errors
-					errors = append(errors, err.Error())
+					lastErr = err
 				} else if !discardNoSupportErrors {
 					// Save the ErrNoSupport error, because it's either the first error or all encountered errors
 					// were also ErrNoSupport errors.
 					// append subsequent errors
-					errors = append(errors, err.Error())
+					lastErr = err
 				}
 				continue
 			}
-			errors = append(errors, err.Error())
-			logrus.Debugf("Not continuing with error: %v", fmt.Errorf(strings.Join(errors, "\n")))
-			if len(errors) > 0 {
-				return fmt.Errorf(strings.Join(errors, "\n"))
-			}
+			logrus.Debugf("Not continuing with error: %v", err)
+			return err
 		}

 		imagePullConfig.ImageEventLogger(ref.String(), repoInfo.Name(), "pull")
 		return nil
 	}

-	if len(errors) == 0 {
-		return fmt.Errorf("no endpoints found for %s", ref.String())
+	if lastErr == nil {
+		lastErr = fmt.Errorf("no endpoints found for %s", ref.String())
 	}

-	if len(errors) > 0 {
-		return fmt.Errorf(strings.Join(errors, "\n"))
-	}
-	return nil
+	return lastErr
 }

 // writeStatus writes a status message to out. If layersDownloaded is true, the

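The rewritten `Pull` keeps only the last error while falling through endpoints, instead of joining every error string. A minimal sketch of that control flow (hypothetical `tryEndpoints` helper):

```go
package main

import (
	"errors"
	"fmt"
)

// tryEndpoints mirrors the rewritten Pull loop: remember only the
// last error while trying endpoints in order, return nil on the
// first success, and synthesize an error if the list was empty.
func tryEndpoints(endpoints []func() error) error {
	var lastErr error
	for _, ep := range endpoints {
		if err := ep(); err != nil {
			lastErr = err
			continue
		}
		return nil // one endpoint succeeded
	}
	if lastErr == nil {
		lastErr = errors.New("no endpoints found")
	}
	return lastErr
}

func main() {
	err := tryEndpoints([]func() error{
		func() error { return errors.New("v1: not supported") },
		func() error { return errors.New("v2: unauthorized") },
	})
	fmt.Println(err)
}
```

The payoff is that the user sees one relevant error (typically from the most capable endpoint) rather than a newline-joined wall of every fallback failure.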
+ 14 - 2
distribution/pull_v2.go

@@ -171,6 +171,10 @@ func (ld *v2LayerDescriptor) Download(ctx context.Context, progressOutput progre

 	_, err = io.Copy(tmpFile, io.TeeReader(reader, verifier))
 	if err != nil {
+		tmpFile.Close()
+		if err := os.Remove(tmpFile.Name()); err != nil {
+			logrus.Errorf("Failed to remove temp file: %s", tmpFile.Name())
+		}
 		return nil, 0, retryOnError(err)
 	}

@@ -179,8 +183,9 @@ func (ld *v2LayerDescriptor) Download(ctx context.Context, progressOutput progre
 	if !verifier.Verified() {
 		err = fmt.Errorf("filesystem layer verification failed for digest %s", ld.digest)
 		logrus.Error(err)
+
 		tmpFile.Close()
-		if err := os.RemoveAll(tmpFile.Name()); err != nil {
+		if err := os.Remove(tmpFile.Name()); err != nil {
 			logrus.Errorf("Failed to remove temp file: %s", tmpFile.Name())
 		}

@@ -191,7 +196,14 @@ func (ld *v2LayerDescriptor) Download(ctx context.Context, progressOutput progre

 	logrus.Debugf("Downloaded %s to tempfile %s", ld.ID(), tmpFile.Name())

-	tmpFile.Seek(0, 0)
+	_, err = tmpFile.Seek(0, os.SEEK_SET)
+	if err != nil {
+		tmpFile.Close()
+		if err := os.Remove(tmpFile.Name()); err != nil {
+			logrus.Errorf("Failed to remove temp file: %s", tmpFile.Name())
+		}
+		return nil, 0, xfer.DoNotRetry{Err: err}
+	}
 	return ioutils.NewReadCloserWrapper(tmpFile, tmpFileCloser(tmpFile)), size, nil
 }


+ 10 - 2
distribution/push.go

@@ -171,7 +171,14 @@ func Push(ctx context.Context, ref reference.Named, imagePushConfig *ImagePushCo
 // argument so that it can be used with httpBlobWriter's ReadFrom method.
 // Using httpBlobWriter's Write method would send a PATCH request for every
 // Write call.
-func compress(in io.Reader) io.ReadCloser {
+//
+// The second return value is a channel that gets closed when the goroutine
+// is finished. This allows the caller to make sure the goroutine finishes
+// before it releases any resources connected with the reader that was
+// passed in.
+func compress(in io.Reader) (io.ReadCloser, chan struct{}) {
+	compressionDone := make(chan struct{})
+
 	pipeReader, pipeWriter := io.Pipe()
 	// Use a bufio.Writer to avoid excessive chunking in HTTP request.
 	bufWriter := bufio.NewWriterSize(pipeWriter, compressionBufSize)
@@ -190,7 +197,8 @@ func compress(in io.Reader) io.ReadCloser {
 		} else {
 			pipeWriter.Close()
 		}
+		close(compressionDone)
 	}()

-	return pipeReader
+	return pipeReader, compressionDone
 }

+ 7 - 2
distribution/push_v2.go

@@ -311,6 +311,8 @@ func (pd *v2PushDescriptor) Upload(ctx context.Context, progressOutput progress.
 	case distribution.ErrBlobMounted:
 		progress.Updatef(progressOutput, pd.ID(), "Mounted from %s", err.From.Name())

+		err.Descriptor.MediaType = schema2.MediaTypeLayer
+
 		pd.pushState.Lock()
 		pd.pushState.confirmedV2 = true
 		pd.pushState.remoteLayers[diffID] = err.Descriptor
@@ -343,8 +345,11 @@ func (pd *v2PushDescriptor) Upload(ctx context.Context, progressOutput progress.
 	size, _ := pd.layer.DiffSize()

 	reader := progress.NewProgressReader(ioutils.NewCancelReadCloser(ctx, arch), progressOutput, size, pd.ID(), "Pushing")
-	defer reader.Close()
-	compressedReader := compress(reader)
+	compressedReader, compressionDone := compress(reader)
+	defer func() {
+		reader.Close()
+		<-compressionDone
+	}()

 	digester := digest.Canonical.New()
 	tee := io.TeeReader(compressedReader, digester.Hash())

+ 7 - 0
distribution/registry.go

@@ -6,6 +6,7 @@ import (
 	"net/http"
 	"net/url"
 	"strings"
+	"syscall"
 	"time"

 	"github.com/docker/distribution"
@@ -145,8 +146,14 @@ func retryOnError(err error) error {
 		case errcode.ErrorCodeUnauthorized, errcode.ErrorCodeUnsupported, errcode.ErrorCodeDenied:
 			return xfer.DoNotRetry{Err: err}
 		}
+	case *url.Error:
+		return retryOnError(v.Err)
 	case *client.UnexpectedHTTPResponseError:
 		return xfer.DoNotRetry{Err: err}
+	case error:
+		if strings.Contains(err.Error(), strings.ToLower(syscall.ENOSPC.Error())) {
+			return xfer.DoNotRetry{Err: err}
+		}
 	}
 	// let's be nice and fallback if the error is a completely
 	// unexpected one.

+ 60 - 15
distribution/xfer/transfer.go

@@ -1,6 +1,7 @@
 package xfer

 import (
+	"runtime"
 	"sync"

 	"github.com/docker/docker/pkg/progress"
@@ -38,7 +39,7 @@ type Transfer interface {
 	Watch(progressOutput progress.Output) *Watcher
 	Release(*Watcher)
 	Context() context.Context
-	Cancel()
+	Close()
 	Done() <-chan struct{}
 	Released() <-chan struct{}
 	Broadcast(masterProgressChan <-chan progress.Progress)
@@ -61,11 +62,14 @@ type transfer struct {

 	// running remains open as long as the transfer is in progress.
 	running chan struct{}
-	// hasWatchers stays open until all watchers release the transfer.
-	hasWatchers chan struct{}
+	// released stays open until all watchers release the transfer and
+	// the transfer is no longer tracked by the transfer manager.
+	released chan struct{}

 	// broadcastDone is true if the master progress channel has closed.
 	broadcastDone bool
+	// closed is true if Close has been called
+	closed bool
 	// broadcastSyncChan allows watchers to "ping" the broadcasting
 	// goroutine to wait for it to deplete its input channel. This ensures
 	// a detaching watcher won't miss an event that was sent before it
@@ -78,7 +82,7 @@ func NewTransfer() Transfer {
 	t := &transfer{
 		watchers:          make(map[chan struct{}]*Watcher),
 		running:           make(chan struct{}),
-		hasWatchers:       make(chan struct{}),
+		released:          make(chan struct{}),
 		broadcastSyncChan: make(chan struct{}),
 	}

@@ -144,13 +148,13 @@ func (t *transfer) Watch(progressOutput progress.Output) *Watcher {
 		running:     make(chan struct{}),
 	}

+	t.watchers[w.releaseChan] = w
+
 	if t.broadcastDone {
 		close(w.running)
 		return w
 	}

-	t.watchers[w.releaseChan] = w
-
 	go func() {
 		defer func() {
 			close(w.running)
@@ -202,8 +206,19 @@ func (t *transfer) Release(watcher *Watcher) {
 	delete(t.watchers, watcher.releaseChan)

 	if len(t.watchers) == 0 {
-		close(t.hasWatchers)
-		t.cancel()
+		if t.closed {
+			// released may have been closed already if all
+			// watchers were released, then another one was added
+			// while waiting for a previous watcher goroutine to
+			// finish.
+			select {
+			case <-t.released:
+			default:
+				close(t.released)
+			}
+		} else {
+			t.cancel()
+		}
 	}
 	t.mu.Unlock()
 
@@ -223,9 +238,9 @@ func (t *transfer) Done() <-chan struct{} {
 }

 // Released returns a channel which is closed once all watchers release the
-// transfer.
+// transfer AND the transfer is no longer tracked by the transfer manager.
 func (t *transfer) Released() <-chan struct{} {
-	return t.hasWatchers
+	return t.released
 }

 // Context returns the context associated with the transfer.
@@ -233,9 +248,15 @@ func (t *transfer) Context() context.Context {
 	return t.ctx
 }

-// Cancel cancels the context associated with the transfer.
-func (t *transfer) Cancel() {
-	t.cancel()
+// Close is called by the transfer manager when the transfer is no longer
+// being tracked.
+func (t *transfer) Close() {
+	t.mu.Lock()
+	t.closed = true
+	if len(t.watchers) == 0 {
+		close(t.released)
+	}
+	t.mu.Unlock()
 }

 // DoFunc is a function called by the transfer manager to actually perform
@@ -280,10 +301,33 @@ func (tm *transferManager) Transfer(key string, xferFunc DoFunc, progressOutput
 	tm.mu.Lock()
 	defer tm.mu.Unlock()

-	if xfer, present := tm.transfers[key]; present {
+	for {
+		xfer, present := tm.transfers[key]
+		if !present {
+			break
+		}
 		// Transfer is already in progress.
 		watcher := xfer.Watch(progressOutput)
-		return xfer, watcher
+
+		select {
+		case <-xfer.Context().Done():
+			// We don't want to watch a transfer that has been cancelled.
+			// Wait for it to be removed from the map and try again.
+			xfer.Release(watcher)
+			tm.mu.Unlock()
+			// The goroutine that removes this transfer from the
+			// map is also waiting for xfer.Done(), so yield to it.
+			// This could be avoided by adding a Closed method
+			// to Transfer to allow explicitly waiting for it to be
+			// removed from the map, but forcing a scheduling round in
+			// this very rare case seems better than bloating the
+			// interface definition.
+			runtime.Gosched()
+			<-xfer.Done()
+			tm.mu.Lock()
+		default:
+			return xfer, watcher
+		}
 	}

 	start := make(chan struct{})
@@ -318,6 +362,7 @@ func (tm *transferManager) Transfer(key string, xferFunc DoFunc, progressOutput
 				}
 				delete(tm.transfers, key)
 				tm.mu.Unlock()
+				xfer.Close()
 				return
 			}
 		}

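The `released` channel now fires only when two independent conditions hold: every watcher is gone and the manager has called `Close`. A compact sketch of that bookkeeping (hypothetical `refTracker` type, much simpler than the real `transfer`):

```go
package main

import (
	"fmt"
	"sync"
)

// refTracker sketches the transfer bookkeeping above: "released"
// fires only after every watcher is gone AND the manager has
// called Close.
type refTracker struct {
	mu       sync.Mutex
	watchers int
	closed   bool
	released chan struct{}
}

func newRefTracker() *refTracker { return &refTracker{released: make(chan struct{})} }

func (t *refTracker) Watch() {
	t.mu.Lock()
	t.watchers++
	t.mu.Unlock()
}

func (t *refTracker) Release() {
	t.mu.Lock()
	t.watchers--
	if t.watchers == 0 && t.closed {
		t.closeReleased()
	}
	t.mu.Unlock()
}

func (t *refTracker) Close() {
	t.mu.Lock()
	t.closed = true
	if t.watchers == 0 {
		t.closeReleased()
	}
	t.mu.Unlock()
}

// closeReleased is idempotent so a late Release cannot double-close.
func (t *refTracker) closeReleased() {
	select {
	case <-t.released:
	default:
		close(t.released)
	}
}

// Released reports whether the released channel has fired.
func (t *refTracker) Released() bool {
	select {
	case <-t.released:
		return true
	default:
		return false
	}
}

func main() {
	t := newRefTracker()
	t.Watch()
	t.Close() // manager stops tracking, but a watcher remains
	fmt.Println("released after Close:", t.Released())
	t.Release() // last watcher gone -> released fires
	fmt.Println("released after Release:", t.Released())
}
```

Splitting the condition this way is what fixes the bug `TestWatchFinishedTransfer` below exercises: a watcher added after the transfer finished no longer sees an already-closed `Released` channel prematurely.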
+ 38 - 0
distribution/xfer/transfer_test.go

@@ -291,6 +291,44 @@ func TestWatchRelease(t *testing.T) {
 	}
 }

+func TestWatchFinishedTransfer(t *testing.T) {
+	makeXferFunc := func(id string) DoFunc {
+		return func(progressChan chan<- progress.Progress, start <-chan struct{}, inactive chan<- struct{}) Transfer {
+			xfer := NewTransfer()
+			go func() {
+				// Finish immediately
+				close(progressChan)
+			}()
+			return xfer
+		}
+	}
+
+	tm := NewTransferManager(5)
+
+	// Start a transfer
+	watchers := make([]*Watcher, 3)
+	var xfer Transfer
+	xfer, watchers[0] = tm.Transfer("id1", makeXferFunc("id1"), progress.ChanOutput(make(chan progress.Progress)))
+
+	// Give it a watcher immediately
+	watchers[1] = xfer.Watch(progress.ChanOutput(make(chan progress.Progress)))
+
+	// Wait for the transfer to complete
+	<-xfer.Done()
+
+	// Set up another watcher
+	watchers[2] = xfer.Watch(progress.ChanOutput(make(chan progress.Progress)))
+
+	// Release the watchers
+	for _, w := range watchers {
+		xfer.Release(w)
+	}
+
+	// Now that all watchers have been released, Released() should
+	// return a closed channel.
+	<-xfer.Released()
+}
+
 func TestDuplicateTransfer(t *testing.T) {
 	ready := make(chan struct{})


+ 5 - 0
docker/client.go

@@ -6,6 +6,7 @@ import (
 	"github.com/docker/docker/cli"
 	"github.com/docker/docker/cliconfig"
 	flag "github.com/docker/docker/pkg/mflag"
+	"github.com/docker/docker/utils"
 )

 var clientFlags = &cli.ClientFlags{FlagSet: new(flag.FlagSet), Common: commonFlags}
@@ -24,5 +25,9 @@ func init() {
 		if clientFlags.Common.TrustKey == "" {
 			clientFlags.Common.TrustKey = filepath.Join(cliconfig.ConfigDir(), defaultTrustKeyFile)
 		}
+
+		if clientFlags.Common.Debug {
+			utils.EnableDebug()
+		}
 	}
 }

+ 16 - 11
docker/common.go

@@ -18,6 +18,7 @@ const (
 	defaultCaFile       = "ca.pem"
 	defaultKeyFile      = "key.pem"
 	defaultCertFile     = "cert.pem"
+	tlsVerifyKey        = "tlsverify"
 )

 var (
@@ -55,21 +56,12 @@ func init() {
 func postParseCommon() {
 	cmd := commonFlags.FlagSet

-	if commonFlags.LogLevel != "" {
-		lvl, err := logrus.ParseLevel(commonFlags.LogLevel)
-		if err != nil {
-			fmt.Fprintf(os.Stderr, "Unable to parse logging level: %s\n", commonFlags.LogLevel)
-			os.Exit(1)
-		}
-		logrus.SetLevel(lvl)
-	} else {
-		logrus.SetLevel(logrus.InfoLevel)
-	}
+	setDaemonLogLevel(commonFlags.LogLevel)

 	// Regardless of whether the user sets it to true or false, if they
 	// specify --tlsverify at all then we need to turn on tls
 	// TLSVerify can be true even if not set due to DOCKER_TLS_VERIFY env var, so we need to check that here as well
-	if cmd.IsSet("-tlsverify") || commonFlags.TLSVerify {
+	if cmd.IsSet("-"+tlsVerifyKey) || commonFlags.TLSVerify {
 		commonFlags.TLS = true
 	}

@@ -93,3 +85,16 @@ func postParseCommon() {
 		}
 	}
 }
+
+func setDaemonLogLevel(logLevel string) {
+	if logLevel != "" {
+		lvl, err := logrus.ParseLevel(logLevel)
+		if err != nil {
+			fmt.Fprintf(os.Stderr, "Unable to parse logging level: %s\n", logLevel)
+			os.Exit(1)
+		}
+		logrus.SetLevel(lvl)
+	} else {
+		logrus.SetLevel(logrus.InfoLevel)
+	}
+}

+ 16 - 7
docker/daemon.go

@@ -204,9 +204,9 @@ func (cli *DaemonCli) CmdDaemon(args ...string) error {
 	defaultHost := opts.DefaultHost
 	if cli.Config.TLS {
 		tlsOptions := tlsconfig.Options{
-			CAFile:   cli.Config.TLSOptions.CAFile,
-			CertFile: cli.Config.TLSOptions.CertFile,
-			KeyFile:  cli.Config.TLSOptions.KeyFile,
+			CAFile:   cli.Config.CommonTLSOptions.CAFile,
+			CertFile: cli.Config.CommonTLSOptions.CertFile,
+			KeyFile:  cli.Config.CommonTLSOptions.KeyFile,
 		}

 		if cli.Config.TLSVerify {
@@ -338,12 +338,12 @@ func loadDaemonCliConfig(config *daemon.Config, daemonFlags *flag.FlagSet, commo
 	config.LogLevel = commonConfig.LogLevel
 	config.TLS = commonConfig.TLS
 	config.TLSVerify = commonConfig.TLSVerify
-	config.TLSOptions = daemon.CommonTLSOptions{}
+	config.CommonTLSOptions = daemon.CommonTLSOptions{}

 	if commonConfig.TLSOptions != nil {
-		config.TLSOptions.CAFile = commonConfig.TLSOptions.CAFile
-		config.TLSOptions.CertFile = commonConfig.TLSOptions.CertFile
-		config.TLSOptions.KeyFile = commonConfig.TLSOptions.KeyFile
+		config.CommonTLSOptions.CAFile = commonConfig.TLSOptions.CAFile
+		config.CommonTLSOptions.CertFile = commonConfig.TLSOptions.CertFile
+		config.CommonTLSOptions.KeyFile = commonConfig.TLSOptions.KeyFile
 	}

 	if configFile != "" {
@@ -360,5 +360,14 @@ func loadDaemonCliConfig(config *daemon.Config, daemonFlags *flag.FlagSet, commo
 		}
 		}
 	}
 	}
 
 
+	// Regardless of whether the user sets it to true or false, if they
+	// specify TLSVerify at all then we need to turn on TLS
+	if config.IsValueSet(tlsVerifyKey) {
+		config.TLS = true
+	}
+
+	// ensure that the log level is the one set after merging configurations
+	setDaemonLogLevel(config.LogLevel)
+
 	return config, nil
 }

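The `loadDaemonCliConfig` change means a daemon configuration file that mentions `tlsverify` at all, even set to `false`, turns TLS on, because `config.IsValueSet(tlsVerifyKey)` reports presence rather than value. A hedged illustration of such a file (the path `/etc/docker/daemon.json` and the exact contents are assumptions, not part of this diff):

```json
{
    "tlsverify": false,
    "log-level": "warn"
}
```

With this file merged in, `config.TLS` is forced to `true`, and `setDaemonLogLevel` applies the merged `warn` level only after flags and file have been reconciled.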
+ 204 - 2
docker/daemon_test.go

@@ -7,6 +7,7 @@ import (
 	"strings"
 	"strings"
 	"testing"
 	"testing"
 
 
+	"github.com/Sirupsen/logrus"
 	"github.com/docker/docker/cli"
 	"github.com/docker/docker/cli"
 	"github.com/docker/docker/daemon"
 	"github.com/docker/docker/daemon"
 	"github.com/docker/docker/opts"
 	"github.com/docker/docker/opts"
@@ -50,8 +51,8 @@ func TestLoadDaemonCliConfigWithTLS(t *testing.T) {
 	if loadedConfig == nil {
 		t.Fatalf("expected configuration %v, got nil", c)
 	}
-	if loadedConfig.TLSOptions.CAFile != "/tmp/ca.pem" {
-		t.Fatalf("expected /tmp/ca.pem, got %s: %q", loadedConfig.TLSOptions.CAFile, loadedConfig)
+	if loadedConfig.CommonTLSOptions.CAFile != "/tmp/ca.pem" {
+		t.Fatalf("expected /tmp/ca.pem, got %s: %q", loadedConfig.CommonTLSOptions.CAFile, loadedConfig)
 	}
 }
 
@@ -89,3 +90,204 @@ func TestLoadDaemonCliConfigWithConflicts(t *testing.T) {
 		t.Fatalf("expected labels conflict, got %v", err)
 	}
 }
+
+func TestLoadDaemonCliConfigWithTLSVerify(t *testing.T) {
+	c := &daemon.Config{}
+	common := &cli.CommonFlags{
+		TLSOptions: &tlsconfig.Options{
+			CAFile: "/tmp/ca.pem",
+		},
+	}
+
+	f, err := ioutil.TempFile("", "docker-config-")
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	configFile := f.Name()
+	f.Write([]byte(`{"tlsverify": true}`))
+	f.Close()
+
+	flags := mflag.NewFlagSet("test", mflag.ContinueOnError)
+	flags.Bool([]string{"-tlsverify"}, false, "")
+	loadedConfig, err := loadDaemonCliConfig(c, flags, common, configFile)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if loadedConfig == nil {
+		t.Fatalf("expected configuration %v, got nil", c)
+	}
+
+	if !loadedConfig.TLS {
+		t.Fatalf("expected TLS enabled, got %q", loadedConfig)
+	}
+}
+
+func TestLoadDaemonCliConfigWithExplicitTLSVerifyFalse(t *testing.T) {
+	c := &daemon.Config{}
+	common := &cli.CommonFlags{
+		TLSOptions: &tlsconfig.Options{
+			CAFile: "/tmp/ca.pem",
+		},
+	}
+
+	f, err := ioutil.TempFile("", "docker-config-")
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	configFile := f.Name()
+	f.Write([]byte(`{"tlsverify": false}`))
+	f.Close()
+
+	flags := mflag.NewFlagSet("test", mflag.ContinueOnError)
+	flags.Bool([]string{"-tlsverify"}, false, "")
+	loadedConfig, err := loadDaemonCliConfig(c, flags, common, configFile)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if loadedConfig == nil {
+		t.Fatalf("expected configuration %v, got nil", c)
+	}
+
+	if !loadedConfig.TLS {
+		t.Fatalf("expected TLS enabled, got %q", loadedConfig)
+	}
+}
+
+func TestLoadDaemonCliConfigWithoutTLSVerify(t *testing.T) {
+	c := &daemon.Config{}
+	common := &cli.CommonFlags{
+		TLSOptions: &tlsconfig.Options{
+			CAFile: "/tmp/ca.pem",
+		},
+	}
+
+	f, err := ioutil.TempFile("", "docker-config-")
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	configFile := f.Name()
+	f.Write([]byte(`{}`))
+	f.Close()
+
+	flags := mflag.NewFlagSet("test", mflag.ContinueOnError)
+	loadedConfig, err := loadDaemonCliConfig(c, flags, common, configFile)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if loadedConfig == nil {
+		t.Fatalf("expected configuration %v, got nil", c)
+	}
+
+	if loadedConfig.TLS {
+		t.Fatalf("expected TLS disabled, got %q", loadedConfig)
+	}
+}
+
+func TestLoadDaemonCliConfigWithLogLevel(t *testing.T) {
+	c := &daemon.Config{}
+	common := &cli.CommonFlags{}
+
+	f, err := ioutil.TempFile("", "docker-config-")
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	configFile := f.Name()
+	f.Write([]byte(`{"log-level": "warn"}`))
+	f.Close()
+
+	flags := mflag.NewFlagSet("test", mflag.ContinueOnError)
+	flags.String([]string{"-log-level"}, "", "")
+	loadedConfig, err := loadDaemonCliConfig(c, flags, common, configFile)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if loadedConfig == nil {
+		t.Fatalf("expected configuration %v, got nil", c)
+	}
+	if loadedConfig.LogLevel != "warn" {
+		t.Fatalf("expected warn log level, got %v", loadedConfig.LogLevel)
+	}
+
+	if logrus.GetLevel() != logrus.WarnLevel {
+		t.Fatalf("expected warn log level, got %v", logrus.GetLevel())
+	}
+}
+
+func TestLoadDaemonConfigWithEmbeddedOptions(t *testing.T) {
+	c := &daemon.Config{}
+	common := &cli.CommonFlags{}
+	flags := mflag.NewFlagSet("test", mflag.ContinueOnError)
+	flags.String([]string{"-tlscacert"}, "", "")
+	flags.String([]string{"-log-driver"}, "", "")
+
+	f, err := ioutil.TempFile("", "docker-config-")
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	configFile := f.Name()
+	f.Write([]byte(`{"tlscacert": "/etc/certs/ca.pem", "log-driver": "syslog"}`))
+	f.Close()
+
+	loadedConfig, err := loadDaemonCliConfig(c, flags, common, configFile)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if loadedConfig == nil {
+		t.Fatal("expected configuration, got nil")
+	}
+	if loadedConfig.CommonTLSOptions.CAFile != "/etc/certs/ca.pem" {
+		t.Fatalf("expected CA file path /etc/certs/ca.pem, got %v", loadedConfig.CommonTLSOptions.CAFile)
+	}
+	if loadedConfig.LogConfig.Type != "syslog" {
+		t.Fatalf("expected LogConfig type syslog, got %v", loadedConfig.LogConfig.Type)
+	}
+}
+
+func TestLoadDaemonConfigWithMapOptions(t *testing.T) {
+	c := &daemon.Config{}
+	common := &cli.CommonFlags{}
+	flags := mflag.NewFlagSet("test", mflag.ContinueOnError)
+
+	flags.Var(opts.NewNamedMapOpts("cluster-store-opts", c.ClusterOpts, nil), []string{"-cluster-store-opt"}, "")
+	flags.Var(opts.NewNamedMapOpts("log-opts", c.LogConfig.Config, nil), []string{"-log-opt"}, "")
+
+	f, err := ioutil.TempFile("", "docker-config-")
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	configFile := f.Name()
+	f.Write([]byte(`{
+		"cluster-store-opts": {"kv.cacertfile": "/var/lib/docker/discovery_certs/ca.pem"},
+		"log-opts": {"tag": "test"}
+}`))
+	f.Close()
+
+	loadedConfig, err := loadDaemonCliConfig(c, flags, common, configFile)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if loadedConfig == nil {
+		t.Fatal("expected configuration, got nil")
+	}
+	if loadedConfig.ClusterOpts == nil {
+		t.Fatal("expected cluster options, got nil")
+	}
+
+	expectedPath := "/var/lib/docker/discovery_certs/ca.pem"
+	if caPath := loadedConfig.ClusterOpts["kv.cacertfile"]; caPath != expectedPath {
+		t.Fatalf("expected %s, got %s", expectedPath, caPath)
+	}
+
+	if loadedConfig.LogConfig.Config == nil {
+		t.Fatal("expected log config options, got nil")
+	}
+	if tag := loadedConfig.LogConfig.Config["tag"]; tag != "test" {
+		t.Fatalf("expected log tag `test`, got %s", tag)
+	}
+}

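The map-option tests above exercise nested JSON objects in the daemon configuration file, bound to flags via `opts.NewNamedMapOpts`. A hedged example of a configuration file matching those fixtures (keys and values are taken from the test bodies; treating this as a complete, valid daemon config is an assumption):

```json
{
    "cluster-store-opts": {
        "kv.cacertfile": "/var/lib/docker/discovery_certs/ca.pem"
    },
    "log-opts": {
        "tag": "test"
    }
}
```

After loading, the nested objects land in `loadedConfig.ClusterOpts` and `loadedConfig.LogConfig.Config` respectively, which is exactly what `TestLoadDaemonConfigWithMapOptions` asserts.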
+ 43 - 0
docker/daemon_unix_test.go

@@ -0,0 +1,43 @@
+// +build daemon,!windows
+
+package main
+
+import (
+	"io/ioutil"
+	"testing"
+
+	"github.com/docker/docker/cli"
+	"github.com/docker/docker/daemon"
+	"github.com/docker/docker/pkg/mflag"
+)
+
+func TestLoadDaemonConfigWithNetwork(t *testing.T) {
+	c := &daemon.Config{}
+	common := &cli.CommonFlags{}
+	flags := mflag.NewFlagSet("test", mflag.ContinueOnError)
+	flags.String([]string{"-bip"}, "", "")
+	flags.String([]string{"-ip"}, "", "")
+
+	f, err := ioutil.TempFile("", "docker-config-")
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	configFile := f.Name()
+	f.Write([]byte(`{"bip": "127.0.0.2", "ip": "127.0.0.1"}`))
+	f.Close()
+
+	loadedConfig, err := loadDaemonCliConfig(c, flags, common, configFile)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if loadedConfig == nil {
+		t.Fatalf("expected configuration %v, got nil", c)
+	}
+	if loadedConfig.IP != "127.0.0.2" {
+		t.Fatalf("expected IP 127.0.0.2, got %v", loadedConfig.IP)
+	}
+	if loadedConfig.DefaultIP.String() != "127.0.0.1" {
+		t.Fatalf("expected DefaultIP 127.0.0.1, got %s", loadedConfig.DefaultIP)
+	}
+}

+ 2 - 3
docs/articles/ambassador_pattern_linking.md → docs/admin/ambassador_pattern_linking.md

@@ -1,18 +1,17 @@
 <!--[metadata]>
 +++
+aliases = ["/engine/articles/ambassador_pattern_linking/"]
 title = "Link via an ambassador container"
 description = "Using the Ambassador pattern to abstract (network) services"
 keywords = ["Examples, Usage, links, docker, documentation, examples, names, name,  container naming"]
 [menu.main]
-parent = "smn_administrate"
+parent = "engine_admin"
 weight = 6
 +++
 <![end-metadata]-->
 
 # Link via an ambassador container
 
-## Introduction
-
 Rather than hardcoding network links between a service consumer and
 provider, Docker encourages service portability, for example instead of:
 

+ 0 - 0
docs/articles/b2d_volume_images/add_cd.png → docs/admin/b2d_volume_images/add_cd.png


+ 0 - 0
docs/articles/b2d_volume_images/add_new_controller.png → docs/admin/b2d_volume_images/add_new_controller.png


Some files were not shown because too many files changed in this diff