
Merge pull request #25780 from thaJeztah/1.12.1-docs-cherry-picks

1.12.1 docs cherry picks
Tibor Vass 9 years ago
commit 5d29b79241
35 changed files with 1206 additions and 623 deletions
  1. docs/installation/linux/cruxlinux.md (+4 -5)
  2. docs/installation/linux/ubuntulinux.md (+4 -6)
  3. docs/reference/api/docker_remote_api_v1.21.md (+0 -1)
  4. docs/reference/api/docker_remote_api_v1.24.md (+83 -15)
  5. docs/reference/commandline/index.md (+1 -1)
  6. docs/reference/commandline/network_connect.md (+1 -1)
  7. docs/reference/commandline/network_create.md (+1 -1)
  8. docs/reference/commandline/network_disconnect.md (+1 -1)
  9. docs/reference/commandline/network_inspect.md (+1 -1)
  10. docs/reference/commandline/network_ls.md (+1 -1)
  11. docs/reference/commandline/network_rm.md (+1 -1)
  12. docs/reference/commandline/ps.md (+17 -0)
  13. docs/security/security.md (+5 -4)
  14. docs/security/trust/content_trust.md (+1 -1)
  15. docs/swarm/images/service-vip.png (BIN)
  16. docs/swarm/images/src/service-vip.svg (+0 -0)
  17. docs/swarm/networking.md (+308 -0)
  18. docs/swarm/services.md (+3 -2)
  19. docs/swarm/swarm-tutorial/delete-service.md (+1 -1)
  20. docs/userguide/index.md (+3 -3)
  21. docs/userguide/networking/default_network/binding.md (+2 -2)
  22. docs/userguide/networking/default_network/build-bridges.md (+1 -1)
  23. docs/userguide/networking/default_network/configure-dns.md (+1 -1)
  24. docs/userguide/networking/default_network/container-communication.md (+1 -1)
  25. docs/userguide/networking/default_network/custom-docker0.md (+1 -1)
  26. docs/userguide/networking/default_network/dockerlinks.md (+1 -1)
  27. docs/userguide/networking/dockernetworks.md (+0 -538)
  28. docs/userguide/networking/get-started-overlay.md (+65 -14)
  29. docs/userguide/networking/index.md (+563 -11)
  30. docs/userguide/networking/menu.md (+22 -0)
  31. docs/userguide/networking/overlay-security-model.md (+66 -0)
  32. docs/userguide/networking/work-with-networks.md (+1 -1)
  33. docs/userguide/storagedriver/aufs-driver.md (+17 -0)
  34. docs/userguide/storagedriver/overlayfs-driver.md (+28 -6)
  35. man/docker-run.1.md (+1 -1)

+ 4 - 5
docs/installation/linux/cruxlinux.md

@@ -11,8 +11,7 @@ parent = "engine_linux"
 
 # CRUX Linux
 
-Installing on CRUX Linux can be handled via the contrib ports from
-[James Mills](http://prologic.shortcircuit.net.au/) and are included in the
+Installing on CRUX Linux can be done using the
 official [contrib](http://crux.nu/portdb/?a=repo&q=contrib) ports:
 
 - docker
@@ -57,9 +56,9 @@ To start on system boot:
 
 ## Images
 
-There is a CRUX image maintained by [James Mills](http://prologic.shortcircuit.net.au/)
-as part of the Docker "Official Library" of images. To use this image simply pull it
-or use it as part of your `FROM` line in your `Dockerfile(s)`.
+There is a CRUX image as part of the Docker "Official Library" of images.
+To use this image, simply pull it or use it as part of your `FROM` line in
+your `Dockerfile(s)`.
 
     $ docker pull crux
     $ docker run -i -t crux
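+
+If you would rather build on the image, a minimal `Dockerfile` (purely an
+illustrative sketch) could start from it:
+
+    FROM crux:latest
+    CMD ["/bin/sh"]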

+ 4 - 6
docs/installation/linux/ubuntulinux.md

@@ -119,10 +119,10 @@ packages from the new repository:
 - Ubuntu Trusty 14.04 (LTS)
 
 For Ubuntu Trusty, Wily, and Xenial, it's recommended to install the
-`linux-image-extra` kernel package. The `linux-image-extra` package
+`linux-image-extra-*` kernel packages. The `linux-image-extra-*` packages
-allows you use the `aufs` storage driver.
+allow you to use the `aufs` storage driver.
 
-To install the `linux-image-extra` package for your kernel version:
+To install the `linux-image-extra-*` packages:
 
 1. Open a terminal on your Ubuntu host.
 
@@ -130,14 +130,12 @@ To install the `linux-image-extra` package for your kernel version:
 
         $ sudo apt-get update
 
-3. Install the recommended package.
+3. Install the recommended packages.
 
-        $ sudo apt-get install linux-image-extra-$(uname -r)
+        $ sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
 
 4. Go ahead and install Docker.
 
-If you are installing on Ubuntu 14.04 or 12.04, `apparmor` is required.  You can install it using: `apt-get install apparmor`
-
 #### Ubuntu Precise 12.04 (LTS)
 
 For Ubuntu Precise, Docker requires the 3.13 kernel version. If your kernel

+ 0 - 1
docs/reference/api/docker_remote_api_v1.21.md

@@ -200,7 +200,6 @@ Create a container
              "RestartPolicy": { "Name": "", "MaximumRetryCount": 0 },
              "NetworkMode": "bridge",
              "Devices": [],
-             "Sysctls": { "net.ipv4.ip_forward": "1" },
              "Ulimits": [{}],
              "LogConfig": { "Type": "json-file", "Config": {} },
              "SecurityOpt": [],

+ 83 - 15
docs/reference/api/docker_remote_api_v1.24.md

@@ -327,6 +327,7 @@ Create a container
              "RestartPolicy": { "Name": "", "MaximumRetryCount": 0 },
              "NetworkMode": "bridge",
              "Devices": [],
+             "Sysctls": { "net.ipv4.ip_forward": "1" },
              "Ulimits": [{}],
              "LogConfig": { "Type": "json-file", "Config": {} },
              "SecurityOpt": [],
@@ -4084,6 +4085,54 @@ JSON Parameters:
 
 ## 3.8 Swarm
 
+### Inspect swarm
+
+
+`GET /swarm`
+
+Inspect swarm
+
+**Example response**:
+
+    HTTP/1.1 200 OK
+    Content-Type: application/json
+
+    {
+      "CreatedAt" : "2016-08-15T16:00:20.349727406Z",
+      "Spec" : {
+        "Dispatcher" : {
+          "HeartbeatPeriod" : 5000000000
+        },
+        "Orchestration" : {
+          "TaskHistoryRetentionLimit" : 10
+        },
+        "CAConfig" : {
+          "NodeCertExpiry" : 7776000000000000
+        },
+        "Raft" : {
+          "LogEntriesForSlowFollowers" : 500,
+          "HeartbeatTick" : 1,
+          "SnapshotInterval" : 10000,
+          "ElectionTick" : 3
+        },
+        "TaskDefaults" : {},
+        "Name" : "default"
+      },
+     "JoinTokens" : {
+        "Worker" : "SWMTKN-1-1h8aps2yszaiqmz2l3oc5392pgk8e49qhx2aj3nyv0ui0hez2a-6qmn92w6bu3jdvnglku58u11a",
+        "Manager" : "SWMTKN-1-1h8aps2yszaiqmz2l3oc5392pgk8e49qhx2aj3nyv0ui0hez2a-8llk83c4wm9lwioey2s316r9l"
+     },
+     "ID" : "70ilmkj2f6sp2137c753w2nmt",
+     "UpdatedAt" : "2016-08-15T16:32:09.623207604Z",
+     "Version" : {
+       "Index" : 51
+    }
+  }
+
+**Status codes**:
+
+- **200** - no error
+
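+As a sketch (assuming a manager node and a `curl` build with `--unix-socket`
+support, available since curl 7.40), you can query this endpoint against the
+local Engine socket:
+
+    $ curl --unix-socket /var/run/docker.sock http://localhost/v1.24/swarm
+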
 ### Initialize a new swarm
 
 
@@ -4403,7 +4452,10 @@ List services
 
 `POST /services/create`
 
-Create a service
+Create a service. When using this endpoint to create a service that uses a
+private repository from the registry, the `X-Registry-Auth` header must be
+used to include a base64-encoded AuthConfig object. Refer to the [create an
+image](#create-an-image) section for more details.
 
 **Example request**:
 
@@ -4483,7 +4535,7 @@ Create a service
     Content-Type: application/json
 
     {
-      "Id":"ak7w3gjqoa3kuz8xcpnyy0pvl"
+      "ID":"ak7w3gjqoa3kuz8xcpnyy0pvl"
     }
 
 **Status codes**:
@@ -4494,10 +4546,8 @@ Create a service
 
 JSON Parameters:
 
-- **Annotations** – Optional medata to associate with the service.
-    - **Name** – User-defined name for the service.
-    - **Labels** – A map of labels to associate with the service (e.g.,
-      `{"key":"value"[,"key2":"value2"]}`).
+- **Name** – User-defined name for the service.
+- **Labels** – A map of labels to associate with the service (e.g., `{"key":"value"[,"key2":"value2"]}`).
 - **TaskTemplate** – Specification of the tasks to start as part of the new service.
     - **ContainerSpec** - Container settings for containers started as part of this task.
         - **Image** – A string specifying the image name to use for the container.
@@ -4563,6 +4613,14 @@ JSON Parameters:
           of: `"Ports": { "<port>/<tcp|udp>: {}" }`
     - **VirtualIPs**
 
+**Request Headers**:
+
+- **Content-type** – Set to `"application/json"`.
+- **X-Registry-Auth** – base64-encoded AuthConfig object, containing either
+  login information, or a token. Refer to the [create an image](#create-an-image)
+  section for more details.
+
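+For illustration, a hypothetical way to build the `X-Registry-Auth` value in a
+shell (GNU `base64` shown; the credentials and the `service-spec.json` payload
+file are placeholders):
+
+    $ AUTH=$(printf '%s' '{"username":"jdoe","password":"secret","serveraddress":"https://index.docker.io/v1/"}' | base64 -w0)
+    $ curl --unix-socket /var/run/docker.sock \
+        -H "Content-Type: application/json" \
+        -H "X-Registry-Auth: $AUTH" \
+        -X POST http://localhost/v1.24/services/create \
+        -d @service-spec.json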
+
 ### Remove a service
 
 
@@ -4576,11 +4634,11 @@ Stop and remove the service `id`
 
 **Example response**:
 
-    HTTP/1.1 204 No Content
+    HTTP/1.1 200 OK
 
 **Status codes**:
 
--   **204** – no error
+-   **200** – no error
 -   **404** – no such service
 -   **500** – server error
 
@@ -4667,11 +4725,16 @@ Return information on the service `id`.
 
 `POST /services/(id or name)/update`
 
-Update the service `id`.
+Update a service. When using this endpoint to update a service that uses a
+private repository from the registry, the `X-Registry-Auth` header can be used
+to update the authentication information that is stored for the service.
+The header contains a base64-encoded AuthConfig object. Refer to the [create an
+image](#create-an-image) section for more details.
 
 **Example request**:
 
-    POST /services/1cb4dnqcyx6m66g2t538x3rxha/update HTTP/1.1
+    POST /services/1cb4dnqcyx6m66g2t538x3rxha/update?version=23 HTTP/1.1
+    Content-Type: application/json
 
     {
       "Name": "top",
@@ -4713,10 +4776,8 @@ Update the service `id`.
 
 **JSON Parameters**:
 
-- **Annotations** – Optional medata to associate with the service.
-    - **Name** – User-defined name for the service.
-    - **Labels** – A map of labels to associate with the service (e.g.,
-      `{"key":"value"[,"key2":"value2"]}`).
+- **Name** – User-defined name for the service.
+- **Labels** – A map of labels to associate with the service (e.g., `{"key":"value"[,"key2":"value2"]}`).
 - **TaskTemplate** – Specification of the tasks to start as part of the new service.
     - **ContainerSpec** - Container settings for containers started as part of this task.
         - **Image** – A string specifying the image name to use for the container.
@@ -4780,12 +4841,19 @@ Update the service `id`.
 - **version** – The version number of the service object being updated. This is
   required to avoid conflicting writes.
 
+**Request Headers**:
+
+- **Content-type** – Set to `"application/json"`.
+- **X-Registry-Auth** – base64-encoded AuthConfig object, containing either
+  login information, or a token. Refer to the [create an image](#create-an-image)
+  section for more details.
+
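+As a sketch of the optimistic-locking flow this endpoint expects (assuming
+`jq` is installed; the `top` service name and `service-spec.json` payload are
+placeholders):
+
+    $ VERSION=$(curl -s --unix-socket /var/run/docker.sock \
+        http://localhost/v1.24/services/top | jq '.Version.Index')
+    $ curl --unix-socket /var/run/docker.sock \
+        -H "Content-Type: application/json" \
+        -X POST "http://localhost/v1.24/services/top/update?version=$VERSION" \
+        -d @service-spec.json
+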
 **Status codes**:
 
 -   **200** – no error
 -   **404** – no such service
 -   **500** – server error
-
+ 
 ## 3.10 Tasks
 
 **Note**: Task operations require the engine to be part of a swarm.

+ 1 - 1
docs/reference/commandline/index.md

@@ -39,7 +39,6 @@ read the [`dockerd`](dockerd.md) reference page.
 |:--------|:-------------------------------------------------------------------|
 | [build](build.md) |  Build an image from a Dockerfile                        |
 | [commit](commit.md) | Create a new image from a container's changes          |
-| [export](export.md) | Export a container's filesystem as a tar archive       |
 | [history](history.md) | Show the history of an image                         |
 | [images](images.md) | List images                                            |
 | [import](import.md) | Import the contents from a tarball to create a filesystem image |
@@ -58,6 +57,7 @@ read the [`dockerd`](dockerd.md) reference page.
 | [diff](diff.md) | Inspect changes on a container's filesystem                |
 | [events](events.md) | Get real time events from the server                   |
 | [exec](exec.md) | Run a command in a running container                       |
+| [export](export.md) | Export a container's filesystem as a tar archive       |
 | [kill](kill.md) | Kill a running container                                   |
 | [logs](logs.md) | Fetch the logs of a container                              |
 | [pause](pause.md) | Pause all processes within a container                   |

+ 1 - 1
docs/reference/commandline/network_connect.md

@@ -93,5 +93,5 @@ You can connect a container to one or more networks. The networks need not be th
 * [network disconnect](network_disconnect.md)
 * [network ls](network_ls.md)
 * [network rm](network_rm.md)
-* [Understand Docker container networks](../../userguide/networking/dockernetworks.md)
+* [Understand Docker container networks](../../userguide/networking/index.md)
 * [Work with networks](../../userguide/networking/work-with-networks.md)

+ 1 - 1
docs/reference/commandline/network_create.md

@@ -192,4 +192,4 @@ to create an externally isolated `overlay` network, you can specify the
 * [network disconnect](network_disconnect.md)
 * [network ls](network_ls.md)
 * [network rm](network_rm.md)
-* [Understand Docker container networks](../../userguide/networking/dockernetworks.md)
+* [Understand Docker container networks](../../userguide/networking/index.md)

+ 1 - 1
docs/reference/commandline/network_disconnect.md

@@ -34,4 +34,4 @@ Disconnects a container from a network. The container must be running to disconn
 * [network create](network_create.md)
 * [network ls](network_ls.md)
 * [network rm](network_rm.md)
-* [Understand Docker container networks](../../userguide/networking/dockernetworks.md)
+* [Understand Docker container networks](../../userguide/networking/index.md)

+ 1 - 1
docs/reference/commandline/network_inspect.md

@@ -119,4 +119,4 @@ $ docker network inspect simple-network
 * [network create](network_create.md)
 * [network ls](network_ls.md)
 * [network rm](network_rm.md)
-* [Understand Docker container networks](../../userguide/networking/dockernetworks.md)
+* [Understand Docker container networks](../../userguide/networking/index.md)

+ 1 - 1
docs/reference/commandline/network_ls.md

@@ -176,4 +176,4 @@ attached.
 * [network create](network_create.md)
 * [network inspect](network_inspect.md)
 * [network rm](network_rm.md)
-* [Understand Docker container networks](../../userguide/networking/dockernetworks.md)
+* [Understand Docker container networks](../../userguide/networking/index.md)

+ 1 - 1
docs/reference/commandline/network_rm.md

@@ -50,4 +50,4 @@ deletion.
 * [network create](network_create.md)
 * [network ls](network_ls.md)
 * [network inspect](network_inspect.md)
-* [Understand Docker container networks](../../userguide/networking/dockernetworks.md)
+* [Understand Docker container networks](../../userguide/networking/index.md)

+ 17 - 0
docs/reference/commandline/ps.md

@@ -138,6 +138,23 @@ ea09c3c82f6e        registry:latest   /srv/run.sh            2 weeks ago
 48ee228c9464        fedora:20         bash                   2 weeks ago         Exited (0) 2 weeks ago                              tender_torvalds
 ```
 
+#### Killed containers
+
+You can use a filter to locate containers that exited with a status of `137`,
+meaning a `SIGKILL(9)` killed them.
+
+```bash
+$ docker ps -a --filter 'exited=137'
+CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS                       PORTS               NAMES
+b3e1c0ed5bfe        ubuntu:latest       "sleep 1000"           12 seconds ago      Exited (137) 5 seconds ago                       grave_kowalevski
+a2eb5558d669        redis:latest        "/entrypoint.sh redi   2 hours ago         Exited (137) 2 hours ago                         sharp_lalande
+```
+
+Any of these events result in a `137` status:
+
+* the `init` process of the container is killed manually
+* `docker kill` kills the container
+* the Docker daemon restarts, which kills all running containers
+
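+As a quick way to reproduce such a container (a hypothetical test, assuming an
+`ubuntu` image is available locally):
+
+```bash
+$ docker run -d --name kill-me ubuntu sleep 1000
+$ docker kill kill-me
+$ docker ps -a --filter 'exited=137' --filter 'name=kill-me'
+```
+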
 #### Status
 
 The `status` filter matches containers by status. You can filter using

+ 5 - 4
docs/security/security.md

@@ -120,10 +120,10 @@ certificates](https.md).
 
 The daemon is also potentially vulnerable to other inputs, such as image
 loading from either disk with 'docker load', or from the network with
-'docker pull'. As of Docker 1.3.2, images are now extracted in a chrooted 
-subprocess on Linux/Unix platforms, being the first-step in a wider effort 
-toward privilege separation. As of Docker 1.10.0, all images are stored and 
-accessed by the cryptographic checksums of their contents, limiting the 
+'docker pull'. As of Docker 1.3.2, images are now extracted in a chrooted
+subprocess on Linux/Unix platforms, being the first-step in a wider effort
+toward privilege separation. As of Docker 1.10.0, all images are stored and
+accessed by the cryptographic checksums of their contents, limiting the
 possibility of an attacker causing a collision with an existing image.
 
 Eventually, it is expected that the Docker daemon will run restricted
@@ -272,3 +272,4 @@ pull requests, and communicate via the mailing list.
 * [Seccomp security profiles for Docker](../security/seccomp.md)
 * [AppArmor security profiles for Docker](../security/apparmor.md)
 * [On the Security of Containers (2014)](https://medium.com/@ewindisch/on-the-security-of-containers-2c60ffe25a9e)
+* [Docker swarm mode overlay network security model](../userguide/networking/overlay-security-model.md)

+ 1 - 1
docs/security/trust/content_trust.md

@@ -33,7 +33,7 @@ and [Notary](../../reference/commandline/cli.md#notary) configuration
 for the docker client for more options.
 
 Once content trust is enabled, image publishers can sign their images. Image consumers can
-ensure that the images they use are signed. publishers and consumers can be
+ensure that the images they use are signed. Publishers and consumers can be
 individuals alone or in organizations. Docker's content trust supports users and
 automated processes such as builds.
 

BIN
docs/swarm/images/service-vip.png


File diff suppressed because it is too large
+ 0 - 0
docs/swarm/images/src/service-vip.svg


+ 308 - 0
docs/swarm/networking.md

@@ -0,0 +1,308 @@
+<!--[metadata]>
++++
+title = "Attach services to an overlay network"
+description = "Use swarm mode networking features"
+keywords = ["guide", "swarm mode", "swarm", "network"]
+[menu.main]
+identifier="networking-guide"
+parent="engine_swarm"
+weight=16
++++
+<![end-metadata]-->
+
+# Attach services to an overlay network
+
+Docker Engine swarm mode natively supports **overlay networks**, so you can
+enable container-to-container networks. When you use swarm mode, you don't need
+an external key-value store. Features of swarm mode overlay networks include the
+following:
+
+* You can attach multiple services to the same network.
+* By default, **service discovery** assigns a virtual IP address (VIP) and DNS
+entry to each service in the swarm, making it available by its service name to
+containers on the same network.
+* You can configure the service to use DNS round-robin instead of a VIP.
+
+In order to use overlay networks in the swarm, you need to have the following
+ports open between the swarm nodes before you enable swarm mode:
+
+* Port `7946` TCP/UDP for container network discovery.
+* Port `4789` UDP for the container overlay network.
+
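+As an illustration only (assuming hosts that use `ufw`; adapt the commands to
+whatever firewall your nodes actually run):
+
+```bash
+$ sudo ufw allow 7946/tcp
+$ sudo ufw allow 7946/udp
+$ sudo ufw allow 4789/udp
+```
+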
+## Create an overlay network in a swarm
+
+When you run Docker Engine in swarm mode, you can run `docker network create`
+from a manager node to create an overlay network. For instance, to create a
+network named `my-network`:
+
+```
+$ docker network create \
+  --driver overlay \
+  --subnet 10.0.9.0/24 \
+  --opt encrypted \
+  my-network
+
+273d53261bcdfda5f198587974dae3827e947ccd7e74a41bf1f482ad17fa0d33
+```
+
+By default nodes in the swarm encrypt traffic between themselves and other
+nodes. The optional `--opt encrypted` flag enables an additional layer of
+encryption in the overlay driver for vxlan traffic between containers on
+different nodes. For more information, refer to [Docker swarm mode overlay network security model](../userguide/networking/overlay-security-model.md).
+
+The `--subnet` flag specifies the subnet for use with the overlay network. When
+you don't specify a subnet, the swarm manager automatically chooses a subnet and
+assigns it to the network. On some older kernels, including kernel 3.10,
+automatically assigned addresses may overlap with another subnet in your
+infrastructure. Such overlaps can cause connectivity issues or failures with
+containers connected to the network.
+
+Before you attach a service to the network, the network only extends to manager
+nodes. You can run `docker network ls` to view the network:
+
+```bash
+$ docker network ls
+
+NETWORK ID          NAME        DRIVER   SCOPE
+f9145f09b38b        bridge      bridge   local
+..snip..
+bd0befxwiva4        my-network  overlay  swarm
+```
+
+The `swarm` scope indicates that the network is available for use with services
+deployed to the swarm. After you create a service attached to the network, the
+swarm only extends the network to worker nodes where the scheduler places tasks
+for the service. On workers without tasks running for a service attached to the
+network, `network ls` does not display the network.
+
+## Attach a service to an overlay network
+
+To attach a service to an overlay network, pass the `--network` flag when you
+create a service. For example, to create an nginx service attached to a
+network called `my-network`:
+
+```bash
+$ docker service create \
+  --replicas 3 \
+  --name my-web \
+  --network my-network \
+  nginx
+```
+
+>**Note:** You have to create the network before you can attach a service to it.
+
+The containers for the tasks in the service can connect to one another on the
+overlay network. The swarm extends the network to all the nodes with `Running`
+tasks for the service.
+
+From a manager node, run `docker service ps <SERVICE>` to view the nodes where
+tasks are running for the service:
+
+```bash
+$ docker service ps my-web
+
+ID                         NAME      IMAGE  NODE   DESIRED STATE  CURRENT STATE               ERROR
+63s86gf6a0ms34mvboniev7bs  my-web.1  nginx  node1  Running        Running 58 seconds ago
+6b3q2qbjveo4zauc6xig7au10  my-web.2  nginx  node2  Running        Running 58 seconds ago
+66u2hcrz0miqpc8h0y0f3v7aw  my-web.3  nginx  node3  Running        Running about a minute ago
+```
+
+![service vip image](images/service-vip.png)
+
+You can inspect the network from any node with a `Running` task for a service
+attached to the network:
+
+```bash
+$ docker network inspect <NETWORK>
+```
+
+The network information includes a list of the containers on the node that are
+attached to the network. For instance:
+
+```bash
+$ docker network inspect my-network
+[
+    {
+        "Name": "my-network",
+        "Id": "7m2rjx0a97n88wzr4nu8772r3",
+        "Scope": "swarm",
+        "Driver": "overlay",
+        "EnableIPv6": false,
+        "IPAM": {
+            "Driver": "default",
+            "Options": null,
+            "Config": [
+                {
+                    "Subnet": "10.0.9.0/24",
+                    "Gateway": "10.0.9.1"
+                }
+            ]
+        },
+        "Internal": false,
+        "Containers": {
+            "404d1dec939a021678132a35259c3604b9657649437e59060621a17edae7a819": {
+                "Name": "my-web.1.63s86gf6a0ms34mvboniev7bs",
+                "EndpointID": "3c9588d04db9bc2bf8749cb079689a3072c44c68e544944cbea8e4bc20eb7de7",
+                "MacAddress": "02:42:0a:00:09:03",
+                "IPv4Address": "10.0.9.3/24",
+                "IPv6Address": ""
+            }
+        },
+        "Options": {
+            "com.docker.network.driver.overlay.vxlanid_list": "257"
+        },
+        "Labels": {}
+    }
+]
+```
+
+In the example above, the container `my-web.1.63s86gf6a0ms34mvboniev7bs` for
+the `my-web` service is attached to the `my-network` network on node1.
+
+## Use swarm mode service discovery
+
+By default, when you create a service attached to a network, the swarm assigns
+the service a VIP. The VIP maps to a DNS alias based upon the service name.
+Containers on the network share DNS mappings for the service via gossip, so
+any container on the network can access the service via its service name.
+
+You don't need to expose service-specific ports to make the service
+available to other services on the same overlay network. The swarm's internal
+load balancer automatically distributes requests to the service VIP among the
+active tasks.
+
+You can inspect the service to view the virtual IP. For example:
+
+```bash
+$ docker service inspect \
+  --format='{{json .Endpoint.VirtualIPs}}' \
+  my-web
+
+[{"NetworkID":"7m2rjx0a97n88wzr4nu8772r3" "Addr":"10.0.0.2/24"}]
+```
+
+The following example shows how you can add a `busybox` service on the same
+network as the `nginx` service, so that the busybox service can access `nginx`
+using the DNS name `my-web`:
+
+1. From a manager node, deploy a busybox service to the same network as
+`my-web`:
+
+    ```bash
+    $ docker service create \
+      --name my-busybox \
+      --network my-network \
+      busybox \
+      sleep 3000
+    ```
+
+2. Look up the node where `my-busybox` is running:
+
+    ```bash
+    $ docker service ps my-busybox
+
+    ID                         NAME          IMAGE    NODE   DESIRED STATE  CURRENT STATE          ERROR
+    1dok2cmx2mln5hbqve8ilnair  my-busybox.1  busybox  node1  Running        Running 5 seconds ago
+    ```
+
+3. From the node where the busybox task is running, open an interactive shell to
+the busybox container:
+
+    ```bash
+    $ docker exec -it my-busybox.1.1dok2cmx2mln5hbqve8ilnair /bin/sh
+    ```
+
+    You can deduce the container name as `<TASK-NAME>`+`<ID>`. Alternatively,
+    you can run `docker ps` on the node where the task is running.
+
+4. From inside the busybox container, query the DNS to view the VIP for the
+`my-web` service:
+
+    ```bash
+    $ nslookup my-web
+
+    Server:    127.0.0.11
+    Address 1: 127.0.0.11
+
+    Name:      my-web
+    Address 1: 10.0.9.2 ip-10-0-9-2.us-west-2.compute.internal
+    ```
+
+    >**Note:** The examples here use `nslookup`, but you can use `dig` or any
+    available DNS query tool.
+
+5. From inside the busybox container, query the DNS using the special name
+`tasks.<SERVICE-NAME>` to find the IP addresses of all the containers for the
+`my-web` service:
+
+    ```bash
+    $ nslookup tasks.my-web
+
+    Server:    127.0.0.11
+    Address 1: 127.0.0.11
+
+    Name:      tasks.my-web
+    Address 1: 10.0.9.4 my-web.2.6b3q2qbjveo4zauc6xig7au10.my-network
+    Address 2: 10.0.9.3 my-web.1.63s86gf6a0ms34mvboniev7bs.my-network
+    Address 3: 10.0.9.5 my-web.3.66u2hcrz0miqpc8h0y0f3v7aw.my-network
+    ```
+
+6. From inside the busybox container, run `wget` to access the nginx web server
+running in the `my-web` service:
+
+    ```bash
+    $ wget -O- my-web
+
+    Connecting to my-web (10.0.9.2:80)
+    <!DOCTYPE html>
+    <html>
+    <head>
+    <title>Welcome to nginx!</title>
+    ...snip...
+    ```
+
+    The swarm load balancer automatically routes the HTTP request for the
+    service's VIP to an active task. It distributes subsequent requests to
+    other tasks using round-robin selection.
+
+## Use DNS round-robin for a service
+
+You can configure the service to use DNS round-robin directly, without a VIP,
+by passing `--endpoint-mode dnsrr` when you create the service. DNS
+round-robin is useful in cases where you want to use your own load balancer.
+
+The following example shows a service with `dnsrr` endpoint mode:
+
+```bash
+$ docker service create \
+  --replicas 3 \
+  --name my-dnsrr-service \
+  --network my-network \
+  --endpoint-mode dnsrr \
+  nginx
+```
+
+When you query the DNS for the service name, the DNS service returns the IP
+addresses for all the task containers:
+
+```bash
+$ nslookup my-dnsrr-service
+Server:    127.0.0.11
+Address 1: 127.0.0.11
+
+Name:      my-dnsrr-service
+Address 1: 10.0.9.8 my-dnsrr-service.1.bd3a67p61by5dfdkyk7kog7pr.my-network
+Address 2: 10.0.9.10 my-dnsrr-service.3.0sb1jxr99bywbvzac8xyw73b1.my-network
+Address 3: 10.0.9.9 my-dnsrr-service.2.am6fx47p3bropyy2dy4f8hofb.my-network
+```
+
+## Confirm VIP connectivity
+
+In general, we recommend you use `dig`, `nslookup`, or another DNS query tool
+to test access to the service name via DNS. Because a VIP is a logical IP,
+`ping` is not the right tool to confirm VIP connectivity.
+
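+For example, from a container attached to the network (assuming `dig` is
+available in the image; the address shown is illustrative):
+
+```bash
+$ dig +short my-web
+
+10.0.9.2
+```
+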
+## Learn More
+
+* [Deploy services to a swarm](services.md)
+* [Swarm administration guide](admin_guide.md)
+* [Docker Engine command line reference](../reference/commandline/index.md)
+* [Swarm mode tutorial](swarm-tutorial/index.md)

+ 3 - 2
docs/swarm/services.md

@@ -213,8 +213,9 @@ $ docker service create \
 
 The swarm extends `my-network` to each node running the service.
 
-<!-- TODO when overlay-security-model is published
-For more information, refer to [Note on Docker 1.12 Overlay Network Security Model](../userguide/networking/overlay-security-model.md).-->
+For more information on overlay networking and service discovery, refer to
+[Attach services to an overlay network](networking.md). See also
+[Docker swarm mode overlay network security model](../userguide/networking/overlay-security-model.md).
 
 ## Configure update behavior
 

+ 1 - 1
docs/swarm/swarm-tutorial/delete-service.md

@@ -19,7 +19,7 @@ you can delete the service from the swarm.
 run your manager node. For example, the tutorial uses a machine named
 `manager1`.
 
-2. Run `docker service remove helloworld` to remove the `helloworld` service.
+2. Run `docker service rm helloworld` to remove the `helloworld` service.
 
     ```
     $ docker service rm helloworld

+ 3 - 3
docs/userguide/index.md

@@ -43,7 +43,7 @@ This guide helps users learn how to use Docker Engine.
 
 ## Configure networks
 
-- [Understand Docker container networks](networking/dockernetworks.md)
+- [Understand Docker container networks](networking/index.md)
 - [Embedded DNS server in user-defined networks](networking/configure-dns.md)
 - [Get started with multi-host networking](networking/get-started-overlay.md)
 - [Work with network commands](networking/work-with-networks.md)
@@ -55,8 +55,8 @@ This guide helps users learn how to use Docker Engine.
 - [Binding container ports to the host](networking/default_network/binding.md)
 - [Build your own bridge](networking/default_network/build-bridges.md)
 - [Configure container DNS](networking/default_network/configure-dns.md)
-- [Customize the docker0 bridge](networking/default_network/custom-docker0.md)  
-- [IPv6 with Docker](networking/default_network/ipv6.md)  
+- [Customize the docker0 bridge](networking/default_network/custom-docker0.md)
+- [IPv6 with Docker](networking/default_network/ipv6.md)
 
 ## Misc
 

+ 2 - 2
docs/userguide/networking/default_network/binding.md

@@ -12,7 +12,7 @@ parent = "smn_networking_def"
 
 The information in this section explains binding container ports within the Docker default bridge. This is a `bridge` network named `bridge` created automatically when you install Docker.
 
-> **Note**: The [Docker networks feature](../dockernetworks.md) allows you to
+> **Note**: The [Docker networks feature](../index.md) allows you to
 create user-defined networks in addition to the default bridge network.
 
 By default Docker containers can make connections to the outside world, but the
@@ -100,6 +100,6 @@ address: this alternative is preferred for performance reasons.
 
 ## Related information
 
-- [Understand Docker container networks](../dockernetworks.md)
+- [Understand Docker container networks](../index.md)
 - [Work with network commands](../work-with-networks.md)
 - [Legacy container links](dockerlinks.md)

+ 1 - 1
docs/userguide/networking/default_network/build-bridges.md

@@ -14,7 +14,7 @@ This section explains how to build your own bridge to replace the Docker default
 bridge. This is a `bridge` network named `bridge` created automatically when you
 install Docker.
 
-> **Note**: The [Docker networks feature](../dockernetworks.md) allows you to
+> **Note**: The [Docker networks feature](../index.md) allows you to
 create user-defined networks in addition to the default bridge network.
 
 You can set up your own bridge before starting Docker and use `-b BRIDGE` or

+ 1 - 1
docs/userguide/networking/default_network/configure-dns.md

@@ -14,7 +14,7 @@ The information in this section explains configuring container DNS within
 the Docker default bridge. This is a `bridge` network named `bridge` created
 automatically when you install Docker.  
 
-> **Note**: The [Docker networks feature](../dockernetworks.md) allows you to create user-defined networks in addition to the default bridge network. Please refer to the [Docker Embedded DNS](../configure-dns.md) section for more information on DNS configurations in user-defined networks.
+> **Note**: The [Docker networks feature](../index.md) allows you to create user-defined networks in addition to the default bridge network. Please refer to the [Docker Embedded DNS](../configure-dns.md) section for more information on DNS configurations in user-defined networks.
 
 How can Docker supply each container with a hostname and DNS configuration, without having to build a custom image with the hostname written inside?  Its trick is to overlay three crucial `/etc` files inside the container with virtual files where it can write fresh information.  You can see this by running `mount` inside a container:
 

+ 1 - 1
docs/userguide/networking/default_network/container-communication.md

@@ -14,7 +14,7 @@ The information in this section explains container communication within the
 Docker default bridge. This is a `bridge` network named `bridge` created
 automatically when you install Docker.  
 
-**Note**: The [Docker networks feature](../dockernetworks.md) allows you to create user-defined networks in addition to the default bridge network.
+**Note**: The [Docker networks feature](../index.md) allows you to create user-defined networks in addition to the default bridge network.
 
 ## Communicating to the outside world
 

+ 1 - 1
docs/userguide/networking/default_network/custom-docker0.md

@@ -12,7 +12,7 @@ parent = "smn_networking_def"
 
 The information in this section explains how to customize the Docker default bridge. This is a `bridge` network named `bridge` created automatically when you install Docker.  
 
-**Note**: The [Docker networks feature](../dockernetworks.md) allows you to create user-defined networks in addition to the default bridge network.
+**Note**: The [Docker networks feature](../index.md) allows you to create user-defined networks in addition to the default bridge network.
 
 By default, the Docker server creates and configures the host system's `docker0` interface as an _Ethernet bridge_ inside the Linux kernel that can pass packets back and forth between other physical or virtual network interfaces so that they behave as a single Ethernet network.
 

+ 1 - 1
docs/userguide/networking/default_network/dockerlinks.md

@@ -13,7 +13,7 @@ weight=-2
 
 The information in this section explains legacy container links within the Docker default bridge. This is a `bridge` network named `bridge` created automatically when you install Docker.
 
-Before the [Docker networks feature](../dockernetworks.md), you could use the
+Before the [Docker networks feature](../index.md), you could use the
 Docker link feature to allow containers to discover each other and securely
 transfer information about one container to another container. With the
 introduction of the Docker networks feature, you can still create links but they

+ 0 - 538
docs/userguide/networking/dockernetworks.md

@@ -1,538 +0,0 @@
-<!--[metadata]>
-+++
-title = "Docker container networking"
-description = "How do we connect docker containers within and across hosts ?"
-keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
-[menu.main]
-parent = "smn_networking"
-weight = -5
-+++
-<![end-metadata]-->
-
-# Understand Docker container networks
-
-To build web applications that act in concert but do so securely, use the Docker
-networks feature. Networks, by definition, provide complete isolation for
-containers. So, it is important to have control over the networks your
-applications run on. Docker container networks give you that control.
-
-This section provides an overview of the default networking behavior that Docker
-Engine delivers natively. It describes the type of networks created by default
-and how to create your own, user-defined networks. It also describes the
-resources required to create networks on a single host or across a cluster of
-hosts.
-
-## Default Networks
-
-When you install Docker, it creates three networks automatically. You can list
-these networks using the `docker network ls` command:
-
-```
-$ docker network ls
-
-NETWORK ID          NAME                DRIVER
-7fca4eb8c647        bridge              bridge
-9f904ee27bf5        none                null
-cf03ee007fb4        host                host
-```
-
-Historically, these three networks are part of Docker's implementation. When
-you run a container you can use the `--network` flag to specify which network you
-want to run a container on. These three networks are still available to you.
-
-The `bridge` network represents the `docker0` network present in all Docker
-installations. Unless you specify otherwise with the `docker run
---network=<NETWORK>` option, the Docker daemon connects containers to this network
-by default. You can see this bridge as part of a host's network stack by using
-the `ifconfig` command on the host.
-
-```
-$ ifconfig
-
-docker0   Link encap:Ethernet  HWaddr 02:42:47:bc:3a:eb  
-          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
-          inet6 addr: fe80::42:47ff:febc:3aeb/64 Scope:Link
-          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
-          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
-          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
-          collisions:0 txqueuelen:0
-          RX bytes:1100 (1.1 KB)  TX bytes:648 (648.0 B)
-```
-
-The `none` network adds a container to a container-specific network stack. That container lacks a network interface. Attaching to such a container and looking at its stack you see this:
-
-```
-$ docker attach nonenetcontainer
-
-root@0cb243cd1293:/# cat /etc/hosts
-127.0.0.1	localhost
-::1	localhost ip6-localhost ip6-loopback
-fe00::0	ip6-localnet
-ff00::0	ip6-mcastprefix
-ff02::1	ip6-allnodes
-ff02::2	ip6-allrouters
-root@0cb243cd1293:/# ifconfig
-lo        Link encap:Local Loopback  
-          inet addr:127.0.0.1  Mask:255.0.0.0
-          inet6 addr: ::1/128 Scope:Host
-          UP LOOPBACK RUNNING  MTU:65536  Metric:1
-          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
-          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
-          collisions:0 txqueuelen:0
-          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
-
-root@0cb243cd1293:/#
-```
->**Note**: You can detach from the container and leave it running with `CTRL-p CTRL-q`.
-
-The `host` network adds a container on the hosts network stack. You'll find the
-network configuration inside the container is identical to the host.
-
-With the exception of the `bridge` network, you really don't need to
-interact with these default networks. While you can list and inspect them, you
-cannot remove them. They are required by your Docker installation. However, you
-can add your own user-defined networks and these you can remove when you no
-longer need them. Before you learn more about creating your own networks, it is
-worth looking at the default `bridge` network a bit.
-
-
-### The default bridge network in detail
-The default `bridge` network is present on all Docker hosts. The `docker network inspect`
-command returns information about a network:
-
-```
-$ docker network inspect bridge
-
-[
-   {
-       "Name": "bridge",
-       "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
-       "Scope": "local",
-       "Driver": "bridge",
-       "IPAM": {
-           "Driver": "default",
-           "Config": [
-               {
-                   "Subnet": "172.17.0.1/16",
-                   "Gateway": "172.17.0.1"
-               }
-           ]
-       },
-       "Containers": {},
-       "Options": {
-           "com.docker.network.bridge.default_bridge": "true",
-           "com.docker.network.bridge.enable_icc": "true",
-           "com.docker.network.bridge.enable_ip_masquerade": "true",
-           "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
-           "com.docker.network.bridge.name": "docker0",
-           "com.docker.network.driver.mtu": "9001"
-       }
-   }
-]
-```
-The Engine automatically creates a `Subnet` and `Gateway` to the network.
-The `docker run` command automatically adds new containers to this network.
-
-```
-$ docker run -itd --name=container1 busybox
-
-3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c
-
-$ docker run -itd --name=container2 busybox
-
-94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c
-```
-
-Inspecting the `bridge` network again after starting two containers shows both newly launched containers in the network. Their ids show up in the "Containers" section of `docker network inspect`:
-
-```
-$ docker network inspect bridge
-
-{[
-    {
-        "Name": "bridge",
-        "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
-        "Scope": "local",
-        "Driver": "bridge",
-        "IPAM": {
-            "Driver": "default",
-            "Config": [
-                {
-                    "Subnet": "172.17.0.1/16",
-                    "Gateway": "172.17.0.1"
-                }
-            ]
-        },
-        "Containers": {
-            "3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c": {
-                "EndpointID": "647c12443e91faf0fd508b6edfe59c30b642abb60dfab890b4bdccee38750bc1",
-                "MacAddress": "02:42:ac:11:00:02",
-                "IPv4Address": "172.17.0.2/16",
-                "IPv6Address": ""
-            },
-            "94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c": {
-                "EndpointID": "b047d090f446ac49747d3c37d63e4307be745876db7f0ceef7b311cbba615f48",
-                "MacAddress": "02:42:ac:11:00:03",
-                "IPv4Address": "172.17.0.3/16",
-                "IPv6Address": ""
-            }
-        },
-        "Options": {
-            "com.docker.network.bridge.default_bridge": "true",
-            "com.docker.network.bridge.enable_icc": "true",
-            "com.docker.network.bridge.enable_ip_masquerade": "true",
-            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
-            "com.docker.network.bridge.name": "docker0",
-            "com.docker.network.driver.mtu": "9001"
-        }
-    }
-]
-```
-
-The `docker network inspect` command above shows all the connected containers and their network resources on a given network. Containers in this default network are able to communicate with each other using IP addresses. Docker does not support automatic service discovery on the default bridge network. If you want to communicate with container names in this default bridge network, you must connect the containers via the legacy `docker run --link` option.
-
-You can `attach` to a running `container` and investigate its configuration:
-
-```
-$ docker attach container1
-
-root@0cb243cd1293:/# ifconfig
-ifconfig
-eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02  
-          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
-          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
-          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
-          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
-          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
-          collisions:0 txqueuelen:0
-          RX bytes:1296 (1.2 KiB)  TX bytes:648 (648.0 B)
-
-lo        Link encap:Local Loopback  
-          inet addr:127.0.0.1  Mask:255.0.0.0
-          inet6 addr: ::1/128 Scope:Host
-          UP LOOPBACK RUNNING  MTU:65536  Metric:1
-          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
-          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
-          collisions:0 txqueuelen:0
-          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
-```
-
-Then use `ping` for about 3 seconds to test the connectivity of the containers on this `bridge` network.
-
-```
-root@0cb243cd1293:/# ping -w3 172.17.0.3
-
-PING 172.17.0.3 (172.17.0.3): 56 data bytes
-64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.096 ms
-64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.080 ms
-64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.074 ms
-
---- 172.17.0.3 ping statistics ---
-3 packets transmitted, 3 packets received, 0% packet loss
-round-trip min/avg/max = 0.074/0.083/0.096 ms
-```
-
-Finally, use the `cat` command to check the `container1` network configuration:
-
-```
-root@0cb243cd1293:/# cat /etc/hosts
-
-172.17.0.2	3386a527aa08
-127.0.0.1	localhost
-::1	localhost ip6-localhost ip6-loopback
-fe00::0	ip6-localnet
-ff00::0	ip6-mcastprefix
-ff02::1	ip6-allnodes
-ff02::2	ip6-allrouters
-```
-To detach from a `container1` and leave it running use `CTRL-p CTRL-q`.Then, attach to `container2` and repeat these three commands.
-
-```
-$ docker attach container2
-
-root@0cb243cd1293:/# ifconfig
-
-eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03  
-          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
-          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
-          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
-          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
-          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
-          collisions:0 txqueuelen:0
-          RX bytes:1166 (1.1 KiB)  TX bytes:1026 (1.0 KiB)
-
-lo        Link encap:Local Loopback  
-          inet addr:127.0.0.1  Mask:255.0.0.0
-          inet6 addr: ::1/128 Scope:Host
-          UP LOOPBACK RUNNING  MTU:65536  Metric:1
-          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
-          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
-          collisions:0 txqueuelen:0
-          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
-
-root@0cb243cd1293:/# ping -w3 172.17.0.2
-
-PING 172.17.0.2 (172.17.0.2): 56 data bytes
-64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.067 ms
-64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
-64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms
-
---- 172.17.0.2 ping statistics ---
-3 packets transmitted, 3 packets received, 0% packet loss
-round-trip min/avg/max = 0.067/0.071/0.075 ms
-/ # cat /etc/hosts
-172.17.0.3	94447ca47985
-127.0.0.1	localhost
-::1	localhost ip6-localhost ip6-loopback
-fe00::0	ip6-localnet
-ff00::0	ip6-mcastprefix
-ff02::1	ip6-allnodes
-ff02::2	ip6-allrouters
-```
-
-The default `docker0` bridge network supports the use of port mapping and `docker run --link` to allow communications between containers in the `docker0` network. These techniques are cumbersome to set up and prone to error. While they are still available to you as techniques, it is better to avoid them and define your own bridge networks instead.
-
-## User-defined networks
-
-You can create your own user-defined networks that better isolate containers.
-Docker provides some default **network drivers** for creating these
-networks. You can create a new **bridge network** or **overlay network**. You
-can also create a **network plugin** or **remote network**  written to your own
-specifications.
-
-You can create multiple networks. You can add containers to more than one
-network. Containers can only communicate within networks but not across
-networks. A container attached to two networks can communicate with member
-containers in either network. When a container is connected to multiple
-networks, its external connectivity is provided via the first non-internal
-network, in lexical order.
-
-The next few sections describe each of Docker's built-in network drivers in
-greater detail.
-
-### A bridge network
-
-The easiest user-defined network to create is a `bridge` network. This network
-is similar to the historical, default `docker0` network. There are some added
-features and some old features that aren't available.
-
-```
-$ docker network create --driver bridge isolated_nw
-1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b
-
-$ docker network inspect isolated_nw
-
-[
-    {
-        "Name": "isolated_nw",
-        "Id": "1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
-        "Scope": "local",
-        "Driver": "bridge",
-        "IPAM": {
-            "Driver": "default",
-            "Config": [
-                {
-                    "Subnet": "172.21.0.0/16",
-                    "Gateway": "172.21.0.1/16"
-                }
-            ]
-        },
-        "Containers": {},
-        "Options": {}
-    }
-]
-
-$ docker network ls
-
-NETWORK ID          NAME                DRIVER
-9f904ee27bf5        none                null
-cf03ee007fb4        host                host
-7fca4eb8c647        bridge              bridge
-c5ee82f76de3        isolated_nw         bridge
-
-```
-
-After you create the network, you can launch containers on it using  the `docker run --network=<NETWORK>` option.
-
-```
-$ docker run --network=isolated_nw -itd --name=container3 busybox
-
-8c1a0a5be480921d669a073393ade66a3fc49933f08bcc5515b37b8144f6d47c
-
-$ docker network inspect isolated_nw
-[
-    {
-        "Name": "isolated_nw",
-        "Id": "1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
-        "Scope": "local",
-        "Driver": "bridge",
-        "IPAM": {
-            "Driver": "default",
-            "Config": [
-                {}
-            ]
-        },
-        "Containers": {
-            "8c1a0a5be480921d669a073393ade66a3fc49933f08bcc5515b37b8144f6d47c": {
-                "EndpointID": "93b2db4a9b9a997beb912d28bcfc117f7b0eb924ff91d48cfa251d473e6a9b08",
-                "MacAddress": "02:42:ac:15:00:02",
-                "IPv4Address": "172.21.0.2/16",
-                "IPv6Address": ""
-            }
-        },
-        "Options": {}
-    }
-]
-```
-
-The containers you launch into this network must reside on the same Docker host.
-Each container in the network can immediately communicate with other containers
-in the network. Though, the network itself isolates the containers from external
-networks.
-
-![An isolated network](images/bridge_network.png)
-
-Within a user-defined bridge network, linking is not supported. You can
-expose and publish container ports on containers in this network. This is useful
-if you want to make a portion of the `bridge` network available to an outside
-network.
-
-![Bridge network](images/network_access.png)
-
-A bridge network is useful in cases where you want to run a relatively small
-network on a single host. You can, however, create significantly larger networks
-by creating an `overlay` network.
-
-
-### An overlay network
-
-Docker's `overlay` network driver supports multi-host networking natively
-out-of-the-box. This support is accomplished with the help of `libnetwork`, a
-built-in VXLAN-based overlay network driver, and Docker's `libkv` library.
-
-The `overlay` network requires a valid key-value store service. Currently,
-Docker's `libkv` supports Consul, Etcd, and ZooKeeper (Distributed store). Before
-creating a network you must install and configure your chosen key-value store
-service. The Docker hosts that you intend to network and the service must be
-able to communicate.
-
-![Key-value store](images/key_value.png)
-
-Each host in the network must run a Docker Engine instance. The easiest way to
-provision the hosts are with Docker Machine.
-
-![Engine on each host](images/engine_on_net.png)
-
-You should open the following ports between each of your hosts.
-
-| Protocol | Port | Description           |
-|----------|------|-----------------------|
-| udp      | 4789 | Data plane (VXLAN)    |
-| tcp/udp  | 7946 | Control plane         |
-
-Your key-value store service may require additional ports.
-Check your vendor's documentation and open any required ports.
-
-Once you have several machines provisioned, you can use Docker Swarm to quickly
-form them into a swarm which includes a discovery service as well.
-
-To create an overlay network, you configure options on  the `daemon` on each
-Docker Engine for use with `overlay` network. There are three options to set:
-
-<table>
-    <thead>
-    <tr>
-        <th>Option</th>
-        <th>Description</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td><pre>--cluster-store=PROVIDER://URL</pre></td>
-        <td>Describes the location of the KV service.</td>
-    </tr>
-    <tr>
-        <td><pre>--cluster-advertise=HOST_IP|HOST_IFACE:PORT</pre></td>
-        <td>The IP address or interface of the HOST used for clustering.</td>
-    </tr>
-    <tr>
-        <td><pre>--cluster-store-opt=KEY-VALUE OPTIONS</pre></td>
-        <td>Options such as TLS certificate or tuning discovery Timers</td>
-    </tr>
-    </tbody>
-</table>
-
-Create an `overlay` network on one of the machines in the swarm.
-
-    $ docker network create --driver overlay my-multi-host-network
-
-This results in a single network spanning multiple hosts. An `overlay` network
-provides complete isolation for the containers.
-
-![An overlay network](images/overlay_network.png)
-
-Then, on each host, launch containers making sure to specify the network name.
-
-    $ docker run -itd --network=my-multi-host-network busybox
-
-Once connected, each container has access to all the containers in the network
-regardless of which Docker host the container was launched on.
-
-![Published port](images/overlay-network-final.png)
-
-If you would like to try this for yourself, see the [Getting started for
-overlay](get-started-overlay.md).
-
-### Custom network plugin
-
-If you like, you can write your own network driver plugin. A network
-driver plugin makes use of Docker's plugin infrastructure. In this
-infrastructure, a plugin is a process running on the same Docker host as the
-Docker `daemon`.
-
-Network plugins follow the same restrictions and installation rules as other
-plugins. All plugins make use of the plugin API. They have a lifecycle that
-encompasses installation, starting, stopping and activation.
-
-Once you have created and installed a custom network driver, you use it like the
-built-in network drivers. For example:
-
-    $ docker network create --driver weave mynet
-
-You can inspect it, add containers to and from it, and so forth. Of course,
-different plugins may make use of different technologies or frameworks. Custom
-networks can include features not present in Docker's default networks. For more
-information on writing plugins, see [Extending Docker](../../extend/index.md) and
-[Writing a network driver plugin](../../extend/plugins_network.md).
-
-### Docker embedded DNS server
-
-Docker daemon runs an embedded DNS server to provide automatic service discovery
-for containers connected to user defined networks. Name resolution requests from
-the containers are handled first by the embedded DNS server. If the embedded DNS
-server is unable to resolve the request it will be forwarded to any external DNS
-servers configured for the container. To facilitate this when the container is
-created, only the embedded DNS server reachable at `127.0.0.11` will be listed
-in the container's `resolv.conf` file. More information on embedded DNS server on
-user-defined networks can be found in the [embedded DNS server in user-defined networks]
-(configure-dns.md)
-
-## Links
-
-Before the Docker network feature, you could use the Docker link feature to
-allow containers to discover each other.  With the introduction of Docker networks,
-containers can be discovered by its name automatically. But you can still create
-links but they behave differently when used in the default `docker0` bridge network
-compared to user-defined networks. For more information, please refer to
-[Legacy Links](default_network/dockerlinks.md) for link feature in default `bridge` network
-and the [linking containers in user-defined networks](work-with-networks.md#linking-containers-in-user-defined-networks) for links
-functionality in user-defined networks.
-
-## Related information
-
-- [Work with network commands](work-with-networks.md)
-- [Get started with multi-host networking](get-started-overlay.md)
-- [Managing Data in Containers](../../tutorials/dockervolumes.md)
-- [Docker Machine overview](https://docs.docker.com/machine)
-- [Docker Swarm overview](https://docs.docker.com/swarm)
-- [Investigate the LibNetwork project](https://github.com/docker/libnetwork)

+ 65 - 14
docs/userguide/networking/get-started-overlay.md

@@ -14,19 +14,70 @@ weight=-3
 This article uses an example to explain the basics of creating a multi-host
 network. Docker Engine supports multi-host networking out-of-the-box through the
 `overlay` network driver.  Unlike `bridge` networks, overlay networks require
-some pre-existing conditions before you can create one. These conditions are:
+some pre-existing conditions before you can create one:
 
-* Access to a key-value store. Docker supports Consul, Etcd, and ZooKeeper (Distributed store) key-value stores.
+* [Docker Engine running in swarm mode](#overlay-networking-and-swarm-mode)
+
+OR
+
+* [A cluster of hosts using a key value store](#overlay-networking-with-an-external-key-value-store)
+
+## Overlay networking and swarm mode
+
+With Docker Engine running in [swarm mode](../../swarm/swarm-mode.md), you can create an overlay network on a manager node.
+
+The swarm makes the overlay network available only to nodes in the swarm that
+require it for a service. When you create a service that uses an overlay
+network, the manager node automatically extends the overlay network to nodes
+that run service tasks.
+
+To learn more about running Docker Engine in swarm mode, refer to the
+[Swarm mode overview](../../swarm/index.md).
+
+The example below shows how to create a network and use it for a service from a manager node in the swarm:
+
+```bash
+# Create an overlay network `my-multi-host-network`.
+$ docker network create \
+  --driver overlay \
+  --subnet 10.0.9.0/24 \
+  my-multi-host-network
+
+400g6bwzd68jizzdx5pgyoe95
+
+# Create an nginx service and extend the my-multi-host-network to nodes where
+# the service's tasks run.
+$ docker service create --replicas 2 --network my-multi-host-network --name my-web nginx
+
+716thylsndqma81j6kkkb5aus
+```
+
+Overlay networks for a swarm are not available to unmanaged containers. For more information, refer to [Docker swarm mode overlay network security model](overlay-security-model.md).
+
+See also [Attach services to an overlay network](../../swarm/networking.md). 
+
+## Overlay networking with an external key-value store
+
+To use a Docker Engine with an external key-value store, you need the
+following:
+
+* Access to the key-value store. Docker supports Consul, Etcd, and ZooKeeper
+(Distributed store) key-value stores.
 * A cluster of hosts with connectivity to the key-value store.
 * A properly configured Engine `daemon` on each host in the cluster.
-* Hosts within the cluster must have unique hostnames because the key-value store uses the hostnames to identify cluster members.
+* Hosts within the cluster must have unique hostnames because the key-value
+store uses the hostnames to identify cluster members.
 
 Though Docker Machine and Docker Swarm are not mandatory to experience Docker
-multi-host networking, this example uses them to illustrate how they are
-integrated. You'll use Machine to create both the key-value store
-server and the host cluster. This example creates a Swarm cluster.
+multi-host networking with a key-value store, this example uses them to
+illustrate how they are integrated. You'll use Machine to create both the
+key-value store server and the host cluster. This example creates a Swarm
+cluster.
+
+>**Note:** Docker Engine running in swarm mode is not compatible with networking
+with an external key-value store.
 
-## Prerequisites
+### Prerequisites
 
 Before you begin, make sure you have a system on your network with the latest
 version of Docker Engine and Docker Machine installed. The example also relies
@@ -37,7 +88,7 @@ If you have not already done so, make sure you upgrade Docker Engine and Docker
 Machine to the latest versions.
 
 
-## Step 1: Set up a key-value store
+### Set up a key-value store
 
 An overlay network requires a key-value store. The key-value store holds
 information about the network state which includes discovery, networks,
@@ -80,7 +131,7 @@ key-value stores. This example uses Consul.
 Keep your terminal open and move onto the next step.
 
 
-## Step 2: Create a Swarm cluster
+### Create a Swarm cluster
 
 In this step, you use `docker-machine` to provision the hosts for your network.
 At this point, you won't actually create the network. You'll create several
@@ -123,7 +174,7 @@ At this point you have a set of hosts running on your network. You are ready to
 
 Leave your terminal open and go onto the next step.
 
-## Step 3: Create the overlay Network
+### Create the overlay Network
 
 To create an overlay network
 
@@ -213,7 +264,7 @@ To create an overlay network
   Both agents report they have the `my-net` network with the `6b07d0be843f` ID.
 	You now have a multi-host container network running!
 
-##  Step 4: Run an application on your Network
+### Run an application on your Network
 
 Once your network is created, you can start a container on any of the hosts and it automatically is part of the network.
 
@@ -263,7 +314,7 @@ Once your network is created, you can start a container on any of the hosts and
 		</html>
 		-                    100% |*******************************|   612   0:00:00 ETA
 
-## Step 5: Check external connectivity
+### Check external connectivity
 
 As you've seen, Docker's built-in overlay network driver provides out-of-the-box
 connectivity between the containers on multiple hosts within the same network.
@@ -326,7 +377,7 @@ to have external connectivity outside of their cluster.
 	the `my-net` overlay network. While the `eth1` interface represents the
 	container interface that is connected to the `docker_gwbridge` network.
 
-## Step 6: Extra Credit with Docker Compose
+### Extra Credit with Docker Compose
 
 Please refer to the Networking feature introduced in [Compose V2 format](https://docs.docker.com/compose/networking/) and execute the
@@ -334,7 +385,7 @@ multi-host networking scenario in the Swarm cluster used above.
 
 ## Related information
 
-* [Understand Docker container networks](dockernetworks.md)
+* [Understand Docker container networks](index.md)
 * [Work with network commands](work-with-networks.md)
 * [Docker Swarm overview](https://docs.docker.com/swarm)
 * [Docker Machine overview](https://docs.docker.com/machine)

+ 563 - 11
docs/userguide/networking/index.md

@@ -1,21 +1,573 @@
 <!--[metadata]>
 +++
-title = "Network configuration"
-description = "Docker networking feature is introduced"
-keywords = ["network, networking, bridge, docker,  documentation"]
+aliases=[
+"/engine/userguide/networking/dockernetworks/"
+]
+title = "Docker container networking"
+description = "How do we connect docker containers within and across hosts ?"
+keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
 [menu.main]
-identifier="smn_networking"
-parent= "engine_guide"
-weight=7
+identifier="networking_index"
+parent = "smn_networking"
+weight = -5
 +++
 <![end-metadata]-->
 
-# Docker networks feature overview
+# Understand Docker container networks
 
-This sections explains how to use the Docker networks feature. This feature allows users to define their own networks and connect containers to them. Using this feature you can create a network on a single host or a network that spans across multiple hosts.
+This section provides an overview of the default networking behavior that Docker
+Engine delivers natively. It describes the type of networks created by default
+and how to create your own, user-defined networks. It also describes the
+resources required to create networks on a single host or across a cluster of
+hosts.
+
+## Default Networks
+
+When you install Docker, it creates three networks automatically. You can list
+these networks using the `docker network ls` command:
+
+```
+$ docker network ls
+
+NETWORK ID          NAME                DRIVER
+7fca4eb8c647        bridge              bridge
+9f904ee27bf5        none                null
+cf03ee007fb4        host                host
+```
+
+Historically, these three networks are part of Docker's implementation. When
+you run a container, you can use the `--network` flag to specify which network
+to run it on. These three networks are still available to you.
+
+The `bridge` network represents the `docker0` network present in all Docker
+installations. Unless you specify otherwise with the `docker run
+--network=<NETWORK>` option, the Docker daemon connects containers to this network
+by default. You can see this bridge as part of a host's network stack by using
+the `ifconfig` command on the host.
+
+```
+$ ifconfig
+
+docker0   Link encap:Ethernet  HWaddr 02:42:47:bc:3a:eb
+          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
+          inet6 addr: fe80::42:47ff:febc:3aeb/64 Scope:Link
+          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
+          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
+          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
+          collisions:0 txqueuelen:0
+          RX bytes:1100 (1.1 KB)  TX bytes:648 (648.0 B)
+```
+
+The `none` network adds a container to a container-specific network stack. That container lacks a network interface. If you attach to such a container and look at its stack, you see this:
+
+```
+$ docker attach nonenetcontainer
+
+root@0cb243cd1293:/# cat /etc/hosts
+127.0.0.1	localhost
+::1	localhost ip6-localhost ip6-loopback
+fe00::0	ip6-localnet
+ff00::0	ip6-mcastprefix
+ff02::1	ip6-allnodes
+ff02::2	ip6-allrouters
+root@0cb243cd1293:/# ifconfig
+lo        Link encap:Local Loopback
+          inet addr:127.0.0.1  Mask:255.0.0.0
+          inet6 addr: ::1/128 Scope:Host
+          UP LOOPBACK RUNNING  MTU:65536  Metric:1
+          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
+          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+          collisions:0 txqueuelen:0
+          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
+
+root@0cb243cd1293:/#
+```
+>**Note**: You can detach from the container and leave it running with `CTRL-p CTRL-q`.
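+
+For reference, a container like `nonenetcontainer` could have been created
+along these lines (a sketch; the `busybox` image is an assumption):
+
+```
+$ docker run --network=none -itd --name=nonenetcontainer busybox
+```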
+
+The `host` network adds a container to the host's network stack. The network
+configuration inside the container is identical to that of the host.
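+
+For example, a quick sketch (the container name and `busybox` image are
+illustrative):
+
+```
+$ docker run --network=host -itd --name=hostnetcontainer busybox
+```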
+
+With the exception of the `bridge` network, you really don't need to
+interact with these default networks. While you can list and inspect them, you
+cannot remove them. They are required by your Docker installation. However, you
+can add your own user-defined networks, and you can remove those when you no
+longer need them. Before you learn more about creating your own networks, it is
+worth looking at the default `bridge` network a bit.
+
+
+### The default bridge network in detail
+
+The default `bridge` network is present on all Docker hosts. The `docker network inspect`
+command returns information about a network:
+
+```
+$ docker network inspect bridge
+
+[
+   {
+       "Name": "bridge",
+       "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
+       "Scope": "local",
+       "Driver": "bridge",
+       "IPAM": {
+           "Driver": "default",
+           "Config": [
+               {
+                   "Subnet": "172.17.0.1/16",
+                   "Gateway": "172.17.0.1"
+               }
+           ]
+       },
+       "Containers": {},
+       "Options": {
+           "com.docker.network.bridge.default_bridge": "true",
+           "com.docker.network.bridge.enable_icc": "true",
+           "com.docker.network.bridge.enable_ip_masquerade": "true",
+           "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
+           "com.docker.network.bridge.name": "docker0",
+           "com.docker.network.driver.mtu": "9001"
+       }
+   }
+]
+```
+The Engine automatically creates a `Subnet` and `Gateway` for the network.
+The `docker run` command automatically adds new containers to this network.
+
+```
+$ docker run -itd --name=container1 busybox
+
+3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c
+
+$ docker run -itd --name=container2 busybox
+
+94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c
+```
+
+Inspecting the `bridge` network again after starting two containers shows both newly launched containers in the network. Their IDs show up in the "Containers" section of `docker network inspect`:
+
+```
+$ docker network inspect bridge
+
+[
+    {
+        "Name": "bridge",
+        "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
+        "Scope": "local",
+        "Driver": "bridge",
+        "IPAM": {
+            "Driver": "default",
+            "Config": [
+                {
+                    "Subnet": "172.17.0.1/16",
+                    "Gateway": "172.17.0.1"
+                }
+            ]
+        },
+        "Containers": {
+            "3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c": {
+                "EndpointID": "647c12443e91faf0fd508b6edfe59c30b642abb60dfab890b4bdccee38750bc1",
+                "MacAddress": "02:42:ac:11:00:02",
+                "IPv4Address": "172.17.0.2/16",
+                "IPv6Address": ""
+            },
+            "94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c": {
+                "EndpointID": "b047d090f446ac49747d3c37d63e4307be745876db7f0ceef7b311cbba615f48",
+                "MacAddress": "02:42:ac:11:00:03",
+                "IPv4Address": "172.17.0.3/16",
+                "IPv6Address": ""
+            }
+        },
+        "Options": {
+            "com.docker.network.bridge.default_bridge": "true",
+            "com.docker.network.bridge.enable_icc": "true",
+            "com.docker.network.bridge.enable_ip_masquerade": "true",
+            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
+            "com.docker.network.bridge.name": "docker0",
+            "com.docker.network.driver.mtu": "9001"
+        }
+    }
+]
+```
+
+The `docker network inspect` command above shows all the connected containers and their network resources on a given network. Containers in this default network are able to communicate with each other using IP addresses. Docker does not support automatic service discovery on the default bridge network. If you want to communicate with container names in this default bridge network, you must connect the containers via the legacy `docker run --link` option.
+
+You can `attach` to a running container and investigate its configuration:
+
+```
+$ docker attach container1
+
+root@0cb243cd1293:/# ifconfig
+ifconfig
+eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
+          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
+          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
+          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
+          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
+          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
+          collisions:0 txqueuelen:0
+          RX bytes:1296 (1.2 KiB)  TX bytes:648 (648.0 B)
+
+lo        Link encap:Local Loopback
+          inet addr:127.0.0.1  Mask:255.0.0.0
+          inet6 addr: ::1/128 Scope:Host
+          UP LOOPBACK RUNNING  MTU:65536  Metric:1
+          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
+          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+          collisions:0 txqueuelen:0
+          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
+```
+
+Then use `ping` to send three ICMP requests and test the connectivity of the
+containers on this `bridge` network.
+
+```
+root@0cb243cd1293:/# ping -w3 172.17.0.3
+
+PING 172.17.0.3 (172.17.0.3): 56 data bytes
+64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.096 ms
+64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.080 ms
+64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.074 ms
+
+--- 172.17.0.3 ping statistics ---
+3 packets transmitted, 3 packets received, 0% packet loss
+round-trip min/avg/max = 0.074/0.083/0.096 ms
+```
+
+Finally, use the `cat` command to check the `container1` network configuration:
+
+```
+root@0cb243cd1293:/# cat /etc/hosts
+
+172.17.0.2	3386a527aa08
+127.0.0.1	localhost
+::1	localhost ip6-localhost ip6-loopback
+fe00::0	ip6-localnet
+ff00::0	ip6-mcastprefix
+ff02::1	ip6-allnodes
+ff02::2	ip6-allrouters
+```
+To detach from `container1` and leave it running, use `CTRL-p CTRL-q`. Then, attach to `container2` and repeat these three commands.
+
+```
+$ docker attach container2
+
+root@0cb243cd1293:/# ifconfig
+
+eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
+          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
+          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
+          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
+          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
+          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
+          collisions:0 txqueuelen:0
+          RX bytes:1166 (1.1 KiB)  TX bytes:1026 (1.0 KiB)
+
+lo        Link encap:Local Loopback
+          inet addr:127.0.0.1  Mask:255.0.0.0
+          inet6 addr: ::1/128 Scope:Host
+          UP LOOPBACK RUNNING  MTU:65536  Metric:1
+          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
+          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
+          collisions:0 txqueuelen:0
+          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
+
+root@0cb243cd1293:/# ping -w3 172.17.0.2
+
+PING 172.17.0.2 (172.17.0.2): 56 data bytes
+64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.067 ms
+64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
+64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms
+
+--- 172.17.0.2 ping statistics ---
+3 packets transmitted, 3 packets received, 0% packet loss
+round-trip min/avg/max = 0.067/0.071/0.075 ms
+/ # cat /etc/hosts
+172.17.0.3	94447ca47985
+127.0.0.1	localhost
+::1	localhost ip6-localhost ip6-loopback
+fe00::0	ip6-localnet
+ff00::0	ip6-mcastprefix
+ff02::1	ip6-allnodes
+ff02::2	ip6-allrouters
+```
+
+The default `docker0` bridge network supports the use of port mapping and `docker run --link` to allow communications between containers in the `docker0` network. These techniques are cumbersome to set up and prone to error. While they are still available to you, it is better to avoid them and define your own bridge networks instead.
+
+## User-defined networks
+
+You can create your own user-defined networks that better isolate containers.
+Docker provides some default **network drivers** for creating these networks.
+You can create a new **bridge network**, **overlay network** or **MACVLAN
+network**. You can also create a **network plugin** or **remote network**
+written to your own specifications.
+
+You can create multiple networks, and you can add containers to more than one
+network. Containers can communicate within networks but not across
+networks. A container attached to two networks can communicate with member
+containers in either network. When a container is connected to multiple
+networks, its external connectivity is provided via the first non-internal
+network, in lexical order.
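+
+For example, a sketch of attaching an existing container to a second network
+(both names are illustrative):
+
+```
+$ docker network connect my-other-network container3
+```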
+
+The next few sections describe each of Docker's built-in network drivers in
+greater detail.
+
+### A bridge network
+
+The easiest user-defined network to create is a `bridge` network. This network
+is similar to the historical, default `docker0` network. It adds some new
+features, and some old features are not available.
+
+```
+$ docker network create --driver bridge isolated_nw
+1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b
+
+$ docker network inspect isolated_nw
+
+[
+    {
+        "Name": "isolated_nw",
+        "Id": "1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
+        "Scope": "local",
+        "Driver": "bridge",
+        "IPAM": {
+            "Driver": "default",
+            "Config": [
+                {
+                    "Subnet": "172.21.0.0/16",
+                    "Gateway": "172.21.0.1/16"
+                }
+            ]
+        },
+        "Containers": {},
+        "Options": {}
+    }
+]
+
+$ docker network ls
+
+NETWORK ID          NAME                DRIVER
+9f904ee27bf5        none                null
+cf03ee007fb4        host                host
+7fca4eb8c647        bridge              bridge
+c5ee82f76de3        isolated_nw         bridge
+
+```
+
+After you create the network, you can launch containers on it using the `docker run --network=<NETWORK>` option.
+
+```
+$ docker run --network=isolated_nw -itd --name=container3 busybox
+
+8c1a0a5be480921d669a073393ade66a3fc49933f08bcc5515b37b8144f6d47c
+
+$ docker network inspect isolated_nw
+[
+    {
+        "Name": "isolated_nw",
+        "Id": "1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
+        "Scope": "local",
+        "Driver": "bridge",
+        "IPAM": {
+            "Driver": "default",
+            "Config": [
+                {}
+            ]
+        },
+        "Containers": {
+            "8c1a0a5be480921d669a073393ade66a3fc49933f08bcc5515b37b8144f6d47c": {
+                "EndpointID": "93b2db4a9b9a997beb912d28bcfc117f7b0eb924ff91d48cfa251d473e6a9b08",
+                "MacAddress": "02:42:ac:15:00:02",
+                "IPv4Address": "172.21.0.2/16",
+                "IPv6Address": ""
+            }
+        },
+        "Options": {}
+    }
+]
+```
+
+The containers you launch into this network must reside on the same Docker host.
+Each container in the network can immediately communicate with other containers
+in the network. However, the network itself isolates the containers from external
+networks.
+
+![An isolated network](images/bridge_network.png)
+
+Within a user-defined bridge network, linking is not supported. You can,
+however, expose and publish container ports on containers in this network.
+This is useful if you want to make a portion of the `bridge` network available
+to an outside network.
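+
+For example, a sketch of publishing a container port on this network (the
+`web` name and `nginx` image are illustrative):
+
+```
+$ docker run --network=isolated_nw -itd -p 8080:80 --name=web nginx
+```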
+
+![Bridge network](images/network_access.png)
+
+A bridge network is useful in cases where you want to run a relatively small
+network on a single host. You can, however, create significantly larger networks
+by creating an `overlay` network.
+
+
+### An overlay network with Docker Engine swarm mode
+
+You can create an overlay network on a manager node running in swarm mode
+without an external key-value store. The swarm makes the overlay network
+available only to nodes in the swarm that require it for a service. When you
+create a service that uses the overlay network, the manager node automatically
+extends the overlay network to nodes that run service tasks.
+
+To learn more about running Docker Engine in swarm mode, refer to the
+[Swarm mode overview](../../swarm/index.md).
+
+The example below shows how to create a network and use it for a service from a manager node in the swarm:
+
+```bash
+# Create an overlay network `my-multi-host-network`.
+$ docker network create \
+  --driver overlay \
+  --subnet 10.0.9.0/24 \
+  my-multi-host-network
+
+400g6bwzd68jizzdx5pgyoe95
+
+# Create an nginx service and extend the my-multi-host-network to nodes where
+# the service's tasks run.
+$ docker service create --replicas 2 --network my-multi-host-network --name my-web nginx
+
+716thylsndqma81j6kkkb5aus
+```
+
+Overlay networks for a swarm are not available to containers started with
+`docker run` that don't run as part of a swarm mode service. For more
+information, refer to [Docker swarm mode overlay network security model](overlay-security-model.md).
+
+See also [Attach services to an overlay network](../../swarm/networking.md).
+
+### An overlay network with an external key-value store
+
+If you are not using Docker Engine in swarm mode, the `overlay` network requires
+a valid key-value store service. Supported key-value stores include Consul,
+Etcd, and ZooKeeper (Distributed store). Before creating a network on this
+version of the Engine, you must install and configure your chosen key-value
+store service. The Docker hosts that you intend to network and the service must
+be able to communicate.
+
+>**Note:** Docker Engine running in swarm mode is not compatible with networking
+with an external key-value store.
+
+![Key-value store](images/key_value.png)
+
+Each host in the network must run a Docker Engine instance. The easiest way to
+provision the hosts is with Docker Machine.
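+
+For example, a sketch of provisioning one host (the `virtualbox` driver and
+host name are illustrative):
+
+```
+$ docker-machine create -d virtualbox host1
+```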
+
+![Engine on each host](images/engine_on_net.png)
+
+You should open the following ports between each of your hosts.
+
+| Protocol | Port | Description           |
+|----------|------|-----------------------|
+| udp      | 4789 | Data plane (VXLAN)    |
+| tcp/udp  | 7946 | Control plane         |
+
+Your key-value store service may require additional ports.
+Check your vendor's documentation and open any required ports.
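+
+For example, a sketch of opening these ports with `ufw` (assuming `ufw` is
+your firewall tool; adjust for your environment):
+
+```
+$ sudo ufw allow 4789/udp
+$ sudo ufw allow 7946/tcp
+$ sudo ufw allow 7946/udp
+```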
+
+Once you have several machines provisioned, you can use Docker Swarm to quickly
+form them into a swarm, which includes a discovery service as well.
+
+To create an overlay network, you configure options on the `daemon` on each
+Docker Engine for use with the `overlay` network. There are three options to
+set (see the example after the table):
+
+<table>
+    <thead>
+    <tr>
+        <th>Option</th>
+        <th>Description</th>
+    </tr>
+    </thead>
+    <tbody>
+    <tr>
+        <td><pre>--cluster-store=PROVIDER://URL</pre></td>
+        <td>Describes the location of the KV service.</td>
+    </tr>
+    <tr>
+        <td><pre>--cluster-advertise=HOST_IP|HOST_IFACE:PORT</pre></td>
+        <td>The IP address or interface of the HOST used for clustering.</td>
+    </tr>
+    <tr>
+        <td><pre>--cluster-store-opt=KEY-VALUE OPTIONS</pre></td>
+        <td>Options such as a TLS certificate or tuning discovery timers.</td>
+    </tr>
+    </tbody>
+</table>
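+
+For example, a sketch of a daemon invocation, assuming a Consul instance
+reachable at `192.168.99.100:8500` (the address and interface are
+illustrative):
+
+```
+$ dockerd \
+  --cluster-store=consul://192.168.99.100:8500 \
+  --cluster-advertise=eth0:2376
+```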
+
+Create an `overlay` network on one of the machines in the Swarm.
+
+    $ docker network create --driver overlay my-multi-host-network
+
+This results in a single network spanning multiple hosts. An `overlay` network
+provides complete isolation for the containers.
+
+![An overlay network](images/overlay_network.png)
+
+Then, on each host, launch containers making sure to specify the network name.
+
+    $ docker run -itd --network=my-multi-host-network busybox
+
+Once connected, each container has access to all the containers in the network
+regardless of which Docker host the container was launched on.
+
+![Published port](images/overlay-network-final.png)
+
+If you would like to try this for yourself, see the [Getting started for
+overlay](get-started-overlay.md).
+
+### Custom network plugin
+
+If you like, you can write your own network driver plugin. A network
+driver plugin makes use of Docker's plugin infrastructure. In this
+infrastructure, a plugin is a process running on the same Docker host as the
+Docker `daemon`.
+
+Network plugins follow the same restrictions and installation rules as other
+plugins. All plugins make use of the plugin API. They have a lifecycle that
+encompasses installation, starting, stopping and activation.
+
+Once you have created and installed a custom network driver, you use it like the
+built-in network drivers. For example:
+
+    $ docker network create --driver weave mynet
+
+You can inspect it, add containers to it and remove them, and so forth. Of course,
+different plugins may make use of different technologies or frameworks. Custom
+networks can include features not present in Docker's default networks. For more
+information on writing plugins, see [Extending Docker](../../extend/index.md) and
+[Writing a network driver plugin](../../extend/plugins_network.md).
+
+### Docker embedded DNS server
+
+The Docker daemon runs an embedded DNS server to provide automatic service
+discovery for containers connected to user-defined networks. Name resolution
+requests from the containers are handled first by the embedded DNS server. If
+the embedded DNS server is unable to resolve the request, it forwards the
+request to any external DNS servers configured for the container. To facilitate
+this, when the container is created, only the embedded DNS server, reachable at
+`127.0.0.11`, is listed in the container's `resolv.conf` file. For more
+information on the embedded DNS server and user-defined networks, see
+[embedded DNS server in user-defined networks](configure-dns.md).
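+
+For example, a quick check inside a container on a user-defined network (the
+network name reuses the earlier example; output abbreviated and may vary):
+
+```
+$ docker run --rm --network=isolated_nw busybox cat /etc/resolv.conf
+
+nameserver 127.0.0.11
+options ndots:0
+```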
+
+## Links
+
+Before the Docker network feature, you could use the Docker link feature to
+allow containers to discover each other. With the introduction of Docker
+networks, containers are discovered by name automatically. You can still create
+links, but they behave differently when used in the default `docker0` bridge
+network compared to user-defined networks. For more information, refer to
+[Legacy Links](default_network/dockerlinks.md) for the link feature in the
+default `bridge` network, and to [linking containers in user-defined
+networks](work-with-networks.md#linking-containers-in-user-defined-networks)
+for link functionality in user-defined networks.
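+
+For example, a sketch of a legacy link on the default bridge network (the
+names reuse the earlier examples; the alias `c1` is illustrative):
+
+```
+$ docker run -itd --name=container5 --link container1:c1 busybox
+```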
+
+## Related information
 
-- [Understand Docker container networks](dockernetworks.md)
 - [Work with network commands](work-with-networks.md)
 - [Get started with multi-host networking](get-started-overlay.md)
-
-If you are already familiar with Docker's default bridge network, `docker0` that network continues to be supported. It is created automatically in every installation. The default bridge network is also named `bridge`. To see a list of topics related to that network, read the articles listed in the [Docker default bridge network](default_network/index.md).
+- [Managing Data in Containers](../../tutorials/dockervolumes.md)
+- [Docker Machine overview](https://docs.docker.com/machine)
+- [Docker Swarm overview](https://docs.docker.com/swarm)
+- [Investigate the LibNetwork project](https://github.com/docker/libnetwork)

+ 22 - 0
docs/userguide/networking/menu.md

@@ -0,0 +1,22 @@
+<!--[metadata]>
++++
+title = "Network configuration"
+description = "Docker networking feature is introduced"
+keywords = ["network, networking, bridge, docker,  documentation"]
+type="menu"
+[menu.main]
+identifier="smn_networking"
+parent= "engine_guide"
+weight=7
++++
+<![end-metadata]-->
+
+# Docker networks feature overview
+
+This section explains how to use the Docker networks feature. This feature allows users to define their own networks and connect containers to them. Using this feature, you can create a network on a single host or a network that spans multiple hosts.
+
+- [Understand Docker container networks](index.md)
+- [Work with network commands](work-with-networks.md)
+- [Get started with multi-host networking](get-started-overlay.md)
+
+If you are already familiar with Docker's default bridge network, `docker0`, that network continues to be supported. It is created automatically in every installation. The default bridge network is also named `bridge`. To see a list of topics related to that network, read the articles listed in the [Docker default bridge network](default_network/index.md).

+ 66 - 0
docs/userguide/networking/overlay-security-model.md

@@ -0,0 +1,66 @@
+<!--[metadata]>
++++
+title = "Swarm mode overlay network security model"
+description = "Docker swarm mode overlay network security model"
+keywords = ["network, docker, documentation, user guide, multihost, swarm mode", "overlay"]
+[menu.main]
+parent = "smn_networking"
+weight=-2
++++
+<![end-metadata]-->
+
+# Docker swarm mode overlay network security model
+
+Overlay networking for Docker Engine swarm mode comes secure out of the box. The
+swarm nodes exchange overlay network information using a gossip protocol. By
+default the nodes encrypt and authenticate information they exchange via gossip
+using the [AES algorithm](https://en.wikipedia.org/wiki/Galois/Counter_Mode) in
+GCM mode. Manager nodes in the swarm rotate the key used to encrypt gossip data
+every 12 hours.
+
+You can also encrypt data exchanged between containers on different nodes on the
+overlay network. To enable encryption, when you create an overlay network pass
+the `--opt encrypted` flag:
+
+```bash
+$ docker network create --opt encrypted --driver overlay my-multi-host-network
+
+dt0zvqn0saezzinc8a5g4worx
+```
+
+When you enable overlay encryption, Docker creates IPSEC tunnels between all the
+nodes where tasks are scheduled for services attached to the overlay network.
+These tunnels also use the AES algorithm in GCM mode and manager nodes
+automatically rotate the keys every 12 hours.
+
+## Swarm mode overlay networks and unmanaged containers
+
+Because the overlay networks for swarm mode use encryption keys from the manager
+nodes to encrypt the gossip communications, only containers running as tasks in
+the swarm have access to the keys. Consequently, containers started outside of
+swarm mode using `docker run` (unmanaged containers) cannot attach to the
+overlay network.
+
+For example:
+
+```bash
+$ docker run --network my-multi-host-network nginx
+
+docker: Error response from daemon: swarm-scoped network
+(my-multi-host-network) is not compatible with `docker create` or `docker
+run`. This network can only be used by a docker service.
+```
+
+To work around this situation, migrate the unmanaged containers to managed
+services. For instance:
+
+```bash
+$ docker service create --network my-multi-host-network my-image
+```
+
+Because [swarm mode](../../swarm/index.md) is an optional feature, the Docker
+Engine preserves backward compatibility. You can continue to rely on a
+third-party key-value store to support overlay networking if you wish.
+However, switching to swarm mode is strongly encouraged. In addition to the
+security benefits described in this article, swarm mode enables you to leverage
+the substantially greater scalability provided by the new services API.

+ 1 - 1
docs/userguide/networking/work-with-networks.md

@@ -23,7 +23,7 @@ available through the Docker Engine CLI. These commands are:
 * `docker network inspect`
 
 While not required, it is a good idea to read [Understanding Docker
-network](dockernetworks.md) before trying the examples in this section. The
+network](index.md) before trying the examples in this section. The
 examples rely on a `bridge` network so that you can try them
 immediately. If you would prefer to experiment with an `overlay` network, see
 the [Getting started with multi-host networks](get-started-overlay.md) instead.

+ 17 - 0
docs/userguide/storagedriver/aufs-driver.md

@@ -91,6 +91,16 @@ a whiteout file in the container's top layer. This whiteout file effectively
 existence in the image's read-only layers. This works the same no matter which
 of the image's read-only layers the file exists in.
 
+## Renaming directories with the AUFS storage driver
+
+Calling `rename(2)` for a directory is not fully supported on AUFS. It returns
+`EXDEV` ("cross-device link not permitted"), even when both the source and the
+destination paths are on the same AUFS layer, unless the directory has no
+children.
+
+Your application must be designed to handle `EXDEV` and fall back to a
+"copy and unlink" strategy.
+
 ## Configure Docker with AUFS
 
 You can only use the AUFS storage driver on Linux systems with AUFS installed.
@@ -211,6 +221,13 @@ any of the potential overheads introduced by thin provisioning and
 copy-on-write. For this reason, you may want to place heavy write workloads on
 data volumes.
 
+## AUFS compatibility
+
+To summarize, AUFS is incompatible with other filesystems in the following way:
+
+- AUFS does not fully support the `rename(2)` system call. Your application
+needs to detect its failure and fall back to a "copy and unlink" strategy.
+
 ## Related information
 
 * [Understand images, containers, and storage drivers](imagesandcontainers.md)

+ 28 - 6
docs/userguide/storagedriver/overlayfs-driver.md

@@ -301,6 +301,13 @@ file in the image layer ("lowerdir") is not deleted. However, the whiteout file
 created in the "upperdir". This has the same effect as a whiteout file and 
 effectively masks the existence of the directory in the image's "lowerdir".
 
+- **Renaming directories**. Calling `rename(2)` for a directory is allowed only
+when both the source and the destination paths are on the top layer.
+Otherwise, it returns `EXDEV` ("cross-device link not permitted").
+
+Your application must be designed to handle `EXDEV` and fall back to a
+"copy and unlink" strategy.
+
 ## Configure Docker with the `overlay`/`overlay2` storage driver
 
 To configure Docker to use the `overlay` storage driver your Docker host must be 
@@ -386,12 +393,6 @@ large. However, once the file has been copied up, all subsequent writes to that
 with AUFS. This is because AUFS supports more layers than OverlayFS and it is 
 possible to incur far larger latencies if searching through many AUFS layers.
 
-- **RPMs and Yum**. OverlayFS only implements a subset of the POSIX standards. 
-This can result in certain OverlayFS operations breaking POSIX standards. One 
-such operation is the *copy-up* operation. Therefore, using `yum` inside of a 
-container on a Docker host using the `overlay`/`overlay2` storage drivers is
-unlikely to work without implementing workarounds.
-
 - **Inode limits**. Use of the `overlay` storage driver can cause excessive 
 inode consumption. This is especially so as the number of images and containers
  on the Docker host grows. A Docker host with a large number of images and lots
@@ -413,3 +414,24 @@ performance. This is because they bypass the storage driver and do not incur
 any of the potential overheads introduced by thin provisioning and 
 copy-on-write. For this reason, you should place heavy write workloads on data 
 volumes.
+
+## OverlayFS compatibility
+
+To summarize, OverlayFS is incompatible with other filesystems in the
+following ways:
+
+- **open(2)**. OverlayFS only implements a subset of the POSIX standards. 
+This can result in certain OverlayFS operations breaking POSIX standards. One 
+such operation is the *copy-up* operation. Suppose that your application calls
+`fd1=open("foo", O_RDONLY)` and then `fd2=open("foo", O_RDWR)`. In this case,
+your application expects `fd1` and `fd2` to refer to the same file. However, due
+to a copy-up operation that occurs after the first call to `open(2)`, the
+descriptors refer to different files.
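+
+A shell sketch of this behavior, run inside a container whose root is backed
+by `overlay`/`overlay2` (the file path is illustrative):
+
+```bash
+# /etc/hostname starts out in the image layer ("lowerdir").
+exec 3</etc/hostname           # fd 3 refers to the lower copy
+echo "extra" >> /etc/hostname  # this write triggers a copy-up to "upperdir"
+cat <&3                        # fd 3 may still read the original lower file
+```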
+
+`yum` is known to be affected unless the `yum-plugin-ovl` package is installed. 
+If the `yum-plugin-ovl` package is not available in your distribution (e.g. 
+RHEL/CentOS prior to 6.8 or 7.2), you may need to run `touch /var/lib/rpm/*` 
+before running `yum install`.
+
+- **rename(2)**. OverlayFS does not fully support the `rename(2)` system call. 
+Your application needs to detect its failure and fall back to a "copy and 
+unlink" strategy.

+ 1 - 1
man/docker-run.1.md

@@ -990,7 +990,7 @@ network namespace, run this command:
 
 Note:
 
-Not all sysctls are namespaced. docker does not support changing sysctls
+Not all sysctls are namespaced. Docker does not support changing sysctls
 inside of a container that also modify the host system. As the kernel 
 evolves we expect to see more sysctls become namespaced.
 
