Updating networking docs with technical information

- the `/etc/hosts` read caveat due to dynamic updates
- information about `docker_gwbridge`
- Carries and closes #17654
- Updating with the last change by Madhu
- Updating with the IPAM API for 1.22

Signed-off-by: Mary Anthony <mary@docker.com>


@@ -2715,6 +2715,12 @@ Content-Type: application/json
{
"Name":"isolated_nw",
"Driver":"bridge"
"IPAM":{
"Config":[{
"Subnet":"172.20.0.0/16",
"IPRange":"172.20.10.0/24",
"Gateway":"172.20.10.11"
}]
}
```
@@ -2740,6 +2746,7 @@ JSON Parameters:
- **Name** - The new network's name. This is a mandatory field.
- **Driver** - Name of the network driver to use. Defaults to the `bridge` driver.
- **IPAM** - Optional custom IP scheme for the network
- **Options** - Network specific options to be used by the drivers
- **CheckDuplicate** - Requests daemon to check for networks with same name
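
For example, you could exercise this endpoint with `curl` against the local daemon socket (a sketch; `curl --unix-socket` requires curl 7.40 or later, and the network name and IPAM values are illustrative):

```
$ curl --unix-socket /var/run/docker.sock \
       -X POST http://localhost/networks/create \
       -H "Content-Type: application/json" \
       -d '{
             "Name": "isolated_nw",
             "Driver": "bridge",
             "IPAM": {
               "Config": [{
                 "Subnet": "172.20.0.0/16",
                 "IPRange": "172.20.10.0/24",
                 "Gateway": "172.20.10.11"
               }]
             }
           }'
```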


@@ -2709,6 +2709,11 @@ Create a network
**Example request**:
```
POST /networks/create HTTP/1.1
Content-Type: application/json
@@ -2716,6 +2721,12 @@ Content-Type: application/json
{
"Name":"isolated_nw",
"Driver":"bridge"
"IPAM":{
"Config":[{
"Subnet":"172.20.0.0/16",
"IPRange":"172.20.10.0/24",
"Gateway":"172.20.10.11"
}]
}
```
@@ -2741,6 +2752,7 @@ JSON Parameters:
- **Name** - The new network's name. This is a mandatory field.
- **Driver** - Name of the network driver to use. Defaults to the `bridge` driver.
- **IPAM** - Optional custom IP scheme for the network
- **Options** - Network specific options to be used by the drivers
- **CheckDuplicate** - Requests daemon to check for networks with same name


@@ -404,6 +404,19 @@ container itself as well as `localhost` and a few other common things. The
    ::1            localhost ip6-localhost ip6-loopback
    86.75.30.9     db-static
If a container is connected to the default bridge network and `linked`
with other containers, then the container's `/etc/hosts` file is updated
with the linked container's name.

If the container is connected to a user-defined network, the container's
`/etc/hosts` file is updated with the names of all the other containers in that
user-defined network.
> **Note**: Since Docker may live update the container's `/etc/hosts` file, there
may be situations when processes inside the container can end up reading an
empty or incomplete `/etc/hosts` file. In most cases, retrying the read
should fix the problem.
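
For example, a process that depends on a freshly added host entry could wrap
the read in a short retry loop (a minimal sketch; the entry name, attempt
count, and sleep interval are illustrative):

    # Retry until /etc/hosts contains the expected entry (illustrative bounds).
    for attempt in 1 2 3; do
        grep -q 'db-static' /etc/hosts && break
        sleep 1
    done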
## Restart policies (--restart)
Using the `--restart` flag on Docker run you can specify a restart policy for


@@ -11,7 +11,7 @@ weight=-3
# Get started with multi-host networking
This article uses an example to explain the basics of creating a multi-host
network. Docker Engine supports multi-host networking out-of-the-box through the
`overlay` network driver. Unlike `bridge` networks, overlay networks require
some pre-existing conditions before you can create one. These conditions are:
@@ -21,8 +21,10 @@ some pre-existing conditions before you can create one. These conditions are:
* A cluster of hosts with connectivity to the key-value store.
* A properly configured Engine `daemon` on each host in the cluster.
Though Docker Machine and Docker Swarm are not mandatory to experience Docker
multi-host networking, this example uses them to illustrate how they are
integrated. You'll use Machine to create both the key-value store
server and the host cluster. This example creates a Swarm cluster.
## Prerequisites
@@ -46,7 +48,7 @@ store) key-value stores. This example uses Consul.
2. Provision a VirtualBox machine called `mh-keystore`.
    $ docker-machine create -d virtualbox mh-keystore
When you provision a new machine, the process adds Docker Engine to the
host. This means rather than installing Consul manually, you can create an
@@ -55,10 +57,10 @@ store) key-value stores. This example uses Consul.
3. Start a `progrium/consul` container running on the `mh-keystore` machine.
    $ docker $(docker-machine config mh-keystore) run -d \
        -p "8500:8500" \
        -h "consul" \
        progrium/consul -server -bootstrap
You passed the `docker run` command the connection configuration using a bash
expansion `$(docker-machine config mh-keystore)`. The client started a
@@ -66,13 +68,13 @@ store) key-value stores. This example uses Consul.
4. Set your local environment to the `mh-keystore` machine.
    $ eval "$(docker-machine env mh-keystore)"
5. Run the `docker ps` command to see the `consul` container.
    $ docker ps
    CONTAINER ID    IMAGE              COMMAND                  CREATED           STATUS          PORTS                                                                            NAMES
    4d51392253b3    progrium/consul    "/bin/start -server -"   25 minutes ago    Up 25 minutes   53/tcp, 53/udp, 8300-8302/tcp, 0.0.0.0:8500->8500/tcp, 8400/tcp, 8301-8302/udp   admiring_panini
Keep your terminal open and move on to the next step.
@@ -87,13 +89,13 @@ that machine options that are needed by the `overlay` network driver.
1. Create a Swarm master.
    $ docker-machine create \
        -d virtualbox \
        --swarm --swarm-image="swarm" --swarm-master \
        --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
        --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
        --engine-opt="cluster-advertise=eth1:2376" \
        mhs-demo0
At creation time, you supply the Engine `daemon` with the `--cluster-store` option. This option tells the Engine the location of the key-value store for the `overlay` network. The bash expansion `$(docker-machine ip mh-keystore)` resolves to the IP address of the Consul server you created in "STEP 1". The `--cluster-advertise` option advertises the machine on the network.
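
If you want to confirm the daemon received these options, you can inspect the daemon process on the new machine (a sketch; the process listing, flag order, and IP address vary by machine image and are illustrative here):

    $ docker-machine ssh mhs-demo0 "ps | grep 'docker daemon'"
    ... docker daemon ... --cluster-store=consul://192.168.99.100:8500 --cluster-advertise=eth1:2376 ...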
@@ -126,74 +128,71 @@ To create an overlay network
1. Set your docker environment to the Swarm master.
    $ eval $(docker-machine env --swarm mhs-demo0)
Using the `--swarm` flag with `docker-machine` restricts the `docker` commands to Swarm information alone.
2. Use the `docker info` command to view the Swarm.
    $ docker info
    Containers: 3
    Images: 2
    Role: primary
    Strategy: spread
    Filters: affinity, health, constraint, port, dependency
    Nodes: 2
     mhs-demo0: 192.168.99.104:2376
      └ Containers: 2
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 1.021 GiB
      └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0-rc1 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
     mhs-demo1: 192.168.99.105:2376
      └ Containers: 1
      └ Reserved CPUs: 0 / 1
      └ Reserved Memory: 0 B / 1.021 GiB
      └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0-rc1 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
    CPUs: 2
    Total Memory: 2.043 GiB
    Name: 30438ece0915
From this information, you can see that you are running three containers and two images on the Master.
3. Create your `overlay` network.
    $ docker network create --driver overlay my-net
You only need to create the network on a single host in the cluster. In this case, you used the Swarm master but you could easily have run it on any host in the cluster.
4. Check that the network is running:
    $ docker network ls
    NETWORK ID          NAME                DRIVER
    412c2496d0eb        mhs-demo1/host      host
    dd51763e6dd2        mhs-demo0/bridge    bridge
    6b07d0be843f        my-net              overlay
    b4234109bd9b        mhs-demo0/none      null
    1aeead6dd890        mhs-demo0/host      host
    d0bb78cbe7bd        mhs-demo1/bridge    bridge
    1c0eb8f69ebb        mhs-demo1/none      null
Because you are in the Swarm master environment, you see all the networks on all the Swarm agents: the default networks on each Engine plus the single overlay network. Notice that each `NETWORK ID` is unique.
5. Switch to each Swarm agent in turn and list the networks.
    $ eval $(docker-machine env mhs-demo0)
    $ docker network ls
    NETWORK ID          NAME                DRIVER
    6b07d0be843f        my-net              overlay
    dd51763e6dd2        bridge              bridge
    b4234109bd9b        none                null
    1aeead6dd890        host                host

    $ eval $(docker-machine env mhs-demo1)
    $ docker network ls
    NETWORK ID          NAME                DRIVER
    d0bb78cbe7bd        bridge              bridge
    1c0eb8f69ebb        none                null
    412c2496d0eb        host                host
    6b07d0be843f        my-net              overlay
Both agents report that they have the `my-net` network with the `6b07d0be843f` ID. You have a multi-host container network running!
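
You can also inspect the network from either host to confirm its scope and driver (a sketch; the output is abbreviated and the ID shown is illustrative):

    $ docker network inspect my-net
    [
        {
            "Name": "my-net",
            "Id": "6b07d0be843f...",
            "Scope": "global",
            "Driver": "overlay"
        }
    ]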
@@ -203,7 +202,7 @@ Once your network is created, you can start a container on any of the hosts and
1. Point your environment to your `mhs-demo0` instance.
    $ eval $(docker-machine env mhs-demo0)
2. Start an Nginx server on `mhs-demo0`.
@@ -215,7 +214,7 @@ Once your network is created, you can start a container on any of the hosts and
    $ eval $(docker-machine env mhs-demo1)
4. Run a Busybox instance and get the contents of the Nginx server's home page.
    $ docker run -it --rm --net=my-net --env="constraint:node==mhs-demo1" busybox wget -O- http://web
    Unable to find image 'busybox:latest' locally
@@ -252,9 +251,68 @@ Once your network is created, you can start a container on any of the hosts and
    </html>
    -                    100% |*******************************|   612   0:00:00 ETA
## Step 5: Check external connectivity
As you've seen, Docker's built-in overlay network driver provides out-of-the-box
connectivity between the containers on multiple hosts within the same network.
Additionally, containers connected to the multi-host network are automatically
connected to the `docker_gwbridge` network. This network allows the containers
to have external connectivity outside of their cluster.
1. Change your environment to the Swarm agent.
    $ eval $(docker-machine env mhs-demo1)

2. View the `docker_gwbridge` network by listing the networks.

    $ docker network ls
    NETWORK ID          NAME                DRIVER
    6b07d0be843f        my-net              overlay
    dd51763e6dd2        bridge              bridge
    b4234109bd9b        none                null
    1aeead6dd890        host                host
    e1dbd5dff8be        docker_gwbridge     bridge
3. Repeat steps 1 and 2 on the Swarm master.
    $ eval $(docker-machine env mhs-demo0)
    $ docker network ls
    NETWORK ID          NAME                DRIVER
    6b07d0be843f        my-net              overlay
    d0bb78cbe7bd        bridge              bridge
    1c0eb8f69ebb        none                null
    412c2496d0eb        host                host
    97102a22e8d2        docker_gwbridge     bridge
4. Check the Nginx container's network interfaces.
    $ docker exec web ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    22: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
        link/ether 02:42:0a:00:09:03 brd ff:ff:ff:ff:ff:ff
        inet 10.0.9.3/24 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::42:aff:fe00:903/64 scope link
           valid_lft forever preferred_lft forever
    24: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
        inet 172.18.0.2/16 scope global eth1
           valid_lft forever preferred_lft forever
        inet6 fe80::42:acff:fe12:2/64 scope link
           valid_lft forever preferred_lft forever
The `eth0` interface represents the container interface that is connected to
the `my-net` overlay network, while the `eth1` interface represents the
container interface that is connected to the `docker_gwbridge` network.
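
Because external traffic leaves the container through the gateway bridge, the container's default route points out `eth1` (a sketch; the route output is abbreviated and the addresses are illustrative, matching the output above):

    $ docker exec web ip route
    default via 172.18.0.1 dev eth1
    10.0.9.0/24 dev eth0  src 10.0.9.3
    172.18.0.0/16 dev eth1  src 172.18.0.2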
## Step 6: Extra Credit with Docker Compose
You can try starting a second network on your existing Swarm cluster using Docker Compose.
1. Log into the Swarm master.
@@ -271,7 +329,6 @@ You can try starting a second network on your existing Swarm cluser using Docker
          - "constraint:node==swl-demo0"
        ports:
          - "80:5000"
      mongo:
        image: mongo
@@ -283,5 +340,7 @@ You can try starting a second network on your existing Swarm cluser using Docker
## Related information
* [Understand Docker container networks](dockernetworks.md)
* [Work with network commands](work-with-networks.md)
* [Docker Swarm overview](https://docs.docker.com/swarm)
* [Docker Machine overview](https://docs.docker.com/machine)


@@ -355,9 +355,9 @@ ports and the exposed ports, use `docker port`.
Publish a container's port, or range of ports, to the host.
Format: `ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort`
Both hostPort and containerPort can be specified as a range of ports.
When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range.
(e.g., `docker run -p 1234-1236:1222-1224 --name thisWorks -t busybox`
but not `docker run -p 1230-1236:1230-1240 --name RangeContainerPortsBiggerThanRangeHostPorts -t busybox`)
With ip: `docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT --name CONTAINER -t someimage`
Use `docker port` to see the actual mapping: `docker port CONTAINER $CONTAINERPORT`
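
For example, after publishing a port range you can confirm an individual mapping with `docker port` (a sketch; the `-d` flag and `top` command just keep the container running, and the reported address is illustrative):

    $ docker run -d -p 1234-1236:1222-1224 --name thisWorks -t busybox top
    $ docker port thisWorks 1222
    0.0.0.0:1234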
@@ -437,17 +437,17 @@ standard input.
""--ulimit""=[]
Ulimit options
**-v**, **--volume**=[] Create a bind mount
(format: `[host-dir:]container-dir[:<suffix options>]`, where suffix options
are comma delimited and selected from [rw|ro] and [z|Z].)
(e.g., using -v /host-dir:/container-dir, bind mounts /host-dir in the
host to /container-dir in the Docker container)
If 'host-dir' is missing, then docker automatically creates the new volume
on the host. **This auto-creation of the host path has been deprecated in
Release: v1.9.**
The **-v** option can be used one or
more times to add one or more mounts to a container. These mounts can then be
used in other containers using the **--volumes-from** option.
@@ -469,31 +469,31 @@ content label. Shared volume labels allow all containers to read/write content.
The `Z` option tells Docker to label the content with a private unshared label.
Only the current container can use a private volume.
The `container-dir` must always be an absolute path such as `/src/docs`.
The `host-dir` can either be an absolute path or a `name` value. If you
supply an absolute path for the `host-dir`, Docker bind-mounts to the path
you specify. If you supply a `name`, Docker creates a named volume by that `name`.
A `name` value must start with an alphanumeric character,
followed by `a-z0-9`, `_` (underscore), `.` (period) or `-` (hyphen).
An absolute path starts with a `/` (forward slash).
For example, you can specify either `/foo` or `foo` for a `host-dir` value.
If you supply the `/foo` value, Docker creates a bind-mount. If you supply
the `foo` specification, Docker creates a named volume.
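
For example (a sketch; the paths and the volume name are illustrative):

    # Bind-mounts the host directory /foo into the container:
    $ docker run -v /foo:/src/docs -it busybox sh
    # Creates (or reuses) a named volume called foo:
    $ docker run -v foo:/src/docs -it busybox sh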
**--volumes-from**=[]
Mount volumes from the specified container(s)
Mounts already mounted volumes from a source container onto another
container. You must supply the source's container-id. To share
a volume, use the **--volumes-from** option when running
the target container. You can share volumes even if the source container
is not running.
By default, Docker mounts the volumes in the same mode (read-write or
read-only) as it is mounted in the source container. Optionally, you
can change this by suffixing the container-id with either the `:ro` or
`:rw` keyword.
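
For example, to share a volume read-only from a source container (a sketch; the container name and volume path are illustrative):

    # Create a source container with an anonymous volume at /data.
    $ docker run -d -v /data --name datasource busybox top
    # Mount that volume read-only in a second container.
    $ docker run -it --volumes-from datasource:ro busybox ls /data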
If the location of the volume from the source container overlaps with
@@ -558,7 +558,7 @@ Now run a regular container, and it correctly does NOT see the shared memory seg
```
$ docker run -it shm ipcs -m
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
```
@@ -637,6 +637,15 @@ Running the **env** command in the linker container shows environment variables
When linking two containers, Docker will use the exposed ports of the container
to create a secure tunnel for the parent to access.
If a container is connected to the default bridge network and `linked`
with other containers, then the container's `/etc/hosts` file is updated
with the linked container's name.
> **Note**: Since Docker may live update the container's `/etc/hosts` file, there
may be situations when processes inside the container can end up reading an
empty or incomplete `/etc/hosts` file. In most cases, retrying the read
should fix the problem.
## Mapping Ports for External Usage