First pass at consolidating

Removing old networking.md
Updating dockernetworks.md with images
Adding information on network plugins
Adding blurb about links to docker networking
Updating the working documentation
Adding Overlay Getting Started
Downplaying links by removing refs/examples, adding refs/examples for network.
Updating getting started to reflect networks not links
Pulling out old network material
Updating per discussion with Madhu to add Default docs section
Updating with bridge default
Fix bad merge
Updating with new cluster-advertise behavior
Update working and NetworkSettings examples
Correcting example for default bridge discovery behavior
Entering comments
Fixing broken Markdown Syntax
Updating with comments
Updating all the links

Signed-off-by: Mary Anthony <mary@docker.com>
This commit is contained in:
Mary Anthony 2015-09-30 13:11:36 -07:00
parent 4eac6d4529
commit 9ef855f9e5
82 changed files with 3118 additions and 2233 deletions

View file

@ -59,7 +59,7 @@ in a database image.
In almost all cases, you should only run a single process in a single
container. Decoupling applications into multiple containers makes it much
easier to scale horizontally and reuse containers. If that service depends on
another service, make use of [container linking](../userguide/dockerlinks.md).
another service, make use of [container linking](../userguide/networking/default_network/dockerlinks.md).
### Minimize the number of layers

View file

@ -4,8 +4,7 @@ title = "Automatically start containers"
description = "How to generate scripts for upstart, systemd, etc."
keywords = ["systemd, upstart, supervisor, docker, documentation, host integration"]
[menu.main]
parent = "smn_containers"
weight = 99
parent = "smn_administrate"
+++
<![end-metadata]-->

File diff suppressed because it is too large

View file

@ -39,7 +39,7 @@ of another container. Of course, if the host system is set up
accordingly, containers can interact with each other through their
respective network interfaces — just like they can interact with
external hosts. When you specify public ports for your containers or use
[*links*](../userguide/dockerlinks.md)
[*links*](../userguide/networking/default_network/dockerlinks.md)
then IP traffic is allowed between containers. They can ping each other,
send/receive UDP packets, and establish TCP connections, but that can be
restricted if necessary. From a network architecture point of view, all
@ -129,7 +129,7 @@ privilege separation.
Eventually, it is expected that the Docker daemon will run with restricted
privileges, delegating operations to well-audited sub-processes,
each with its own (very limited) scope of Linux capabilities,
virtual network setup, filesystem management, etc. That is, most likely,
pieces of the Docker engine itself will run inside of containers.

View file

@ -172,6 +172,6 @@ the exposed port to two different ports on the host
$ mongo --port 28001
$ mongo --port 28002
- [Linking containers](../userguide/dockerlinks.md)
- [Linking containers](../userguide/networking/default_network/dockerlinks.md)
- [Cross-host linking containers](../articles/ambassador_pattern_linking.md)
- [Creating an Automated Build](https://docs.docker.com/docker-hub/builds/)

View file

@ -10,7 +10,7 @@ parent = "smn_applied"
# Dockerizing PostgreSQL
> **Note**:
> - **If you don't like sudo** then see [*Giving non-root
> access*](../installation/binaries.md#giving-non-root-access)
@ -85,7 +85,7 @@ And run the PostgreSQL server container (in the foreground):
$ docker run --rm -P --name pg_test eg_postgresql
There are 2 ways to connect to the PostgreSQL server. We can use [*Link
Containers*](../userguide/dockerlinks.md), or we can access it from our host
Containers*](../userguide/networking/default_network/dockerlinks.md), or we can access it from our host
(or the network).
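For instance, a linked client container might run `psql` like this (a sketch;
`pg_test` and `eg_postgresql` come from the example above, while the `docker`
user and database are assumed to have been created when the image was built):

    $ docker run --rm -it --link pg_test:pg eg_postgresql \
        sh -c 'psql -h "$PG_PORT_5432_TCP_ADDR" -U docker -d docker --password'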
> **Note**:

View file

@ -18,9 +18,8 @@ plugins.
Plugins extend Docker's functionality. They come in specific types. For
example, a [volume plugin](plugins_volume.md) might enable Docker
volumes to persist across multiple Docker hosts and a
[network plugin](plugins_network.md) might provide network plumbing
using a favorite networking technology, such as vxlan overlay, ipvlan, EVPN, etc.
volumes to persist across multiple Docker hosts and a
[network plugin](plugins_network.md) might provide network plumbing.
Currently Docker supports volume and network driver plugins. In the future it
will support additional plugin types.

View file

@ -1,7 +1,7 @@
<!--[metadata]>
+++
title = "Docker network driver plugins"
description = "Network drive plugins."
description = "Network driver plugins."
keywords = ["Examples, Usage, plugins, docker, documentation, user guide"]
[menu.main]
parent = "mn_extend"
@ -11,41 +11,48 @@ weight=-1
# Docker network driver plugins
Docker supports network driver plugins via
[LibNetwork](https://github.com/docker/libnetwork). Network driver plugins are
implemented as "remote drivers" for LibNetwork, which shares plugin
infrastructure with Docker. In effect this means that network driver plugins
are activated in the same way as other plugins, and use the same kind of
protocol.
Docker network plugins enable Docker deployments to be extended to support a
wide range of networking technologies, such as VXLAN, IPVLAN, MACVLAN or
something completely different. Network driver plugins are supported via the
LibNetwork project. Each plugin is implemented as a "remote driver" for
LibNetwork, which shares plugin infrastructure with Docker. Effectively,
network driver plugins are activated in the same way as other plugins, and use
the same kind of protocol.
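For instance, a plugin typically makes itself discoverable through Docker's
general plugin discovery mechanism, such as a spec file pointing at its socket
(a sketch; the plugin name and socket path are hypothetical):

    $ cat /etc/docker/plugins/myplugin.spec
    unix:///run/docker/plugins/myplugin.sock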
## Using network driver plugins
The means of installing and running a network driver plugin will depend on the
particular plugin.
The means of installing and running a network driver plugin depend on the
particular plugin. So, be sure to install your plugin according to the
instructions obtained from the plugin developer.
Once running however, network driver plugins are used just like the built-in
network drivers: by being mentioned as a driver in network-oriented Docker
commands. For example,
docker network create -d weave mynet
$ docker network create --driver weave mynet
Some network driver plugins are listed in [plugins](plugins.md).
The network thus created is owned by the plugin, so subsequent commands
referring to that network will also be run through the plugin such as,
The `mynet` network is now owned by `weave`, so subsequent commands
referring to that network will be sent to the plugin:
docker run --net=mynet busybox top
$ docker run --net=mynet busybox top
## Network driver plugin protocol
The network driver protocol, additional to the plugin activation call, is
documented as part of LibNetwork:
## Write a network plugin
Network plugins implement the [Docker plugin
API](https://docs.docker.com/extend/plugin_api/) and the network plugin protocol.
## Network plugin protocol
The network driver protocol, in addition to the plugin activation call, is
documented as part of libnetwork:
[https://github.com/docker/libnetwork/blob/master/docs/remote.md](https://github.com/docker/libnetwork/blob/master/docs/remote.md).
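For orientation, the handshake described there begins with Docker posting to
the plugin's `/Plugin.Activate` endpoint; a network plugin responds by
declaring the driver it implements, roughly like this (a sketch):

    {
        "Implements": ["NetworkDriver"]
    }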
# Related GitHub PRs and issues
# Related Information
Please record your feedback in the following issue, on the usual
Google Groups, or the IRC channel #docker-network.
To interact with the Docker maintainers and other interested users, use the IRC channel `#docker-network`.
- [#14083](https://github.com/docker/docker/issues/14083) Feedback on
experimental networking features
- [Docker networks feature overview](../userguide/networking/index.md)
- The [LibNetwork](https://github.com/docker/libnetwork) project

View file

@ -98,7 +98,7 @@ with several powerful functionalities:
- *Sharing.* Docker has access to a public registry [on Docker Hub](https://hub.docker.com/)
where thousands of people have uploaded useful images: anything from Redis,
CouchDB, PostgreSQL to IRC bouncers to Rails app servers to Hadoop to base
images for various Linux distros. The
[*registry*](https://docs.docker.com/registry/) also
includes an official "standard library" of useful containers maintained by the
@ -135,8 +135,7 @@ thousands or even millions of containers running in parallel.
### How do I connect Docker containers?
Currently the recommended way to link containers is via the link primitive. You
can see details of how to [work with links here](../userguide/dockerlinks.md).
Currently the recommended way to connect containers is via the Docker network feature. You can see details of how to [work with Docker networks here](https://docs.docker.com/networking).
Also useful for more flexible service portability is the [Ambassador linking
pattern](../articles/ambassador_pattern_linking.md).
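For example, containers attached to the same user-defined network can reach
each other directly (a sketch; the network and container names are
hypothetical):

    $ docker network create app-net
    $ docker run -d --net=app-net --name db redis
    $ docker run -d --net=app-net --name web nginx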
@ -154,19 +153,19 @@ the container will continue to as well. You can see a more substantial example
Linux:
- Ubuntu 12.04, 13.04 et al
- Fedora 19/20+
- RHEL 6.5+
- CentOS 6+
- Gentoo
- ArchLinux
- openSUSE 12.3+
- CRUX 3.0+
Cloud:
- Amazon EC2
- Google Compute Engine
- Microsoft Azure
- Rackspace
@ -263,11 +262,11 @@ how to do this, check the documentation for your OS.
You can find more answers on:
- [Docker user mailing list](https://groups.google.com/d/forum/docker-user)
- [Docker developer mailing list](https://groups.google.com/d/forum/docker-dev)
- [IRC, docker on freenode](irc://chat.freenode.net#docker)
- [GitHub](https://github.com/docker/docker)
- [Ask questions on Stackoverflow](http://stackoverflow.com/search?q=docker)
- [Join the conversation on Twitter](http://twitter.com/docker)
Looking for something else to read? Check out the [User Guide](../userguide/).

View file

@ -43,7 +43,7 @@ Dockerfile.
>**Warning**: Do not use your root directory, `/`, as the `PATH` as it causes
>the build to transfer the entire contents of your hard drive to the Docker
>daemon.
To use a file in the build context, the `Dockerfile` refers to the file specified
in an instruction, for example, a `COPY` instruction. To increase the build's
@ -159,7 +159,7 @@ Example (parsed representation is displayed after the `#`):
ADD . $foo # ADD . /bar
COPY \$foo /quux # COPY $foo /quux
Environment variables are supported by the following list of instructions in
the `Dockerfile`:
* `ADD`
@ -177,7 +177,7 @@ as well as:
* `ONBUILD` (when combined with one of the supported instructions above)
> **Note**:
> prior to 1.4, `ONBUILD` instructions did **NOT** support environment
> variables, even when combined with any of the instructions listed above.
Environment variable substitution will use the same value for each variable
@ -187,7 +187,7 @@ throughout the entire command. In other words, in this example:
ENV abc=bye def=$abc
ENV ghi=$abc
will result in `def` having a value of `hello`, not `bye`. However,
`ghi` will have a value of `bye` because it is not part of the same command
that set `abc` to `bye`.
@ -354,13 +354,13 @@ RUN /bin/bash -c 'source $HOME/.bashrc ; echo $HOME'
> Unlike the *shell* form, the *exec* form does not invoke a command shell.
> This means that normal shell processing does not happen. For example,
> `RUN [ "echo", "$HOME" ]` will not do variable substitution on `$HOME`.
> If you want shell processing then either use the *shell* form or execute
> a shell directly, for example: `RUN [ "sh", "-c", "echo", "$HOME" ]`.
The cache for `RUN` instructions isn't invalidated automatically during
the next build. The cache for an instruction like
`RUN apt-get dist-upgrade -y` will be reused during the next build. The
cache for `RUN` instructions can be invalidated by using the `--no-cache`
flag, for example `docker build --no-cache`.
See the [`Dockerfile` Best Practices
@ -399,8 +399,8 @@ the executable, in which case you must specify an `ENTRYPOINT`
instruction as well.
> **Note**:
> If `CMD` is used to provide default arguments for the `ENTRYPOINT`
> instruction, both the `CMD` and `ENTRYPOINT` instructions should be specified
> with the JSON array format.
> **Note**:
@ -411,7 +411,7 @@ instruction as well.
> Unlike the *shell* form, the *exec* form does not invoke a command shell.
> This means that normal shell processing does not happen. For example,
> `CMD [ "echo", "$HOME" ]` will not do variable substitution on `$HOME`.
> If you want shell processing then either use the *shell* form or execute
> a shell directly, for example: `CMD [ "sh", "-c", "echo", "$HOME" ]`.
When used in the shell or exec formats, the `CMD` instruction sets the command
@ -461,7 +461,7 @@ An image can have more than one label. To specify multiple labels,
Docker recommends combining labels into a single `LABEL` instruction where
possible. Each `LABEL` instruction produces a new layer which can result in an
inefficient image if you use many labels. This example results in a single image
layer.
LABEL multi.label1="value1" multi.label2="value2" other="value3"
@ -470,7 +470,7 @@ The above can also be written as:
LABEL multi.label1="value1" \
multi.label2="value2" \
other="value3"
Labels are additive including `LABEL`s in `FROM` images. If Docker
encounters a label/key that already exists, the new value overrides any previous
labels with identical keys.
@ -494,12 +494,15 @@ To view an image's labels, use the `docker inspect` command.
The `EXPOSE` instruction informs Docker that the container listens on the
specified network ports at runtime. `EXPOSE` does not make the ports of the
container accessible to the host. To do that, you must use either the `-p` flag
to publish a range of ports or the `-P` flag to publish all of the exposed ports.
You can expose one port number and publish it externally under another number.
Docker uses exposed and published ports to interconnect containers using links
(see [Linking containers together](../userguide/dockerlinks.md))
and to set up port redirection on the host system when [using the -P flag](run.md#expose-incoming-ports).
to publish a range of ports or the `-P` flag to publish all of the exposed
ports. You can expose one port number and publish it externally under another
number.
To set up port redirection on the host system, see [using the -P
flag](run.md#expose-incoming-ports). The Docker network feature supports
creating networks without the need to expose ports within the network. For
detailed information, see the [overview of this
feature](../userguide/networking/index.md).
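For instance (a sketch; the port numbers are arbitrary and the image name
`myimage` is hypothetical), a Dockerfile might declare:

    EXPOSE 8080

and the operator can then publish that port under a different host port:

    $ docker run -d -p 80:8080 myimage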
## ENV
@ -507,17 +510,18 @@ and to set up port redirection on the host system when [using the -P flag](run.m
ENV <key>=<value> ...
The `ENV` instruction sets the environment variable `<key>` to the value
`<value>`. This value will be in the environment of all "descendent" `Dockerfile`
commands and can be [replaced inline](#environment-replacement) in many as well.
`<value>`. This value will be in the environment of all "descendent"
`Dockerfile` commands and can be [replaced inline](#environment-replacement) in
many as well.
The `ENV` instruction has two forms. The first form, `ENV <key> <value>`,
will set a single variable to a value. The entire string after the first
space will be treated as the `<value>` - including characters such as
spaces and quotes.
The second form, `ENV <key>=<value> ...`, allows for multiple variables to
be set at one time. Notice that the second form uses the equals sign (=)
in the syntax, while the first form does not. Like command line parsing,
quotes and backslashes can be used to include spaces within values.
For example:
@ -531,7 +535,7 @@ and
ENV myDog Rex The Dog
ENV myCat fluffy
will yield the same net results in the final container, but the first form
is preferred because it produces a single cache layer.
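For comparison, a sketch of the same assignments written as a single
`<key>=<value>` instruction:

    ENV myDog="Rex The Dog" myCat=fluffy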
The environment variables set using `ENV` will persist when a container is run
@ -555,8 +559,8 @@ whitespace)
The `ADD` instruction copies new files, directories or remote file URLs from `<src>`
and adds them to the filesystem of the container at the path `<dest>`.
Multiple `<src>` resources may be specified but if they are files or
directories then they must be relative to the source directory that is
being built (the context of the build).
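For instance (a sketch; the file and directory names are hypothetical):

    # Add two files from the build context into a directory in the image
    ADD file1.txt file2.txt /mydir/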
Each `<src>` may contain wildcards and matching will be done using Go's
@ -619,8 +623,8 @@ guide](../articles/dockerfile_best-practices.md#build-cache) for more informatio
appropriate filename can be discovered in this case (`http://example.com`
will not work).
- If `<src>` is a directory, the entire contents of the directory are copied,
including filesystem metadata.
> **Note**:
> The directory itself is not copied, just its contents.
@ -640,7 +644,7 @@ guide](../articles/dockerfile_best-practices.md#build-cache) for more informatio
at `<dest>/base(<src>)`.
- If multiple `<src>` resources are specified, either directly or due to the
use of a wildcard, then `<dest>` must be a directory, and it must end with
a slash `/`.
- If `<dest>` does not end with a trailing slash, it will be considered a
@ -688,8 +692,8 @@ All new files and directories are created with a UID and GID of 0.
`docker build` is to send the context directory (and subdirectories) to the
docker daemon.
- If `<src>` is a directory, the entire contents of the directory are copied,
including filesystem metadata.
> **Note**:
> The directory itself is not copied, just its contents.
@ -700,7 +704,7 @@ All new files and directories are created with a UID and GID of 0.
at `<dest>/base(<src>)`.
- If multiple `<src>` resources are specified, either directly or due to the
use of a wildcard, then `<dest>` must be a directory, and it must end with
a slash `/`.
- If `<dest>` does not end with a trailing slash, it will be considered a
@ -729,7 +733,7 @@ Command line arguments to `docker run <image>` will be appended after all
elements in an *exec* form `ENTRYPOINT`, and will override all elements specified
using `CMD`.
This allows arguments to be passed to the entry point, i.e., `docker run <image> -d`
will pass the `-d` argument to the entry point.
You can override the `ENTRYPOINT` instruction using the `docker run --entrypoint`
flag.
@ -760,10 +764,10 @@ When you run the container, you can see that `top` is the only process:
%Cpu(s): 0.1 us, 0.1 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 2056668 total, 1616832 used, 439836 free, 99352 buffers
KiB Swap: 1441840 total, 0 used, 1441840 free. 1324440 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 19744 2336 2080 R 0.0 0.1 0:00.04 top
To examine the result further, you can use `docker exec`:
$ docker exec -it test ps aux
@ -867,7 +871,7 @@ sys 0m 0.03s
> Unlike the *shell* form, the *exec* form does not invoke a command shell.
> This means that normal shell processing does not happen. For example,
> `ENTRYPOINT [ "echo", "$HOME" ]` will not do variable substitution on `$HOME`.
> If you want shell processing then either use the *shell* form or execute
> a shell directly, for example: `ENTRYPOINT [ "sh", "-c", "echo", "$HOME" ]`.
> Variables that are defined in the `Dockerfile` using `ENV` will be substituted by
> the `Dockerfile` parser.
@ -941,12 +945,12 @@ and marks it as holding externally mounted volumes from native host or other
containers. The value can be a JSON array, `VOLUME ["/var/log/"]`, or a plain
string with multiple arguments, such as `VOLUME /var/log` or `VOLUME /var/log
/var/db`. For more information/examples and mounting instructions via the
Docker client, refer to
[*Share Directories via Volumes*](../userguide/dockervolumes.md#mount-a-host-directory-as-a-data-volume)
documentation.
The `docker run` command initializes the newly created volume with any data
that exists at the specified location within the base image. For example,
consider the following Dockerfile snippet:
FROM ubuntu
@ -955,7 +959,7 @@ consider the following Dockerfile snippet:
VOLUME /myvol
This Dockerfile results in an image that causes `docker run` to
create a new mount point at `/myvol` and copy the `greeting` file
into the newly created volume.
> **Note**:

View file

@ -22,7 +22,7 @@ The `docker attach` command allows you to attach to a running container using
the container's ID or name, either to view its ongoing output or to control it
interactively. You can attach to the same contained process multiple times
simultaneously, screen sharing style, or quickly view the progress of your
daemonized process.
detached process.
You can detach from the container and leave it running with `CTRL-p CTRL-q`
(for a quiet exit) or with `CTRL-c` if `--sig-proxy` is false.

View file

@ -23,7 +23,7 @@ weight = -1
--default-gateway="" Container default gateway IPv4 address
--default-gateway-v6="" Container default gateway IPv6 address
--cluster-store="" URL of the distributed storage backend
--cluster-advertise="" Address of the daemon instance to advertise
--cluster-advertise="" Address of the daemon instance on the cluster
--cluster-store-opt=map[] Set cluster options
--dns=[] DNS server to use
--dns-opt=[] DNS options to use
@ -547,13 +547,16 @@ please check the [run](run.md) reference.
## Nodes discovery
`--cluster-advertise` specifies the 'host:port' combination that this particular
daemon instance should use when advertising itself to the cluster. The daemon
is reached by remote hosts on this 'host:port' combination.
The `--cluster-advertise` option specifies the `host:port` or `interface:port`
combination that this particular daemon instance should use when advertising
itself to the cluster. The daemon is reached by remote hosts through this value.
If you specify an interface, make sure it includes the IP address of the actual
Docker host. For Engine installation created through `docker-machine`, the
interface is typically `eth1`.
The daemon uses [libkv](https://github.com/docker/libkv/) to advertise
the node within the cluster. Some Key/Value backends support mutual
TLS, and the client TLS settings used by the daemon can be configured
the node within the cluster. Some key-value backends support mutual
TLS. The client TLS settings used by the daemon can be configured
using the `--cluster-store-opt` flag, specifying the paths to PEM encoded
files. For example:
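A sketch of what such an invocation might look like (the addresses, key-value
backend, and PEM file paths are hypothetical):

    $ docker daemon \
        --cluster-advertise eth0:2376 \
        --cluster-store etcd://192.168.1.2:2379 \
        --cluster-store-opt kv.cacertfile=/path/to/ca.pem \
        --cluster-store-opt kv.certfile=/path/to/cert.pem \
        --cluster-store-opt kv.keyfile=/path/to/key.pem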

View file

@ -33,14 +33,14 @@ describes all the details of the format.
For the most part, you can pick out any field from the JSON in a fairly
straightforward manner.
$ docker inspect --format='{{.NetworkSettings.IPAddress}}' $INSTANCE_ID
$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $INSTANCE_ID
**Get an instance's MAC Address:**
For the most part, you can pick out any field from the JSON in a fairly
straightforward manner.
$ docker inspect --format='{{.NetworkSettings.MacAddress}}' $INSTANCE_ID
$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.MacAddress}}{{end}}' $INSTANCE_ID
**Get an instance's log path:**
@ -58,7 +58,7 @@ output:
The `.Field` syntax doesn't work when the field name begins with a
number, but the template language's `index` function does. The
`.NetworkSettings.Ports` section contains a map of the internal port
mappings to a list of external address/port objects, so to grab just the
mappings to a list of external address/port objects. To grab just the
numeric public port, you use `index` to find the specific port map, and
then `index` 0 contains the first object inside of that. Then we ask for
the `HostPort` field to get the public address.
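Putting that together (a sketch; the `8787/tcp` key is a hypothetical exposed
port):

    $ docker inspect --format='{{(index (index .NetworkSettings.Ports "8787/tcp") 0).HostPort}}' $INSTANCE_ID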

View file

@ -2,7 +2,7 @@
+++
title = "network connect"
description = "The network connect command description and usage"
keywords = ["network, connect"]
keywords = ["network, connect, user-defined"]
[menu.main]
parent = "smn_cli"
+++

View file

@ -2,7 +2,7 @@
+++
title = "network create"
description = "The network create command description and usage"
keywords = ["network create"]
keywords = ["network, create"]
[menu.main]
parent = "smn_cli"
+++
@ -55,7 +55,7 @@ The `docker daemon` options that support the `overlay` network are:
To read more about these options and how to configure them, see ["*Get started
with multi-host network*"](../../userguide/networking/get-started-overlay.md).
It is also a good idea, though not required, to install Docker Swarm to
manage the cluster that makes up your network. Swarm provides sophisticated
discovery and server management that can assist your implementation.
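For instance, once the daemon options above are configured, creating an overlay
network is a single command (a sketch; the network name is hypothetical):

    $ docker network create --driver overlay my-multi-host-network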

View file

@ -2,7 +2,7 @@
+++
title = "network disconnect"
description = "The network disconnect command description and usage"
keywords = ["network, disconnect"]
keywords = ["network, disconnect, user-defined"]
[menu.main]
parent = "smn_cli"
+++

View file

@ -2,7 +2,7 @@
+++
title = "network inspect"
description = "The network inspect command description and usage"
keywords = ["network, inspect"]
keywords = ["network, inspect, user-defined"]
[menu.main]
parent = "smn_cli"
+++

View file

@ -2,7 +2,7 @@
+++
title = "network ls"
description = "The network ls command description and usage"
keywords = ["network, list"]
keywords = ["network, list, user-defined"]
[menu.main]
parent = "smn_cli"
+++

View file

@ -2,7 +2,7 @@
+++
title = "network rm"
description = "the network rm command description and usage"
keywords = ["network, rm"]
keywords = ["network, rm, user-defined"]
[menu.main]
parent = "smn_cli"
+++

View file

@ -86,14 +86,10 @@ specified image, and then `starts` it using the specified command. That is,
previous changes intact using `docker start`. See `docker ps -a` to view a list
of all containers.
There is detailed information about `docker run` in the [Docker run reference](run.md).
The `docker run` command can be used in combination with `docker commit` to
[*change the command that a container runs*](commit.md).
[*change the command that a container runs*](commit.md). There is additional detailed information about `docker run` in the [Docker run reference](../run.md).
See the [Docker User Guide](../../userguide/dockerlinks.md) for more detailed
information about the `--expose`, `-p`, `-P` and `--link` parameters,
and linking containers.
For information on connecting a container to a network, see the ["*Docker network overview*"](../../userguide/networking/index.md).
## Examples
@ -185,16 +181,15 @@ manipulate the host's Docker daemon.
$ docker run -p 127.0.0.1:80:8080 ubuntu bash
This binds port `8080` of the container to port `80` on `127.0.0.1` of
the host machine. The [Docker User Guide](../../userguide/dockerlinks.md)
This binds port `8080` of the container to port `80` on `127.0.0.1` of the host
machine. The [Docker User
Guide](../../userguide/networking/default_network/dockerlinks.md)
explains in detail how to manipulate ports in Docker.
$ docker run --expose 80 ubuntu bash
This exposes port `80` of the container for use within a link without
publishing the port to the host system's interfaces. The [Docker User
Guide](../../userguide/dockerlinks.md) explains in detail how to manipulate
ports in Docker.
This exposes port `80` of the container without publishing the port to the host
system's interfaces.
### Set environment variables (-e, --env, --env-file)
@ -302,21 +297,29 @@ For additional information on working with labels, see [*Labels - custom
metadata in Docker*](../../userguide/labels-custom-metadata.md) in the Docker User
Guide.
### Add link to another container (--link)
### Connect a container to a network (--net)
$ docker run --link /redis:redis --name console ubuntu bash
When you start a container, use the `--net` flag to connect it to a network.
The following command adds the `busybox` container to the `my-multihost-network` network.
The `--link` flag will link the container named `/redis` into the newly
created container with the alias `redis`. The new container can access the
network and environment of the `redis` container via environment variables.
The `--link` flag will also just accept the form `<name or id>` in which case
the alias will match the name. For instance, you could have written the previous
example as:
```bash
$ docker run -itd --net=my-multihost-network busybox
```
$ docker run --link redis --name console ubuntu bash
If you want to add a running container to a network, use the `docker network connect` subcommand.
The `--name` flag will assign the name `console` to the newly created
container.
You can connect multiple containers to the same network. Once connected, the
containers can communicate using only another container's IP address
or name. For `overlay` networks or custom plugins that support multi-host
connectivity, containers connected to the same multi-host network but launched
from different Engines can also communicate in this way.
**Note**: Service discovery is unavailable on the default bridge network.
Containers can communicate via their IP addresses by default. To communicate
by name, they must be linked.
You can disconnect a container from a network using the `docker network
disconnect` command.
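For instance (a sketch; the container name is hypothetical):

    $ docker network connect my-multihost-network my-running-container
    $ docker network disconnect my-multihost-network my-running-container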
### Mount volumes from container (--volumes-from)
@ -537,34 +540,3 @@ the three processes quota set for the `daemon` user.
The `--stop-signal` flag sets the system call signal that will be sent to the container to exit.
This signal can be a valid unsigned number that matches a position in the kernel's syscall table, for instance 9,
or a signal name in the format SIGNAME, for instance SIGKILL.
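For instance (a sketch showing both forms):

    $ docker run -d --stop-signal=SIGTERM busybox top
    $ docker run -d --stop-signal=15 busybox top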
### A complete example
$ docker run -d --name static static-web-files sh
$ docker run -d --expose=8098 --name riak riakserver
$ docker run -d -m 100m -e DEVELOPMENT=1 -e BRANCH=example-code -v $(pwd):/app/bin:ro --name app appserver
$ docker run -d -p 1443:443 --dns=10.0.0.1 --dns-search=dev.org -v /var/log/httpd --volumes-from static --link riak --link app -h www.sven.dev.org --name web webserver
$ docker run -t -i --rm --volumes-from web -w /var/log/httpd busybox tail -f access.log
This example shows five containers that might be set up to test a web
application change:
1. Start a pre-prepared volume image `static-web-files` (in the background)
that has CSS, image and static HTML in it, (with a `VOLUME` instruction in
the Dockerfile to allow the web server to use those files);
2. Start a pre-prepared `riakserver` image, give the container name `riak` and
expose port `8098` to any containers that link to it;
3. Start the `appserver` image, restricting its memory usage to 100MB, setting
two environment variables `DEVELOPMENT` and `BRANCH` and bind-mounting the
current directory (`$(pwd)`) in the container in read-only mode as `/app/bin`;
4. Start the `webserver`, mapping port `443` in the container to port `1443` on
the Docker server, setting the DNS server to `10.0.0.1` and DNS search
domain to `dev.org`, creating a volume to put the log files into (so we can
access it from another container), then importing the files from the volume
exposed by the `static` container, and linking to all exposed ports from
`riak` and `app`. Lastly, we set the hostname to `web.sven.dev.org` so it's
consistent with the pre-generated SSL certificate;
5. Finally, we create a container that runs `tail -f access.log` using the logs
volume from the `web` container, setting the workdir to `/var/log/httpd`. The
`--rm` option means that when the container exits, the container's layer is
removed.

View file

@ -135,16 +135,14 @@ after the container is created.
## libnetwork
libnetwork provides a native Go implementation for creating and managing container
network namespaces and other network resources. It manages the networking lifecycle
of the container, performing additional operations after the container is created.
## link
links provide an interface to connect Docker containers running on the same host
to each other without exposing the hosts' network ports. When you set up a link,
you create a conduit between a source container and a recipient container.
The recipient can then access select data about the source. To create a link,
you can use the `--link` flag.
links provide a legacy interface to connect Docker containers running on the
same host to each other without exposing the hosts' network ports. Use the
Docker networks feature instead.
## Machine
@ -221,4 +219,3 @@ Compared to containers, a Virtual Machine is heavier to run, provides more is
gets its own set of resources and does minimal sharing.
*Also known as : VM*

View file

@ -154,13 +154,14 @@ The operator can identify a container in three ways:
- UUID short identifier ("f78375b1c487")
- Name ("evil_ptolemy")
The UUID identifiers come from the Docker daemon, and if you do not
assign a name to the container with `--name` then the daemon will also
generate a random string name too. The name can become a handy way to
add meaning to a container since you can use this name when defining
[*links*](../userguide/dockerlinks.md) (or any
other place you need to identify a container). This works for both
background and foreground Docker containers.
The UUID identifiers come from the Docker daemon. If you do not assign a
container name with the `--name` option, then the daemon generates a random
string name for you. Defining a `name` can be a handy way to add meaning to a
container. If you specify a `name`, you can use it when referencing the
container within a Docker network. This works for both background and foreground
Docker containers.
**Note**: Containers on the default bridge network must be linked to communicate by name.
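For example, on a user-defined network a container's name can be used directly
(a sketch; the network and image names are hypothetical):

    $ docker network create my-net
    $ docker run -d --net=my-net --name web nginx
    $ docker run --rm --net=my-net busybox ping -c 1 web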
### PID equivalent
@ -259,8 +260,7 @@ with `docker run --net none` which disables all incoming and outgoing
networking. In cases like this, you would perform I/O through files or
`STDIN` and `STDOUT` only.
Publishing ports and linking to other containers will not work
when `--net` is anything other than the default (bridge).
Publishing ports and linking to other containers only works with the default (bridge) network. The linking feature is a legacy feature. You should always prefer using Docker network drivers over linking.
Your container will use the same DNS servers as the host by default, but
you can override this with `--dns`.
@ -331,6 +331,9 @@ container's namespaces in addition to the `loopback` interface. An IP
address will be allocated for containers on the bridge's network and
traffic will be routed through this bridge to the container.
Containers can communicate via their IP addresses by default. To communicate by
name, they must be linked.
#### Network: host
With the network set to `host` a container will share the host's
@ -366,19 +369,23 @@ running the `redis-cli` command and connecting to the Redis server over the
$ # use the redis container's network stack to access localhost
$ docker run --rm -it --net container:redis example/redis-cli -h 127.0.0.1
#### Network: User-Created NETWORK
#### User-defined network
In addition to all the above special networks, user can create a network using
their favorite network driver or external plugin. The driver used to create the
network takes care of all the network plumbing requirements for the container
connected to that network.
You can create a network using a Docker network driver or an external network
driver plugin. You can connect multiple containers to the same network. Once
connected to a user-defined network, the containers can communicate easily using
only another container's IP address or name.
Example creating a network using the inbuilt overlay network driver and running
a container in the created network
For `overlay` networks or custom plugins that support multi-host connectivity,
containers connected to the same multi-host network but launched from different
Engines can also communicate in this way.
The following example creates a network using the built-in `overlay` network
driver and runs a container in the created network:
```
$ docker network create -d overlay multi-host-network
$ docker run --net=multi-host-network -itd --name=container3 busybox
$ docker network create -d overlay my-net
$ docker run --net=my-net -itd --name=container3 busybox
```
### Managing /etc/hosts
@ -510,8 +517,8 @@ the container exits**, you can add the `--rm` flag:
--rm=false: Automatically remove the container when it exits (incompatible with -d)
> **Note**: When you set the `--rm` flag, Docker also removes the volumes
associated with the container when the container is removed. This is similar
to running `docker rm -v my-container`.
## Security configuration
@ -664,7 +671,7 @@ same as the hard memory limit.
Memory reservation is a soft-limit feature and does not guarantee the limit
won't be exceeded. Instead, the feature attempts to ensure that, when memory is
heavily contended for, memory is allocated based on the reservation hints/setup.
The following example limits the memory (`-m`) to 500M and sets the memory
reservation to 200M.
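A sketch of such a command:

    $ docker run -it -m 500M --memory-reservation 200M ubuntu:14.04 /bin/bash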
@ -1186,12 +1193,12 @@ specifies `EXPOSE 80` in the Dockerfile). At runtime, the port might be
bound to 42800 on the host. To find the mapping between the host ports
and the exposed ports, use `docker port`.
If the operator uses `--link` when starting a new client container,
then the client container can access the exposed port via a private
networking interface. Docker will set some environment variables in the
client container to help indicate which interface and port to use. For
more information on linking, see [the guide on linking container
together](../userguide/dockerlinks.md)
If the operator uses `--link` when starting a new client container, then the
client container can access the exposed port via a private networking interface.
Linking is a legacy feature that is only supported on the default bridge
network. You should prefer the Docker networks feature instead. For more
information on this feature, see the [*Docker network
overview*""](../userguide/networking/index.md)).
### ENV (environment variables)
@ -1227,11 +1234,6 @@ variables automatically:
</tr>
</table>
The container may also include environment variables defined
as a result of the container being linked with another container. See
the [*Container Links*](../userguide/dockerlinks.md#connect-with-the-linking-system)
section for more details.
Additionally, the operator can **set any environment variable** in the
container by using one or more `-e` flags, even overriding those mentioned
above, or already defined by the developer with a Dockerfile `ENV`:
@ -1248,69 +1250,11 @@ above, or already defined by the developer with a Dockerfile `ENV`:
Similarly the operator can set the **hostname** with `-h`.
`--link <name or id>:alias` also sets environment variables, using the *alias* string to
define environment variables within the container that give the IP and PORT
information for connecting to the service container. Let's imagine we have a
container running Redis:
# Start the service container, named redis-name
$ docker run -d --name redis-name dockerfiles/redis
4241164edf6f5aca5b0e9e4c9eccd899b0b8080c64c0cd26efe02166c73208f3
# The redis-name container exposed port 6379
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4241164edf6f dockerfiles/redis:latest /redis-stable/src/re 5 seconds ago Up 4 seconds 6379/tcp redis-name
# Note that there are no public ports exposed since we didn't use -p or -P
$ docker port 4241164edf6f 6379
2014/01/25 00:55:38 Error: No public port '6379' published for 4241164edf6f
Yet we can get information about the Redis container's exposed ports
with `--link`. Choose an alias that will form a
valid environment variable!
$ docker run --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c export
declare -x HOME="/"
declare -x HOSTNAME="acda7f7b1cdc"
declare -x OLDPWD
declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
declare -x PWD="/"
declare -x REDIS_ALIAS_NAME="/distracted_wright/redis"
declare -x REDIS_ALIAS_PORT="tcp://172.17.0.32:6379"
declare -x REDIS_ALIAS_PORT_6379_TCP="tcp://172.17.0.32:6379"
declare -x REDIS_ALIAS_PORT_6379_TCP_ADDR="172.17.0.32"
declare -x REDIS_ALIAS_PORT_6379_TCP_PORT="6379"
declare -x REDIS_ALIAS_PORT_6379_TCP_PROTO="tcp"
declare -x SHLVL="1"
declare -x container="lxc"
And we can use that information to connect from another container as a client:
$ docker run -i -t --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c '/redis-stable/src/redis-cli -h $REDIS_ALIAS_PORT_6379_TCP_ADDR -p $REDIS_ALIAS_PORT_6379_TCP_PORT'
172.17.0.32:6379>
Docker will also map the private IP address to the alias of a linked
container by inserting an entry into `/etc/hosts`. You can use this
mechanism to communicate with a linked container by its alias:
$ docker run -d --name servicename busybox sleep 30
$ docker run -i -t --link servicename:servicealias busybox ping -c 1 servicealias
If you restart the source container (`servicename` in this case), the recipient
container's `/etc/hosts` entry will be automatically updated.
> **Note**:
> Unlike host entries in the `/etc/hosts` file, IP addresses stored in the
> environment variables are not automatically updated if the source container is
> restarted. We recommend using the host entries in `/etc/hosts` to resolve the
> IP address of linked containers.
### VOLUME (shared filesystems)
-v=[]: Create a bind mount with: [host-dir:]container-dir[:<options>], where
options are comma delimited and selected from [rw|ro] and [z|Z].
If 'host-dir' is missing, then docker creates a new volume.
If neither 'rw' nor 'ro' is specified then the volume is mounted
in read-write mode.
--volumes-from="": Mount all volumes from the given container(s)
@ -1325,17 +1269,17 @@ one or more `VOLUME`'s associated with an image, but only the operator
can give access from one container to another (or from a container to a
volume mounted on the host).
The `container-dir` must always be an absolute path such as `/src/docs`.
The `host-dir` can either be an absolute path or a `name` value. If you
supply an absolute path for the `host-dir`, Docker bind-mounts to the path
you specify. If you supply a `name`, Docker creates a named volume by that `name`.
A `name` value must start with an alphanumeric character,
followed by `a-z0-9`, `_` (underscore), `.` (period) or `-` (hyphen).
An absolute path starts with a `/` (forward slash).
For example, you can specify either `/foo` or `foo` for a `host-dir` value.
If you supply the `/foo` value, Docker creates a bind-mount. If you supply
the `foo` specification, Docker creates a named volume.
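For instance (a sketch; the paths follow the `/foo` and `foo` values above):

    # Absolute host path: bind-mounts the host directory /foo
    $ docker run -v /foo:/src/docs ubuntu ls /src/docs

    # Bare name: creates (or reuses) a named volume called "foo"
    $ docker run -v foo:/src/docs ubuntu ls /src/docs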
### USER

View file

@ -1,17 +1,16 @@
<!--[metadata]>
+++
title = "Get started with containers"
title = "Quickstart containers"
description = "Common usage and commands"
keywords = ["Examples, Usage, basic commands, docker, documentation, examples"]
[menu.main]
parent = "smn_containers"
parent = "mn_fun_docker"
+++
<![end-metadata]-->
# Get started with containers
# Quickstart containers
This guide assumes you have a working installation of Docker. To verify Docker
is installed, use the following command:
This quickstart assumes you have a working installation of Docker. To verify Docker is installed, use the following command:
# Check that you have a working install
$ docker info
@ -54,7 +53,7 @@ image cache.
To run an interactive shell in the Ubuntu image:
$ docker run -i -t ubuntu /bin/bash
The `-i` flag starts an interactive container. The `-t` flag creates a
pseudo-TTY that attaches `stdin` and `stdout`.
@ -183,7 +182,7 @@ re-used.
When you commit your container, Docker only stores the diff (difference) between
the source image and the current state of the container's image. To list images
you already have, use the `docker images` command.
# Commit your container to a new named image
$ docker commit <container> <some_name>
@ -193,7 +192,8 @@ you already have, use the `docker images` command.
You now have an image state from which you can create new instances.
Read more about [*Share Images via
Repositories*](../userguide/dockerrepos.md) or
continue to the complete [*Command
Line*](../reference/commandline/cli.md)
## Where to go next
* Work your way through the [Docker User Guide](../userguide/index.md)
* Read more about [*Share Images via Repositories*](../userguide/dockerrepos.md)
* Review [*Command Line*](../reference/commandline/cli.md)

View file

@ -1,28 +1,26 @@
<!--[metadata]>
+++
title = "Get started with images"
title = "Build your own images"
description = "How to work with Docker images."
keywords = ["documentation, docs, the docker guide, docker guide, docker, docker platform, virtualization framework, docker.io, Docker images, Docker image, image management, Docker repos, Docker repositories, docker, docker tag, docker tags, Docker Hub, collaboration"]
[menu.main]
parent = "smn_images"
weight = 1
parent = "smn_containers"
weight = -4
+++
<![end-metadata]-->
# Get started with images
# Build your own images
In the [introduction](../introduction/understanding-docker.md) we've discovered that Docker
images are the basis of containers. In the
[previous](dockerizing.md) [sections](usingdocker.md)
we've used Docker images that already exist, for example the `ubuntu`
image and the `training/webapp` image.
Docker images are the basis of containers. Each time you've used `docker run`
you told it which image you wanted. In the previous sections of the guide you
used Docker images that already exist, for example the `ubuntu` image and the
`training/webapp` image.
We've also discovered that Docker stores downloaded images on the Docker
host. If an image isn't already present on the host then it'll be
downloaded from a registry: by default the
[Docker Hub Registry](https://registry.hub.docker.com).
You also discovered that Docker stores downloaded images on the Docker host. If
an image isn't already present on the host then it'll be downloaded from a
registry: by default the [Docker Hub Registry](https://registry.hub.docker.com).
In this section we're going to explore Docker images a bit more
In this section you're going to explore Docker images a bit more
including:
* Managing and working with images locally on your Docker host.
@ -31,55 +29,40 @@ including:
## Listing images on the host
Let's start with listing the images we have locally on our host. You can
Let's start with listing the images you have locally on your host. You can
do this using the `docker images` command like so:
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
training/webapp latest fc77f57ad303 3 weeks ago 280.5 MB
ubuntu 13.10 5e019ab7bf6d 4 weeks ago 180 MB
ubuntu saucy 5e019ab7bf6d 4 weeks ago 180 MB
ubuntu 12.04 74fe38d11401 4 weeks ago 209.6 MB
ubuntu precise 74fe38d11401 4 weeks ago 209.6 MB
ubuntu 12.10 a7cf8ae4e998 4 weeks ago 171.3 MB
ubuntu quantal a7cf8ae4e998 4 weeks ago 171.3 MB
ubuntu 14.04 99ec81b80c55 4 weeks ago 266 MB
ubuntu latest 99ec81b80c55 4 weeks ago 266 MB
ubuntu trusty 99ec81b80c55 4 weeks ago 266 MB
ubuntu 13.04 316b678ddf48 4 weeks ago 169.4 MB
ubuntu raring 316b678ddf48 4 weeks ago 169.4 MB
ubuntu 10.04 3db9c44f4520 4 weeks ago 183 MB
ubuntu lucid 3db9c44f4520 4 weeks ago 183 MB
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu 14.04 1d073211c498 3 days ago 187.9 MB
busybox latest 2c5ac3f849df 5 days ago 1.113 MB
training/webapp latest 54bb4e8718e8 5 months ago 348.7 MB
We can see the images we've previously used in our user guide.
Each has been downloaded from [Docker Hub](https://hub.docker.com) when we
launched a container using that image.
We can see three crucial pieces of information about our images in the listing.
You can see the images you've previously used in the user guide.
Each has been downloaded from [Docker Hub](https://hub.docker.com) when you
launched a container using that image. When you list images, you get three crucial pieces of information in the listing.
* What repository they came from, for example `ubuntu`.
* The tags for each image, for example `14.04`.
* The image ID of each image.
> **Note:**
> Previously, the `docker images` command supported the `--tree` and `--dot`
> arguments, which displayed different visualizations of the image data. Docker
> core removed this functionality in the 1.7 version. If you liked this
> functionality, you can still find it in
> [the third-party dockviz tool](https://github.com/justone/dockviz).
> **Tip:**
> You can use [a third-party dockviz tool](https://github.com/justone/dockviz)
> or the [Image layers site](https://imagelayers.io/) to display
> visualizations of image data.
A repository potentially holds multiple variants of an image. In the case of
our `ubuntu` image we can see multiple variants covering Ubuntu 10.04, 12.04,
the `ubuntu` image you can see multiple variants covering Ubuntu 10.04, 12.04,
12.10, 13.04, 13.10 and 14.04. Each variant is identified by a tag and you can
refer to a tagged image like so:
ubuntu:14.04
So when we run a container we refer to a tagged image like so:
So when you run a container you refer to a tagged image like so:
$ docker run -t -i ubuntu:14.04 /bin/bash
If instead we wanted to run an Ubuntu 12.04 image we'd use:
If instead you wanted to run an Ubuntu 12.04 image you'd use:
$ docker run -t -i ubuntu:12.04 /bin/bash
@ -87,16 +70,16 @@ If you don't specify a variant, for example you just use `ubuntu`, then Docker
will default to using the `ubuntu:latest` image.
> **Tip:**
> We recommend you always use a specific tagged image, for example
> You should always use a specific tagged image, for example
> `ubuntu:12.04`. That way you always know exactly what variant of an image is
> being used.
## Getting a new image
So how do we get new images? Well Docker will automatically download any image
we use that isn't already present on the Docker host. But this can potentially
add some time to the launch of a container. If we want to pre-load an image we
can download it using the `docker pull` command. Let's say we'd like to
So how do you get new images? Well Docker will automatically download any image
you use that isn't already present on the Docker host. But this can potentially
add some time to the launch of a container. If you want to pre-load an image you
can download it using the `docker pull` command. Suppose you'd like to
download the `centos` image.
$ docker pull centos
@ -109,8 +92,8 @@ download the `centos` image.
Status: Downloaded newer image for centos
We can see that each layer of the image has been pulled down and now we
can run a container from this image and we won't have to wait to
You can see that each layer of the image has been pulled down and now you
can run a container from this image and you won't have to wait to
download the image.
$ docker run -t -i centos /bin/bash
@ -120,14 +103,14 @@ download the image.
One of the features of Docker is that a lot of people have created Docker
images for a variety of purposes. Many of these have been uploaded to
[Docker Hub](https://hub.docker.com). We can search these images on the
[Docker Hub](https://hub.docker.com). You can search these images on the
[Docker Hub](https://hub.docker.com) website.
![indexsearch](search.png)
We can also search for images on the command line using the `docker search`
command. Let's say our team wants an image with Ruby and Sinatra installed on
which to do our web application development. We can search for a suitable image
You can also search for images on the command line using the `docker search`
command. Suppose your team wants an image with Ruby and Sinatra installed on
which to do your web application development. You can search for a suitable image
by using the `docker search` command to find all the images that contain the
term `sinatra`.
@ -142,29 +125,29 @@ term `sinatra`.
bmorearty/sinatra 0
. . .
We can see we've returned a lot of images that use the term `sinatra`. We've
returned a list of image names, descriptions, Stars (which measure the social
popularity of images - if a user likes an image then they can "star" it), and
the Official and Automated build statuses.
[Official Repositories](https://docs.docker.com/docker-hub/official_repos) are a carefully curated set
of Docker repositories supported by Docker, Inc. Automated repositories are
[Automated Builds](dockerrepos.md#automated-builds) that allow you to
validate the source and content of an image.
You can see the command returns a lot of images that use the term `sinatra`.
You've received a list of image names, descriptions, Stars (which measure the
social popularity of images - if a user likes an image then they can "star" it),
and the Official and Automated build statuses. [Official
Repositories](https://docs.docker.com/docker-hub/official_repos) are a carefully
curated set of Docker repositories supported by Docker, Inc. Automated
repositories are [Automated Builds](dockerrepos.md#automated-builds) that allow
you to validate the source and content of an image.
We've reviewed the images available to use and we decided to use the
`training/sinatra` image. So far we've seen two types of images repositories,
You've reviewed the images available to use and you decided to use the
`training/sinatra` image. So far you've seen two types of image repositories,
images like `ubuntu`, which are called base or root images. These base images
are provided by Docker Inc and are built, validated and supported. These can be
identified by their single word names.
We've also seen user images, for example the `training/sinatra` image we've
You've also seen user images, for example the `training/sinatra` image you've
chosen. A user image belongs to a member of the Docker community and is built
and maintained by them. You can identify user images as they are always
prefixed with the user name, here `training`, of the user that created them.
## Pulling your image
You've identified a suitable image, `training/sinatra`, and now you can download it using the `docker pull` command.
$ docker pull training/sinatra
@ -175,24 +158,24 @@ The team can now use this image by running their own containers.
## Creating your own images
The team has found the `training/sinatra` image pretty useful but it's not quite
what they need and you need to make some changes to it. There are two ways you
can update and create images.
1. You can update a container created from an image and commit the results to an image.
2. You can use a `Dockerfile` to specify instructions to create an image.
### Updating and committing an image
To update an image you first need to create a container from the image
you'd like to update.
$ docker run -t -i training/sinatra /bin/bash
root@0b2616b0e5a8:/#
> **Note:**
> Take note of the container ID that has been created, `0b2616b0e5a8`, as you'll
> need it in a moment.
Inside your running container, add the `json` gem.
@ -202,7 +185,7 @@ Inside our running container let's add the `json` gem.
Once this has completed, exit the container using the `exit`
command.
Now you have a container with the change you want to make. You can then
commit a copy of this container to an image using the `docker commit`
command.
@ -210,23 +193,23 @@ command.
0b2616b0e5a8 ouruser/sinatra:v2
4f177bd27a9ff0f6dc2a830403925b5360bfe0b93d476f7fc3231110e7f71b1c
Here you've used the `docker commit` command. You've specified two flags: `-m`
and `-a`. The `-m` flag allows you to specify a commit message, much like you
would with a commit on a version control system. The `-a` flag allows you to
specify an author for your update.
You've also specified the container you want to create this new image from,
`0b2616b0e5a8` (the ID you recorded earlier) and you've specified a target for
the image:
ouruser/sinatra:v2
Break this target down. It consists of a new user, `ouruser`, that you're
writing this image to. You've also specified the name of the image, here you're
keeping the original image name `sinatra`. Finally you're specifying a tag for
the image: `v2`.
You can then look at your new `ouruser/sinatra` image using the `docker images`
command.
$ docker images
@ -235,7 +218,7 @@ command.
ouruser/sinatra v2 3c59e02ddd1a 10 hours ago 446.7 MB
ouruser/sinatra latest 5db5f8471261 10 hours ago 446.7 MB
To use your new image to create a container you can then:
$ docker run -t -i ouruser/sinatra:v2 /bin/bash
root@78e82f680994:/#
@ -244,13 +227,13 @@ To use our new image to create a container we can then:
Using the `docker commit` command is a pretty simple way of extending an image
but it's a bit cumbersome and it's not easy to share a development process for
images amongst a team. Instead you can use a new command, `docker build`, to
build new images from scratch.
To do this you create a `Dockerfile` that contains a set of instructions that
tell Docker how to build your image.
First, create a directory and a `Dockerfile`.
$ mkdir sinatra
$ cd sinatra
@ -259,8 +242,8 @@ Let's create a directory and a `Dockerfile` first.
If you are using Docker Machine on Windows, you can access your host
directory with `cd /c/Users/your_user_name`.
Each instruction creates a new layer of the image. Try a simple example now for
building your own Sinatra image for your fictitious development team.
# This is a comment
FROM ubuntu:14.04
@ -268,25 +251,22 @@ example now for building our own Sinatra image for our development team.
RUN apt-get update && apt-get install -y ruby ruby-dev
RUN gem install sinatra
Examine what your `Dockerfile` does. Each instruction prefixes a statement and
is capitalized.
INSTRUCTION statement
> **Note:** You use `#` to indicate a comment
The first instruction `FROM` tells Docker what the source of your image is, in
this case you're basing your new image on an Ubuntu 14.04 image. Next, the
`MAINTAINER` instruction specifies who maintains the new image.
Lastly, you've specified two `RUN` instructions. A `RUN` instruction executes
a command inside the image, for example installing a package. Here you're
updating your APT cache, installing Ruby and RubyGems and then installing the
Sinatra gem.
> **Note:**
> There are [a lot more instructions available to you in a Dockerfile](../reference/builder.md).
Now take your `Dockerfile` and use the `docker build` command to build an image.
@ -454,26 +434,26 @@ Now let's take our `Dockerfile` and use the `docker build` command to build an i
Removing intermediate container 6b81cb6313e5
Successfully built 97feabe5d2ed
You've specified the `docker build` command and used the `-t` flag to identify
your new image as belonging to the user `ouruser`, the repository name `sinatra`
and given it the tag `v2`.
You've also specified the location of your `Dockerfile` using the `.` to
indicate a `Dockerfile` in the current directory.
> **Note:**
> You can also specify a path to a `Dockerfile`.
Now you can see the build process at work. The first thing Docker does is
upload the build context: basically the contents of the directory you're
building in. This is done because the Docker daemon does the actual
build of the image and it needs the local context to do it.
Next you can see each instruction in the `Dockerfile` being executed
step-by-step. You can see that each step creates a new container, runs
the instruction inside that container and then commits that change -
just like the `docker commit` work flow you saw earlier. When all the
instructions have executed you're left with the `97feabe5d2ed` image
(also helpfully tagged as `ouruser/sinatra:v2`) and all intermediate
containers will get removed to clean things up.
@ -482,7 +462,7 @@ containers will get removed to clean things up.
> This limitation is set globally to encourage optimization of the overall
> size of images.
You can then create a container from your new image.
$ docker run -t -i ouruser/sinatra:v2 /bin/bash
root@8196968dac35:/#
@ -493,14 +473,14 @@ We can then create a container from our new image.
> those instructions in later sections of the Guide or you can refer to the
> [`Dockerfile`](../reference/builder.md) reference for a
> detailed description and examples of every instruction.
> To help you write a clear, readable, maintainable `Dockerfile`, we've also
> written a [`Dockerfile` Best Practices guide](../articles/dockerfile_best-practices.md).
## Setting tags on an image
You can also add a tag to an existing image after you commit or build it. You
can do this using the `docker tag` command. Now, add a new tag to your
`ouruser/sinatra` image.
$ docker tag 5db5f8471261 ouruser/sinatra:devel
@ -508,7 +488,7 @@ can do this using the `docker tag` command. Let's add a new tag to our
The `docker tag` command takes the ID of the image, here `5db5f8471261`, and your
user name, the repository name and the new tag.
Now, see your new tag using the `docker images` command.
$ docker images ouruser/sinatra
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
@ -553,7 +533,7 @@ private repository](https://registry.hub.docker.com/plans/).
You can also remove images on your Docker host in a way [similar to
containers](usingdocker.md) using the `docker rmi` command.
Delete the `training/sinatra` image as you don't need it anymore.
$ docker rmi training/sinatra
Untagged: training/sinatra:latest
@ -561,13 +541,13 @@ Let's delete the `training/sinatra` image as we don't need it anymore.
Deleted: ed0fffdcdae5eb2c3a55549857a8be7fc8bc4241fb19ad714364cbfd7a56b22f
Deleted: 5c58979d73ae448df5af1d8142436d81116187a7633082650549c52c3a2418f0
> **Note:** To remove an image from the host, please make sure
> that there are no containers actively based on it.
# Next steps
Until now you've seen how to build individual applications inside Docker
containers. Now learn how to build whole application stacks with Docker
by networking together multiple Docker containers.
Go to [Network containers](networkingcontainers.md).
View file
@ -1,26 +1,27 @@
<!--[metadata]>
+++
title = "Dockerizing applications: A 'Hello world'"
title = "Hello world in a container"
description = "A simple 'Hello world' exercise that introduced you to Docker."
keywords = ["docker guide, docker, docker platform, virtualization framework, how to, dockerize, dockerizing apps, dockerizing applications, container, containers"]
[menu.main]
parent = "smn_applied"
parent="smn_containers"
weight=-6
+++
<![end-metadata]-->
# Hello world in a container
*So what's this Docker thing all about?*
Docker allows you to run applications, worlds you create, inside containers.
Running an application inside a container takes a single command: `docker run`.
>**Note**: Depending on your Docker system configuration, you may be required to
>preface each `docker` command on this page with `sudo`. To avoid this behavior,
>your system administrator can create a Unix group called `docker` and add users
>to it.
## Run a Hello world
Let's try it now.
@ -132,7 +133,7 @@ a really long string:
This really long string is called a *container ID*. It uniquely
identifies a container so we can work with it.
> **Note:**
> The container ID is a bit long and unwieldy. A bit later,
> we'll see a shorter ID and ways to name our containers to make
> working with them easier.
@ -154,14 +155,14 @@ information about it, starting with a shorter variant of its container ID:
We can also see the image we used to build it, `ubuntu:14.04`, the command it
is running, its status and an automatically assigned name,
`insane_babbage`.
> **Note:**
> Docker automatically generates names for any containers started.
> We'll see how to specify your own names a bit later.
Okay, so we now know it's running. But is it doing what we asked it to do? To
see this we're going to look inside the container using the `docker logs`
command. Let's use the container name Docker assigned.
$ docker logs insane_babbage
@ -177,7 +178,7 @@ Awesome! Our daemon is working and we've just created our first
Dockerized application!
Now we've established we can create our own containers, let's tidy up
after ourselves and stop our detached container. To do this we use the
`docker stop` command.
$ docker stop insane_babbage
@ -196,8 +197,15 @@ Excellent. Our container has been stopped.
# Next steps
So far, you launched your first containers using the `docker run` command. You
ran an *interactive container* that ran in the foreground. You also ran a
*detached container* that ran in the background. In the process you learned
about several Docker commands:
* `docker ps` - Lists containers.
* `docker logs` - Shows the standard output of a container.
* `docker stop` - Stops running containers.
Now, you have the basis to learn more about Docker and how to do some more
advanced tasks. Go to ["*Run a simple application*"](usingdocker.md) to actually
build a web application with the Docker client.
View file
@ -1,519 +0,0 @@
<!--[metadata]>
+++
title = "Docker container networking"
description = "How do we connect docker containers within and across hosts ?"
keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
[menu.main]
parent = "smn_containers"
weight = 3
+++
<![end-metadata]-->
# Docker container networking
So far we've been introduced to some [basic Docker
concepts](usingdocker.md), seen how to work with [Docker
images](dockerimages.md) as well as learned about basic [networking
and links between containers](dockerlinks.md). In this section
we're going to discuss how you can take control over more advanced
container networking.
This section makes use of `docker network` commands and outputs to explain the
advanced networking functionality supported by Docker.
# Default Networks
By default, Docker creates three networks using three different network drivers:
```
$ sudo docker network ls
NETWORK ID NAME DRIVER
7fca4eb8c647 bridge bridge
9f904ee27bf5 none null
cf03ee007fb4 host host
```
The `docker network inspect` command gives more information about a network:
```
$ sudo docker network inspect bridge
{
"name": "bridge",
"id": "7fca4eb8c647e57e9d46c32714271e0c3f8bf8d17d346629e2820547b2d90039",
"driver": "bridge",
"containers": {}
}
```
By default, containers are launched on the `bridge` network:
```
$ sudo docker run -itd --name=container1 busybox
f2870c98fd504370fb86e59f32cd0753b1ac9b69b7d80566ffc7192a82b3ed27
$ sudo docker run -itd --name=container2 busybox
bda12f8922785d1f160be70736f26c1e331ab8aaf8ed8d56728508f2e2fd4727
```
```
$ sudo docker network inspect bridge
{
"name": "bridge",
"id": "7fca4eb8c647e57e9d46c32714271e0c3f8bf8d17d346629e2820547b2d90039",
"driver": "bridge",
"containers": {
"bda12f8922785d1f160be70736f26c1e331ab8aaf8ed8d56728508f2e2fd4727": {
"endpoint": "e0ac95934f803d7e36384a2029b8d1eeb56cb88727aa2e8b7edfeebaa6dfd758",
"mac_address": "02:42:ac:11:00:03",
"ipv4_address": "172.17.0.3/16",
"ipv6_address": ""
},
"f2870c98fd504370fb86e59f32cd0753b1ac9b69b7d80566ffc7192a82b3ed27": {
"endpoint": "31de280881d2a774345bbfb1594159ade4ae4024ebfb1320cb74a30225f6a8ae",
"mac_address": "02:42:ac:11:00:02",
"ipv4_address": "172.17.0.2/16",
"ipv6_address": ""
}
}
}
```
The `docker network inspect` command above shows all the connected containers and their network resources on a given network.
Containers in a network can communicate with each other using container names:
```
$ sudo docker attach container1
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:17 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1382 (1.3 KiB) TX bytes:258 (258.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ # ping container2
PING container2 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.125 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.130 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.172 ms
^C
--- container2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.125/0.142/0.172 ms
/ # cat /etc/hosts
172.17.0.2 f2870c98fd50
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 container1
172.17.0.2 container1.bridge
172.17.0.3 container2
172.17.0.3 container2.bridge
```
```
$ sudo docker attach container2
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:03
inet addr:172.17.0.3 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:648 (648.0 B) TX bytes:648 (648.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ # ping container1
PING container1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.277 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.179 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.130 ms
64 bytes from 172.17.0.2: seq=3 ttl=64 time=0.113 ms
^C
--- container1 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.113/0.174/0.277 ms
/ # cat /etc/hosts
172.17.0.3 bda12f892278
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 container1
172.17.0.2 container1.bridge
172.17.0.3 container2
172.17.0.3 container2.bridge
/ #
```
# User-defined Networks
In addition to the inbuilt networks, users can create networks using the inbuilt drivers
(such as the bridge or overlay driver) or external plugins supplied by the community.
Networks, by definition, provide complete isolation for the containers.
```
$ docker network create -d bridge isolated_nw
8b05faa32aeb43215f67678084a9c51afbdffe64cd91e3f5bb8267475f8bf1a7
$ docker network inspect isolated_nw
{
"name": "isolated_nw",
"id": "8b05faa32aeb43215f67678084a9c51afbdffe64cd91e3f5bb8267475f8bf1a7",
"driver": "bridge",
"containers": {}
}
$ docker network ls
NETWORK ID NAME DRIVER
9f904ee27bf5 none null
cf03ee007fb4 host host
7fca4eb8c647 bridge bridge
8b05faa32aeb isolated_nw bridge
```
A container can be launched on a user-defined network using the `--net=<NETWORK>` option
of the `docker run` command:
```
$ docker run --net=isolated_nw -itd --name=container3 busybox
777344ef4943d34827a3504a802bf15db69327d7abe4af28a05084ca7406f843
$ docker network inspect isolated_nw
{
"name": "isolated_nw",
"id": "8b05faa32aeb43215f67678084a9c51afbdffe64cd91e3f5bb8267475f8bf1a7",
"driver": "bridge",
"containers": {
"777344ef4943d34827a3504a802bf15db69327d7abe4af28a05084ca7406f843": {
"endpoint": "c7f22f8da07fb8ecc687d08377cfcdb80b4dd8624c2a8208b1a4268985e38683",
"mac_address": "02:42:ac:14:00:01",
"ipv4_address": "172.20.0.1/16",
"ipv6_address": ""
}
}
}
```
# Connecting to Multiple networks
Docker containers can dynamically connect to one or more networks, with each network backed
by the same or a different network driver or plugin:
```
$ docker network connect isolated_nw container2
$ docker network inspect isolated_nw
{
"name": "isolated_nw",
"id": "8b05faa32aeb43215f67678084a9c51afbdffe64cd91e3f5bb8267475f8bf1a7",
"driver": "bridge",
"containers": {
"777344ef4943d34827a3504a802bf15db69327d7abe4af28a05084ca7406f843": {
"endpoint": "c7f22f8da07fb8ecc687d08377cfcdb80b4dd8624c2a8208b1a4268985e38683",
"mac_address": "02:42:ac:14:00:01",
"ipv4_address": "172.20.0.1/16",
"ipv6_address": ""
},
"bda12f8922785d1f160be70736f26c1e331ab8aaf8ed8d56728508f2e2fd4727": {
"endpoint": "2ac11345af68b0750341beeda47cc4cce93bb818d8eb25e61638df7a4997cb1b",
"mac_address": "02:42:ac:14:00:02",
"ipv4_address": "172.20.0.2/16",
"ipv6_address": ""
}
}
}
```
Let's check the network resources used by container2:
```
$ docker inspect --format='{{.NetworkSettings.Networks}}' container2
[bridge isolated_nw]
$ sudo docker attach container2
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:03
inet addr:172.17.0.3 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:21 errors:0 dropped:0 overruns:0 frame:0
TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1586 (1.5 KiB) TX bytes:1460 (1.4 KiB)
eth1 Link encap:Ethernet HWaddr 02:42:AC:14:00:02
inet addr:172.20.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe14:2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:648 (648.0 B) TX bytes:648 (648.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
```
In the example discussed in this section thus far, container3 and container2 are
connected to isolated_nw and can talk to each other.
But container3 and container1 are not in the same network, and hence they cannot communicate:
```
$ docker attach container3
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:14:00:01
inet addr:172.20.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe14:1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:24 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1944 (1.8 KiB) TX bytes:648 (648.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ # ping container2.isolated_nw
PING container2.isolated_nw (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.217 ms
64 bytes from 172.20.0.2: seq=1 ttl=64 time=0.150 ms
64 bytes from 172.20.0.2: seq=2 ttl=64 time=0.188 ms
64 bytes from 172.20.0.2: seq=3 ttl=64 time=0.176 ms
^C
--- container2.isolated_nw ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.150/0.182/0.217 ms
/ # ping container2
PING container2 (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.120 ms
64 bytes from 172.20.0.2: seq=1 ttl=64 time=0.109 ms
^C
--- container2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.109/0.114/0.120 ms
/ # ping container1
ping: bad address 'container1'
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
^C
--- 172.17.0.2 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
/ # ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
^C
--- 172.17.0.3 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
```
Container2, however, is attached to both networks (bridge and isolated_nw) and hence
can talk to both container1 and container3:
```
$ docker attach container2
/ # cat /etc/hosts
172.17.0.3 bda12f892278
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 container1
172.17.0.2 container1.bridge
172.17.0.3 container2
172.17.0.3 container2.bridge
172.20.0.1 container3
172.20.0.1 container3.isolated_nw
172.20.0.2 container2
172.20.0.2 container2.isolated_nw
/ # ping container3
PING container3 (172.20.0.1): 56 data bytes
64 bytes from 172.20.0.1: seq=0 ttl=64 time=0.138 ms
64 bytes from 172.20.0.1: seq=1 ttl=64 time=0.133 ms
64 bytes from 172.20.0.1: seq=2 ttl=64 time=0.133 ms
^C
--- container3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.133/0.134/0.138 ms
/ # ping container1
PING container1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.121 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.250 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.133 ms
^C
--- container1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.121/0.168/0.250 ms
/ #
```
Just like it is easy to connect a container to multiple networks, one can
disconnect a container from a network using the `docker network disconnect` command.
```
root@Ubuntu-vm ~$ docker network disconnect isolated_nw container2
$ docker inspect --format='{{.NetworkSettings.Networks}}' container2
[bridge]
root@Ubuntu-vm ~$ docker network inspect isolated_nw
{
"name": "isolated_nw",
"id": "8b05faa32aeb43215f67678084a9c51afbdffe64cd91e3f5bb8267475f8bf1a7",
"driver": "bridge",
"containers": {
"777344ef4943d34827a3504a802bf15db69327d7abe4af28a05084ca7406f843": {
"endpoint": "c7f22f8da07fb8ecc687d08377cfcdb80b4dd8624c2a8208b1a4268985e38683",
"mac_address": "02:42:ac:14:00:01",
"ipv4_address": "172.20.0.1/16",
"ipv6_address": ""
}
}
}
```
Once a container is disconnected from a network, it cannot communicate with other containers
connected to that network. In this example, container2 can no longer talk to container3
on isolated_nw:
```
$ sudo docker attach container2
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:03
inet addr:172.17.0.3 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:26 errors:0 dropped:0 overruns:0 frame:0
TX packets:23 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1964 (1.9 KiB) TX bytes:1838 (1.7 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ # ping container3
PING container3 (172.20.0.1): 56 data bytes
^C
--- container3 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
But container2 still has full connectivity to the bridge network
/ # ping container1
PING container1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.119 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.174 ms
^C
--- container1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.119/0.146/0.174 ms
/ #
```
When all the containers in a network are stopped or disconnected, the network can be removed:
```
$ docker network inspect isolated_nw
{
"name": "isolated_nw",
"id": "8b05faa32aeb43215f67678084a9c51afbdffe64cd91e3f5bb8267475f8bf1a7",
"driver": "bridge",
"containers": {}
}
$ docker network rm isolated_nw
$ docker network ls
NETWORK ID NAME DRIVER
9f904ee27bf5 none null
cf03ee007fb4 host host
7fca4eb8c647 bridge bridge
```
# Native Multi-host networking
With the help of libnetwork and the inbuilt VXLAN-based overlay network driver, Docker supports multi-host networking natively out of the box. Technical details are documented under https://github.com/docker/libnetwork/blob/master/docs/overlay.md.
Using exactly the same `docker network` UI as above, you can exercise the power of multi-host networking.
To create a network using the inbuilt overlay driver:
```
$ docker network create -d overlay multi-host-network
```
Since the `network` object is globally significant, this feature requires the distributed state provided by `libkv`. Using `libkv`, you can plug in any of the supported key-value stores (such as Consul, Etcd or ZooKeeper).
You can specify the key-value store of choice using the `--cluster-store` daemon flag, which takes a configuration value of the format `PROVIDER://URL`, where
`PROVIDER` is the name of the key-value store (such as consul, etcd or zookeeper) and
`URL` is the URL to reach the key-value store.
Example: `docker daemon --cluster-store=consul://localhost:8500`
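As a rough sketch of how these pieces fit together (the Consul address, interface
name and container name below are illustrative assumptions, not prescriptive),
each daemon in the cluster points at the same key-value store, the overlay
network is created once, and containers on any host can then attach to it:
```
# On every host in the cluster, start the daemon against the shared store
$ docker daemon --cluster-store=consul://192.168.1.100:8500 --cluster-advertise=eth0:2376
# On any one host, create the overlay network; it becomes visible cluster-wide
$ docker network create -d overlay multi-host-network
# On any host, attach a container to it with --net
$ docker run -itd --net=multi-host-network --name=web busybox
```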
# Next step
Now that you know how to connect Docker containers in networks, the next step is
learning how to manage data, volumes and mounts inside your containers.
Go to [Managing Data in Containers](dockervolumes.md).
View file
@ -1,28 +1,28 @@
<!--[metadata]>
+++
title = "Get started with Docker Hub"
title = "Store images on Docker Hub"
description = "Learn how to use the Docker Hub to manage Docker images and work flow"
keywords = ["repo, Docker Hub, Docker Hub, registry, index, repositories, usage, pull image, push image, image, documentation"]
[menu.main]
parent = "smn_images"
weight = 2
parent = "smn_containers"
+++
<![end-metadata]-->
# Store images on Docker Hub
So far you've learned how to use the command line to run Docker on your local
host. You've learned how to [pull down images](usingdocker.md) to build
containers from existing images and you've learned how to [create your own
images](dockerimages.md).
Next, you're going to learn how to use the [Docker Hub](https://hub.docker.com)
to simplify and enhance your Docker workflows.
The [Docker Hub](https://hub.docker.com) is a public registry maintained by
Docker, Inc. It contains images you can download and use to build
containers. It also provides authentication, work group structure, workflow
tools like webhooks and build triggers, and privacy tools like private
repositories for storing images you don't want to share publicly.
## Docker commands and Docker Hub
View file
@ -1,22 +1,20 @@
<!--[metadata]>
+++
title = "Managing data in containers"
title = "Manage data in containers"
description = "How to manage data inside your Docker containers."
keywords = ["Examples, Usage, volume, docker, documentation, user guide, data, volumes"]
[menu.main]
parent = "smn_containers"
+++
<![end-metadata]-->
# Manage data in containers
So far we've been introduced to some [basic Docker concepts](usingdocker.md),
seen how to work with [Docker images](dockerimages.md) as well as learned about
[networking and links between containers](networking/default_network/dockerlinks.md). In this section we're
going to discuss how you can manage data inside and between your Docker
containers.
We're going to look at the two primary ways you can manage data in
Docker.
@ -28,20 +26,20 @@ Docker.
A *data volume* is a specially-designated directory within one or more
containers that bypasses the [*Union File
System*](../reference/glossary.md#union-file-system). Data volumes provide several
useful features for persistent or shared data:
- Volumes are initialized when a container is created. If the container's
base image contains data at the specified mount point, that existing data is
copied into the new volume upon volume initialization.
- Data volumes can be shared and reused among containers.
- Changes to a data volume are made directly.
- Changes to a data volume will not be included when you update an image.
- Data volumes persist even if the container itself is deleted.
Data volumes are designed to persist data, independent of the container's life
cycle. Docker therefore *never* automatically deletes volumes when you remove
a container, nor will it "garbage collect" volumes that are no longer
referenced by a container.
### Adding a data volume
@ -55,7 +53,7 @@ application container.
This will create a new volume inside a container at `/webapp`.
> **Note:**
> You can also use the `VOLUME` instruction in a `Dockerfile` to add one or
> more new volumes to any container created from that image.
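For instance, a minimal `Dockerfile` along these lines (the base image choice is
an illustrative assumption) bakes the same `/webapp` volume into every container
created from the resulting image:
```
FROM ubuntu:14.04
VOLUME /webapp
```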
@ -81,7 +79,7 @@ volumes. The output should look something similar to the following:
]
...
You will notice in the above that 'Source' specifies the location on the host and
'Destination' specifies the volume location inside the container. `RW` shows
if the volume is read/write.
@ -100,17 +98,17 @@ image, the `/src/webapp` mount overlays but does not remove the pre-existing
content. Once the mount is removed, the content is accessible again. This is
consistent with the expected behavior of the `mount` command.
The `container-dir` must always be an absolute path such as `/src/docs`.
The `host-dir` can either be an absolute path or a `name` value. If you
supply an absolute path for the `host-dir`, Docker bind-mounts to the path
you specify. If you supply a `name`, Docker creates a named volume by that `name`.
A `name` value must start with an alphanumeric character,
followed by `a-z0-9`, `_` (underscore), `.` (period) or `-` (hyphen).
An absolute path starts with a `/` (forward slash).
For example, you can specify either `/foo` or `foo` for a `host-dir` value.
If you supply the `/foo` value, Docker creates a bind-mount. If you supply
the `foo` specification, Docker creates a named volume.
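As a quick sketch (the paths and the `training/webapp` image are illustrative),
the two forms look like this:
```
# Bind-mount the host directory /src/webapp into the container at /webapp
$ docker run -d -v /src/webapp:/webapp training/webapp python app.py
# Create (or reuse) a named volume called foo, mounted at /webapp
$ docker run -d -v foo:/webapp training/webapp python app.py
```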
If you are using Docker Machine on Mac or Windows, your Docker daemon has only limited access to your OS X or Windows filesystem. Docker Machine tries
@ -124,7 +122,7 @@ docker run -v /Users/<path>:/<container path> ...
On Windows, mount directories using:
```
docker run -v /c/Users/<path>:/<container path> ...
```
All other paths come from your virtual machine's filesystem. For example, if
@ -153,7 +151,7 @@ Because of [limitations in the `mount`
function](http://lists.linuxfoundation.org/pipermail/containers/2015-April/035788.html),
moving subdirectories within the host's source directory can give
access from the container to the host's file system. This requires a malicious
user with access to the host and its mounted directory.
>**Note**: The host directory is, by its nature, host-dependent. For this
>reason, you can't mount a host directory from `Dockerfile` because built images
@ -177,20 +175,20 @@ Only the current container can use a private volume.
### Mount a host file as a data volume
The `-v` flag can also be used to mount a single file - instead of *just*
directories - from the host machine.
$ docker run --rm -it -v ~/.bash_history:/.bash_history ubuntu /bin/bash
This will drop you into a bash shell in a new container; you will have your bash
history from the host and when you exit the container, the host will have the
history of the commands typed while in the container.
> **Note:**
> Many tools used to edit files including `vi` and `sed --in-place` may result
> in an inode change. Since Docker v1.1.0, this will produce an error such as
> "*sed: cannot rename ./sedKdJ9Dy: Device or resource busy*". In the case where
> you want to edit the mounted file, it is often easiest to instead mount the
> parent directory.
## Creating and mounting a data volume container
@ -233,9 +231,9 @@ be deleted. To delete the volume from disk, you must explicitly call
`docker rm -v` against the last container with a reference to the volume. This
allows you to upgrade, or effectively migrate data volumes between containers.
> **Note:** Docker will not warn you when removing a container *without*
> providing the `-v` option to delete its volumes. If you remove containers
> without using the `-v` option, you may end up with "dangling" volumes;
> volumes that are no longer referenced by a container.
> Dangling volumes are difficult to get rid of and can take up a large amount
> of disk space. We're working on improving volume management and you can check
View file
@ -11,10 +11,8 @@ parent = "mn_fun_docker"
# Welcome to the Docker user guide
In the [Introduction](../misc) you got a taste of what Docker is and how it
works. This guide takes you through the fundamentals of using Docker and
integrating it into your environment. You'll learn how to use Docker to:
* Dockerize your applications.
* Run your own containers.
@ -22,8 +20,8 @@ Well teach you how to use Docker to:
* Share your Docker images with others.
* And a whole lot more!
This guide is broken into major sections that take you through the Docker life
cycle:
## Getting started with Docker Hub
@ -44,6 +42,7 @@ applications. To learn how to Dockerize applications and run them:
Go to [Dockerizing Applications](dockerizing.md).
## Working with containers
*How do I manage my containers?*
@ -63,23 +62,13 @@ learn how to build your own application images with Docker.
Go to [Working with Docker Images](dockerimages.md).
## Networking containers
Until now we've seen how to build individual applications inside Docker
containers. Now learn how to build whole application stacks with Docker
networking.
Go to [Networking Containers](networkingcontainers.md).
## Managing data in containers
@ -136,4 +125,3 @@ Go to [Docker Swarm user guide](https://docs.docker.com/swarm/).
* Get [Docker help](https://stackoverflow.com/search?q=docker) on
StackOverflow
* [Docker.com](https://www.docker.com/)
View file
@ -0,0 +1,103 @@
<!--[metadata]>
+++
title = "Bind container ports to the host"
description = "expose, port, docker, bind publish"
keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
[menu.main]
parent = "smn_networking_def"
+++
<![end-metadata]-->
# Bind container ports to the host
The information in this section explains binding container ports within the Docker default bridge. This is a `bridge` network named `bridge` created automatically when you install Docker.
> **Note**: The [Docker networks feature](../dockernetworks.md) allows you to
create user-defined networks in addition to the default bridge network.
By default Docker containers can make connections to the outside world, but the
outside world cannot connect to containers. Each outgoing connection will
appear to originate from one of the host machine's own IP addresses thanks to an
`iptables` masquerading rule on the host machine that the Docker server creates
when it starts:
```
$ sudo iptables -t nat -L -n
...
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
...
```
The Docker server creates a masquerade rule that lets containers connect to IP
addresses in the outside world.
If you want containers to accept incoming connections, you will need to provide
special options when invoking `docker run`. There are two approaches.
First, you can supply `-P` or `--publish-all=true|false` to `docker run`, which
is a blanket operation that identifies every port with an `EXPOSE` line in the
image's `Dockerfile`, or with the `--expose <port>` commandline flag, and maps
it to a host port somewhere within an _ephemeral port range_. The `docker port`
command then needs to be used to inspect the created mapping. The _ephemeral
port range_ is configured by the `/proc/sys/net/ipv4/ip_local_port_range`
kernel parameter, typically ranging from 32768 to 61000.
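For example, a session along these lines (the `nginx` image and the assigned
host port are illustrative) publishes every exposed port and then asks Docker
where a given container port landed:
```
$ docker run -d -P --name web nginx
$ docker port web 80
0.0.0.0:49153
```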
A mapping can be specified explicitly using the `-p SPEC` or `--publish=SPEC`
option. It allows you to choose which port on the Docker server -- which can be
any port at all, not just one within the _ephemeral port range_ -- you want
mapped to which port in the container.
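A sketch of the explicit form (image name and ports are again illustrative):
```
# Map host port 8080 to container port 80
$ docker run -d -p 8080:80 --name web2 nginx
```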
Either way, you should be able to peek at what Docker has accomplished in your
network stack by examining your NAT tables.
```
# What your NAT rules might look like when Docker
# is finished setting up a -P forward:
$ iptables -t nat -L -n
...
Chain DOCKER (2 references)
target prot opt source destination
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:49153 to:172.17.0.2:80
# What your NAT rules might look like when Docker
# is finished setting up a -p 80:80 forward:
Chain DOCKER (2 references)
target prot opt source destination
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.2:80
```
You can see that Docker has exposed these container ports on `0.0.0.0`, the
wildcard IP address that will match any possible incoming port on the host
machine. If you want to be more restrictive and only allow container services to
be contacted through a specific external interface on the host machine, you have
two choices. When you invoke `docker run` you can use either `-p
IP:host_port:container_port` or `-p IP::port` to specify the external interface
for one particular binding.
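For instance, to make a service reachable only through the host's loopback
interface (the address, ports and image are illustrative):
```
$ docker run -d -p 127.0.0.1:8080:80 --name local-web nginx
```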
Or if you always want Docker port forwards to bind to one specific IP address,
you can edit your system-wide Docker server settings and add the option
`--ip=IP_ADDRESS`. Remember to restart your Docker server after editing this
setting.
> **Note**: With hairpin NAT enabled (`--userland-proxy=false`), container port
exposure is achieved purely through iptables rules, and no attempt to bind the
exposed port is ever made. This means that nothing prevents shadowing a
previously listening service outside of Docker through exposing the same port
for a container. In such a conflicting situation, the iptables rules created by
Docker will take precedence and route to the container.
The `--userland-proxy` parameter, true by default, provides a userland
implementation for inter-container and outside-to-container communication. When
disabled, Docker uses both an additional `MASQUERADE` iptables rule and the
`net.ipv4.route_localnet` kernel parameter, which allow the host machine to
connect to a local container's exposed port through the commonly used loopback
address: this alternative is preferred for performance reasons.
## Related information
- [Understand Docker container networks](../dockernetworks.md)
- [Work with network commands](../work-with-networks.md)
- [Legacy container links](dockerlinks.md)
View file
@ -0,0 +1,77 @@
<!--[metadata]>
+++
title = "Build your own bridge"
description = "Learn how to build your own bridge interface"
keywords = ["docker, bridge, docker0, network"]
[menu.main]
parent = "smn_networking_def"
+++
<![end-metadata]-->
# Build your own bridge
This section explains building your own bridge to replace the Docker default
bridge. This is a `bridge` network named `bridge` created automatically when you
install Docker.
> **Note**: The [Docker networks feature](../dockernetworks.md) allows you to
create user-defined networks in addition to the default bridge network.
You can set up your own bridge before starting Docker and use `-b BRIDGE` or
`--bridge=BRIDGE` to tell Docker to use your bridge instead. If you already
have Docker up and running with its default `docker0` still configured, you will
probably want to begin by stopping the service and removing the interface:
```
# Stopping Docker and removing docker0
$ sudo service docker stop
$ sudo ip link set dev docker0 down
$ sudo brctl delbr docker0
$ sudo iptables -t nat -F POSTROUTING
```
Then, before starting the Docker service, create your own bridge and give it
whatever configuration you want. Here we will create a bridge simple enough
that we really could just have used the options in the previous section to
customize `docker0`, but it will be enough to illustrate the technique.
```
# Create our own bridge
$ sudo brctl addbr bridge0
$ sudo ip addr add 192.168.5.1/24 dev bridge0
$ sudo ip link set dev bridge0 up
# Confirming that our bridge is up and running
$ ip addr show bridge0
4: bridge0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state UP group default
link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff
inet 192.168.5.1/24 scope global bridge0
valid_lft forever preferred_lft forever
# Tell Docker about it and restart (on Ubuntu)
$ echo 'DOCKER_OPTS="-b=bridge0"' >> /etc/default/docker
$ sudo service docker start
# Confirming new outgoing NAT masquerade is set up
$ sudo iptables -t nat -L -n
...
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 192.168.5.0/24 0.0.0.0/0
```
The result should be that the Docker server starts successfully and is now
prepared to bind containers to the new bridge. After pausing to verify the
bridge's configuration, try creating a container -- you will see that its IP
address is in your new IP address range, which Docker will have auto-detected.
You can use the `brctl show` command to see Docker add and remove interfaces
from the bridge as you start and stop containers, and can run `ip addr` and `ip
route` inside a container to see that it has been given an address in the
bridge's IP address range and has been told to use the Docker host's IP address
on the bridge as its default gateway to the rest of the Internet.
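A quick check might look like this (the `busybox` image and the exact addresses
are illustrative; yours come from the range you assigned to `bridge0`):
```
$ docker run -it --rm busybox sh
/ # ip addr show eth0    # shows an address from the 192.168.5.0/24 range
/ # ip route             # default route via 192.168.5.1, the bridge0 address
```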
View file
@ -0,0 +1,132 @@
<!--[metadata]>
+++
title = "Configure container DNS"
description = "Learn how to configure DNS in Docker"
keywords = ["docker, bridge, docker0, network"]
[menu.main]
parent = "smn_networking_def"
+++
<![end-metadata]-->
# Configure container DNS
The information in this section explains configuring container DNS within
the Docker default bridge. This is a `bridge` network named `bridge` created
automatically when you install Docker.
**Note**: The [Docker networks feature](../dockernetworks.md) allows you to create user-defined networks in addition to the default bridge network.
How can Docker supply each container with a hostname and DNS configuration, without having to build a custom image with the hostname written inside? Its trick is to overlay three crucial `/etc` files inside the container with virtual files where it can write fresh information. You can see this by running `mount` inside a container:
```
$ mount
...
/dev/disk/by-uuid/1fec...ebdf on /etc/hostname type ext4 ...
/dev/disk/by-uuid/1fec...ebdf on /etc/hosts type ext4 ...
/dev/disk/by-uuid/1fec...ebdf on /etc/resolv.conf type ext4 ...
...
```
This arrangement allows Docker to do clever things like keep `resolv.conf` up to date across all containers when the host machine receives new configuration over DHCP later. The exact details of how Docker maintains these files inside the container can change from one Docker version to the next, so you should leave the files themselves alone and use the following Docker options instead.
Four different options affect container domain name services.
<table>
<tr>
<td>
<p>
<code>-h HOSTNAME</code> or <code>--hostname=HOSTNAME</code>
</p>
</td>
<td>
<p>
Sets the hostname by which the container knows itself. This is written
into <code>/etc/hostname</code>, into <code>/etc/hosts</code> as the name
of the container's host-facing IP address, and is the name that
<code>/bin/bash</code> inside the container will display inside its
prompt. But the hostname is not easy to see from outside the container.
It will not appear in <code>docker ps</code> nor in the
<code>/etc/hosts</code> file of any other container.
</p>
</td>
</tr>
<tr>
<td>
<p>
<code>--link=CONTAINER_NAME</code> or <code>ID:ALIAS</code>
</p>
</td>
<td>
<p>
Using this option as you <code>run</code> a container gives the new
container's <code>/etc/hosts</code> an extra entry named
<code>ALIAS</code> that points to the IP address of the container
identified by <code>CONTAINER_NAME_or_ID</code>. This lets processes
inside the new container connect to the hostname <code>ALIAS</code>
without having to know its IP. The <code>--link=</code> option is
discussed in more detail below. Because Docker may assign a different IP
address to the linked containers on restart, Docker updates the
<code>ALIAS</code> entry in the <code>/etc/hosts</code> file of the
recipient containers.
</p>
</td>
</tr>
<tr>
<td><p>
<code>--dns=IP_ADDRESS...</code>
</p></td>
<td><p>
Sets the IP addresses added as <code>server</code> lines to the container's
<code>/etc/resolv.conf</code> file. Processes in the container, when
confronted with a hostname not in <code>/etc/hosts</code>, will connect to
these IP addresses on port 53 looking for name resolution services. </p></td>
</tr>
<tr>
<td><p>
<code>--dns-search=DOMAIN...</code>
</p></td>
<td><p>
Sets the domain names that are searched when a bare unqualified hostname is
used inside of the container, by writing <code>search</code> lines into the
container's <code>/etc/resolv.conf</code>. When a container process attempts
to access <code>host</code> and the search domain <code>example.com</code>
is set, for instance, the DNS logic will not only look up <code>host</code>
but also <code>host.example.com</code>.
</p>
<p>
Use <code>--dns-search=.</code> if you don't wish to set the search domain.
</p>
</td>
</tr>
<tr>
<td><p>
<code>--dns-opt=OPTION...</code>
</p></td>
<td><p>
Sets the options used by DNS resolvers by writing an <code>options</code>
line into the container's <code>/etc/resolv.conf</code>.
</p>
<p>
See the documentation for <code>resolv.conf</code> for a list of valid options
</p></td>
</tr>
</table>
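Putting a few of these options together, a run might look like the following
(the nameserver address, search domain, hostname and image are illustrative):
```
$ docker run -it --hostname=webapp --dns=8.8.8.8 --dns-search=example.com \
    busybox cat /etc/resolv.conf
nameserver 8.8.8.8
search example.com
```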
Regarding DNS settings, in the absence of the `--dns=IP_ADDRESS...`, `--dns-search=DOMAIN...`, or `--dns-opt=OPTION...` options, Docker makes each container's `/etc/resolv.conf` look like the `/etc/resolv.conf` of the host machine (where the `docker` daemon runs). When creating the container's `/etc/resolv.conf`, the daemon filters out all localhost IP address `nameserver` entries from the host's original file.
Filtering is necessary because all localhost addresses on the host are unreachable from the container's network. After this filtering, if there are no more `nameserver` entries left in the container's `/etc/resolv.conf` file, the daemon adds public Google DNS nameservers (8.8.8.8 and 8.8.4.4) to the container's DNS configuration. If IPv6 is enabled on the daemon, the public IPv6 Google DNS nameservers will also be added (2001:4860:4860::8888 and 2001:4860:4860::8844).
> **Note**: If you need access to a host's localhost resolver, you must modify your DNS service on the host to listen on a non-localhost address that is reachable from within the container.
You might wonder what happens when the host machine's `/etc/resolv.conf` file changes. The `docker` daemon has a file change notifier active which will watch for changes to the host DNS configuration.
> **Note**: The file change notifier relies on the Linux kernel's inotify feature. Because this feature is currently incompatible with the overlay filesystem driver, a Docker daemon using "overlay" will not be able to take advantage of the `/etc/resolv.conf` auto-update feature.
When the host file changes, all stopped containers whose `resolv.conf` matches the host's will be updated immediately to this newest host configuration. Containers which are running when the host configuration changes will need to stop and start to pick up the host changes, due to the lack of a facility to ensure atomic writes of the `resolv.conf` file while the container is running. If the container's `resolv.conf` has been edited since it was started with the default configuration, no replacement will be attempted as it would overwrite the changes performed by the container. If the options (`--dns`, `--dns-search`, or `--dns-opt`) have been used to modify the default host configuration, then the replacement with an updated host's `/etc/resolv.conf` will not happen either.
> **Note**: For containers which were created prior to the implementation of the `/etc/resolv.conf` update feature in Docker 1.5.0: those containers will **not** receive updates when the host `resolv.conf` file changes. Only containers created with Docker 1.5.0 and above will utilize this auto-update feature.


@ -0,0 +1,110 @@
<!--[metadata]>
+++
draft=true
title = "Configure container DNS"
description = "Learn how to configure DNS in Docker"
keywords = ["docker, bridge, docker0, network"]
[menu.main]
parent = "smn_networking_def"
+++
<![end-metadata]-->
<!--[metadata]>
DRAFT to prevent building. Keeping for one cycle before deleting.
<![end-metadata]-->
# Configure container DNS
The information in this section explains configuring container DNS within the Docker default bridge. This is a `bridge` network named `bridge` created
automatically when you install Docker.
**Note**: The [Docker networks feature](../dockernetworks.md) allows you to create user-defined networks in addition to the default bridge network.
While Docker is under active development and continues to tweak and improve its network configuration logic, the shell commands in this section are rough equivalents to the steps that Docker takes when configuring networking for each new container.
## Review some basics
To communicate using the Internet Protocol (IP), a machine needs access to at least one network interface at which packets can be sent and received, and a routing table that defines the range of IP addresses reachable through that interface. Network interfaces do not have to be physical devices. In fact, the `lo` loopback interface available on every Linux machine (and inside each Docker container) is entirely virtual -- the Linux kernel simply copies loopback packets directly from the sender's memory into the receiver's memory.
Docker uses special virtual interfaces to let containers communicate with the host machine -- pairs of virtual interfaces called "peers" that are linked inside of the host machine's kernel so that packets can travel between them. They are simple to create, as we will see in a moment.
The steps with which Docker configures a container are:
- Create a pair of peer virtual interfaces.
- Give one of them a unique name like `veth65f9`, keep it inside of the main Docker host, and bind it to `docker0` or whatever bridge Docker is supposed to be using.
- Toss the other interface over the wall into the new container (which will already have been provided with an `lo` interface) and rename it to the much prettier name `eth0` since, inside of the container's separate and unique network interface namespace, there are no physical interfaces with which this name could collide.
- Set the interface's MAC address according to the `--mac-address` parameter or generate a random one.
- Give the container's `eth0` a new IP address from within the bridge's range of network addresses. The default route is set to the IP address passed to the Docker daemon using the `--default-gateway` option if specified, otherwise to the IP address that the Docker host owns on the bridge. The MAC address is generated from the IP address unless otherwise specified. This prevents ARP cache invalidation problems that would otherwise occur when a new container comes up with an IP address used in the past by another container with a different MAC address.
With these steps complete, the container now possesses an `eth0` (virtual) network card and will find itself able to communicate with other containers and the rest of the Internet.
You can opt out of the above process for a particular container by giving the `--net=` option to `docker run`, which takes four possible values.
- `--net=bridge` -- The default action, which connects the container to the Docker bridge as described above.
- `--net=host` -- Tells Docker to skip placing the container inside of a separate network stack. In essence, this choice tells Docker to **not containerize the container's networking**! While container processes will still be confined to their own filesystem and process list and resource limits, a quick `ip addr` command will show you that, network-wise, they live "outside" in the main Docker host and have full access to its network interfaces. Note that this does **not** let the container reconfigure the host network stack -- that would require `--privileged=true` -- but it does let container processes open low-numbered ports like any other root process. It also allows the container to access local network services like D-bus. This can lead to processes in the container being able to do unexpected things like [restart your computer](https://github.com/docker/docker/issues/6401). You should use this option with caution.
- `--net=container:NAME_or_ID` -- Tells Docker to put this container's processes inside of the network stack that has already been created inside of another container. The new container's processes will be confined to their own filesystem and process list and resource limits, but will share the same IP address and port numbers as the first container, and processes on the two containers will be able to connect to each other over the loopback interface. A sketch of this mode follows the list.
- `--net=none` -- Tells Docker to put the container inside of its own network stack but not to take any steps to configure its network, leaving you free to build any of the custom configurations explored in the last few sections of this document.
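As an illustration, here is a quick sketch of the `--net=container` mode (the container names are assumptions for this example):

```
# Run a second container inside the first container's network stack;
# both share the same eth0 interface and IP address
$ docker run -d --name web training/webapp python app.py
$ docker run --rm --net=container:web busybox ip addr show eth0
```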
## Manually networking a container
To get an idea of the steps that are necessary if you use `--net=none` as described in that last bullet point, here are the commands that you would run to reach roughly the same configuration as if you had let Docker do all of the configuration:
```
# At one shell, start a container and
# leave its shell idle and running
$ docker run -i -t --rm --net=none base /bin/bash
root@63f36fc01b5f:/#
# At another shell, learn the container process ID
# and create its namespace entry in /var/run/netns/
# for the "ip netns" command we will be using below
$ docker inspect -f '{{.State.Pid}}' 63f36fc01b5f
2778
$ pid=2778
$ sudo mkdir -p /var/run/netns
$ sudo ln -s /proc/$pid/ns/net /var/run/netns/$pid
# Check the bridge's IP address and netmask
$ ip addr show docker0
21: docker0: ...
inet 172.17.42.1/16 scope global docker0
...
# Create a pair of "peer" interfaces A and B,
# bind the A end to the bridge, and bring it up
$ sudo ip link add A type veth peer name B
$ sudo brctl addif docker0 A
$ sudo ip link set A up
# Place B inside the container's network namespace,
# rename to eth0, and activate it with a free IP
$ sudo ip link set B netns $pid
$ sudo ip netns exec $pid ip link set dev B name eth0
$ sudo ip netns exec $pid ip link set eth0 address 12:34:56:78:9a:bc
$ sudo ip netns exec $pid ip link set eth0 up
$ sudo ip netns exec $pid ip addr add 172.17.42.99/16 dev eth0
$ sudo ip netns exec $pid ip route add default via 172.17.42.1
```
At this point your container should be able to perform networking operations as usual.
When you finally exit the shell and Docker cleans up the container, the network namespace is destroyed along with our virtual `eth0` -- whose destruction in turn destroys interface `A` out in the Docker host and automatically un-registers it from the `docker0` bridge. So everything gets cleaned up without our having to run any extra commands! Well, almost everything:
```
# Clean up dangling symlinks in /var/run/netns
find -L /var/run/netns -type l -delete
```
Also note that while the script above used the modern `ip` command instead of old deprecated wrappers like `ifconfig` and `route`, those older commands would also have worked inside of our container. The `ip addr` command can be typed as `ip a` if you are in a hurry.
Finally, note the importance of the `ip netns exec` command, which let us reach inside and configure a network namespace as root. The same commands would not have worked if run inside of the container, because part of safe containerization is that Docker strips container processes of the right to configure their own networks. Using `ip netns exec` is what let us finish up the configuration without having to take the dangerous step of running the container itself with `--privileged=true`.


@ -0,0 +1,123 @@
<!--[metadata]>
+++
title = "Understand container communication"
description = "Understand container communication"
keywords = ["docker, container, communication, network"]
[menu.main]
parent = "smn_networking_def"
+++
<![end-metadata]-->
# Understand container communication
The information in this section explains container communication within the
Docker default bridge. This is a `bridge` network named `bridge` created
automatically when you install Docker.
**Note**: The [Docker networks feature](../dockernetworks.md) allows you to create user-defined networks in addition to the default bridge network.
## Communicating to the outside world
Whether a container can talk to the world is governed by two factors. The first
factor is whether the host machine is forwarding its IP packets. The second is
whether the host's `iptables` allow this particular connection.
IP packet forwarding is governed by the `ip_forward` system parameter. Packets
can only pass between containers if this parameter is `1`. Usually you will
simply leave the Docker server at its default setting `--ip-forward=true` and
Docker will set `ip_forward` to `1` for you when the server starts up. If you
set `--ip-forward=false` and your system's kernel has it enabled, the
`--ip-forward=false` option has no effect. To check the setting on your kernel
or to turn it on manually:
```
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 0
$ sysctl net.ipv4.conf.all.forwarding=1
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1
```
Most users of Docker will want `ip_forward` to be on, to at least make
communication _possible_ between containers and the wider world. It may also be
needed for inter-container communication if you are in a multiple-bridge setup.
Docker will never make changes to your system `iptables` rules if you set
`--iptables=false` when the daemon starts. Otherwise the Docker server will
append forwarding rules to the `DOCKER` filter chain.
Docker will not delete or modify any pre-existing rules from the `DOCKER` filter
chain. This allows the user to create in advance any rules required to further
restrict access to the containers.
Docker's forward rules permit all external source IPs by default. To allow only
a specific IP or network to access the containers, insert a negated rule at the
top of the `DOCKER` filter chain. For example, to restrict external access such
that _only_ source IP 8.8.8.8 can access the containers, the following rule
could be added:
```
$ iptables -I DOCKER -i ext_if ! -s 8.8.8.8 -j DROP
```
## Communication between containers
Whether two containers can communicate is governed, at the operating system level, by two factors.
- Does the network topology even connect the containers' network interfaces? By default Docker will attach all containers to a single `docker0` bridge, providing a path for packets to travel between them. See the later sections of this document for other possible topologies.
- Do your `iptables` allow this particular connection? Docker will never make changes to your system `iptables` rules if you set `--iptables=false` when the daemon starts. Otherwise the Docker server will add a default rule to the `FORWARD` chain with a blanket `ACCEPT` policy if you retain the default `--icc=true`, or else will set the policy to `DROP` if `--icc=false`.
It is a strategic question whether to leave `--icc=true` or change it to
`--icc=false` so that `iptables` will protect other containers -- and the main
host -- from having arbitrary ports probed or accessed by a container that gets
compromised.
If you choose the most secure setting of `--icc=false`, then how can containers
communicate in those cases where you _want_ them to provide each other services?
The answer is the `--link=CONTAINER_NAME_or_ID:ALIAS` option, which was
mentioned in the previous section because of its effect upon name services. If
the Docker daemon is running with both `--icc=false` and `--iptables=true`
then, when it sees `docker run` invoked with the `--link=` option, the Docker
server will insert a pair of `iptables` `ACCEPT` rules so that the new
container can connect to the ports exposed by the other container -- the ports
that it mentioned in the `EXPOSE` lines of its `Dockerfile`.
> **Note**: The value `CONTAINER_NAME` in `--link=` must either be an
auto-assigned Docker name like `stupefied_pare` or else the name you assigned
with `--name=` when you ran `docker run`. It cannot be a hostname, which Docker
will not recognize in the context of the `--link=` option.
You can run the `iptables` command on your Docker host to see whether the `FORWARD` chain has a default policy of `ACCEPT` or `DROP`:
```
# When --icc=false, you should see a DROP rule:
$ sudo iptables -L -n
...
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
DROP all -- 0.0.0.0/0 0.0.0.0/0
...
# When a --link= has been created under --icc=false,
# you should see port-specific ACCEPT rules overriding
# the subsequent DROP policy for all other packets:
$ sudo iptables -L -n
...
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
DROP all -- 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- 172.17.0.2 172.17.0.3 tcp spt:80
ACCEPT tcp -- 172.17.0.3 172.17.0.2 tcp dpt:80
```
> **Note**: Docker is careful that its host-wide `iptables` rules fully expose
containers to each other's raw IP addresses, so connections from one container
to another should always appear to be originating from the first container's own
IP address.


@ -0,0 +1,61 @@
<!--[metadata]>
+++
title = "Customize the docker0 bridge"
description = "Customizing docker0"
keywords = ["docker, bridge, docker0, network"]
[menu.main]
parent = "smn_networking_def"
+++
<![end-metadata]-->
# Customize the docker0 bridge
The information in this section explains how to customize the Docker default bridge. This is a `bridge` network named `bridge` created automatically when you install Docker.
**Note**: The [Docker networks feature](../dockernetworks.md) allows you to create user-defined networks in addition to the default bridge network.
By default, the Docker server creates and configures the host system's `docker0` interface as an _Ethernet bridge_ inside the Linux kernel that can pass packets back and forth between other physical or virtual network interfaces so that they behave as a single Ethernet network.
Docker configures `docker0` with an IP address, netmask, and IP allocation range, and gives it an MTU -- the _maximum transmission unit_ or largest packet length that the interface will allow -- of either 1,500 bytes or else a more specific value copied from the Docker host's interface that supports its default route. The host machine can both receive and send packets to containers connected to the bridge. These options are configurable at server startup (a combined example follows the list):
- `--bip=CIDR` -- supply a specific IP address and netmask for the `docker0` bridge, using standard CIDR notation like `192.168.1.5/24`.
- `--fixed-cidr=CIDR` -- restrict the IP range from the `docker0` subnet, using the standard CIDR notation like `172.167.1.0/28`. This range must be an IPv4 range for fixed IPs (ex: 10.20.0.0/16) and must be a subset of the bridge IP range (`docker0` or set using `--bridge`). For example with `--fixed-cidr=192.168.1.0/25`, IPs for your containers will be chosen from the first half of `192.168.1.0/24` subnet.
- `--mtu=BYTES` -- override the maximum packet length on `docker0`.
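For instance, a sketch of a daemon invocation that combines these options, reusing the illustrative values above:

```
# Pin the bridge address, restrict container IPs to half the subnet, cap the MTU
$ docker daemon --bip=192.168.1.5/24 --fixed-cidr=192.168.1.0/25 --mtu=1500
```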
Once you have one or more containers up and running, you can confirm that Docker has properly connected them to the `docker0` bridge by running the `brctl` command on the host machine and looking at the `interfaces` column of the output. Here is a host with two different containers connected:
```
# Display bridge info
$ sudo brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.3a1d7362b4ee no veth65f9
vethdda6
```
If the `brctl` command is not installed on your Docker host, then on Ubuntu you should be able to run `sudo apt-get install bridge-utils` to install it.
Finally, the `docker0` Ethernet bridge settings are used every time you create a new container. Docker selects a free IP address from the range available on the bridge each time you `docker run` a new container, and configures the container's `eth0` interface with that IP address and the bridge's netmask. The Docker host's own IP address on the bridge is used as the default gateway by which each container reaches the rest of the Internet.
```
# The network, as seen from a container
$ docker run -i -t --rm base /bin/bash
$$ ip addr show eth0
24: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 32:6f:e0:35:57:91 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::306f:e0ff:fe35:5791/64 scope link
valid_lft forever preferred_lft forever
$$ ip route
default via 172.17.42.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.3
$$ exit
```
Remember that the Docker host will not be willing to forward container packets out on to the Internet unless its `ip_forward` system setting is `1` -- see the section above on [Communication between containers](#between-containers) for details.


@ -1,36 +1,44 @@
<!--[metadata]>
+++
title = "Linking containers together"
title = "Legacy container links"
description = "Learn how to connect Docker containers together."
keywords = ["Examples, Usage, user guide, links, linking, docker, documentation, examples, names, name, container naming, port, map, network port, network"]
[menu.main]
parent = "smn_containers"
weight = 4
parent = "smn_networking_def"
weight=-2
+++
<![end-metadata]-->
# Legacy container links
The information in this section explains legacy container links within the Docker default bridge. This is a `bridge` network named `bridge` created automatically when you install Docker.
Before the [Docker networks feature](../dockernetworks.md), you could use the
Docker link feature to allow containers to discover each other and securely
transfer information about one container to another container. With the
introduction of the Docker networks feature, you can still create links but they
are only supported on the default `bridge` network named `bridge` and appearing
in your network stack as `docker0`.
This section briefly discusses connecting via a network port and then goes into
detail on container linking. While links are still supported on Docker's default
`bridge` network, you should avoid them in preference of the Docker networks
feature. Linking is expected to be deprecated and removed in a future
release.
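As a point of comparison, here is a minimal sketch of the user-defined network approach that replaces links (the network and container names are assumptions for this example):

```
# Containers on the same user-defined network can reach each other by name,
# with no --link required
$ docker network create my_app_net
$ docker run -d --net=my_app_net --name db training/postgres
$ docker run -d --net=my_app_net --name web training/webapp python app.py
```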
## Connect using network port mapping
In [the Using Docker section](../../usingdocker.md), you created a
container that ran a Python Flask application:
$ docker run -d -P training/webapp python app.py
> **Note:**
> Containers have an internal network and an IP address
> (as we saw when we used the `docker inspect` command to show the container's
> IP address in the [Using Docker](../../usingdocker.md) section).
> Docker can have a variety of network configurations. You can see more
> information on Docker networking [here](../index.md).
When that container was created, the `-P` flag was used to automatically map
any network port inside it to a random high port within an *ephemeral port
@ -42,7 +50,7 @@ range* on your Docker host. Next, when `docker ps` was run, you saw that port
bc533791f3f5 training/webapp:latest python app.py 5 seconds ago Up 2 seconds 0.0.0.0:49155->5000/tcp nostalgic_morse
You also saw how you can bind a container's ports to a specific port using
the `-p` flag. Here port 80 of the host is mapped to port 5000 of the
container:
$ docker run -d -p 80:5000 training/webapp python app.py
@ -85,15 +93,15 @@ configurations. For example, if you've bound the container port to the
$ docker port nostalgic_morse 5000
127.0.0.1:49155
> **Note:**
> The `-p` flag can be used multiple times to configure multiple ports.
## Connect with the linking system
Network port mappings are not the only way Docker containers can connect to one
another. Docker also has a linking system that allows you to link multiple
containers together and send connection information from one to another. When
containers are linked, information about a source container can be sent to a
recipient container. This allows the recipient to see selected data describing
aspects of the source container.
@ -137,11 +145,11 @@ You can also use `docker inspect` to return the container's name.
## Communication across links
Links allow containers to discover each other and securely transfer information
about one container to another container. When you set up a link, you create a
conduit between a source container and a recipient container. The recipient can
then access select data about the source. To create a link, you use the `--link`
flag. First, create a new container, this time one containing a database.
$ docker run -d --name db training/postgres
@ -200,7 +208,7 @@ recipient container in two ways:
Docker creates several environment variables when you link containers. Docker
automatically creates environment variables in the target container based on
the `--link` parameters. It will also expose all environment variables
originating from Docker from the source container. These include variables from:
* the `ENV` commands in the source container's Dockerfile
@ -253,8 +261,8 @@ that port is used for both tcp and udp, then the tcp one is specified.
Finally, Docker also exposes each Docker originated environment variable
from the source container as an environment variable in the target. For each
variable Docker creates an `<alias>_ENV_<name>` variable in the target
container. The variable's value is set to the value Docker used when it
started the source container.
Returning back to our database example, you can run the `env`
@ -306,7 +314,7 @@ container:
You can see two relevant host entries. The first is an entry for the `web`
container that uses the Container ID as a host name. The second entry uses the
link alias to reference the IP address of the `db` container. In addition to
the alias you provide, the linked container's name--if unique from the alias
provided to the `--link` parameter--and the linked container's hostname will
also be added in `/etc/hosts` for the linked container's IP address. You can ping
@ -319,7 +327,7 @@ that host now via any of these entries:
56 bytes from 172.17.0.5: icmp_seq=1 ttl=64 time=0.250 ms
56 bytes from 172.17.0.5: icmp_seq=2 ttl=64 time=0.256 ms
> **Note:**
> In the example, you'll note you had to install `ping` because it was not included
> in the container initially.
@ -327,7 +335,7 @@ Here, you used the `ping` command to ping the `db` container using its host entr
which resolves to `172.17.0.5`. You can use this host entry to configure an application
to make use of your `db` container.
> **Note:**
> You can link multiple recipient containers to a single source. For
> example, you could have multiple (differently named) web containers attached to your
>`db` container.
@ -344,10 +352,4 @@ allowing linked communication to continue.
. . .
172.17.0.9 db
# Related information


@ -0,0 +1,25 @@
<!--[metadata]>
+++
title = "Default bridge network"
description = "Docker networking"
keywords = ["network, networking, bridge, docker, documentation"]
[menu.main]
identifier="smn_networking_def"
parent= "smn_networking"
+++
<![end-metadata]-->
# Docker default bridge network
With the introduction of the Docker networks feature, you can create your own
user-defined networks. The Docker default bridge is created when you install
Docker Engine. It is a `bridge` network and is also named `bridge`. The topics
in this section are related to interacting with that default bridge network.
- [Understand container communication](container-communication.md)
- [Legacy container links](dockerlinks.md)
- [Binding container ports to the host](binding.md)
- [Build your own bridge](build-bridges.md)
- [Configure container DNS](configure-dns.md)
- [Customize the docker0 bridge](custom-docker0.md)
- [IPv6 with Docker](ipv6.md)


@ -0,0 +1,259 @@
<!--[metadata]>
+++
title = "IPv6 with Docker"
description = "How do we connect docker containers within and across hosts ?"
keywords = ["docker, network, IPv6"]
[menu.main]
parent = "smn_networking_def"
weight = 3
+++
<![end-metadata]-->
# IPv6 with Docker
The information in this section explains IPv6 with the Docker default bridge.
This is a `bridge` network named `bridge` created automatically when you install
Docker.
As we are [running out of IPv4
addresses](http://en.wikipedia.org/wiki/IPv4_address_exhaustion) the IETF has
standardized an IPv4 successor, [Internet Protocol Version
6](http://en.wikipedia.org/wiki/IPv6), in [RFC
2460](https://www.ietf.org/rfc/rfc2460.txt). Both protocols, IPv4 and IPv6,
reside on layer 3 of the [OSI model](http://en.wikipedia.org/wiki/OSI_model).
## How IPv6 works on Docker
By default, the Docker server configures the container network for IPv4 only.
You can enable IPv4/IPv6 dualstack support by running the Docker daemon with the
`--ipv6` flag. Docker will set up the bridge `docker0` with the IPv6 [link-local
address](http://en.wikipedia.org/wiki/Link-local_address) `fe80::1`.
By default, containers that are created will only get a link-local IPv6 address.
To assign globally routable IPv6 addresses to your containers you have to
specify an IPv6 subnet to pick the addresses from. Set the IPv6 subnet via the
`--fixed-cidr-v6` parameter when starting Docker daemon:
```
docker daemon --ipv6 --fixed-cidr-v6="2001:db8:1::/64"
```
The subnet for Docker containers should at least have a size of `/80`. This way
an IPv6 address can end with the container's MAC address and you prevent NDP
neighbor cache invalidation issues in the Docker layer.
With the `--fixed-cidr-v6` parameter set, Docker will add a new route to the
routing table and enable IPv6 forwarding (you can prevent this by starting the
Docker daemon with `--ip-forward=false`):
```
$ ip -6 route add 2001:db8:1::/64 dev docker0
$ sysctl net.ipv6.conf.default.forwarding=1
$ sysctl net.ipv6.conf.all.forwarding=1
```
All traffic to the subnet `2001:db8:1::/64` will now be routed via the `docker0` interface.
Be aware that IPv6 forwarding may interfere with your existing IPv6
configuration: If you are using Router Advertisements to get IPv6 settings for
your host's interfaces, you should set `accept_ra` to `2`. Otherwise, enabling
IPv6 forwarding will result in Router Advertisements being rejected. E.g., if
you want to configure `eth0` via Router Advertisements you should set:
```
$ sysctl net.ipv6.conf.eth0.accept_ra=2
```
![](images/ipv6_basic_host_config.svg)
Every new container will get an IPv6 address from the defined subnet. In addition, a
default route will be added on `eth0` in the container via the address specified
by the daemon option `--default-gateway-v6` if present, otherwise via `fe80::1`:
```
docker run -it ubuntu bash -c "ip -6 addr show dev eth0; ip -6 route show"
15: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500
inet6 2001:db8:1:0:0:242:ac11:3/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:3/64 scope link
valid_lft forever preferred_lft forever
2001:db8:1::/64 dev eth0 proto kernel metric 256
fe80::/64 dev eth0 proto kernel metric 256
default via fe80::1 dev eth0 metric 1024
```
In this example the Docker container is assigned a link-local address with the
network suffix `/64` (here: `fe80::42:acff:fe11:3/64`) and a globally routable
IPv6 address (here: `2001:db8:1:0:0:242:ac11:3/64`). The container will create
connections to addresses outside of the `2001:db8:1::/64` network via the
link-local gateway at `fe80::1` on `eth0`.
Often servers or virtual machines get a `/64` IPv6 subnet assigned (e.g.
`2001:db8:23:42::/64`). In this case you can split it up further and provide
Docker a `/80` subnet while using a separate `/80` subnet for other applications
on the host:
![](images/ipv6_slash64_subnet_config.svg)
In this setup the subnet `2001:db8:23:42::/80` with a range from
`2001:db8:23:42:0:0:0:0` to `2001:db8:23:42:0:ffff:ffff:ffff` is attached to
`eth0`, with the host listening at `2001:db8:23:42::1`. The subnet
`2001:db8:23:42:1::/80` with an address range from `2001:db8:23:42:1:0:0:0` to
`2001:db8:23:42:1:ffff:ffff:ffff` is attached to `docker0` and will be used by
containers.
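A sketch of the corresponding daemon invocation, reusing the addresses above:

```
# Reserve 2001:db8:23:42:1::/80 for containers; the host keeps the rest of the /64
docker daemon --ipv6 --fixed-cidr-v6="2001:db8:23:42:1::/80"
```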
### Using NDP proxying
If your Docker host is only part of an IPv6 subnet but does not have an IPv6
subnet assigned to it, you can use NDP proxying to connect your containers via IPv6 to
the internet. For example your host has the IPv6 address `2001:db8::c001`, is
part of the subnet `2001:db8::/64` and your IaaS provider allows you to
configure the IPv6 addresses `2001:db8::c000` to `2001:db8::c00f`:
```
$ ip -6 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
inet6 2001:db8::c001/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::601:3fff:fea1:9c01/64 scope link
valid_lft forever preferred_lft forever
```
Let's split up the configurable address range into two subnets
`2001:db8::c000/125` and `2001:db8::c008/125`. The first one can be used by the
host itself, the latter by Docker:
```
docker daemon --ipv6 --fixed-cidr-v6 2001:db8::c008/125
```
Notice that the Docker subnet is within the subnet managed by your router, which
is connected to `eth0`. This means all devices (containers) with addresses from
the Docker subnet are expected to be found within the router subnet. Therefore
the router thinks it can talk to these containers directly.
![](images/ipv6_ndp_proxying.svg)
As soon as the router wants to send an IPv6 packet to the first container, it
will transmit a neighbor solicitation request, asking who has `2001:db8::c009`.
But it will get no answer because no one on this subnet has this address. The
container with this address is hidden behind the Docker host. The Docker host
has to listen for neighbor solicitation requests for the container address and
answer that it is the device responsible for the address. This is done by a
kernel feature called NDP proxying. You can enable it by executing:
```
$ sysctl net.ipv6.conf.eth0.proxy_ndp=1
```
Now you can add the container's IPv6 address to the NDP proxy table:
```
$ ip -6 neigh add proxy 2001:db8::c009 dev eth0
```
This command tells the kernel to answer incoming neighbor solicitation requests
regarding the IPv6 address `2001:db8::c009` on the device `eth0`. As a
consequence, all traffic to this IPv6 address will go to the Docker host, which
will forward it according to its routing table via the `docker0` device to the
container network:
```
$ ip -6 route show
2001:db8::c008/125 dev docker0 metric 1
2001:db8::/64 dev eth0 proto kernel metric 256
```
You have to execute the `ip -6 neigh add proxy ...` command for every IPv6
address in your Docker subnet. Unfortunately there is no functionality for
adding a whole subnet by executing one command. An alternative approach would be
to use an NDP proxy daemon such as
[ndppd](https://github.com/DanielAdolfsson/ndppd).
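For the small `/125` range in this example, a short shell loop is enough to add the handful of entries (a sketch, assuming the addresses above):

```
# Proxy every usable container address in 2001:db8::c008/125 (c009 - c00f)
for suffix in 9 a b c d e f; do
    sudo ip -6 neigh add proxy 2001:db8::c00$suffix dev eth0
done
```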
## Docker IPv6 cluster
### Switched network environment
Using routable IPv6 addresses allows you to realize communication between
containers on different hosts. Let's have a look at a simple Docker IPv6 cluster
example:
![](images/ipv6_switched_network_example.svg)
The Docker hosts are in the `2001:db8:0::/64` subnet. Host1 is configured to
provide addresses from the `2001:db8:1::/64` subnet to its containers. It has
three routes configured:
- Route all traffic to `2001:db8:0::/64` via `eth0`
- Route all traffic to `2001:db8:1::/64` via `docker0`
- Route all traffic to `2001:db8:2::/64` via Host2 with IP `2001:db8::2`
Host1 also acts as a router on OSI layer 3. When one of the network clients
tries to contact a target that is specified in Host1's routing table, Host1 will
forward the traffic accordingly. It acts as a router for all networks it knows:
`2001:db8::/64`, `2001:db8:1::/64` and `2001:db8:2::/64`.
On Host2 we have nearly the same configuration. Host2's containers will get IPv6
addresses from `2001:db8:2::/64`. Host2 has three routes configured:
- Route all traffic to `2001:db8:0::/64` via `eth0`
- Route all traffic to `2001:db8:2::/64` via `docker0`
- Route all traffic to `2001:db8:1::/64` via Host1 with IP `2001:db8:0::1`
The difference to Host1 is that the network `2001:db8:2::/64` is directly
attached to the host via its `docker0` interface whereas it reaches
`2001:db8:1::/64` via Host1's IPv6 address `2001:db8::1`.
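As a sketch, the third route on Host2 could be added manually like this (the interface name is an assumption):

```
# On Host2: reach Host1's container subnet via Host1's address on the shared link
$ sudo ip -6 route add 2001:db8:1::/64 via 2001:db8::1 dev eth0
```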
This way every container is able to contact every other container. The
containers `Container1-*` share the same subnet and contact each other directly.
The traffic between `Container1-*` and `Container2-*` will be routed via Host1
and Host2 because those containers do not share the same subnet.
In a switched environment every host has to know all routes to every subnet.
You always have to update the hosts' routing tables whenever you add a host to
or remove one from the cluster.
Every configuration in the diagram that is shown below the dashed line is
handled by Docker: The `docker0` bridge IP address configuration, the route to
the Docker subnet on the host, the container IP addresses and the routes on the
containers. The configuration above the line is up to the user and can be
adapted to the individual environment.
### Routed network environment
In a routed network environment you replace the layer 2 switch with a layer 3
router. Now the hosts just have to know their default gateway (the router) and
the route to their own containers (managed by Docker). The router holds all
routing information about the Docker subnets. When you add a host to or remove
one from this environment, you just have to update the routing table in the router -- not
on every host.
![](images/ipv6_routed_network_example.svg)
In this scenario containers of the same host can communicate directly with each
other. The traffic between containers on different hosts will be routed via
their hosts and the router. For example, a packet from `Container1-1` to
`Container2-1` will be routed through `Host1`, `Router` and `Host2` until it
arrives at `Container2-1`.
To keep the IPv6 addresses short in this example a `/48` network is assigned to
every host. Each host uses one `/64` subnet of this for its own services and one
for Docker. When adding a third host you would add a route for the subnet
`2001:db8:3::/48` in the router and configure Docker on Host3 with
`--fixed-cidr-v6=2001:db8:3:1::/64`.
Remember the subnet for Docker containers should at least have a size of `/80`.
This way an IPv6 address can end with the container's MAC address and you
prevent NDP neighbor cache invalidation issues in the Docker layer. So if you
have a `/64` for your whole environment use `/78` subnets for the hosts and
`/80` for the containers. This way you can use 4096 hosts with 16 `/80` subnets
each.
Every configuration in the diagram that is visualized below the dashed line is
handled by Docker: The `docker0` bridge IP address configuration, the route to
the Docker subnet on the host, the container IP addresses and the routes on the
containers. The configuration above the line is up to the user and can be
adapted to the individual environment.


@ -0,0 +1,141 @@
<!--[metadata]>
+++
draft=true
title = "Tools and Examples"
keywords = ["docker, bridge, docker0, network"]
[menu.main]
parent = "smn_networking_def"
+++
<![end-metadata]-->
<!--[metadata]>
We may want to add it back in later under another form. Labeled DRAFT for now. Won't be built.
<![end-metadata]-->
# Quick guide to the options
Here is a quick list of the networking-related Docker command-line options, in case it helps you find the section below that you are looking for.
Some networking command-line options can only be supplied to the Docker server when it starts up, and cannot be changed once it is running:
- `-b BRIDGE` or `--bridge=BRIDGE` -- see
[Building your own bridge](#bridge-building)
- `--bip=CIDR` -- see
[Customizing docker0](#docker0)
- `--default-gateway=IP_ADDRESS` -- see
[How Docker networks a container](#container-networking)
- `--default-gateway-v6=IP_ADDRESS` -- see
[IPv6](#ipv6)
- `--fixed-cidr` -- see
[Customizing docker0](#docker0)
- `--fixed-cidr-v6` -- see
[IPv6](#ipv6)
- `-H SOCKET...` or `--host=SOCKET...` --
This might sound like it would affect container networking,
but it actually faces in the other direction:
it tells the Docker server over what channels
it should be willing to receive commands
like "run container" and "stop container."
- `--icc=true|false` -- see
[Communication between containers](#between-containers)
- `--ip=IP_ADDRESS` -- see
[Binding container ports](#binding-ports)
- `--ipv6=true|false` -- see
[IPv6](#ipv6)
- `--ip-forward=true|false` -- see
[Communication between containers and the wider world](#the-world)
- `--iptables=true|false` -- see
[Communication between containers](#between-containers)
- `--mtu=BYTES` -- see
[Customizing docker0](#docker0)
- `--userland-proxy=true|false` -- see
[Binding container ports](#binding-ports)
There are three networking options that can be supplied either at startup or when `docker run` is invoked. When provided at startup, they set the default values that `docker run` will later use if the options are not specified:
- `--dns=IP_ADDRESS...` -- see
[Configuring DNS](#dns)
- `--dns-search=DOMAIN...` -- see
[Configuring DNS](#dns)
- `--dns-opt=OPTION...` -- see
[Configuring DNS](#dns)
Finally, several networking options can only be provided when calling `docker run` because they specify something specific to one container:
- `-h HOSTNAME` or `--hostname=HOSTNAME` -- see
[Configuring DNS](#dns) and
[How Docker networks a container](#container-networking)
- `--link=CONTAINER_NAME_or_ID:ALIAS` -- see
[Configuring DNS](#dns) and
[Communication between containers](#between-containers)
- `--net=bridge|none|container:NAME_or_ID|host` -- see
[How Docker networks a container](#container-networking)
- `--mac-address=MACADDRESS...` -- see
[How Docker networks a container](#container-networking)
- `-p SPEC` or `--publish=SPEC` -- see
[Binding container ports](#binding-ports)
- `-P` or `--publish-all=true|false` -- see
[Binding container ports](#binding-ports)
To supply networking options to the Docker server at startup, use the `DOCKER_OPTS` variable in the Docker upstart configuration file. On Ubuntu, edit the variable in `/etc/default/docker`; on CentOS, edit `/etc/sysconfig/docker`.
The following example illustrates how to configure Docker on Ubuntu to recognize a newly built bridge.
Edit the `/etc/default/docker` file:
```
$ echo 'DOCKER_OPTS="-b=bridge0"' >> /etc/default/docker
```
Then restart the Docker server.
```
$ sudo service docker restart
```
For additional information on bridges, see [building your own bridge](#building-your-own-bridge) later on this page.


@ -0,0 +1,28 @@
<!--[metadata]>
+++
draft=true
title = "Saved text"
keywords = ["docker, bridge, docker0, network"]
[menu.main]
parent = "smn_networking_def"
+++
<![end-metadata]-->
<!--[metadata]>
This content was extracted from the original introduction. We may want to add it back in later under another form. Labeled DRAFT for now. Won't be built.
<![end-metadata]-->
## A Brief introduction to networking and docker
When Docker starts, it creates a virtual interface named `docker0` on the host machine. It randomly chooses an address and subnet from the private range defined by [RFC 1918](http://tools.ietf.org/html/rfc1918) that are not in use on the host machine, and assigns it to `docker0`. Docker made the choice `172.17.42.1/16` when I started it a few minutes ago, for example -- a 16-bit netmask providing 65,534 addresses for the host machine and its containers. The MAC address is generated using the IP address allocated to the container to avoid ARP collisions, using a range from `02:42:ac:11:00:00` to `02:42:ac:11:ff:ff`.
> **Note:** This document discusses advanced networking configuration and options for Docker. In most cases you won't need this information. If you're looking to get started with a simpler explanation of Docker networking and an introduction to the concept of container linking see the [Docker User Guide](/userguide/networking/default_network/dockerlinks.md).
But `docker0` is no ordinary interface. It is a virtual _Ethernet bridge_ that automatically forwards packets between any other network interfaces that are attached to it. This lets containers communicate both with the host machine and with each other. Every time Docker creates a container, it creates a pair of "peer" interfaces that are like opposite ends of a pipe -- a packet sent on one will be received on the other. It gives one of the peers to the container to become its `eth0` interface and keeps the other peer, with a unique name like `vethAQI2QT`, out in the namespace of the host machine. By binding every `veth*` interface to the `docker0` bridge, Docker creates a virtual subnet shared between the host machine and every Docker container.
The remaining sections of this document explain all of the ways that you can use Docker options and -- in advanced cases -- raw Linux networking commands to tweak, supplement, or entirely replace Docker's default networking configuration.
## Editing networking config files
Starting with Docker v1.2.0, you can edit `/etc/hosts`, `/etc/hostname` and `/etc/resolv.conf` in a running container. This is useful if you need to install bind or other services that might override one of those files.
Note, however, that changes to these files will not be saved by `docker commit`, nor will they be saved during `docker run`. That means they won't be saved in the image, nor will they persist when a container is restarted; they will only "stick" in a running container.
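For example, a quick sketch of such an in-place edit (the container name and host entry are assumptions for illustration):

```
# Append a host entry inside a running container; the change survives neither
# a docker commit nor a container restart
$ docker exec mycontainer sh -c 'echo "10.0.0.5 db.internal" >> /etc/hosts'
```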


@ -0,0 +1,83 @@
<!--[metadata]>
+++
draft=true
title = "Tools and Examples"
keywords = ["docker, bridge, docker0, network"]
[menu.main]
parent = "smn_networking_def"
+++
<![end-metadata]-->
<!--[metadata]>
Dave Tucker instructed remove this. We may want to add it back in later under another form. Labeled DRAFT for now. Won't be built.
<![end-metadata]-->
# Tools and examples
Before diving into the following sections on custom network topologies, you might be interested in glancing at a few external tools or examples of the same kinds of configuration. Here are two:
- Jérôme Petazzoni has created a `pipework` shell script to help you
connect together containers in arbitrarily complex scenarios:
[https://github.com/jpetazzo/pipework](https://github.com/jpetazzo/pipework)
- Brandon Rhodes has created a whole network topology of Docker
containers for the next edition of Foundations of Python Network
Programming that includes routing, NAT'd firewalls, and servers that
offer HTTP, SMTP, POP, IMAP, Telnet, SSH, and FTP:
[https://github.com/brandon-rhodes/fopnp/tree/m/playground](https://github.com/brandon-rhodes/fopnp/tree/m/playground)
Both tools use networking commands very much like the ones you saw in the previous section, and will see in the following sections.
# Building a point-to-point connection
<a name="point-to-point"></a>
By default, Docker attaches all containers to the virtual subnet implemented by `docker0`. You can create containers that are each connected to some different virtual subnet by creating your own bridge as shown in [Building your own bridge](#bridge-building), starting each container with `docker run --net=none`, and then attaching the containers to your bridge with the shell commands shown in [How Docker networks a container](#container-networking).
But sometimes you want two particular containers to be able to communicate directly without the added complexity of both being bound to a host-wide Ethernet bridge.
The solution is simple: when you create your pair of peer interfaces, simply throw _both_ of them into containers, and configure them as classic point-to-point links. The two containers will then be able to communicate directly (provided you manage to tell each container the other's IP address, of course). You might adjust the instructions of the previous section to go something like this:
```
# Start up two containers in two terminal windows
$ docker run -i -t --rm --net=none base /bin/bash
root@1f1f4c1f931a:/#
$ docker run -i -t --rm --net=none base /bin/bash
root@12e343489d2f:/#
# Learn the container process IDs
# and create their namespace entries
$ docker inspect -f '{{.State.Pid}}' 1f1f4c1f931a
2989
$ docker inspect -f '{{.State.Pid}}' 12e343489d2f
3004
$ sudo mkdir -p /var/run/netns
$ sudo ln -s /proc/2989/ns/net /var/run/netns/2989
$ sudo ln -s /proc/3004/ns/net /var/run/netns/3004
# Create the "peer" interfaces and hand them out
$ sudo ip link add A type veth peer name B
$ sudo ip link set A netns 2989
$ sudo ip netns exec 2989 ip addr add 10.1.1.1/32 dev A
$ sudo ip netns exec 2989 ip link set A up
$ sudo ip netns exec 2989 ip route add 10.1.1.2/32 dev A
$ sudo ip link set B netns 3004
$ sudo ip netns exec 3004 ip addr add 10.1.1.2/32 dev B
$ sudo ip netns exec 3004 ip link set B up
$ sudo ip netns exec 3004 ip route add 10.1.1.1/32 dev B
```
The two containers should now be able to ping each other and make connections successfully. Point-to-point links like this do not depend on a subnet nor a netmask, but on the bare assertion made by `ip route` that some other single IP address is connected to a particular network interface.
Note that point-to-point links can be safely combined with other kinds of network connectivity -- there is no need to start the containers with `--net=none` if you want point-to-point links to be an addition to the container's normal networking instead of a replacement.
A final permutation of this pattern is to create the point-to-point link between the Docker host and one container, which would allow the host to communicate with that one container on some single IP address and thus communicate "out-of-band" of the bridge that connects the other, more usual containers. But unless you have very specific networking needs that drive you to such a solution, it is probably far preferable to use `--icc=false` to lock down inter-container communication, as we explored earlier.


@ -0,0 +1,480 @@
<!--[metadata]>
+++
title = "Docker container networking"
description = "How do we connect docker containers within and across hosts ?"
keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
[menu.main]
parent = "smn_networking"
weight = -5
+++
<![end-metadata]-->
# Understand Docker container networks
To build web applications that act in concert but do so securely, use the Docker
networks feature. Networks, by definition, provide complete isolation for
containers. So, it is important to have control over the networks your
applications run on. Docker container networks give you that control.
This section provides an overview of the default networking behavior that Docker
Engine delivers natively. It describes the type of networks created by default
and how to create your own, user-defined networks. It also describes the
resources required to create networks on a single host or across a cluster of
hosts.
## Default Networks
When you install Docker, it creates three networks automatically. You can list
these networks using the `docker network ls` command:
```
$ docker network ls
NETWORK ID NAME DRIVER
7fca4eb8c647 bridge bridge
9f904ee27bf5 none null
cf03ee007fb4 host host
```
Historically, these three networks are part of Docker's implementation. When
you run a container you can use the `--net` flag to specify which network you
want to run a container on. These three networks are still available to you.
The `bridge` network represents the `docker0` network present in all Docker
installations. Unless you specify otherwise with the `docker run
--net=<NETWORK>` option, the Docker daemon connects containers to this network
by default. You can see this bridge as part of a host's network stack by using
the `ifconfig` command on the host.
```
ubuntu@ip-172-31-36-118:~$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:47:bc:3a:eb
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:47ff:febc:3aeb/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:17 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1100 (1.1 KB) TX bytes:648 (648.0 B)
```
The `none` network adds a container to a container-specific network stack. That container lacks a network interface. Attaching to such a container and looking at its stack you see this:
```
ubuntu@ip-172-31-36-118:~$ docker attach nonenetcontainer
/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
/ # ifconfig
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ #
```
>**Note**: You can detach from the container and leave it running with `CTRL-p CTRL-q`.
The `host` network adds a container on the host's network stack. You'll find the
network configuration inside the container is identical to the host.
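A quick sketch to see this for yourself:

```
# The container's view of the network is the host's own view
$ docker run --rm --net=host busybox ifconfig
```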
With the exception of the `bridge` network, you really don't need to
interact with these default networks. While you can list and inspect them, you
cannot remove them. They are required by your Docker installation. However, you
can add your own user-defined networks and these you can remove when you no
longer need them. Before you learn more about creating your own networks, it is
worth looking at the `default` network a bit.
### The default bridge network in detail
The default bridge network is present on all Docker hosts. The `docker network inspect` command returns information about it:
```
$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{
"Subnet": "172.17.0.1/16",
"Gateway": "172.17.0.1"
}
]
},
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "9001"
}
}
]
```
The Engine automatically creates a `Subnet` and `Gateway` for the network.
The `docker run` command automatically adds new containers to this network.
```
$ docker run -itd --name=container1 busybox
3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c
$ docker run -itd --name=container2 busybox
94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c
```
Inspecting the `bridge` network again after starting two containers shows both newly launched containers in the network. Their IDs show up under the `Containers` key in the output:
```
$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{
"Subnet": "172.17.0.1/16",
"Gateway": "172.17.0.1"
}
]
},
"Containers": {
"3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c": {
"EndpointID": "647c12443e91faf0fd508b6edfe59c30b642abb60dfab890b4bdccee38750bc1",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c": {
"EndpointID": "b047d090f446ac49747d3c37d63e4307be745876db7f0ceef7b311cbba615f48",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "9001"
}
}
]
```
The `docker network inspect` command above shows all the connected containers and their network resources on a given network. Containers in this default network are able to communicate with each other using IP addresses. Docker does not support automatic service discovery on the default bridge network. If you want to communicate with container names in this default bridge network, you must connect the containers via the legacy `docker run --link` option.
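For instance, a minimal sketch of name-based communication through a legacy link (the container names are assumptions for this example):

```
# Without the --link, the name "db" would not resolve on the default bridge
$ docker run -d --name db training/postgres
$ docker run --rm --link db:db busybox ping -c 1 db
```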
You can `attach` to a running `container` and investigate its configuration:
```
$ docker attach container1
/ # ifconfig
ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:16 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1296 (1.2 KiB) TX bytes:648 (648.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
```
Then use `ping` for about 3 seconds to test the connectivity of the containers on this `bridge` network.
```
/ # ping -w3 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.096 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.074 ms
--- 172.17.0.3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.074/0.083/0.096 ms
```
Finally, use the `cat` command to check the `container1` network configuration:
```
/ # cat /etc/hosts
172.17.0.2 3386a527aa08
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
```
To detach from `container1` and leave it running, use `CTRL-p CTRL-q`. Then, attach to `container2` and repeat these three commands.
```
$ docker attach container2
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:03
inet addr:172.17.0.3 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:15 errors:0 dropped:0 overruns:0 frame:0
TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1166 (1.1 KiB) TX bytes:1026 (1.0 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ # ping -w3 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.067 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms
--- 172.17.0.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.067/0.071/0.075 ms
/ # cat /etc/hosts
172.17.0.3 94447ca47985
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
```
The default `docker0` bridge network supports the use of port mapping and `docker run --link` to allow communications between containers in the `docker0` network. These techniques are cumbersome to set up and prone to error. While they are still available to you as techniques, it is better to avoid them and define your own bridge networks instead.
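For reference, a minimal sketch of the port-mapping technique mentioned above (the image and host port are illustrative):

```
$ docker run -d -p 8080:80 --name=webserver nginx
```

This publishes container port 80 on host port 8080, so anything that can reach the host's address can reach the container.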
## User-defined networks
You can create your own user-defined networks that better isolate containers.
Docker provides some default **network drivers** for creating these
networks. You can create a new **bridge network** or **overlay network**. You
can also create a **network plugin** or **remote network** written to your own
specifications.
You can create multiple networks. You can add containers to more than one
network. Containers can only communicate within networks but not across
networks. A container attached to two networks can communicate with member
containers in either network.
The next few sections describe each of Docker's built-in network drivers in
greater detail.
### A bridge network
The easiest user-defined network to create is a `bridge` network. This network
is similar to the historical, default `docker0` network. There are some added
features and some old features that aren't available.
```
$ docker network create --driver bridge isolated_nw
c5ee82f76de30319c75554a57164c682e7372d2c694fec41e42ac3b77e570f6b
$ docker network inspect isolated_nw
[
{
"Name": "isolated_nw",
"Id": "c5ee82f76de30319c75554a57164c682e7372d2c694fec41e42ac3b77e570f6b",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{}
]
},
"Containers": {},
"Options": {}
}
]
$ docker network ls
NETWORK ID NAME DRIVER
9f904ee27bf5 none null
cf03ee007fb4 host host
7fca4eb8c647 bridge bridge
c5ee82f76de3 isolated_nw bridge
```
After you create the network, you can launch containers on it using the `docker run --net=<NETWORK>` option.
```
$ docker run --net=isolated_nw -itd --name=container3 busybox
885b7b4f792bae534416c95caa35ba272f201fa181e18e59beba0c80d7d77c1d
$ docker network inspect isolated_nw
[
{
"Name": "isolated_nw",
"Id": "c5ee82f76de30319c75554a57164c682e7372d2c694fec41e42ac3b77e570f6b",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{}
]
},
"Containers": {
"885b7b4f792bae534416c95caa35ba272f201fa181e18e59beba0c80d7d77c1d": {
"EndpointID": "514e1b419074397ea92bcfaa6698d17feb62db49d1320a27393b853ec65319c3",
"MacAddress": "02:42:ac:15:00:02",
"IPv4Address": "172.21.0.2/16",
"IPv6Address": ""
}
},
"Options": {}
}
]
```
The containers you launch into this network must reside on the same Docker host.
Each container in the network can immediately communicate with other containers
in the network. However, the network itself isolates the containers from external
networks.
![An isolated network](images/bridge_network.png)
Within a user-defined bridge network, linking is not supported. You can
expose and publish container ports on containers in this network. This is useful
if you want to make a portion of the `bridge` network available to an outside
network.
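For example, a sketch of publishing a port from a container in `isolated_nw` (the image and port numbers are illustrative):

```
$ docker run --net=isolated_nw -itd -p 8080:80 --name=web1 nginx
```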
![Bridge network](images/network_access.png)
A bridge network is useful in cases where you want to run a relatively small
network on a single host. You can, however, create significantly larger networks
by creating an `overlay` network.
### An overlay network
Docker's `overlay` network driver supports multi-host networking natively
out-of-the-box. This support is accomplished with the help of `libnetwork`, a
built-in VXLAN-based overlay network driver, and Docker's `libkv` library.
The `overlay` network requires a valid key-value store service. Currently,
Docker supports Consul, Etcd, and Zookeeper (Distributed store). Before
creating a network you must install and configure your chosen key-value store
service. The Docker hosts that you intend to network and the service must be
able to communicate.
![Key-value store](images/key_value.png)
Each host in the network must run a Docker Engine instance. The easiest way to
provision the hosts is with Docker Machine.
![Engine on each host](images/engine_on_net.png)
Once you have several machines provisioned, you can use Docker Swarm to quickly
form them into a swarm which includes a discovery service as well.
To create an overlay network, you configure options on the `daemon` on each
Docker Engine for use with the `overlay` network. There are two options to set:
| Option | Description |
|----------------------------------|-----------------------------------------------------------|
| `--cluster-store=PROVIDER://URL` | Describes the location of the KV service. |
| `--cluster-advertise=HOST_IP` | Advertises containers created by the HOST on the network. |
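For example, a daemon started by hand with these options might look like this sketch (the Consul address and interface name are illustrative):

    $ docker daemon \
        --cluster-store=consul://192.168.99.103:8500 \
        --cluster-advertise=eth1:2376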
Create an `overlay` network on one of the machines in the Swarm.
$ docker network create --driver overlay my-multi-host-network
This results in a single network spanning multiple hosts. An `overlay` network
provides complete isolation for the containers.
![An overlay network](images/overlay_network.png)
Then, on each host, launch containers making sure to specify the network name.
    $ docker run -itd --net=my-multi-host-network busybox
Once connected, each container has access to all the containers in the network
regardless of which Docker host the container was launched on.
![Published port](images/overlay-network-final.png)
If you would like to try this for yourself, see the [Getting started for
overlay](get-started-overlay.md).
### Custom network plugin
If you like, you can write your own network driver plugin. A network
driver plugin makes use of Docker's plugin infrastructure. In this
infrastructure, a plugin is a process running on the same Docker host as the
Docker `daemon`.
Network plugins follow the same restrictions and installation rules as other
plugins. All plugins make use of the plugin API. They have a lifecycle that
encompasses installation, starting, stopping and activation.
Once you have created and installed a custom network driver, you use it like the
built-in network drivers. For example:
$ docker network create --driver weave mynet
You can inspect it, add containers to and remove them from it, and so forth. Of course,
different plugins may make use of different technologies or frameworks. Custom
networks can include features not present in Docker's default networks. For more
information on writing plugins, see [Extending Docker](../../extend) and
[Writing a network driver plugin](../../extend/plugins_network.md).
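Assuming such a plugin is installed, a sketch of working with the resulting network (the network and container names are illustrative):

    $ docker network inspect mynet
    $ docker run --net=mynet -itd --name=container6 busybox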
## Legacy links
Before the Docker network feature, you could use the Docker link feature to
allow containers to discover each other and securely transfer information about
one container to another container. With the introduction of Docker networks,
you can still create links but they are only supported on the default `bridge`
network named `bridge` and appearing in your network stack as `docker0`.
While links are still supported in this limited capacity, you should avoid them
in favor of Docker networks. The link feature is expected to be deprecated
and removed in a future release.
## Related information
- [Work with network commands](work-with-networks.md)
- [Get started with multi-host networking](get-started-overlay.md)
- [Managing Data in Containers](../dockervolumes.md)
- [Docker Machine overview](https://docs.docker.com/machine)
- [Docker Swarm overview](https://docs.docker.com/swarm)
- [Investigate the LibNetwork project](https://github.com/docker/libnetwork/blob/master)

View file

@ -0,0 +1,287 @@
<!--[metadata]>
+++
title = "Get started with multi-host networking"
description = "Use overlay for multi-host networking"
keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
[menu.main]
parent = "smn_networking"
weight=-3
+++
<![end-metadata]-->
# Get started with multi-host networking
This article uses an example to explain the basics of creating a multi-host
network. Docker Engine supports multi-host networking out-of-the-box through the
`overlay` network driver. Unlike `bridge` networks, `overlay` networks require
some pre-existing conditions before you can create one. These conditions are:
* A host with a 3.16 kernel version or higher.
* Access to a key-value store. Docker supports Consul, Etcd, and Zookeeper (Distributed store) key-value stores.
* A cluster of hosts with connectivity to the key-value store.
* A properly configured Engine `daemon` on each host in the cluster.
You'll use Docker Machine to create both the key-value store server and the
host cluster. This example creates a Swarm cluster.
## Prerequisites
Before you begin, make sure you have a system on your network with the latest
version of Docker Engine and Docker Machine installed. The example also relies
on VirtualBox. If you installed on a Mac or Windows with Docker Toolbox, you
have all of these installed already.
If you have not already done so, make sure you upgrade Docker Engine and Docker
Machine to the latest versions.
## Step 1: Set up a key-value store
An overlay network requires a key-value store. The key-value store holds
information about the network state, including discovery, networks, endpoints,
IP addresses, and more. Docker supports Consul, Etcd, and Zookeeper (Distributed
store) key-value stores. This example uses Consul.
1. Log into a system prepared with the prerequisite Docker Engine, Docker Machine, and VirtualBox software.
2. Provision a VirtualBox machine called `mh-keystore`.
        $ docker-machine create -d virtualbox mh-keystore
When you provision a new machine, the process adds Docker Engine to the
host. This means rather than installing Consul manually, you can create an
instance using the [consul image from Docker
Hub](https://hub.docker.com/r/progrium/consul/). You'll do this in the next step.
3. Start a `progrium/consul` container running on the `mh-keystore` machine.
$ docker $(docker-machine config mh-keystore) run -d \
-p "8500:8500" \
-h "consul" \
progrium/consul -server -bootstrap
    You passed the `docker run` command the connection configuration using a bash
    expansion `$(docker-machine config mh-keystore)`. The client started a
    `progrium/consul` container running in the `mh-keystore` machine. The server is called `consul` and is listening on port `8500`.
4. Set your local environment to the `mh-keystore` machine.
$ eval "$(docker-machine env mh-keystore)"
5. Run the `docker ps` command to see the `consul` container.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4d51392253b3 progrium/consul "/bin/start -server -" 25 minutes ago Up 25 minutes 53/tcp, 53/udp, 8300-8302/tcp, 0.0.0.0:8500->8500/tcp, 8400/tcp, 8301-8302/udp admiring_panini
    Keep your terminal open and move on to the next step.
## Step 2: Create a Swarm cluster
In this step, you use `docker-machine` to provision the hosts for your network.
At this point, you won't actually create the network yet. You'll create several
machines in VirtualBox. One of the machines will act as the Swarm master;
you'll create that first. As you create each host, you'll pass the Engine on
that machine options that are needed by the `overlay` network driver.
1. Create a Swarm master.
        $ docker-machine create \
        -d virtualbox \
        --swarm --swarm-image="swarm" --swarm-master \
        --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
        --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
        --engine-opt="cluster-advertise=eth1:2376" \
        mhs-demo0
    At creation time, you supply the Engine `daemon` with the `--cluster-store` option. This option tells the Engine the location of the key-value store for the `overlay` network. The bash expansion `$(docker-machine ip mh-keystore)` resolves to the IP address of the Consul server you created in "STEP 1". The `--cluster-advertise` option advertises the machine on the network.
2. Create another host and add it to the Swarm cluster.
        $ docker-machine create -d virtualbox \
--swarm --swarm-image="swarm:1.0.0-rc2" \
--swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
--engine-opt="cluster-advertise=eth1:2376" \
mhs-demo1
3. List your machines to confirm they are all up and running.
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default VirtualBox Running tcp://192.168.99.100:2376
mh-keystore VirtualBox Running tcp://192.168.99.103:2376
mhs-demo0 VirtualBox Running tcp://192.168.99.104:2376 mhs-demo0 (master)
mhs-demo1 VirtualBox Running tcp://192.168.99.105:2376 mhs-demo0
At this point you have a set of hosts running on your network. You are ready to create a multi-host network for containers using these hosts.
Leave your terminal open and go on to the next step.
## Step 3: Create the overlay network

To create an overlay network:
1. Set your docker environment to the Swarm master.
$ eval $(docker-machine --swarm env mhs-demo0)
Using the `--swarm` flag with `docker-machine` restricts the `docker` commands to Swarm information alone.
2. Use the `docker info` command to view the Swarm.
$ docker info
Containers: 3
Images: 2
Role: primary
Strategy: spread
Filters: affinity, health, constraint, port, dependency
Nodes: 2
mhs-demo0: 192.168.99.104:2376
└ Containers: 2
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0-rc1 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=VirtualBox, storagedriver=aufs
mhs-demo1: 192.168.99.105:2376
└ Containers: 1
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0-rc1 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=VirtualBox, storagedriver=aufs
CPUs: 2
Total Memory: 2.043 GiB
Name: 30438ece0915
    From this information, you can see that you are running three containers and two images on the Master.
3. Create your `overlay` network.
$ docker network create --driver overlay my-net
You only need to create the network on a single host in the cluster. In this case, you used the Swarm master but you could easily have run it on any host in the cluster.
4. Check that the network is running:
$ docker network ls
NETWORK ID NAME DRIVER
412c2496d0eb mhs-demo1/host host
dd51763e6dd2 mhs-demo0/bridge bridge
6b07d0be843f my-net overlay
b4234109bd9b mhs-demo0/none null
1aeead6dd890 mhs-demo0/host host
d0bb78cbe7bd mhs-demo1/bridge bridge
1c0eb8f69ebb mhs-demo1/none null
    Because you are in the Swarm master environment, you see all the networks on all Swarm agents: the default networks on each Engine as well as the single overlay network. Notice that each `NETWORK ID` is unique.
5. Switch to each Swarm agent in turn and list the network.
$ eval $(docker-machine env mhs-demo0)
$ docker network ls
NETWORK ID NAME DRIVER
6b07d0be843f my-net overlay
dd51763e6dd2 bridge bridge
b4234109bd9b none null
1aeead6dd890 host host
$ eval $(docker-machine env mhs-demo1)
$ docker network ls
NETWORK ID NAME DRIVER
d0bb78cbe7bd bridge bridge
1c0eb8f69ebb none null
412c2496d0eb host host
6b07d0be843f my-net overlay
    Both agents report the `my-net` network with the same `6b07d0be843f` ID. You now have a multi-host container network running!
## Step 4: Run an application on your network
Once your network is created, you can start a container on any of the hosts and it is automatically part of the network.
1. Point your environment to your `mhs-demo0` instance.
$ eval $(docker-machine env mhs-demo0)
2. Start an Nginx server on `mhs-demo0`.
$ docker run -itd --name=web --net=my-net --env="constraint:node==mhs-demo0" nginx
This command starts a web server on the Swarm master.
3. Point your Machine environment to `mhs-demo1`.
$ eval $(docker-machine env mhs-demo1)
4. Run a Busybox instance and get the contents of the Nginx server's home page.
$ docker run -it --rm --net=my-net --env="constraint:node==mhs-demo1" busybox wget -O- http://web
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
ab2b8a86ca6c: Pull complete
2c5ac3f849df: Pull complete
Digest: sha256:5551dbdfc48d66734d0f01cafee0952cb6e8eeecd1e2492240bf2fd9640c2279
Status: Downloaded newer image for busybox:latest
Connecting to web (10.0.0.2:80)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
- 100% |*******************************| 612 0:00:00 ETA
## Step 5: Extra Credit with Docker Compose
You can try starting a second network on your existing Swarm cluster using Docker Compose.
1. Log into the Swarm master.
2. Install Docker Compose.
3. Create a `docker-compose.yml` file.
4. Add the following content to the file.
        web:
          image: bfirsh/compose-mongodb-demo
          environment:
            - "MONGO_HOST=counter_mongo_1"
            - "constraint:node==mhs-demo0"
          ports:
            - "80:5000"
        mongo:
          image: mongo
5. Save and close the file.
6. Start the application with Compose.
        $ docker-compose --x-networking up -d
## Related information
* [Docker Swarm overview](https://docs.docker.com/swarm)
* [Docker Machine overview](https://docs.docker.com/machine)

File diff suppressed because one or more lines are too long

Binary file not shown.

After

Width:  |  Height:  |  Size: 16 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 23 KiB

File diff suppressed because one or more lines are too long

Binary file not shown.

After

Width:  |  Height:  |  Size: 14 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 36 KiB

File diff suppressed because one or more lines are too long

Binary file not shown.

After

Width:  |  Height:  |  Size: 13 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 26 KiB

File diff suppressed because one or more lines are too long

Binary file not shown.

After

Width:  |  Height:  |  Size: 30 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 43 KiB

File diff suppressed because one or more lines are too long

Binary file not shown.

After

Width:  |  Height:  |  Size: 27 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 53 KiB

File diff suppressed because one or more lines are too long

Binary file not shown.

After

Width:  |  Height:  |  Size: 23 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 44 KiB

File diff suppressed because one or more lines are too long

Binary file not shown.

After

Width:  |  Height:  |  Size: 18 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 19 KiB

View file

@ -0,0 +1,21 @@
<!--[metadata]>
+++
title = "Network configuration"
description = "Docker networking feature is introduced"
keywords = ["network, networking, bridge, docker, documentation"]
[menu.main]
identifier="smn_networking"
parent= "mn_use_docker"
weight=7
+++
<![end-metadata]-->
# Docker networks feature overview
This section explains how to use the Docker networks feature. This feature allows users to define their own networks and connect containers to them. Using this feature you can create a network on a single host or a network that spans across multiple hosts.
- [Understand Docker container networks](dockernetworks.md)
- [Work with network commands](work-with-networks.md)
- [Get started with multi-host networking](get-started-overlay.md)
If you are already familiar with Docker's default bridge network, `docker0`, that network continues to be supported. It is created automatically in every installation. The default bridge network is also named `bridge`. To see a list of topics related to that network, read the articles listed in the [Docker default bridge network](default_network/index.md).

View file

@ -0,0 +1,463 @@
<!--[metadata]>
+++
title = "Work with network commands"
description = "How to work with docker networks"
keywords = ["commands, Usage, network, docker, cluster"]
[menu.main]
parent = "smn_networking"
weight=-4
+++
<![end-metadata]-->
# Work with network commands
This article provides examples of the network subcommands you can use to interact with Docker networks and the containers in them. The commands are available through the Docker Engine CLI. These commands are:
* `docker network create`
* `docker network connect`
* `docker network ls`
* `docker network rm`
* `docker network disconnect`
* `docker network inspect`
While not required, it is a good idea to read [Understand Docker container
networks](dockernetworks.md) before trying the examples in this section. The
examples here rely on a `bridge` network so that you can try them
immediately. If you would prefer to experiment with an `overlay` network see
the [Getting started with multi-host networks](get-started-overlay.md) instead.
## Create networks
Docker Engine creates a `bridge` network automatically when you install Engine.
This network corresponds to the `docker0` bridge that Engine has traditionally
relied on. In addition to this network, you can create your own `bridge` or `overlay` network.
A `bridge` network resides on a single host running an instance of Docker Engine. An `overlay` network can span multiple hosts running their own engines. If you run `docker network create` and supply only a network name, it creates a bridge network for you.
```bash
$ docker network create simple-network
de792b8258895cf5dc3b43835e9d61a9803500b991654dacb1f4f0546b1c88f8
$ docker network inspect simple-network
[
{
"Name": "simple-network",
"Id": "de792b8258895cf5dc3b43835e9d61a9803500b991654dacb1f4f0546b1c88f8",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{}
]
},
"Containers": {},
"Options": {}
}
]
```
Unlike `bridge` networks, `overlay` networks require some pre-existing conditions
before you can create one. These conditions are:
* Access to a key-value store. Engine supports Consul, Etcd, and Zookeeper (Distributed store) key-value stores.
* A cluster of hosts with connectivity to the key-value store.
* A properly configured Engine `daemon` on each host in the swarm.
The `docker daemon` options that support the `overlay` network are:
* `--cluster-store`
* `--cluster-store-opt`
* `--cluster-advertise`
It is also a good idea, though not required, that you install Docker Swarm
to manage the cluster. Swarm provides sophisticated discovery and server
management that can assist your implementation.
When you create a network, Engine creates a non-overlapping subnetwork for the
network by default. You can override this default and specify a subnetwork
directly using the `--subnet` option. On a `bridge` network you can only
create a single subnet. An `overlay` network supports multiple subnets.

In addition to the `--subnet` option, you can also specify the `--gateway`, `--ip-range`, and `--aux-address` options.
```bash
$ docker network create -d overlay \
  --subnet=192.168.0.0/16 --subnet=192.170.0.0/16 \
  --gateway=192.168.0.100 --gateway=192.170.0.100 \
  --ip-range=192.168.1.0/24 \
  --aux-address a=192.168.1.5 --aux-address b=192.168.1.6 \
  --aux-address a=192.170.1.5 --aux-address b=192.170.1.6 \
  my-multihost-network
```
Be sure that your subnetworks do not overlap. If they do, the network creation fails and Engine returns an error.
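For example, a sketch of a `bridge` network created with an explicit, non-overlapping subnet (the address range, gateway, and name are illustrative):

```bash
$ docker network create -d bridge \
  --subnet=172.25.0.0/16 \
  --gateway=172.25.0.1 \
  my-subnet-network
```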
## Connect containers
You can connect containers dynamically to one or more networks. These networks
can be backed by the same or different network drivers. Once connected, the
containers can communicate using another container's IP address or name.
For `overlay` networks or custom plugins that support multi-host
connectivity, containers connected to the same multi-host network but launched
from different hosts can also communicate in this way.
Create two containers for this example:
```bash
$ docker run -itd --name=container1 busybox
18c062ef45ac0c026ee48a83afa39d25635ee5f02b58de4abc8f467bcaa28731
$ docker run -itd --name=container2 busybox
498eaaaf328e1018042c04b2de04036fc04719a6e39a097a4f4866043a2c2152
```
Then create an isolated `bridge` network to test with.
```bash
$ docker network create -d bridge isolated_nw
f836c8deb6282ee614eade9d2f42d590e603d0b1efa0d99bd88b88c503e6ba7a
```
Connect `container2` to the network and then `inspect` the network to verify the connection:
```
$ docker network connect isolated_nw container2
$ docker network inspect isolated_nw
[
{
"Name": "isolated_nw",
"Id": "f836c8deb6282ee614eade9d2f42d590e603d0b1efa0d99bd88b88c503e6ba7a",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{}
]
},
"Containers": {
"498eaaaf328e1018042c04b2de04036fc04719a6e39a097a4f4866043a2c2152": {
"EndpointID": "0e24479cfaafb029104999b4e120858a07b19b1b6d956ae56811033e45d68ad9",
"MacAddress": "02:42:ac:15:00:02",
"IPv4Address": "172.21.0.2/16",
"IPv6Address": ""
}
},
"Options": {}
}
]
```
You can see that the Engine automatically assigns an IP address to `container2`.
If you had specified a `--subnet` when creating your network, the network
would have used that addressing. Now, start a third container and connect it to
the network on launch using the `docker run` command's `--net` option:
```bash
$ docker run --net=isolated_nw -itd --name=container3 busybox
c282ca437ee7e926a7303a64fc04109740208d2c20e442366139322211a6481c
```
Now, inspect the network resources used by `container3`.
```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container3
{"isolated_nw":{"EndpointID":"e5d077f9712a69c6929fdd890df5e7c1c649771a50df5b422f7e68f0ae61e847","Gateway":"172.21.0.1","IPAddress":"172.21.0.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:15:00:03"}}
```
Repeat this command for `container2`. If you have Python installed, you can pretty print the output.
```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool
{
"bridge": {
"EndpointID": "281b5ead415cf48a6a84fd1a6504342c76e9091fe09b4fdbcc4a01c30b0d3c5b",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:03"
},
"isolated_nw": {
"EndpointID": "0e24479cfaafb029104999b4e120858a07b19b1b6d956ae56811033e45d68ad9",
"Gateway": "172.21.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.21.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:15:00:02"
}
}
```
You should find `container2` belongs to two networks: the `bridge` network,
which it joined by default when you launched it, and the `isolated_nw` network,
which you connected it to later.
![](images/working.png)
In the case of `container3`, you connected it through `docker run` to the
`isolated_nw` so that container is not connected to `bridge`.
Use the `docker attach` command to connect to the running `container2` and
examine its networking stack:
```bash
$ docker attach container2
```
If you look at the container's network stack you should see two Ethernet interfaces, one for the default bridge network and one for the `isolated_nw` network.
```bash
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:03
inet addr:172.17.0.3 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:648 (648.0 B) TX bytes:648 (648.0 B)
eth1 Link encap:Ethernet HWaddr 02:42:AC:15:00:02
inet addr:172.21.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe15:2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:648 (648.0 B) TX bytes:648 (648.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
```
Display the container's `/etc/hosts` file:
```bash
/ # cat /etc/hosts
172.17.0.3 498eaaaf328e
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.21.0.3 container3
172.21.0.3 container3.isolated_nw
```
On the user-defined `isolated_nw` network, the Docker network feature updated the `/etc/hosts` file with the proper name resolution. Inside of `container2` it is possible to ping `container3` by name.
```bash
/ # ping -w 4 container3
PING container3 (172.21.0.3): 56 data bytes
64 bytes from 172.21.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.21.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.21.0.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.21.0.3: seq=3 ttl=64 time=0.097 ms
--- container3 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```
This isn't the case for the default bridge network. Both `container2` and `container1` are connected to the default bridge network. Docker does not support automatic service discovery on this network. For this reason, pinging `container1` by name fails as you would expect based on the `/etc/hosts` file:
```bash
/ # ping -w 4 container1
ping: bad address 'container1'
```
A ping using the `container1` IP address does succeed though:
```bash
/ # ping -w 4 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms
64 bytes from 172.17.0.2: seq=3 ttl=64 time=0.101 ms
--- 172.17.0.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.085/0.101 ms
```
If you wanted, you could connect `container1` to `container2` with the `docker
run --link` command and that would enable the two containers to interact by name
as well as IP.
Detach from `container2` and leave it running using `CTRL-p CTRL-q`.
In this example, `container2` is attached to both networks and so can talk to
`container1` and `container3`. But `container3` and `container1` are not in the
same network and cannot communicate. Test this now by attaching to
`container3` and attempting to ping `container1` by IP address.
```bash
$ docker attach container3
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
^C
--- 172.17.0.2 ping statistics ---
10 packets transmitted, 0 packets received, 100% packet loss
```
To connect a container to a network, the container must be running. If you stop
a container and inspect a network it belongs to, you won't see that container.
The `docker network inspect` command only shows running containers.
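For example, a sketch of this behavior using the containers from the steps above:

```bash
$ docker stop container3
$ docker network inspect isolated_nw
$ docker start container3
```

While `container3` is stopped, it no longer appears in the `Containers` section of the output.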
## Disconnecting containers
You can disconnect a container from a network using the `docker network
disconnect` command.
```
$ docker network disconnect isolated_nw container2
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool
{
"bridge": {
"EndpointID": "9e4575f7f61c0f9d69317b7a4b92eefc133347836dd83ef65deffa16b9985dc0",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:03"
}
}
$ docker network inspect isolated_nw
[
{
"Name": "isolated_nw",
"Id": "f836c8deb6282ee614eade9d2f42d590e603d0b1efa0d99bd88b88c503e6ba7a",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{}
]
},
"Containers": {
"c282ca437ee7e926a7303a64fc04109740208d2c20e442366139322211a6481c": {
"EndpointID": "e5d077f9712a69c6929fdd890df5e7c1c649771a50df5b422f7e68f0ae61e847",
"MacAddress": "02:42:ac:15:00:03",
"IPv4Address": "172.21.0.3/16",
"IPv6Address": ""
}
},
"Options": {}
}
]
```
Once a container is disconnected from a network, it cannot communicate with
other containers connected to that network. In this example, `container2` can no longer talk to `container3` on the `isolated_nw` network.
```
$ docker attach container2
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:03
inet addr:172.17.0.3 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:648 (648.0 B) TX bytes:648 (648.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
/ # ping container3
PING container3 (172.20.0.1): 56 data bytes
^C
--- container3 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
```
`container2` still has full connectivity to the `bridge` network:
```bash
/ # ping container1
PING container1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.119 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.174 ms
^C
--- container1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.119/0.146/0.174 ms
/ #
```
## Remove a network
When all the containers in a network are stopped or disconnected, you can remove a network.
```bash
$ docker network disconnect isolated_nw container3
```
```bash
$ docker network inspect isolated_nw
[
{
"Name": "isolated_nw",
"Id": "f836c8deb6282ee614eade9d2f42d590e603d0b1efa0d99bd88b88c503e6ba7a",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{}
]
},
"Containers": {},
"Options": {}
}
]
$ docker network rm isolated_nw
```
List all your networks to verify the `isolated_nw` was removed:
```
$ docker network ls
NETWORK ID NAME DRIVER
72314fa53006 host host
f7ab26d71dbd bridge bridge
0f32e83e61ac none null
```
## Related information
* [network create](../../reference/commandline/network_create.md)
* [network inspect](../../reference/commandline/network_inspect.md)
* [network connect](../../reference/commandline/network_connect.md)
* [network disconnect](../../reference/commandline/network_disconnect.md)
* [network ls](../../reference/commandline/network_ls.md)
* [network rm](../../reference/commandline/network_rm.md)

View file

@ -0,0 +1,240 @@
<!--[metadata]>
+++
title = "Networking containers"
description = "How to manage data inside your Docker containers."
keywords = ["Examples, Usage, volume, docker, documentation, user guide, data, volumes"]
[menu.main]
parent = "smn_containers"
weight = -3
+++
<![end-metadata]-->
# Networking containers
If you are working your way through the user guide, you just built and ran a
simple application. You've also built your own images. This section teaches
you how to network your containers.
## Name a container
You've already seen that each container you create has an automatically
created name; indeed you've become familiar with our old friend
`nostalgic_morse` during this guide. You can also name containers
yourself. This naming provides two useful functions:
* You can name containers that do specific functions in a way
that makes it easier for you to remember them, for example naming a
container containing a web application `web`.
* Names provide Docker with a reference point that allows it to refer to other
containers. There are several commands that support this and you'll use one in an exercise later.
You name your container by using the `--name` flag, for example, launch a new container called `web`:
$ docker run -d -P --name web training/webapp python app.py
Use the `docker ps` command to check the name:
$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aed84ee21bde training/webapp:latest python app.py 12 hours ago Up 2 seconds 0.0.0.0:49154->5000/tcp web
You can also use `docker inspect` with the container's name.
$ docker inspect web
[
{
"Id": "3ce51710b34f5d6da95e0a340d32aa2e6cf64857fb8cdb2a6c38f7c56f448143",
"Created": "2015-10-25T22:44:17.854367116Z",
"Path": "python",
"Args": [
"app.py"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
...
Container names must be unique. That means you can only call one container
`web`. If you want to re-use a container name you must delete the old container
(with `docker rm`) before you can reuse the name with a new container. Go ahead and stop and then remove your `web` container.
$ docker stop web
web
$ docker rm web
web
## Launch a container on the default network
Docker includes support for networking containers through the use of **network
drivers**. By default, Docker provides two network drivers for you, the
`bridge` and the `overlay` driver. You can also write a network driver plugin so
that you can create your own drivers but that is an advanced task.
Every installation of the Docker Engine automatically includes three default networks. You can list them:
$ docker network ls
NETWORK ID NAME DRIVER
18a2866682b8 none null
c288470c46f6 host host
7b369448dccb bridge bridge
The network named `bridge` is a special network. Unless you tell it otherwise, Docker always launches your containers in this network. Try this now:
$ docker run -itd --name=networktest ubuntu
74695c9cea6d9810718fddadc01a727a5dd3ce6a69d09752239736c030599741
Inspecting the network is an easy way to find out the container's IP address.
```bash
$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{
"Subnet": "172.17.0.1/16",
"Gateway": "172.17.0.1"
}
]
},
"Containers": {
"3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c": {
"EndpointID": "647c12443e91faf0fd508b6edfe59c30b642abb60dfab890b4bdccee38750bc1",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c": {
"EndpointID": "b047d090f446ac49747d3c37d63e4307be745876db7f0ceef7b311cbba615f48",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "9001"
}
}
]
```
You can remove a container from a network by disconnecting the container. To do this, you supply both the network name and the container name. You can also use the container ID. In this example, though, the name is faster.
$ docker network disconnect bridge networktest
While you can disconnect a container from a network, you cannot remove the built-in `bridge` network named `bridge`. Networks are natural ways to isolate containers from other containers or other networks. So, as you get more experienced with Docker, you'll want to create your own networks.
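For example, attempting to remove the default network fails (a sketch; the exact error text depends on your Engine version):

    $ docker network rm bridge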
## Create your own bridge network
Docker Engine natively supports both bridge networks and overlay networks. A bridge network is limited to a single host running Docker Engine. An overlay network can include multiple hosts and is a more advanced topic. For this example, you'll create a bridge network:
$ docker network create -d bridge my-bridge-network
The `-d` flag tells Docker to use the `bridge` driver for the new network. You could have left this flag off as `bridge` is the default value for this flag. Go ahead and list the networks on your machine:
$ docker network ls
NETWORK ID NAME DRIVER
7b369448dccb bridge bridge
615d565d498c my-bridge-network bridge
18a2866682b8 none null
c288470c46f6 host host
If you inspect the network, you'll find that it has nothing in it.
$ docker network inspect my-bridge-network
[
{
"Name": "my-bridge-network",
"Id": "5a8afc6364bccb199540e133e63adb76a557906dd9ff82b94183fc48c40857ac",
"Scope": "local",
"Driver": "bridge",
"IPAM": {
"Driver": "default",
"Config": [
{}
]
},
"Containers": {},
"Options": {}
}
]
## Add containers to a network
To build web applications that act in concert but do so securely, create a
network. Networks, by definition, provide complete isolation for containers. You
can add containers to a network when you first run a container.
Launch a container running a PostgreSQL database and pass it the `--net=my-bridge-network` flag to connect it to your new network:
$ docker run -d --net=my-bridge-network --name db training/postgres
If you inspect your `my-bridge-network` you'll see it has a container attached.
You can also inspect your container to see where it is connected:
$ docker inspect --format='{{json .NetworkSettings.Networks}}' db
{"bridge":{"EndpointID":"508b170d56b2ac9e4ef86694b0a76a22dd3df1983404f7321da5649645bf7043","Gateway":"172.17.0.1","IPAddress":"172.17.0.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:11:00:02"}}
Now, go ahead and start your by-now-familiar web application. This time leave off the `-P` flag and also don't specify a network.
$ docker run -d --name web training/webapp python app.py
Which network is your `web` application running under? Inspect the application and you'll find it is running in the default `bridge` network.
$ docker inspect --format='{{json .NetworkSettings.Networks}}' web
{"bridge":{"EndpointID":"508b170d56b2ac9e4ef86694b0a76a22dd3df1983404f7321da5649645bf7043","Gateway":"172.17.0.1","IPAddress":"172.17.0.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:11:00:02"}}
Then, get the IP address of your `web` container:
    $ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web
172.17.0.2
Now, open a shell to your running `db` container:
$ docker exec -it db bash
root@a205f0dd33b2:/# ping 172.17.0.2
ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
^C
--- 172.17.0.2 ping statistics ---
44 packets transmitted, 0 received, 100% packet loss, time 43185ms
After a bit, use CTRL-C to end the `ping` and you'll find the ping failed. That is because the two containers are running on different networks. You can fix that. Then, use CTRL-C to exit the container.
Docker networking allows you to attach a container to as many networks as you like. You can also attach an already running container. Go ahead and attach your running `web` app to the `my-bridge-network`.
    $ docker network connect my-bridge-network web
Open a shell into the `db` application again and try the ping command. This time just use the container name `web` rather than the IP address.
$ docker exec -it db bash
root@a205f0dd33b2:/# ping web
PING web (172.19.0.3) 56(84) bytes of data.
64 bytes from web (172.19.0.3): icmp_seq=1 ttl=64 time=0.095 ms
64 bytes from web (172.19.0.3): icmp_seq=2 ttl=64 time=0.060 ms
64 bytes from web (172.19.0.3): icmp_seq=3 ttl=64 time=0.066 ms
^C
--- web ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.060/0.073/0.095/0.018 ms
The `ping` shows it is contacting a different IP address, the address on the `my-bridge-network` which is different from its address on the `bridge` network.
## Next steps
Now that you know how to network containers, see [how to manage data in containers](dockervolumes.md).

View file

@ -1,41 +1,35 @@
<!--[metadata]>
+++
title = "Working with containers"
title = "Run a simple application"
description = "Learn how to manage and operate Docker containers."
keywords = ["docker, the docker guide, documentation, docker.io, monitoring containers, docker top, docker inspect, docker port, ports, docker logs, log, Logs"]
[menu.main]
parent="smn_containers"
weight=-5
+++
<![end-metadata]-->
# Working with containers
# Run a simple application
In the [last section of the Docker User Guide](dockerizing.md)
we launched our first containers. We launched containers using the
`docker run` command:
* Interactive container runs in the foreground.
* Daemonized container runs in the background.
In the process we learned about several Docker commands:
In the ["*Hello world in a container*"](dockerizing.md) you launched your
first containers using the `docker run` command. You ran an *interactive container* that ran in the foreground. You also ran a *detached container* that ran in the background. In the process you learned about several Docker commands:
* `docker ps` - Lists containers.
* `docker logs` - Shows us the standard output of a container.
* `docker stop` - Stops running containers.
> **Tip:**
> Another way to learn about `docker` commands is our
> [interactive tutorial](https://www.docker.com/tryit/).
## Learn about the Docker client
The `docker` client is pretty simple. Each action you can take
with Docker is a command and each command can take a series of
flags and arguments.
If you didn't realize it yet, you've been using the Docker client each time you
typed `docker` in your Bash terminal. The client is a simple command line client
also known as a command-line interface (CLI). Each action you can take with
the client is a command and each command can take a series of flags and arguments.
# Usage: [sudo] docker [command] [flags] [arguments] ..
# Usage: [sudo] docker [subcommand] [flags] [arguments] ..
# Example:
$ docker run -i -t ubuntu /bin/bash
Let's see this in action by using the `docker version` command to return
You can see this in action by using the `docker version` command to return
version information on the currently installed Docker client and daemon.
$ docker version
@ -43,7 +37,7 @@ version information on the currently installed Docker client and daemon.
This command will not only provide you the version of Docker client and
daemon you are using, but also the version of Go (the programming
language powering Docker).
Client:
Version: 1.8.1
API version: 1.20
@ -80,52 +74,52 @@ To see usage for a specific command, specify the command with the `--help` flag:
--no-stdin=false Do not attach stdin
--sig-proxy=true Proxy all received signals to the process
> **Note:**
> **Note:**
> For further details and examples of each command, see the
> [command reference](../reference/commandline/cli.md) in this guide.
## Running a web application in Docker
So now we've learnt a bit more about the `docker` client let's move onto
Now that you've learned a bit more about the `docker` client, you can move on to
the important stuff: running more containers. So far none of the
containers we've run did anything particularly useful, so let's
containers you've run did anything particularly useful, so you can
change that by running an example web application in Docker.
For our web application we're going to run a Python Flask application.
Let's start with a `docker run` command.
Start with a `docker run` command.
$ docker run -d -P training/webapp python app.py
Let's review what our command did. We've specified two flags: `-d` and
`-P`. We've already seen the `-d` flag which tells Docker to run the
Review what the command did. You've specified two flags: `-d` and
`-P`. You've already seen the `-d` flag which tells Docker to run the
container in the background. The `-P` flag is new and tells Docker to
map any required network ports inside our container to our host. This
lets us view our web application.
We've specified an image: `training/webapp`. This image is a
pre-built image we've created that contains a simple Python Flask web
You've specified an image: `training/webapp`. This image is a
pre-built image that contains a simple Python Flask web
application.
Lastly, we've specified a command for our container to run: `python app.py`. This launches our web application.
Lastly, you've specified a command for your container to run: `python app.py`. This launches your web application.
> **Note:**
> **Note:**
> You can see more detail on the `docker run` command in the [command
> reference](../reference/commandline/run.md) and the [Docker Run
> Reference](../reference/run.md).
## Viewing our web application container
Now let's see our running container using the `docker ps` command.
Now you can see your running container using the `docker ps` command.
$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bc533791f3f5 training/webapp:latest python app.py 5 seconds ago Up 2 seconds 0.0.0.0:49155->5000/tcp nostalgic_morse
You can see we've specified a new flag, `-l`, for the `docker ps`
Notice that you've specified a new flag, `-l`, for the `docker ps`
command. This tells the `docker ps` command to return the details of the
*last* container started.
> **Note:**
> **Note:**
> By default, the `docker ps` command only shows information about running
> containers. If you want to see stopped containers too use the `-a` flag.
@ -139,7 +133,7 @@ column.
When we passed the `-P` flag to the `docker run` command Docker mapped any
ports exposed in our image to our host.
> **Note:**
> **Note:**
> We'll learn more about how to expose ports in Docker images when
> [we learn how to build images](dockerimages.md).
@ -158,12 +152,13 @@ This would map port 5000 inside our container to port 80 on our local
host. You might be asking about now: why wouldn't we just want to always
use 1:1 port mappings in Docker containers rather than mapping to high
ports? Well 1:1 mappings have the constraint of only being able to map
one of each port on your local host. Let's say you want to test two
Python applications: both bound to port 5000 inside their own containers.
Without Docker's port mapping you could only access one at a time on the
Docker host.
one of each port on your local host.
So let's now browse to port 49155 in a web browser to
Suppose you want to test two Python applications: both bound to port 5000 inside
their own containers. Without Docker's port mapping you could only access one at
a time on the Docker host.
So you can now browse to port 49155 in a web browser to
see the application.
![Viewing the web application](webapp1.png).
@ -174,10 +169,10 @@ Our Python application is live!
> If you have been using a virtual machine on OS X, Windows or Linux,
> you'll need to get the IP of the virtual host instead of using localhost.
> You can do this by running the `docker-machine ip your_vm_name` from your command line or terminal application, for example:
>
>
> $ docker-machine ip my-docker-vm
> 192.168.99.100
>
>
> In this case you'd browse to `http://192.168.99.100:49155` for the above example.
## A network port shortcut
@ -190,20 +185,20 @@ corresponding public-facing port.
$ docker port nostalgic_morse 5000
0.0.0.0:49155
In this case we've looked up what port is mapped externally to port 5000 inside
In this case you've looked up what port is mapped externally to port 5000 inside
the container.
## Viewing the web application's logs
Let's also find out a bit more about what's happening with our application and
use another of the commands we've learnt, `docker logs`.
You can also find out a bit more about what's happening with your application and
use another of the commands you've learned, `docker logs`.
$ docker logs -f nostalgic_morse
* Running on http://0.0.0.0:5000/
10.0.2.2 - - [23/May/2014 20:16:31] "GET / HTTP/1.1" 200 -
10.0.2.2 - - [23/May/2014 20:16:31] "GET /favicon.ico HTTP/1.1" 404 -
This time though we've added a new flag, `-f`. This causes the `docker
This time though you've added a new flag, `-f`. This causes the `docker
logs` command to act like the `tail -f` command and watch the
container's standard out. We can see here the logs from Flask showing
the application running on port 5000 and the access log entries for it.
@ -228,7 +223,7 @@ configuration and status information for the specified container.
$ docker inspect nostalgic_morse
Let's see a sample of that JSON output.
You can see a sample of that JSON output.
[{
"ID": "bc533791f3f500b280a9626688bc79e342e3ea0d528efe3a86a51ecb28ea20",
@ -246,12 +241,12 @@ Let's see a sample of that JSON output.
We can also narrow down the information we want to return by requesting a
specific element, for example to return the container's IP address we would:
$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' nostalgic_morse
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nostalgic_morse
172.17.0.5
## Stopping our web application container
Okay we've seen web application working. Now let's stop it using the
Okay, you've seen the web application working. Now you can stop it using the
`docker stop` command and the name of our container: `nostalgic_morse`.
$ docker stop nostalgic_morse
@ -266,8 +261,8 @@ been stopped.
Oops! Just after you stopped the container you get a call to say another
developer needs the container back. From here you have two choices: you
can create a new container or restart the old one. Let's look at
starting our previous container back up.
can create a new container or restart the old one. Try
starting your previous container back up.
$ docker start nostalgic_morse
nostalgic_morse
@ -276,21 +271,21 @@ Now quickly run `docker ps -l` again to see the running container is
back up or browse to the container's URL to see if the application
responds.
> **Note:**
> **Note:**
> Also available is the `docker restart` command that runs a stop and
> then start on the container.
## Removing our web application container
Your colleague has let you know that they've now finished with the container
and won't need it again. So let's remove it using the `docker rm` command.
and won't need it again. Now, you can remove it using the `docker rm` command.
$ docker rm nostalgic_morse
Error: Impossible to remove a running container, please stop it first or use -f
2014/05/24 08:12:56 Error: failed to remove one or more containers
What happened? We can't actually remove a running container. This protects
you from accidentally removing a running container you might need. Let's try
you from accidentally removing a running container you might need. You can try
this again by stopping the container first.
$ docker stop nostalgic_morse
@ -305,9 +300,7 @@ And now our container is stopped and deleted.
# Next steps
Until now we've only used images that we've downloaded from
[Docker Hub](https://hub.docker.com). Next, let's get introduced to
building and sharing our own images.
Until now you've only used images that you've downloaded from Docker Hub. Next,
you can get introduced to building and sharing your own images.
Go to [Working with Docker Images](dockerimages.md).

View file

@ -1,4 +1,4 @@
# Docker Experimental Features
# Docker Experimental Features
This page contains a list of features in the Docker engine which are
experimental. Experimental features are **not** ready for production. They are

View file

@ -16,7 +16,7 @@ The **docker attach** command allows you to attach to a running container using
the container's ID or name, either to view its ongoing output or to control it
interactively. You can attach to the same contained process multiple times
simultaneously, screen sharing style, or quickly view the progress of your
daemonized process.
detached process.
You can detach from the container (and leave it running) with `CTRL-p CTRL-q`
(for a quiet exit) or `CTRL-c` which will send a `SIGKILL` to the container.

View file

@ -81,8 +81,9 @@ format.
URL of the distributed storage backend
**--cluster-advertise**=""
Specifies the 'host:port' combination that this particular daemon instance should use when advertising
itself to the cluster. The daemon is reached by remote hosts on this 'host:port' combination.
Specifies the 'host:port' or `interface:port` combination that this particular
daemon instance should use when advertising itself to the cluster. The daemon
is reached through this value.
**--cluster-store-opt**=""
Specifies options for the Key/Value store.

View file

@ -194,7 +194,7 @@ To get information on a container use its ID or instance name:
To get the IP address of a container use:
$ docker inspect --format='{{.NetworkSettings.IPAddress}}' d2cc496561d6
    $ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' d2cc496561d6
172.17.0.2
## Listing all port bindings