Merge pull request #27022 from thaJeztah/docs-cherry-picks-1.12.2

Docs cherry picks 1.12.2
This commit is contained in:
Sebastiaan van Stijn 2016-09-29 17:58:12 +02:00 committed by GitHub
commit cd15b2b300
65 changed files with 2170 additions and 1562 deletions

View file

@ -71,8 +71,8 @@ This adds additional fields to the log depending on the driver, e.g. for
The following logging options are supported for the `json-file` logging driver:
```bash
--log-opt max-size=[0-9+][k|m|g]
--log-opt max-file=[0-9+]
--log-opt max-size=[0-9]+[kmg]
--log-opt max-file=[0-9]+
--log-opt labels=label1,label2
--log-opt env=env1,env2
```
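As a quick illustration (a minimal sketch; the image name and the size/count limits below are arbitrary examples, not values taken from this page), these options are combined on `docker run` like so:
```bash
# Rotate this container's JSON log: keep at most three files of up to 10 MB each.
$ docker run -d \
    --log-driver=json-file \
    --log-opt max-size=10m \
    --log-opt max-file=3 \
    nginx
```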

View file

@ -1,14 +1,10 @@
# sshd
#
# VERSION 0.0.2
FROM ubuntu:14.04
FROM ubuntu:16.04
MAINTAINER Sven Dowideit <SvenDowideit@docker.com>
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

View file

@ -16,48 +16,52 @@ The following `Dockerfile` sets up an SSHd service in a container that you
can use to connect to and inspect other containers' volumes, or to get
quick access to a test container.
# sshd
#
# VERSION 0.0.2
```Dockerfile
FROM ubuntu:16.04
MAINTAINER Sven Dowideit <SvenDowideit@docker.com>
FROM ubuntu:14.04
MAINTAINER Sven Dowideit <SvenDowideit@docker.com>
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```
Build the image using:
$ docker build -t eg_sshd .
```bash
$ docker build -t eg_sshd .
```
## Run a `test_sshd` container
Then run it. You can then use `docker port` to find out what host port
the container's port 22 is mapped to:
$ docker run -d -P --name test_sshd eg_sshd
$ docker port test_sshd 22
0.0.0.0:49154
```bash
$ docker run -d -P --name test_sshd eg_sshd
$ docker port test_sshd 22
0.0.0.0:49154
```
And now you can ssh as `root` on the container's IP address (you can find it
with `docker inspect`) or on port `49154` of the Docker daemon's host IP address
(`ip address` or `ifconfig` can tell you that) or `localhost` if on the
Docker daemon host:
$ ssh root@192.168.1.2 -p 49154
# The password is ``screencast``.
root@f38c87f2a42d:/#
```bash
$ ssh root@192.168.1.2 -p 49154
# The password is ``screencast``.
root@f38c87f2a42d:/#
```
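As an example of the `docker inspect` lookup mentioned above (a minimal sketch; the address shown is illustrative output for the default bridge network):
```bash
# Print only the container's IP address using a Go template filter.
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' test_sshd
172.17.0.2
```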
## Environment variables
@ -78,7 +82,8 @@ short script to do the same before you start `sshd -D` and then replace the
Finally, clean up after your test by stopping and removing the
container, and then removing the image.
$ docker stop test_sshd
$ docker rm test_sshd
$ docker rmi eg_sshd
```bash
$ docker stop test_sshd
$ docker rm test_sshd
$ docker rmi eg_sshd
```

View file

@ -151,7 +151,7 @@ drwxr-xr-x 3 root root 4096 Aug 8 17:56 cd851ce43a403
"Capabilities": [
"CAP_SYS_ADMIN"
],
"ManifestVersion": "v0.1",
"ManifestVersion": "v0",
"Description": "sshFS plugin for Docker",
"Documentation": "https://docs.docker.com/engine/extend/plugins/",
"Interface": {
@ -212,23 +212,23 @@ $ docker rmi rootfs
```
`manifest.json` describes the plugin and `plugin-config.json` contains some
runtime parameters. For example:
runtime parameters. [See the Plugins Manifest reference](manifest.md). For example:
```bash
# cat manifest.json
{
"manifestVersion": "v0.1",
"manifestVersion": "v0",
"description": "sshFS plugin for Docker",
"documentation": "https://docs.docker.com/engine/extend/plugins/",
"entrypoint": ["/go/bin/docker-volume-sshfs"],
"network": {
"type": "host"
},
"interface" : {
"types": ["docker.volumedriver/1.0"],
"socket": "sshfs.sock"
},
"capabilities": ["CAP_SYS_ADMIN"]
"interface" : {
"types": ["docker.volumedriver/1.0"],
"socket": "sshfs.sock"
},
"capabilities": ["CAP_SYS_ADMIN"]
}
```

View file

@ -70,6 +70,7 @@ Plugin
[NetApp Plugin](https://github.com/NetApp/netappdvp) (nDVP) | A volume plugin that provides direct integration with the Docker ecosystem for the NetApp storage portfolio. The nDVP package supports the provisioning and management of storage resources from the storage platform to Docker hosts, with a robust framework for adding additional platforms in the future.
[Netshare plugin](https://github.com/ContainX/docker-volume-netshare) | A volume plugin that provides volume management for NFS 3/4, AWS EFS and CIFS file systems.
[OpenStorage Plugin](https://github.com/libopenstorage/openstorage) | A cluster-aware volume plugin that provides volume management for file and block storage solutions. It implements a vendor neutral specification for implementing extensions such as CoS, encryption, and snapshots. It has example drivers based on FUSE, NFS, NBD and EBS to name a few.
[Portworx Volume Plugin](https://github.com/portworx/px-dev) | A volume plugin that turns any server into a scale-out converged compute/storage node, providing container granular storage and highly available volumes across any node, using a shared-nothing storage backend that works with any docker scheduler.
[Quobyte Volume Plugin](https://github.com/quobyte/docker-volume) | A volume plugin that connects Docker to [Quobyte](http://www.quobyte.com/containers)'s data center file system, a general-purpose scalable and fault-tolerant storage platform.
[REX-Ray plugin](https://github.com/emccode/rexray) | A volume plugin which is written in Go and provides advanced storage functionality for many platforms including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC.
[Virtuozzo Storage and Ploop plugin](https://github.com/virtuozzo/docker-volume-ploop) | A volume plugin with support for Virtuozzo Storage distributed cloud file system as well as ploop devices.

docs/extend/manifest.md Normal file
View file

@ -0,0 +1,222 @@
<!--[metadata]>
+++
aliases = [
"/engine/extend/"
]
title = "Plugin manifest"
description = "How develop and use a plugin with the managed plugin system"
keywords = ["API, Usage, plugins, documentation, developer"]
advisory = "experimental"
[menu.main]
parent = "engine_extend"
weight=1
+++
<![end-metadata]-->
# Plugin Manifest Version 0 of Plugin V2
This document outlines the format of the V0 plugin manifest. The plugin
manifest described herein was introduced in the Docker daemon (experimental version) in the [v1.12.0
release](https://github.com/docker/docker/commit/f37117045c5398fd3dca8016ea8ca0cb47e7312b).
Plugin manifests describe the various constituents of a docker plugin. Plugin
manifests can be serialized to JSON format with the following media types:
Manifest Type | Media Type
------------- | -------------
manifest | "application/vnd.docker.plugin.v0+json"
## *Manifest* Field Descriptions
Manifest provides the base accessible fields for working with V0 plugin format
in the registry.
- **`manifestVersion`** *string*
version of the plugin manifest (This version uses V0)
- **`description`** *string*
description of the plugin
- **`documentation`** *string*
link to the documentation about the plugin
- **`interface`** *PluginInterface*
interface implemented by the plugins, struct consisting of the following fields
- **`types`** *string array*
types indicate what interface(s) the plugin currently implements.
currently supported:
- **docker.volumedriver/1.0**
- **`socket`** *string*
socket is the name of the socket the engine should use to communicate with the plugins.
the socket will be created in `/run/docker/plugins`.
- **`entrypoint`** *string array*
entrypoint of the plugin, see [`ENTRYPOINT`](../reference/builder.md#entrypoint)
- **`workdir`** *string*
workdir of the plugin, see [`WORKDIR`](../reference/builder.md#workdir)
- **`network`** *PluginNetwork*
network of the plugin, struct consisting of the following fields
- **`type`** *string*
network type.
currently supported:
- **bridge**
- **host**
- **none**
- **`capabilities`** *array*
capabilities of the plugin (*Linux only*), see list [`here`](https://github.com/opencontainers/runc/blob/master/libcontainer/SPEC.md#security)
- **`mounts`** *PluginMount array*
mount of the plugin, struct consisting of the following fields, see [`MOUNTS`](https://github.com/opencontainers/runtime-spec/blob/master/config.md#mounts)
- **`name`** *string*
name of the mount.
- **`description`** *string*
description of the mount.
- **`source`** *string*
source of the mount.
- **`destination`** *string*
destination of the mount.
- **`type`** *string*
mount type.
- **`options`** *string array*
options of the mount.
- **`devices`** *PluginDevice array*
device of the plugin, (*Linux only*), struct consisting of the following fields, see [`DEVICES`](https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md#devices)
- **`name`** *string*
name of the device.
- **`description`** *string*
description of the device.
- **`path`** *string*
path of the device.
- **`env`** *PluginEnv array*
env of the plugin, struct consisting of the following fields
- **`name`** *string*
name of the env.
- **`description`** *string*
description of the env.
- **`value`** *string*
value of the env.
- **`args`** *PluginArgs*
args of the plugin, struct consisting of the following fields
- **`name`** *string*
name of the args.
- **`description`** *string*
description of the args.
- **`value`** *string array*
values of the args.
## Example Manifest
*Example showing the 'tiborvass/no-remove' plugin manifest.*
```
{
"manifestVersion": "v0",
"description": "A test plugin for Docker",
"documentation": "https://docs.docker.com/engine/extend/plugins/",
"entrypoint": ["plugin-no-remove", "/data"],
"interface" : {
"types": ["docker.volumedriver/1.0"],
"socket": "plugins.sock"
},
"network": {
"type": "host"
},
"mounts": [
{
"source": "/data",
"destination": "/data",
"type": "bind",
"options": ["shared", "rbind"]
},
{
"destination": "/foobar",
"type": "tmpfs"
}
],
"args": {
"name": "args",
"description": "command line arguments",
"value": []
},
"env": [
{
"name": "DEBUG",
"description": "If set, prints debug messages",
"value": "1"
}
],
"devices": [
{
"name": "device",
"description": "a host device to mount",
"path": "/dev/cpu_dma_latency"
}
]
}
```
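As a usage sketch (hedged: this assumes the experimental `docker plugin` subcommands that ship alongside this manifest format; exact prompts and output may differ), a plugin described by such a manifest is installed and listed with:
```bash
# Install the example plugin from a registry and confirm it is active.
$ docker plugin install tiborvass/no-remove
$ docker plugin ls
```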

View file

@ -58,6 +58,7 @@ If you are using Docker for Mac, Docker for Windows, or Docker on Linux, you wil
The getting started tour uses Docker Engine CLI commands entered on the command line of a terminal window. You don't need to be a wizard at the command line, but you should be familiar with how to open your favorite shell or terminal, and run basic commands in that environment. It helps (but isn't required) to know how to navigate a directory tree, manipulate files, list running processes, and so forth.
## Where to go next
Go to [the next page to install](step_one.md).

View file

@ -99,7 +99,7 @@ commands to run. Your recipe is going to be very short.
2. Now, build your new image by typing the `docker build -t docker-whale .` command in your terminal (don't forget the . period).
$ docker build -t docker-whale .
Sending build context to Docker daemon 158.8 MB
Sending build context to Docker daemon 2.048 kB
...snip...
Removing intermediate container a8e6faa88df3
Successfully built 7d9495d03763
@ -117,7 +117,7 @@ complex. In this section, you learn what each message means.
First Docker checks to make sure it has everything it needs to build.
Sending build context to Docker daemon 158.8 MB
Sending build context to Docker daemon 2.048 kB
Then, Docker loads with the `whalesay` image. It already has this image
locally as you might recall from the last page. So, Docker doesn't need to
@ -143,9 +143,6 @@ manager. This takes a lot of lines, no need to list them all again here.
Then, Docker installs the new `fortunes` software.
Removing intermediate container e2a84b5f390f
Step 3 : RUN apt-get install -y fortunes
---> Running in 23aa52c1897c
Reading package lists...
Building dependency tree...
Reading state information...
@ -167,7 +164,7 @@ Then, Docker installs the new `fortunes` software.
Finally, Docker finishes the build and reports its outcome.
Step 4 : CMD /usr/games/fortune -a | cowsay
Step 3 : CMD /usr/games/fortune -a | cowsay
---> Running in a8e6faa88df3
---> 7d9495d03763
Removing intermediate container a8e6faa88df3

View file

@ -1,5 +1,6 @@
<!--[metadata]>
+++
aliases = ["/engine/installation/linux/frugalware/","/engine/installation/frugalware/"]
title = "Install"
description = "Lists the installation methods"
keywords = ["Docker install "]
@ -20,7 +21,6 @@ Docker Engine is supported on Linux, Cloud, Windows, and OS X. Installation inst
* [CRUX Linux](linux/cruxlinux.md)
* [Debian](linux/debian.md)
* [Fedora](linux/fedora.md)
* [FrugalWare](linux/frugalware.md)
* [Gentoo](linux/gentoolinux.md)
* [Oracle Linux](linux/oracle.md)
* [Red Hat Enterprise Linux](linux/rhel.md)

View file

@ -16,114 +16,151 @@ Docker runs on CentOS 7.X. An installation on other binary compatible EL7
distributions such as Scientific Linux might succeed, but Docker does not test
or support Docker on these distributions.
This page instructs you to install using Docker-managed release packages and
installation mechanisms. Using these packages ensures you get the latest release
of Docker. If you wish to install using CentOS-managed packages, consult your
CentOS documentation.
These instructions install Docker using release packages and installation
mechanisms managed by Docker, to be sure that you get the latest version
of Docker. If you wish to install using CentOS-managed packages, consult
your CentOS release documentation.
## Prerequisites
Docker requires a 64-bit installation regardless of your CentOS version. Also,
your kernel must be 3.10 at minimum, which CentOS 7 runs.
Docker requires a 64-bit OS and version 3.10 or higher of the Linux kernel.
To check your current kernel version, open a terminal and use `uname -r` to
display your kernel version:
$ uname -r
3.10.0-229.el7.x86_64
```bash
$ uname -r
3.10.0-229.el7.x86_64
```
Finally, it is recommended that you fully update your system. Please keep in
mind that your system should be fully patched to fix any potential kernel bugs.
Finally, it is recommended that you fully update your system. Keep in mind
that your system should be fully patched to fix any potential kernel bugs.
Any reported kernel bugs may have already been fixed on the latest kernel
packages.
## Install
## Install Docker Engine
There are two ways to install Docker Engine. You can install using the `yum`
package manager. Or you can use `curl` with the `get.docker.com` site. This
second method runs an installation script which also installs via the `yum`
package manager.
There are two ways to install Docker Engine. You can [install using the `yum`
package manager](#install-with-yum). Or you can use `curl` with the [`get.docker.com`
site](#install-with-the-script). This second method runs an installation script
which also installs via the `yum` package manager.
### Install with yum
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Make sure your existing yum packages are up-to-date.
2. Make sure your existing packages are up-to-date.
$ sudo yum update
```bash
$ sudo yum update
```
3. Add the yum repo.
3. Add the `yum` repo.
$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
```bash
$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
```
4. Install the Docker package.
$ sudo yum install docker-engine
```bash
$ sudo yum install docker-engine
```
5. Start the Docker daemon.
5. Enable the service.
$ sudo service docker start
```bash
$ sudo systemctl enable docker.service
```
6. Verify `docker` is installed correctly by running a test image in a container.
6. Start the Docker daemon.
```bash
$ sudo systemctl start docker
```
7. Verify `docker` is installed correctly by running a test image in a container.
$ sudo docker run --rm hello-world
$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from hello-world
a8219747be10: Pull complete
91c95931e552: Already exists
hello-world:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:aa03e5d0d5553b4c3473e89c8619cf79df368babd1.7.1cf5daeb82aab55838d
Status: Downloaded newer image for hello-world:latest
Hello from Docker.
This message shows that your installation appears to be working correctly.
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(Assuming it was not already locally available.)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
Hello from Docker!
This message shows that your installation appears to be working correctly.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
For more examples and ideas, visit:
http://docs.docker.com/userguide/
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker Hub account:
https://hub.docker.com
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
If you need to add an HTTP Proxy, set a different directory or partition for the
Docker runtime files, or make other customizations, read our Systemd article to
learn how to [customize your Systemd Docker daemon options](../../admin/systemd.md).
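For example, the HTTP proxy case is typically handled with a systemd drop-in file (a minimal sketch; `proxy.example.com:3128` is a placeholder host, see the Systemd article above for the full procedure):
```bash
$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<-'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128/"
EOF
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```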
### Install with the script
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Make sure your existing yum packages are up-to-date.
2. Make sure your existing packages are up-to-date.
$ sudo yum update
```bash
$ sudo yum update
```
3. Run the Docker installation script.
$ curl -fsSL https://get.docker.com/ | sh
```bash
$ curl -fsSL https://get.docker.com/ | sh
```
This script adds the `docker.repo` repository and installs Docker.
4. Start the Docker daemon.
4. Enable the service.
$ sudo service docker start
```bash
$ sudo systemctl enable docker.service
```
5. Verify `docker` is installed correctly by running a test image in a container.
5. Start the Docker daemon.
$ sudo docker run hello-world
```bash
$ sudo systemctl start docker
```
6. Verify `docker` is installed correctly by running a test image in a container.
## Create a docker group
```bash
$ sudo docker run hello-world
```
If you need to add an HTTP Proxy, set a different directory or partition for the
Docker runtime files, or make other customizations, read our Systemd article to
learn how to [customize your Systemd Docker daemon options](../../admin/systemd.md).
## Create a docker group
The `docker` daemon binds to a Unix socket instead of a TCP port. By default
that Unix socket is owned by the user `root` and other users can access it with
@ -139,54 +176,63 @@ makes the ownership of the Unix socket read/writable by the `docker` group.
To create the `docker` group and add your user:
1. Log into Centos as a user with `sudo` privileges.
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Create the `docker` group.
`sudo groupadd docker`
```bash
$ sudo groupadd docker
```
3. Add your user to `docker` group.
`sudo usermod -aG docker your_username`
```bash
$ sudo usermod -aG docker your_username
```
4. Log out and log back in.
This ensures your user is running with the correct permissions.
5. Verify your work by running `docker` without `sudo`.
5. Verify that your user is in the docker group by running `docker` without `sudo`.
$ docker run hello-world
```bash
$ docker run hello-world
```
## Start the docker daemon at boot
To ensure Docker starts when you boot your system, do the following:
$ sudo chkconfig docker on
If you need to add an HTTP Proxy, set a different directory or partition for the
Docker runtime files, or make other customizations, read our Systemd article to
learn how to [customize your Systemd Docker daemon options](../../admin/systemd.md).
Configure the Docker daemon to start automatically when the host starts:
```bash
$ sudo systemctl enable docker
```
## Uninstall
You can uninstall the Docker software with `yum`.
1. List the package you have installed.
1. List the installed Docker packages.
$ yum list installed | grep docker
yum list installed | grep docker
docker-engine.x86_64 1.7.1-1.el7 @/docker-engine-1.7.1-1.el7.x86_64.rpm
```bash
$ yum list installed | grep docker
docker-engine.x86_64 1.7.1-0.1.el7@/docker-engine-1.7.1-0.1.el7.x86_64
```
2. Remove the package.
$ sudo yum -y remove docker-engine.x86_64
```bash
$ sudo yum -y remove docker-engine.x86_64
```
This command does not remove images, containers, volumes, or user-created
configuration files on your host.
3. To delete all images, containers, and volumes, run the following command:
$ rm -rf /var/lib/docker
```bash
$ rm -rf /var/lib/docker
```
4. Locate and delete any user-created configuration files.

View file

@ -12,80 +12,94 @@ weight=-3
# Fedora
Docker is supported on Fedora version 22, 23, and 24. This page instructs you to install
using Docker-managed release packages and installation mechanisms. Using these
packages ensures you get the latest release of Docker. If you wish to install
using Fedora-managed packages, consult your Fedora release documentation for
information on Fedora's Docker support.
Docker is supported on Fedora version 22, 23, and 24. These instructions install
Docker using release packages and installation mechanisms managed by Docker, to
be sure that you get the latest version of Docker. If you wish to install using
Fedora-managed packages, consult your Fedora release documentation.
## Prerequisites
Docker requires a 64-bit installation regardless of your Fedora version. Also, your kernel must be 3.10 at minimum. To check your current kernel
version, open a terminal and use `uname -r` to display your kernel version:
Docker requires a 64-bit OS and version 3.10 or higher of the Linux kernel.
$ uname -r
3.19.5-100.fc21.x86_64
To check your current kernel version, open a terminal and use `uname -r` to
display your kernel version:
```bash
$ uname -r
3.19.5-100.fc21.x86_64
```
If your kernel is at an older version, you must update it.
Finally, is it recommended that you fully update your system. Please keep in
mind that your system should be fully patched to fix any potential kernel bugs. Any
reported kernel bugs may have already been fixed on the latest kernel packages
Finally, it is recommended that you fully update your system. Keep in mind
that your system should be fully patched to fix any potential kernel bugs.
Any reported kernel bugs may have already been fixed on the latest kernel
packages.
## Install Docker Engine
## Install
There are two ways to install Docker Engine. You can install with the `dnf` package manager. Or you can use `curl` with the `get.docker.com` site. This second method runs an installation script which also installs via the `dnf` package manager.
There are two ways to install Docker Engine. You can [install using the `dnf`
package manager](#install-with-dnf). Or you can use `curl` [with the `get.docker.com`
site](#install-with-the-script). This second method runs an installation script
which also installs via the `dnf` package manager.
### Install with DNF
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Make sure your existing dnf packages are up-to-date.
2. Make sure your existing packages are up-to-date.
$ sudo dnf update
```bash
$ sudo dnf update
```
3. Add the yum repo yourself.
3. Add the `yum` repo.
$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/fedora/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
```bash
$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/fedora/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
```
4. Install the Docker package.
$ sudo dnf install docker-engine
```bash
$ sudo dnf install docker-engine
```
5. Enable the service.
$ sudo systemctl enable docker.service
```bash
$ sudo systemctl enable docker.service
```
6. Start the Docker daemon.
$ sudo systemctl start docker
```bash
$ sudo systemctl start docker
```
7. Verify `docker` is installed correctly by running a test image in a container.
$ sudo docker run --rm hello-world
$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from hello-world
a8219747be10: Pull complete
91c95931e552: Already exists
hello-world:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:aa03e5d0d5553b4c3473e89c8619cf79df368babd1.7.1cf5daeb82aab55838d
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest
Hello from Docker.
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(Assuming it was not already locally available.)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
@ -94,36 +108,57 @@ There are two ways to install Docker Engine. You can install with the `dnf` pac
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
For more examples and ideas, visit:
http://docs.docker.com/userguide/
Share images, automate workflows, and more with a free Docker Hub account:
https://hub.docker.com
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
If you need to add an HTTP Proxy, set a different directory or partition for the
Docker runtime files, or make other customizations, read our Systemd article to
learn how to [customize your Systemd Docker daemon options](../../admin/systemd.md).
### Install with the script
You use the same installation procedure for all versions of Fedora.
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Make sure your existing dnf packages are up-to-date.
2. Make sure your existing packages are up-to-date.
$ sudo dnf update
```bash
$ sudo dnf update
```
3. Run the Docker installation script.
$ curl -fsSL https://get.docker.com/ | sh
```bash
$ curl -fsSL https://get.docker.com/ | sh
```
This script adds the `docker.repo` repository and installs Docker.
4. Enable the service.
$ sudo systemctl enable docker.service
```bash
$ sudo systemctl enable docker.service
```
5. Start the Docker daemon.
$ sudo systemctl start docker
```bash
$ sudo systemctl start docker
```
6. Verify `docker` is installed correctly by running a test image in a container.
$ sudo docker run hello-world
```bash
$ sudo docker run hello-world
```
If you need to add an HTTP Proxy, set a different directory or partition for the
Docker runtime files, or make other customizations, read our Systemd article to
learn how to [customize your Systemd Docker daemon options](../../admin/systemd.md).
## Create a docker group
@ -141,27 +176,37 @@ makes the ownership of the Unix socket read/writable by the `docker` group.
To create the `docker` group and add your user:
1. Log into your system as a user with `sudo` privileges.
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Create the `docker` group.
`sudo groupadd docker`
```bash
$ sudo groupadd docker
```
3. Add your user to `docker` group.
`sudo usermod -aG docker your_username`
```bash
$ sudo usermod -aG docker your_username
```
4. Log out and log back in.
This ensures your user is running with the correct permissions.
5. Verify your work by running `docker` without `sudo`.
5. Verify that your user is in the docker group by running `docker` without `sudo`.
$ docker run hello-world
```bash
$ docker run hello-world
```
If you need to add an HTTP Proxy, set a different directory or partition for the
Docker runtime files, or make other customizations, read our Systemd article to
learn how to [customize your Systemd Docker daemon options](../../admin/systemd.md).
## Start the docker daemon at boot
Configure the Docker daemon to start automatically when the host starts:
```bash
$ sudo systemctl enable docker
```
## Running Docker with a manually-defined network
@ -186,20 +231,27 @@ This configuration allows IP forwarding from the container as expected.
You can uninstall the Docker software with `dnf`.
1. List the package you have installed.
1. List the installed Docker packages.
$ dnf list installed | grep docker
docker-engine.x86_64 1.7.1-0.1.fc21 @/docker-engine-1.7.1-0.1.fc21.el7.x86_64
```bash
$ dnf list installed | grep docker
docker-engine.x86_64 1.7.1-0.1.fc21 @/docker-engine-1.7.1-0.1.fc21.el7.x86_64
```
2. Remove the package.
$ sudo dnf -y remove docker-engine.x86_64
```bash
$ sudo dnf -y remove docker-engine.x86_64
```
This command does not remove images, containers, volumes, or user-created
configuration files on your host.
3. To delete all images, containers, and volumes, run the following command:
$ rm -rf /var/lib/docker
```bash
$ rm -rf /var/lib/docker
```
4. Locate and delete any user-created configuration files.

View file

@ -1,75 +0,0 @@
<!--[metadata]>
+++
aliases = [ "/engine/installation/frugalware/"]
title = "Installation on FrugalWare"
description = "Installation instructions for Docker on FrugalWare."
keywords = ["frugalware linux, docker, documentation, installation"]
[menu.main]
parent = "engine_linux"
+++
<![end-metadata]-->
# FrugalWare
Installing on FrugalWare is handled via the official packages:
- [lxc-docker i686](http://www.frugalware.org/packages/200141)
- [lxc-docker x86_64](http://www.frugalware.org/packages/200130)
The lxc-docker package will install the latest tagged version of Docker.
## Dependencies
Docker depends on several packages which are specified as dependencies
in the packages. The core dependencies are:
- systemd
- lvm2
- sqlite3
- libguestfs
- lxc
- iproute2
- bridge-utils
## Installation
A simple
$ sudo pacman -S lxc-docker
is all that is needed.
## Starting Docker
There is a systemd service unit created for Docker. To start Docker as
service:
$ sudo systemctl start lxc-docker
To start on system boot:
$ sudo systemctl enable lxc-docker
## Custom daemon options
If you need to add an HTTP Proxy, set a different directory or partition for the
Docker runtime files, or make other customizations, read our systemd article to
learn how to [customize your systemd Docker daemon options](../../admin/systemd.md).
## Uninstallation
To uninstall the Docker package:
$ sudo pacman -R lxc-docker
To uninstall the Docker package and dependencies that are no longer needed:
$ sudo pacman -Rns lxc-docker
The above commands will not remove images, containers, volumes, or user created
configuration files on your host. If you wish to delete all images, containers,
and volumes run the following command:
$ rm -rf /var/lib/docker
You must delete the user created configuration files manually.

View file

@ -50,7 +50,7 @@ IRC channel on the Freenode network.
| btrfs | |Enables dependencies for the "btrfs" graph driver, including necessary kernel flags.|
| contrib | Yes |Install additional contributed scripts and components.|
| device-mapper | Yes |Enables dependencies for the "devicemapper" graph driver, including necessary kernel flags.|
| doc | |Add extra documentation (API, Javadoc, etc). It is recommended to enable per package instead of globally.|
| doc | |Add extra documentation, such as API and Javadoc. It is recommended to enable per package instead of globally.|
| vim-syntax | |Pulls in related vim syntax scripts.|
| zsh-completion| |Enable zsh completion support.|
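As a sketch of how these flags are applied (hedged: the package atom `app-emulation/docker` and the per-package file name are assumptions based on standard Portage conventions, not taken from this page):
```bash
# Enable optional features for the Docker package only, then build it.
$ echo 'app-emulation/docker btrfs contrib device-mapper vim-syntax zsh-completion' \
    | sudo tee -a /etc/portage/package.use/docker
$ sudo emerge --ask app-emulation/docker
```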

View file

@ -19,7 +19,6 @@ Docker Engine is supported on several Linux distributions. Installation instruct
* [CRUX Linux](cruxlinux.md)
* [Debian](debian.md)
* [Fedora](fedora.md)
* [FrugalWare](frugalware.md)
* [Gentoo](gentoolinux.md)
* [Oracle Linux](oracle.md)
* [Red Hat Enterprise Linux](rhel.md)

View file

@ -11,7 +11,7 @@ parent = "engine_linux"
# Oracle Linux
Docker is supported Oracle Linux 6 and 7. You do not require an Oracle Linux
Docker is supported on Oracle Linux 6 and 7. You do not require an Oracle Linux
Support subscription to install Docker on Oracle Linux.
## Prerequisites
@ -110,11 +110,11 @@ To create the `docker` group and add your user:
2. Create the `docker` group.
sudo groupadd docker
$ sudo groupadd docker
3. Add your user to `docker` group.
sudo usermod -aG docker username
$ sudo usermod -aG docker username
4. Log out and log back in.

View file

@ -12,110 +12,151 @@ weight = -5
# Red Hat Enterprise Linux
Docker is supported on Red Hat Enterprise Linux 7. This page instructs you to
install using Docker-managed release packages and installation mechanisms. Using
these packages ensures you get the latest release of Docker. If you wish to
install using Red Hat-managed packages, consult your Red Hat release
documentation for information on Red Hat's Docker support.
Docker is supported on Red Hat Enterprise Linux 7. These instructions install
Docker using release packages and installation mechanisms managed by Docker,
to be sure that you get the latest version of Docker. If you wish to install
using Red Hat-managed packages, consult your Red Hat release documentation.
## Prerequisites
Docker requires a 64-bit installation regardless of your Red Hat version. Docker
requires that your kernel must be 3.10 at minimum, which Red Hat 7 runs.
Docker requires a 64-bit OS and version 3.10 or higher of the Linux kernel.
To check your current kernel version, open a terminal and use `uname -r` to
display your kernel version:
$ uname -r
3.10.0-229.el7.x86_64
```bash
$ uname -r
3.10.0-229.el7.x86_64
```
Finally, is it recommended that you fully update your system. Please keep in
mind that your system should be fully patched to fix any potential kernel bugs.
Finally, it is recommended that you fully update your system. Keep in mind
that your system should be fully patched to fix any potential kernel bugs.
Any reported kernel bugs may have already been fixed on the latest kernel
packages.
## Install Docker Engine
There are two ways to install Docker Engine. You can install with the `yum` package manager directly yourself. Or you can use `curl` with the `get.docker.com` site. This second method runs an installation script which installs via the `yum` package manager.
There are two ways to install Docker Engine. You can [install using the `yum`
package manager](#install-with-yum). Or you can use `curl` with the [`get.docker.com`
site](#install-with-the-script). This second method runs an installation script
which also installs via the `yum` package manager.
### Install with yum
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Make sure your existing yum packages are up-to-date.
2. Make sure your existing packages are up-to-date.
$ sudo yum update
```bash
$ sudo yum update
```
3. Add the yum repo yourself.
3. Add the `yum` repo.
$ sudo tee /etc/yum.repos.d/docker.repo <<-EOF
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
```bash
$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
```
4. Install the Docker package.
$ sudo yum install docker-engine
```bash
$ sudo yum install docker-engine
```
5. Start the Docker daemon.
5. Enable the service.
$ sudo service docker start
```bash
$ sudo systemctl enable docker.service
```
6. Verify `docker` is installed correctly by running a test image in a container.
6. Start the Docker daemon.
```bash
$ sudo systemctl start docker
```
7. Verify `docker` is installed correctly by running a test image in a container.
$ sudo docker run --rm hello-world
$ sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from hello-world
a8219747be10: Pull complete
91c95931e552: Already exists
hello-world:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:aa03e5d0d5553b4c3473e89c8619cf79df368babd1.7.1cf5daeb82aab55838d
Status: Downloaded newer image for hello-world:latest
Hello from Docker.
This message shows that your installation appears to be working correctly.
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(Assuming it was not already locally available.)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
Hello from Docker!
This message shows that your installation appears to be working correctly.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
For more examples and ideas, visit:
http://docs.docker.com/userguide/
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker Hub account:
https://hub.docker.com
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
If you need to add an HTTP Proxy, set a different directory or partition for the
Docker runtime files, or make other customizations, read our Systemd article to
learn how to [customize your Systemd Docker daemon options](../../admin/systemd.md).
### Install with the script
You use the same installation procedure for all versions of Red Hat Enterprise Linux.
1. Log into your machine as a user with `sudo` or `root` privileges.
2. Make sure your existing yum packages are up-to-date.
2. Make sure your existing packages are up-to-date.
$ sudo yum update
```bash
$ sudo yum update
```
3. Run the Docker installation script.
$ curl -fsSL https://get.docker.com/ | sh
```bash
$ curl -fsSL https://get.docker.com/ | sh
```
4. Start the Docker daemon.
This script adds the `docker.repo` repository and installs Docker.
$ sudo service docker start
4. Enable the service.
5. Verify `docker` is installed correctly by running a test image in a container.
```bash
$ sudo systemctl enable docker.service
```
$ sudo docker run hello-world
5. Start the Docker daemon.
## Create a docker group
```bash
$ sudo systemctl start docker
```
6. Verify `docker` is installed correctly by running a test image in a container.
```bash
$ sudo docker run hello-world
```
If you need to add an HTTP Proxy, set a different directory or partition for the
Docker runtime files, or make other customizations, read our Systemd article to
learn how to [customize your Systemd Docker daemon options](../../admin/systemd.md).
## Create a docker group
The `docker` daemon binds to a Unix socket instead of a TCP port. By default
that Unix socket is owned by the user `root` and other users can access it with
@ -135,50 +176,59 @@ To create the `docker` group and add your user:
2. Create the `docker` group.
`sudo groupadd docker`
```bash
$ sudo groupadd docker
```
3. Add your user to `docker` group.
`sudo usermod -aG docker your_username`
```bash
$ sudo usermod -aG docker your_username
```
4. Log out and log back in.
This ensures your user is running with the correct permissions.
5. Verify your work by running `docker` without `sudo`.
5. Verify that your user is in the docker group by running `docker` without `sudo`.
$ docker run hello-world
```bash
$ docker run hello-world
```
## Start the docker daemon at boot
To ensure Docker starts when you boot your system, do the following:
$ sudo chkconfig docker on
If you need to add an HTTP Proxy, set a different directory or partition for the
Docker runtime files, or make other customizations, read our Systemd article to
learn how to [customize your Systemd Docker daemon options](../../admin/systemd.md).
Configure the Docker daemon to start automatically when the host starts:
```bash
$ sudo systemctl enable docker
```
## Uninstall
You can uninstall the Docker software with `yum`.
1. List the package you have installed.
1. List the installed Docker packages.
$ yum list installed | grep docker
yum list installed | grep docker
docker-engine.x86_64 1.7.1-0.1.el7@/docker-engine-1.7.1-0.1.el7.x86_64
```bash
$ yum list installed | grep docker
docker-engine.x86_64 1.7.1-0.1.el7@/docker-engine-1.7.1-0.1.el7.x86_64
```
2. Remove the package.
$ sudo yum -y remove docker-engine.x86_64
```bash
$ sudo yum -y remove docker-engine.x86_64
```
This command does not remove images, containers, volumes, or user created
This command does not remove images, containers, volumes, or user-created
configuration files on your host.
3. To delete all images, containers, and volumes run the following command:
3. To delete all images, containers, and volumes, run the following command:
$ rm -rf /var/lib/docker
```bash
$ rm -rf /var/lib/docker
```
4. Locate and delete any user-created configuration files.

View file

@ -53,4 +53,4 @@ Your Mac must be running OS X 10.8 "Mountain Lion" or newer to install the Docke
* If you are interested in using the Kitematic GUI, see the [Kitematic user guide](https://docs.docker.com/kitematic/userguide/).
> **Note**: The Boot2Docker command line was deprecated several releases > back in favor of Docker Machine, and now Docker for Windows.
> **Note**: The Boot2Docker command line was deprecated several releases back in favor of Docker Machine, and now Docker for Mac.

View file

@ -93,7 +93,7 @@ wget --no-check-certificate --certificate=$DOCKER_CERT_PATH/cert.pem \
The following diagram depicts the container states accessible through the API.
![States](images/event_state.png)
[![States](images/event_state.png)](../images/event_state.png)
Some container-related events are not affected by container state, so they are not included in this diagram. These events are:
@ -121,7 +121,7 @@ This section lists each version from latest to oldest. Each listing includes a
with ContainerD in Docker 1.11.
* `GET /networks` now supports filtering by `label` and `driver`.
* `GET /containers/json` now supports filtering containers by `network` name or id.
* `POST /containers/create` now takes `MaximumIOps` and `MaximumIOBps` fields. Windows daemon only.
* `POST /containers/create` now takes `IOMaximumBandwidth` and `IOMaximumIOps` fields. Windows daemon only.
* `POST /containers/create` now returns an HTTP 400 "bad parameter" message
if no command is specified (instead of an HTTP 500 "server error")
* `GET /images/search` now takes a `filters` query parameter.
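As an illustration of the new `label`/`driver` filters on `GET /networks` mentioned in this list (a hedged sketch: assumes curl 7.40+ for `--unix-socket` and a daemon listening on the default socket; the filter value is a URL-encoded JSON map):
```bash
# List only networks that use the bridge driver.
$ curl --get --unix-socket /var/run/docker.sock \
    --data-urlencode 'filters={"driver":["bridge"]}' \
    http://localhost/networks
```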

View file

@ -230,10 +230,13 @@ Create a container
- **ExposedPorts** - An object mapping ports to an empty object in the form of:
`"ExposedPorts": { "<port>/<tcp|udp>: {}" }`
- **HostConfig**
- **Binds** A list of volume bindings for this container. Each volume binding is a string in one of these forms:
+ `container_path` to create a new volume for the container
+ `host_path:container_path` to bind-mount a host path into the container
+ `host_path:container_path:ro` to make the bind-mount read-only inside the container.
- **Binds** A list of bind-mounts for this container. Each item is a string in one of these forms:
+ `host-src:container-dest` to bind-mount a host path into the
container. Both `host-src`, and `container-dest` must be an
_absolute_ path.
+ `host-src:container-dest:ro` to make the bind-mount read-only
inside the container. Both `host-src`, and `container-dest` must be
an _absolute_ path.
- **Links** - A list of links for the container. Each link entry should be
in the form of `container_name:alias`.
- **LxcConf** - LXC specific configurations. These configurations only
@ -921,43 +924,43 @@ Attach to the container `id`
- **404** no such container
- **500** server error
**Stream details**:
When the TTY setting is enabled in [`POST /containers/create`](#create-a-container),
the stream is the raw data from the process PTY and client's `stdin`.
When the TTY is disabled, the stream is multiplexed to separate
`stdout` and `stderr`.
The format is a **Header** and a **Payload** (frame).
**HEADER**
The header contains the information which the stream writes (`stdout` or
`stderr`). It also contains the size of the associated frame encoded in the
last four bytes (`uint32`).
It is encoded on the first eight bytes like this:
header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
`STREAM_TYPE` can be:
- 0: `stdin` (is written on `stdout`)
- 1: `stdout`
- 2: `stderr`
`SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of
the `uint32` size encoded as big endian.
**PAYLOAD**
The payload is the raw stream.
**IMPLEMENTATION**
The simplest way to implement the Attach protocol is the following:
1. Read eight bytes.
2. Choose `stdout` or `stderr` depending on the first byte.
@ -1241,7 +1244,7 @@ Create an image either by pulling it from the registry or by importing it
**Example request**:
POST /images/create?fromImage=ubuntu HTTP/1.1
POST /images/create?fromImage=busybox&tag=latest HTTP/1.1
**Example response**:
@ -1263,7 +1266,8 @@ a base64-encoded AuthConfig object.
- **fromSrc** Source to import. The value may be a URL from which the image
can be retrieved or `-` to read the image from the request body.
- **repo** Repository name.
- **tag** Tag.
- **tag** Tag. If empty when pulling an image, this causes all tags
for the given image to be pulled.
**Request Headers**:
@ -1883,15 +1887,13 @@ Sets up an exec instance in a running container `id`
POST /containers/e90e34656806/exec HTTP/1.1
Content-Type: application/json
{
"AttachStdin": false,
"AttachStdout": true,
"AttachStderr": true,
"Tty": false,
"Cmd": [
"date"
]
}
{
"AttachStdin": true,
"AttachStdout": true,
"AttachStderr": true,
"Cmd": ["sh"],
"Tty": true
}
**Example response**:
@ -1952,8 +1954,9 @@ interactive session with the `exec` command.
- **200** no error
- **404** no such exec instance
**Stream details**:
Similar to the stream behavior of `POST /containers/(id or name)/attach` API
### Exec Resize

View file

@ -235,10 +235,13 @@ Create a container
- **ExposedPorts** - An object mapping ports to an empty object in the form of:
`"ExposedPorts": { "<port>/<tcp|udp>: {}" }`
- **HostConfig**
- **Binds** A list of volume bindings for this container. Each volume binding is a string in one of these forms:
+ `container_path` to create a new volume for the container
+ `host_path:container_path` to bind-mount a host path into the container
+ `host_path:container_path:ro` to make the bind-mount read-only inside the container.
- **Binds** A list of bind-mounts for this container. Each item is a string in one of these forms:
+ `host-src:container-dest` to bind-mount a host path into the
container. Both `host-src`, and `container-dest` must be an
_absolute_ path.
+ `host-src:container-dest:ro` to make the bind-mount read-only
inside the container. Both `host-src`, and `container-dest` must be
an _absolute_ path.
- **Links** - A list of links for the container. Each link entry should be
in the form of `container_name:alias`.
- **LxcConf** - LXC specific configurations. These configurations only
@ -958,43 +961,43 @@ Attach to the container `id`
- **404** no such container
- **500** server error
**Stream details**:
When the TTY setting is enabled in [`POST /containers/create`](#create-a-container),
the stream is the raw data from the process PTY and client's `stdin`.
When the TTY is disabled, the stream is multiplexed to separate
`stdout` and `stderr`.
The format is a **Header** and a **Payload** (frame).
**HEADER**
The header contains the information which the stream writes (`stdout` or
`stderr`). It also contains the size of the associated frame encoded in the
last four bytes (`uint32`).
It is encoded on the first eight bytes like this:
header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
`STREAM_TYPE` can be:
- 0: `stdin` (is written on `stdout`)
- 1: `stdout`
- 2: `stderr`
`SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of
the `uint32` size encoded as big endian.
**PAYLOAD**
The payload is the raw stream.
**IMPLEMENTATION**
The simplest way to implement the Attach protocol is the following:
1. Read eight bytes.
2. Choose `stdout` or `stderr` depending on the first byte.
@ -1285,7 +1288,7 @@ Create an image either by pulling it from the registry or by importing it
**Example request**:
POST /images/create?fromImage=ubuntu HTTP/1.1
POST /images/create?fromImage=busybox&tag=latest HTTP/1.1
**Example response**:
@ -1307,7 +1310,8 @@ a base64-encoded AuthConfig object.
- **fromSrc** Source to import. The value may be a URL from which the image
can be retrieved or `-` to read the image from the request body.
- **repo** Repository name.
- **tag** Tag.
- **tag** Tag. If empty when pulling an image, this causes all tags
for the given image to be pulled.
**Request Headers**:
@ -1961,15 +1965,14 @@ Sets up an exec instance in a running container `id`
POST /containers/e90e34656806/exec HTTP/1.1
Content-Type: application/json
{
"AttachStdin": false,
"AttachStdout": true,
"AttachStderr": true,
"Tty": false,
"Cmd": [
"date"
]
}
{
"AttachStdin": true,
"AttachStdout": true,
"AttachStderr": true,
"Cmd": ["sh"],
"Tty": true,
"User": "123:456"
}
**Example response**:
@ -1988,7 +1991,9 @@ Sets up an exec instance in a running container `id`
- **AttachStderr** - Boolean value, attaches to `stderr` of the `exec` command.
- **Tty** - Boolean value to allocate a pseudo-TTY.
- **Cmd** - Command to run specified as a string or an array of strings.
- **User** - A string value specifying the user, and optionally, group to run
the exec process inside the container. Format is one of: `"user"`,
`"user:group"`, `"uid"`, or `"uid:gid"`.
**Status codes**:
@ -2030,8 +2035,9 @@ interactive session with the `exec` command.
- **200** no error
- **404** no such exec instance
**Stream details**:
Similar to the stream behavior of `POST /containers/(id or name)/attach` API
### Exec Resize

View file

@ -237,10 +237,13 @@ Create a container
- **ExposedPorts** - An object mapping ports to an empty object in the form of:
`"ExposedPorts": { "<port>/<tcp|udp>: {}" }`
- **HostConfig**
- **Binds** A list of volume bindings for this container. Each volume binding is a string in one of these forms:
+ `container_path` to create a new volume for the container
+ `host_path:container_path` to bind-mount a host path into the container
+ `host_path:container_path:ro` to make the bind-mount read-only inside the container.
- **Binds** A list of bind-mounts for this container. Each item is a string in one of these forms:
+ `host-src:container-dest` to bind-mount a host path into the
container. Both `host-src`, and `container-dest` must be an
_absolute_ path.
+ `host-src:container-dest:ro` to make the bind-mount read-only
inside the container. Both `host-src`, and `container-dest` must be
an _absolute_ path.
- **Links** - A list of links for the container. Each link entry should be
in the form of `container_name:alias`.
- **LxcConf** - LXC specific configurations. These configurations only
@ -967,43 +970,43 @@ Attach to the container `id`
- **404** no such container
- **500** server error
**Stream details**:
When the TTY setting is enabled in [`POST /containers/create`](#create-a-container),
the stream is the raw data from the process PTY and client's `stdin`.
When the TTY is disabled, the stream is multiplexed to separate
`stdout` and `stderr`.
The format is a **Header** and a **Payload** (frame).
**HEADER**
The header contains the information which the stream writes (`stdout` or
`stderr`). It also contains the size of the associated frame encoded in the
last four bytes (`uint32`).
It is encoded on the first eight bytes like this:
header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
`STREAM_TYPE` can be:
- 0: `stdin` (is written on `stdout`)
- 1: `stdout`
- 2: `stderr`
`SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of
the `uint32` size encoded as big endian.
**PAYLOAD**
The payload is the raw stream.
**IMPLEMENTATION**
The simplest way to implement the Attach protocol is the following:
1. Read eight bytes.
2. Choose `stdout` or `stderr` depending on the first byte.
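For reference, a minimal Go sketch of this loop; it assumes the remaining steps, which are not shown in this excerpt, amount to reading the announced number of payload bytes, writing them to the chosen output, and repeating until the stream ends:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

// demux reads the multiplexed stream from r and copies each frame to stdout
// or stderr, based on the STREAM_TYPE byte in its eight-byte header.
func demux(r io.Reader) error {
	header := make([]byte, 8)
	for {
		// 1. Read eight bytes (the frame header).
		if _, err := io.ReadFull(r, header); err != nil {
			if err == io.EOF {
				return nil // stream finished cleanly
			}
			return err
		}
		// 2. Choose stdout or stderr depending on the first byte.
		var out io.Writer
		switch header[0] {
		case 0, 1: // 0: stdin (written on stdout), 1: stdout
			out = os.Stdout
		case 2: // 2: stderr
			out = os.Stderr
		default:
			return fmt.Errorf("unknown stream type %d", header[0])
		}
		// The last four header bytes carry the frame size (big-endian uint32).
		size := binary.BigEndian.Uint32(header[4:8])
		// Copy exactly `size` payload bytes to the selected output, then repeat.
		if _, err := io.CopyN(out, r, int64(size)); err != nil {
			return err
		}
	}
}

func main() {
	// Example: demultiplex a stream piped in on standard input.
	if err := demux(os.Stdin); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```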
@ -1437,7 +1440,7 @@ Create an image either by pulling it from the registry or by importing it
**Example request**:
POST /images/create?fromImage=ubuntu HTTP/1.1
POST /images/create?fromImage=busybox&tag=latest HTTP/1.1
**Example response**:
@ -1459,7 +1462,8 @@ a base64-encoded AuthConfig object.
- **fromSrc** Source to import. The value may be a URL from which the image
can be retrieved or `-` to read the image from the request body.
- **repo** Repository name.
- **tag** Tag.
- **tag** Tag. If empty when pulling an image, this causes all tags
for the given image to be pulled.
**Request Headers**:
@ -2114,15 +2118,14 @@ Sets up an exec instance in a running container `id`
POST /containers/e90e34656806/exec HTTP/1.1
Content-Type: application/json
{
"AttachStdin": false,
"AttachStdout": true,
"AttachStderr": true,
"Tty": false,
"Cmd": [
"date"
]
}
{
"AttachStdin": true,
"AttachStdout": true,
"AttachStderr": true,
"Cmd": ["sh"],
"Tty": true,
"User": "123:456"
}
**Example response**:
@ -2141,7 +2144,9 @@ Sets up an exec instance in a running container `id`
- **AttachStderr** - Boolean value, attaches to `stderr` of the `exec` command.
- **Tty** - Boolean value to allocate a pseudo-TTY.
- **Cmd** - Command to run specified as a string or an array of strings.
- **User** - A string value specifying the user, and optionally, group to run
the exec process inside the container. Format is one of: `"user"`,
`"user:group"`, `"uid"`, or `"uid:gid"`.
**Status codes**:
@ -2183,8 +2188,9 @@ interactive session with the `exec` command.
- **200** no error
- **404** no such exec instance
**Stream details**:
Similar to the stream behavior of `POST /containers/(id or name)/attach` API
### Exec Resize

View file

@ -248,11 +248,17 @@ Create a container
- **StopSignal** - Signal to stop a container as a string or unsigned integer. `SIGTERM` by default.
- **HostConfig**
- **Binds** A list of volume bindings for this container. Each volume binding is a string in one of these forms:
+ `container_path` to create a new volume for the container
+ `host_path:container_path` to bind-mount a host path into the container
+ `host_path:container_path:ro` to make the bind-mount read-only inside the container.
+ `volume_name:container_path` to bind-mount a volume managed by a volume plugin into the container.
+ `volume_name:container_path:ro` to make the bind mount read-only inside the container.
+ `host-src:container-dest` to bind-mount a host path into the
container. Both `host-src`, and `container-dest` must be an
_absolute_ path.
+ `host-src:container-dest:ro` to make the bind-mount read-only
inside the container. Both `host-src`, and `container-dest` must be
an _absolute_ path.
+ `volume-name:container-dest` to bind-mount a volume managed by a
volume driver into the container. `container-dest` must be an
_absolute_ path.
+ `volume-name:container-dest:ro` to mount the volume read-only
inside the container. `container-dest` must be an _absolute_ path.
- **Links** - A list of links for the container. Each link entry should be
in the form of `container_name:alias`.
- **LxcConf** - LXC specific configurations. These configurations only
@ -1045,43 +1051,43 @@ Attach to the container `id`
- **404** no such container
- **500** server error
**Stream details**:
When the TTY setting is enabled in
[`POST /containers/create`
](#create-a-container),
the stream is the raw data from the process PTY and client's `stdin`.
When the TTY is disabled, the stream is multiplexed to separate
`stdout` and `stderr`.
The format is a **Header** and a **Payload** (frame).
**HEADER**
The header identifies which stream the frame belongs to (`stdout` or
`stderr`). It also contains the size of the associated frame encoded in the
last four bytes (`uint32`).
It is encoded on the first eight bytes like this:
header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
`STREAM_TYPE` can be:
- 0: `stdin` (is written on `stdout`)
- 1: `stdout`
- 2: `stderr`
`SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of
the `uint32` size encoded as big endian.
**PAYLOAD**
The payload is the raw stream.
**IMPLEMENTATION**
The simplest way to implement the Attach protocol is the following:
1. Read eight bytes.
2. Choose `stdout` or `stderr` depending on the first byte.
@ -1521,7 +1527,7 @@ Create an image either by pulling it from the registry or by importing it
**Example request**:
POST /images/create?fromImage=ubuntu HTTP/1.1
POST /images/create?fromImage=busybox&tag=latest HTTP/1.1
**Example response**:
@ -1547,7 +1553,8 @@ a base64-encoded AuthConfig object.
- **repo** Repository name given to an image when it is imported.
The repo may include a tag. This parameter may only be used when importing
an image.
- **tag** Tag or digest.
- **tag** Tag or digest. If empty when pulling an image, this causes all tags
for the given image to be pulled.
**Request Headers**:
@ -2265,15 +2272,15 @@ Sets up an exec instance in a running container `id`
POST /containers/e90e34656806/exec HTTP/1.1
Content-Type: application/json
{
"AttachStdin": false,
"AttachStdout": true,
"AttachStderr": true,
"Tty": false,
"Cmd": [
"date"
]
}
{
"AttachStdin": true,
"AttachStdout": true,
"AttachStderr": true,
"Cmd": ["sh"],
"Privileged": true,
"Tty": true,
"User": "123:456"
}
**Example response**:
@ -2292,7 +2299,10 @@ Sets up an exec instance in a running container `id`
- **AttachStderr** - Boolean value, attaches to `stderr` of the `exec` command.
- **Tty** - Boolean value to allocate a pseudo-TTY.
- **Cmd** - Command to run specified as a string or an array of strings.
- **Privileged** - Boolean value, runs the exec process with extended privileges.
- **User** - A string value specifying the user, and optionally, group to run
the exec process inside the container. Format is one of: `"user"`,
`"user:group"`, `"uid"`, or `"uid:gid"`.
**Status codes**:
@ -2337,8 +2347,9 @@ interactive session with the `exec` command.
- **404** no such exec instance
- **409** - container is paused
**Stream details**:
Similar to the stream behavior of `POST /containers/(id or name)/attach` API
### Exec Resize

View file

@ -351,10 +351,17 @@ Create a container
- **StopSignal** - Signal to stop a container as a string or unsigned integer. `SIGTERM` by default.
- **HostConfig**
- **Binds** A list of volume bindings for this container. Each volume binding is a string in one of these forms:
+ `host_path:container_path` to bind-mount a host path into the container
+ `host_path:container_path:ro` to make the bind-mount read-only inside the container.
+ `volume_name:container_path` to bind-mount a volume managed by a volume plugin into the container.
+ `volume_name:container_path:ro` to make the bind mount read-only inside the container.
+ `host-src:container-dest` to bind-mount a host path into the
container. Both `host-src`, and `container-dest` must be an
_absolute_ path.
+ `host-src:container-dest:ro` to make the bind-mount read-only
inside the container. Both `host-src`, and `container-dest` must be
an _absolute_ path.
+ `volume-name:container-dest` to bind-mount a volume managed by a
volume driver into the container. `container-dest` must be an
_absolute_ path.
+ `volume-name:container-dest:ro` to mount the volume read-only
inside the container. `container-dest` must be an _absolute_ path.
- **Links** - A list of links for the container. Each link entry should be
in the form of `container_name:alias`.
- **Memory** - Memory limit in bytes.
@ -1219,43 +1226,43 @@ Attach to the container `id`
- **409** - container is paused
- **500** server error
**Stream details**:
When the TTY setting is enabled in
[`POST /containers/create`
](#create-a-container),
the stream is the raw data from the process PTY and client's `stdin`.
When the TTY is disabled, the stream is multiplexed to separate
`stdout` and `stderr`.
The format is a **Header** and a **Payload** (frame).
**HEADER**
The header identifies which stream the frame belongs to (`stdout` or
`stderr`). It also contains the size of the associated frame encoded in the
last four bytes (`uint32`).
It is encoded on the first eight bytes like this:
header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
`STREAM_TYPE` can be:
- 0: `stdin` (is written on `stdout`)
- 1: `stdout`
- 2: `stderr`
`SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of
the `uint32` size encoded as big endian.
**PAYLOAD**
The payload is the raw stream.
**IMPLEMENTATION**
The simplest way to implement the Attach protocol is the following:
1. Read eight bytes.
2. Choose `stdout` or `stderr` depending on the first byte.
@ -1699,7 +1706,7 @@ Create an image either by pulling it from the registry or by importing it
**Example request**:
POST /images/create?fromImage=ubuntu HTTP/1.1
POST /images/create?fromImage=busybox&tag=latest HTTP/1.1
**Example response**:
@ -1726,7 +1733,8 @@ a base64-encoded AuthConfig object.
- **repo** Repository name given to an image when it is imported.
The repo may include a tag. This parameter may only be used when importing
an image.
- **tag** Tag or digest.
- **tag** Tag or digest. If empty when pulling an image, this causes all tags
for the given image to be pulled.
**Request Headers**:
@ -2654,16 +2662,16 @@ Sets up an exec instance in a running container `id`
POST /containers/e90e34656806/exec HTTP/1.1
Content-Type: application/json
{
"AttachStdin": false,
"AttachStdout": true,
"AttachStderr": true,
"DetachKeys": "ctrl-p,ctrl-q",
"Tty": false,
"Cmd": [
"date"
]
}
{
"AttachStdin": true,
"AttachStdout": true,
"AttachStderr": true,
"Cmd": ["sh"],
"DetachKeys": "ctrl-p,ctrl-q",
"Privileged": true,
"Tty": true,
"User": "123:456"
}
**Example response**:
@ -2685,7 +2693,10 @@ Sets up an exec instance in a running container `id`
where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`.
- **Tty** - Boolean value to allocate a pseudo-TTY.
- **Cmd** - Command to run specified as a string or an array of strings.
- **Privileged** - Boolean value, runs the exec process with extended privileges.
- **User** - A string value specifying the user, and optionally, group to run
the exec process inside the container. Format is one of: `"user"`,
`"user:group"`, `"uid"`, or `"uid:gid"`.
**Status codes**:
@ -2730,8 +2741,9 @@ interactive session with the `exec` command.
- **404** no such exec instance
- **409** - container is paused
**Stream details**:
Similar to the stream behavior of `POST /containers/(id or name)/attach` API
### Exec Resize

View file

@ -374,10 +374,17 @@ Create a container
- **StopSignal** - Signal to stop a container as a string or unsigned integer. `SIGTERM` by default.
- **HostConfig**
- **Binds** A list of volume bindings for this container. Each volume binding is a string in one of these forms:
+ `host_path:container_path` to bind-mount a host path into the container
+ `host_path:container_path:ro` to make the bind-mount read-only inside the container.
+ `volume_name:container_path` to bind-mount a volume managed by a volume plugin into the container.
+ `volume_name:container_path:ro` to make the bind mount read-only inside the container.
+ `host-src:container-dest` to bind-mount a host path into the
container. Both `host-src`, and `container-dest` must be an
_absolute_ path.
+ `host-src:container-dest:ro` to make the bind-mount read-only
inside the container. Both `host-src`, and `container-dest` must be
an _absolute_ path.
+ `volume-name:container-dest` to bind-mount a volume managed by a
volume driver into the container. `container-dest` must be an
_absolute_ path.
+ `volume-name:container-dest:ro` to mount the volume read-only
inside the container. `container-dest` must be an _absolute_ path.
- **Links** - A list of links for the container. Each link entry should be
in the form of `container_name:alias`.
- **Memory** - Memory limit in bytes.
@ -1252,43 +1259,43 @@ Attach to the container `id`
- **409** - container is paused
- **500** server error
**Stream details**:
When the TTY setting is enabled in
[`POST /containers/create`
](#create-a-container),
the stream is the raw data from the process PTY and client's `stdin`.
When the TTY is disabled, the stream is multiplexed to separate
`stdout` and `stderr`.
The format is a **Header** and a **Payload** (frame).
**HEADER**
The header identifies which stream the frame belongs to (`stdout` or
`stderr`). It also contains the size of the associated frame encoded in the
last four bytes (`uint32`).
It is encoded on the first eight bytes like this:
header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
`STREAM_TYPE` can be:
- 0: `stdin` (is written on `stdout`)
- 1: `stdout`
- 2: `stderr`
`SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of
the `uint32` size encoded as big endian.
**PAYLOAD**
The payload is the raw stream.
**IMPLEMENTATION**
The simplest way to implement the Attach protocol is the following:
1. Read eight bytes.
2. Choose `stdout` or `stderr` depending on the first byte.
@ -1733,7 +1740,7 @@ Create an image either by pulling it from the registry or by importing it
**Example request**:
POST /images/create?fromImage=ubuntu HTTP/1.1
POST /images/create?fromImage=busybox&tag=latest HTTP/1.1
**Example response**:
@ -1760,7 +1767,8 @@ a base64-encoded AuthConfig object.
- **repo** Repository name given to an image when it is imported.
The repo may include a tag. This parameter may only be used when importing
an image.
- **tag** Tag or digest.
- **tag** Tag or digest. If empty when pulling an image, this causes all tags
for the given image to be pulled.
**Request Headers**:
@ -2278,7 +2286,7 @@ Show the docker version information
Content-Type: application/json
{
"Version": "1.10.0",
"Version": "1.11.0",
"Os": "linux",
"KernelVersion": "3.19.0-23-generic",
"GoVersion": "go1.4.2",
@ -2728,16 +2736,16 @@ Sets up an exec instance in a running container `id`
POST /containers/e90e34656806/exec HTTP/1.1
Content-Type: application/json
{
"AttachStdin": false,
"AttachStdout": true,
"AttachStderr": true,
"DetachKeys": "ctrl-p,ctrl-q",
"Tty": false,
"Cmd": [
"date"
]
}
{
"AttachStdin": true,
"AttachStdout": true,
"AttachStderr": true,
"Cmd": ["sh"],
"DetachKeys": "ctrl-p,ctrl-q",
"Privileged": true,
"Tty": true,
"User": "123:456"
}
**Example response**:
@ -2759,7 +2767,10 @@ Sets up an exec instance in a running container `id`
where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`.
- **Tty** - Boolean value to allocate a pseudo-TTY.
- **Cmd** - Command to run specified as a string or an array of strings.
- **Privileged** - Boolean value, runs the exec process with extended privileges.
- **User** - A string value specifying the user, and optionally, group to run
the exec process inside the container. Format is one of: `"user"`,
`"user:group"`, `"uid"`, or `"uid:gid"`.
**Status codes**:
@ -2804,8 +2815,9 @@ interactive session with the `exec` command.
- **404** no such exec instance
- **409** - container is paused
**Stream details**:
Similar to the stream behavior of `POST /containers/(id or name)/attach` API
### Exec Resize

View file

@ -299,8 +299,8 @@ Create a container
"CpuQuota": 50000,
"CpusetCpus": "0,1",
"CpusetMems": "0,1",
"MaximumIOps": 0,
"MaximumIOBps": 0,
"IOMaximumBandwidth": 0,
"IOMaximumIOps": 0,
"BlkioWeight": 300,
"BlkioWeightDevice": [{}],
"BlkioDeviceReadBps": [{}],
@ -391,10 +391,17 @@ Create a container
- **StopSignal** - Signal to stop a container as a string or unsigned integer. `SIGTERM` by default.
- **HostConfig**
- **Binds** A list of volume bindings for this container. Each volume binding is a string in one of these forms:
+ `host_path:container_path` to bind-mount a host path into the container
+ `host_path:container_path:ro` to make the bind-mount read-only inside the container.
+ `volume_name:container_path` to bind-mount a volume managed by a volume plugin into the container.
+ `volume_name:container_path:ro` to make the bind mount read-only inside the container.
+ `host-src:container-dest` to bind-mount a host path into the
container. Both `host-src`, and `container-dest` must be an
_absolute_ path.
+ `host-src:container-dest:ro` to make the bind-mount read-only
inside the container. Both `host-src`, and `container-dest` must be
an _absolute_ path.
+ `volume-name:container-dest` to bind-mount a volume managed by a
volume driver into the container. `container-dest` must be an
_absolute_ path.
+ `volume-name:container-dest:ro` to mount the volume read-only
inside the container. `container-dest` must be an _absolute_ path.
- **Links** - A list of links for the container. Each link entry should be
in the form of `container_name:alias`.
- **Memory** - Memory limit in bytes.
@ -409,8 +416,8 @@ Create a container
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **MaximumIOps** - Maximum IO absolute rate in terms of IOps.
- **MaximumIOBps** - Maximum IO absolute rate in terms of bytes per second.
- **IOMaximumBandwidth** - Maximum IO absolute rate in terms of bytes per second.
- **IOMaximumIOps** - Maximum IO absolute rate in terms of IOps.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **BlkioWeightDevice** - Block IO weight (relative device weight) in the form of: `"BlkioWeightDevice": [{"Path": "device_path", "Weight": weight}]`
- **BlkioDeviceReadBps** - Limit read rate (bytes per second) from a device in the form of: `"BlkioDeviceReadBps": [{"Path": "device_path", "Rate": rate}]`, for example:
@ -558,8 +565,8 @@ Return low-level information on the container `id`
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"MaximumIOps": 0,
"MaximumIOBps": 0,
"IOMaximumBandwidth": 0,
"IOMaximumIOps": 0,
"BlkioWeight": 0,
"BlkioWeightDevice": [{}],
"BlkioDeviceReadBps": [{}],
@ -1281,43 +1288,43 @@ Attach to the container `id`
- **409** - container is paused
- **500** server error
**Stream details**:
When the TTY setting is enabled in
[`POST /containers/create`
](#create-a-container),
the stream is the raw data from the process PTY and client's `stdin`.
When the TTY is disabled, the stream is multiplexed to separate
`stdout` and `stderr`.
The format is a **Header** and a **Payload** (frame).
**HEADER**
The header identifies which stream the frame belongs to (`stdout` or
`stderr`). It also contains the size of the associated frame encoded in the
last four bytes (`uint32`).
It is encoded on the first eight bytes like this:
header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
`STREAM_TYPE` can be:
- 0: `stdin` (is written on `stdout`)
- 1: `stdout`
- 2: `stderr`
`SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of
the `uint32` size encoded as big endian.
**PAYLOAD**
The payload is the raw stream.
**IMPLEMENTATION**
The simplest way to implement the Attach protocol is the following:
1. Read eight bytes.
2. Choose `stdout` or `stderr` depending on the first byte.
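The writing side mirrors these steps: each frame is the eight-byte header followed by the payload. A minimal Go sketch of that framing (`writeFrame` is an illustrative helper, not an API call):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

// writeFrame prefixes payload with the eight-byte header described above:
// one STREAM_TYPE byte, three zero bytes, and the payload size as a
// big-endian uint32.
func writeFrame(w io.Writer, streamType byte, payload []byte) error {
	header := [8]byte{streamType, 0, 0, 0}
	binary.BigEndian.PutUint32(header[4:], uint32(len(payload)))
	if _, err := w.Write(header[:]); err != nil {
		return err
	}
	_, err := w.Write(payload)
	return err
}

func main() {
	var buf bytes.Buffer
	// Emit one stdout frame (STREAM_TYPE 1) carrying "hello\n".
	if err := writeFrame(&buf, 1, []byte("hello\n")); err != nil {
		panic(err)
	}
	fmt.Printf("% x\n", buf.Bytes()) // 01 00 00 00 00 00 00 06 68 65 6c 6c 6f 0a
}
```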
@ -1734,7 +1741,7 @@ Create an image either by pulling it from the registry or by importing it
**Example request**:
POST /images/create?fromImage=ubuntu HTTP/1.1
POST /images/create?fromImage=busybox&tag=latest HTTP/1.1
**Example response**:
@ -1761,7 +1768,8 @@ a base64-encoded AuthConfig object.
- **repo** Repository name given to an image when it is imported.
The repo may include a tag. This parameter may only be used when importing
an image.
- **tag** Tag or digest.
- **tag** Tag or digest. If empty when pulling an image, this causes all tags
for the given image to be pulled.
**Request Headers**:
@ -2742,16 +2750,16 @@ Sets up an exec instance in a running container `id`
POST /containers/e90e34656806/exec HTTP/1.1
Content-Type: application/json
{
"AttachStdin": false,
"AttachStdout": true,
"AttachStderr": true,
"DetachKeys": "ctrl-p,ctrl-q",
"Tty": false,
"Cmd": [
"date"
]
}
{
"AttachStdin": true,
"AttachStdout": true,
"AttachStderr": true,
"Cmd": ["sh"],
"DetachKeys": "ctrl-p,ctrl-q",
"Privileged": true,
"Tty": true,
"User": "123:456"
}
**Example response**:
@ -2773,7 +2781,10 @@ Sets up an exec instance in a running container `id`
where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`.
- **Tty** - Boolean value to allocate a pseudo-TTY.
- **Cmd** - Command to run specified as a string or an array of strings.
- **Privileged** - Boolean value, runs the exec process with extended privileges.
- **User** - A string value specifying the user, and optionally, group to run
the exec process inside the container. Format is one of: `"user"`,
`"user:group"`, `"uid"`, or `"uid:gid"`.
**Status codes**:
@ -2818,8 +2829,9 @@ interactive session with the `exec` command.
- **404** no such exec instance
- **409** - container is paused
**Stream details**:
Similar to the stream behavior of `POST /containers/(id or name)/attach` API
### Exec Resize
@ -4007,7 +4019,7 @@ Return low-level information on the node `id`
### Remove a node
`DELETE /nodes/<id>`
`DELETE /nodes/(id)`
Remove a node [`id`] from the swarm.
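For illustration, a minimal Go sketch issuing this request; the daemon address is an assumption, and the node ID is borrowed from the task examples later in this document:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Assumption: the daemon is reachable on tcp://localhost:2375.
	// "24ifsmvkjbyhk" is a node ID taken from the examples below;
	// substitute your own.
	req, err := http.NewRequest(http.MethodDelete,
		"http://localhost:2375/nodes/24ifsmvkjbyhk", nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```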
@ -4035,7 +4047,7 @@ Remove a node [`id`] from the swarm.
### Update a node
`POST /nodes/<id>/update`
`POST /nodes/(id)/update`
Update the node `id`.
@ -4401,7 +4413,8 @@ List services
"Reservations": {}
},
"RestartPolicy": {
"Condition": "ANY"
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {}
},
@ -4411,26 +4424,36 @@ List services
}
},
"UpdateConfig": {
"Parallelism": 1
"Parallelism": 1,
"FailureAction": "pause"
},
"EndpointSpec": {
"Mode": "VIP",
"Ingress": "PUBLICPORT",
"ExposedPorts": [
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"Port": 6379
"TargetPort": 6379,
"PublishedPort": 30001
}
]
}
},
"Endpoint": {
"Spec": {},
"ExposedPorts": [
"Spec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 6379,
"PublishedPort": 30001
}
]
},
"Ports": [
{
"Protocol": "tcp",
"Port": 6379,
"PublicPort": 30000
"TargetPort": 6379,
"PublishedPort": 30001
}
],
"VirtualIPs": [
@ -4616,13 +4639,13 @@ image](#create-an-image) section for more details.
- **FailureAction** - Action to take if an updated task fails to run, or stops running during the
update. Values are `continue` and `pause`.
- **Networks** Array of network names or IDs to attach the service to.
- **Endpoint** Properties that can be configured to access and load balance a service.
- **Spec**
- **Mode** The mode of resolution to use for internal load balancing
between tasks (`vip` or `dnsrr`).
- **Ports** Exposed ports that this service is accessible on from the outside, in the form
of: `"Ports": { "<port>/<tcp|udp>: {}" }`
- **VirtualIPs**
- **EndpointSpec** Properties that can be configured to access and load balance a service.
- **Mode** The mode of resolution to use for internal load balancing
between tasks (`vip` or `dnsrr`). Defaults to `vip` if not provided.
- **Ports** List of exposed ports that this service is accessible on from
the outside, in the form of:
`{"Protocol": <"tcp"|"udp">, "PublishedPort": <port>, "TargetPort": <port>}`.
Ports can only be provided if `vip` resolution mode is used.
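As an illustration, a minimal Go sketch that builds the `EndpointSpec` portion of a service-create body from the fields described above; the struct names are illustrative, and only the JSON field names come from the examples in this section:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PortConfig and EndpointSpec mirror the JSON shown in the examples above.
type PortConfig struct {
	Protocol      string `json:"Protocol"`
	TargetPort    uint32 `json:"TargetPort"`
	PublishedPort uint32 `json:"PublishedPort"`
}

type EndpointSpec struct {
	Mode  string       `json:"Mode"`
	Ports []PortConfig `json:"Ports,omitempty"`
}

func main() {
	// Ports may only be set when the "vip" resolution mode is used.
	spec := EndpointSpec{
		Mode: "vip",
		Ports: []PortConfig{
			{Protocol: "tcp", TargetPort: 6379, PublishedPort: 30001},
		},
	}
	out, err := json.MarshalIndent(spec, "", "  ")
	if err != nil {
		panic(err)
	}
	// Embed this object under "EndpointSpec" in the service create body.
	fmt.Println(string(out))
}
```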
**Request Headers**:
@ -4675,7 +4698,7 @@ Return information on the service `id`.
"UpdatedAt": "2016-06-07T21:10:20.276301259Z",
"Spec": {
"Name": "redis",
"Task": {
"TaskTemplate": {
"ContainerSpec": {
"Image": "redis"
},
@ -4684,7 +4707,8 @@ Return information on the service `id`.
"Reservations": {}
},
"RestartPolicy": {
"Condition": "ANY"
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {}
},
@ -4694,26 +4718,36 @@ Return information on the service `id`.
}
},
"UpdateConfig": {
"Parallelism": 1
"Parallelism": 1,
"FailureAction": "pause"
},
"EndpointSpec": {
"Mode": "VIP",
"Ingress": "PUBLICPORT",
"ExposedPorts": [
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"Port": 6379
"TargetPort": 6379,
"PublishedPort": 30001
}
]
}
},
"Endpoint": {
"Spec": {},
"ExposedPorts": [
"Spec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 6379,
"PublishedPort": 30001
}
]
},
"Ports": [
{
"Protocol": "tcp",
"Port": 6379,
"PublicPort": 30001
"TargetPort": 6379,
"PublishedPort": 30001
}
],
"VirtualIPs": [
@ -4827,7 +4861,7 @@ image](#create-an-image) section for more details.
as part of this service.
- **Condition** Condition for restart (`none`, `on-failure`, or `any`).
- **Delay** Delay between restart attempts.
- **Attempts** Maximum attempts to restart a given container before giving up (default value
- **MaxAttempts** Maximum attempts to restart a given container before giving up (default value
is 0, which is ignored).
- **Window** The time window used to evaluate the restart policy (default value is
0, which is unbounded).
@ -4838,13 +4872,13 @@ image](#create-an-image) section for more details.
parallelism).
- **Delay** Amount of time between updates.
- **Networks** Array of network names or IDs to attach the service to.
- **Endpoint** Properties that can be configured to access and load balance a service.
- **Spec**
- **Mode** The mode of resolution to use for internal load balancing
between tasks (`vip` or `dnsrr`).
- **Ports** Exposed ports that this service is accessible on from the outside, in the form
of: `"Ports": { "<port>/<tcp|udp>: {}" }`
- **VirtualIPs**
- **EndpointSpec** Properties that can be configured to access and load balance a service.
- **Mode** The mode of resolution to use for internal load balancing
between tasks (`vip` or `dnsrr`). Defaults to `vip` if not provided.
- **Ports** List of exposed ports that this service is accessible on from
the outside, in the form of:
`{"Protocol": <"tcp"|"udp">, "PublishedPort": <port>, "TargetPort": <port>}`.
Ports can only be provided if `vip` resolution mode is used.
**Query parameters**:
@ -4863,7 +4897,7 @@ image](#create-an-image) section for more details.
- **200** no error
- **404** no such service
- **500** server error
## 3.10 Tasks
**Note**: Task operations require the engine to be part of a swarm.
@ -4889,7 +4923,6 @@ List tasks
},
"CreatedAt": "2016-06-07T21:07:31.171892745Z",
"UpdatedAt": "2016-06-07T21:07:31.376370513Z",
"Name": "hopeful_cori",
"Spec": {
"ContainerSpec": {
"Image": "redis"
@ -4899,21 +4932,24 @@ List tasks
"Reservations": {}
},
"RestartPolicy": {
"Condition": "ANY"
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {}
},
"ServiceID": "9mnpnzenvg8p8tdbtq4wvbkcz",
"Instance": 1,
"NodeID": "24ifsmvkjbyhk",
"ServiceAnnotations": {},
"Slot": 1,
"NodeID": "60gvrl6tm78dmak4yl7srz94v",
"Status": {
"Timestamp": "2016-06-07T21:07:31.290032978Z",
"State": "FAILED",
"Message": "execution failed",
"ContainerStatus": {}
"State": "running",
"Message": "started",
"ContainerStatus": {
"ContainerID": "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035",
"PID": 677
}
},
"DesiredState": "SHUTDOWN",
"DesiredState": "running",
"NetworksAttachments": [
{
"Network": {
@ -4929,12 +4965,12 @@ List tasks
"com.docker.swarm.internal": "true"
},
"DriverConfiguration": {},
"IPAM": {
"IPAMOptions": {
"Driver": {},
"Configs": [
{
"Family": "UNKNOWN",
"Subnet": "10.255.0.0/16"
"Subnet": "10.255.0.0/16",
"Gateway": "10.255.0.1"
}
]
}
@ -4945,14 +4981,14 @@ List tasks
"com.docker.network.driver.overlay.vxlanid_list": "256"
}
},
"IPAM": {
"IPAMOptions": {
"Driver": {
"Name": "default"
},
"Configs": [
{
"Family": "UNKNOWN",
"Subnet": "10.255.0.0/16"
"Subnet": "10.255.0.0/16",
"Gateway": "10.255.0.1"
}
]
}
@ -4962,26 +4998,6 @@ List tasks
]
}
],
"Endpoint": {
"Spec": {},
"ExposedPorts": [
{
"Protocol": "tcp",
"Port": 6379,
"PublicPort": 30000
}
],
"VirtualIPs": [
{
"NetworkID": "4qvuz4ko70xaltuqbt8956gd1",
"Addr": "10.255.0.2/16"
},
{
"NetworkID": "4qvuz4ko70xaltuqbt8956gd1",
"Addr": "10.255.0.3/16"
}
]
}
},
{
"ID": "1yljwbmlr8er2waf8orvqpwms",
@ -5000,21 +5016,23 @@ List tasks
"Reservations": {}
},
"RestartPolicy": {
"Condition": "ANY"
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {}
},
"ServiceID": "9mnpnzenvg8p8tdbtq4wvbkcz",
"Instance": 1,
"NodeID": "24ifsmvkjbyhk",
"ServiceAnnotations": {},
"Slot": 1,
"NodeID": "60gvrl6tm78dmak4yl7srz94v",
"Status": {
"Timestamp": "2016-06-07T21:07:30.202183143Z",
"State": "FAILED",
"Message": "execution failed",
"ContainerStatus": {}
"State": "shutdown",
"Message": "shutdown",
"ContainerStatus": {
"ContainerID": "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213"
}
},
"DesiredState": "SHUTDOWN",
"DesiredState": "shutdown",
"NetworksAttachments": [
{
"Network": {
@ -5030,12 +5048,12 @@ List tasks
"com.docker.swarm.internal": "true"
},
"DriverConfiguration": {},
"IPAM": {
"IPAMOptions": {
"Driver": {},
"Configs": [
{
"Family": "UNKNOWN",
"Subnet": "10.255.0.0/16"
"Subnet": "10.255.0.0/16",
"Gateway": "10.255.0.1"
}
]
}
@ -5046,14 +5064,14 @@ List tasks
"com.docker.network.driver.overlay.vxlanid_list": "256"
}
},
"IPAM": {
"IPAMOptions": {
"Driver": {
"Name": "default"
},
"Configs": [
{
"Family": "UNKNOWN",
"Subnet": "10.255.0.0/16"
"Subnet": "10.255.0.0/16",
"Gateway": "10.255.0.1"
}
]
}
@ -5062,27 +5080,7 @@ List tasks
"10.255.0.5/16"
]
}
],
"Endpoint": {
"Spec": {},
"ExposedPorts": [
{
"Protocol": "tcp",
"Port": 6379,
"PublicPort": 30000
}
],
"VirtualIPs": [
{
"NetworkID": "4qvuz4ko70xaltuqbt8956gd1",
"Addr": "10.255.0.2/16"
},
{
"NetworkID": "4qvuz4ko70xaltuqbt8956gd1",
"Addr": "10.255.0.3/16"
}
]
}
]
}
]
@ -5122,7 +5120,6 @@ Get details on a task
},
"CreatedAt": "2016-06-07T21:07:31.171892745Z",
"UpdatedAt": "2016-06-07T21:07:31.376370513Z",
"Name": "hopeful_cori",
"Spec": {
"ContainerSpec": {
"Image": "redis"
@ -5132,21 +5129,24 @@ Get details on a task
"Reservations": {}
},
"RestartPolicy": {
"Condition": "ANY"
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {}
},
"ServiceID": "9mnpnzenvg8p8tdbtq4wvbkcz",
"Instance": 1,
"NodeID": "24ifsmvkjbyhk",
"ServiceAnnotations": {},
"Slot": 1,
"NodeID": "60gvrl6tm78dmak4yl7srz94v",
"Status": {
"Timestamp": "2016-06-07T21:07:31.290032978Z",
"State": "FAILED",
"Message": "execution failed",
"ContainerStatus": {}
"State": "running",
"Message": "started",
"ContainerStatus": {
"ContainerID": "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035",
"PID": 677
}
},
"DesiredState": "SHUTDOWN",
"DesiredState": "running",
"NetworksAttachments": [
{
"Network": {
@ -5162,12 +5162,12 @@ Get details on a task
"com.docker.swarm.internal": "true"
},
"DriverConfiguration": {},
"IPAM": {
"IPAMOptions": {
"Driver": {},
"Configs": [
{
"Family": "UNKNOWN",
"Subnet": "10.255.0.0/16"
"Subnet": "10.255.0.0/16",
"Gateway": "10.255.0.1"
}
]
}
@ -5178,14 +5178,14 @@ Get details on a task
"com.docker.network.driver.overlay.vxlanid_list": "256"
}
},
"IPAM": {
"IPAMOptions": {
"Driver": {
"Name": "default"
},
"Configs": [
{
"Family": "UNKNOWN",
"Subnet": "10.255.0.0/16"
"Subnet": "10.255.0.0/16",
"Gateway": "10.255.0.1"
}
]
}
@ -5194,27 +5194,7 @@ Get details on a task
"10.255.0.10/16"
]
}
],
"Endpoint": {
"Spec": {},
"ExposedPorts": [
{
"Protocol": "tcp",
"Port": 6379,
"PublicPort": 30000
}
],
"VirtualIPs": [
{
"NetworkID": "4qvuz4ko70xaltuqbt8956gd1",
"Addr": "10.255.0.2/16"
},
{
"NetworkID": "4qvuz4ko70xaltuqbt8956gd1",
"Addr": "10.255.0.3/16"
}
]
}
]
}
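For a client consuming this endpoint, a minimal Go sketch that decodes only the status-related fields shown above. The `GET /tasks/(id)` path and the daemon address are assumptions (the excerpt shows only the section heading), and the task ID is one taken from the list example:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// taskStatus mirrors the Status fields of the response shown above.
type taskStatus struct {
	Timestamp       string `json:"Timestamp"`
	State           string `json:"State"`
	Message         string `json:"Message"`
	ContainerStatus struct {
		ContainerID string `json:"ContainerID"`
		PID         int    `json:"PID"`
	} `json:"ContainerStatus"`
}

// task keeps only the fields this sketch cares about.
type task struct {
	ID           string     `json:"ID"`
	Slot         int        `json:"Slot"`
	NodeID       string     `json:"NodeID"`
	DesiredState string     `json:"DesiredState"`
	Status       taskStatus `json:"Status"`
}

func main() {
	// Assumptions: daemon on tcp://localhost:2375, GET /tasks/<id> as
	// suggested by the "Get details on a task" heading; the ID below comes
	// from the list-tasks example.
	resp, err := http.Get("http://localhost:2375/tasks/1yljwbmlr8er2waf8orvqpwms")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var t task
	if err := json.NewDecoder(resp.Body).Decode(&t); err != nil {
		panic(err)
	}
	fmt.Printf("%s: desired=%s current=%s (%s)\n",
		t.ID, t.DesiredState, t.Status.State, t.Status.ContainerStatus.ContainerID)
}
```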
**Status codes**:

View file

@ -300,8 +300,8 @@ Create a container
"CpuQuota": 50000,
"CpusetCpus": "0,1",
"CpusetMems": "0,1",
"MaximumIOps": 0,
"MaximumIOBps": 0,
"IOMaximumBandwidth": 0,
"IOMaximumIOps": 0,
"BlkioWeight": 300,
"BlkioWeightDevice": [{}],
"BlkioDeviceReadBps": [{}],
@ -329,6 +329,7 @@ Create a container
"AutoRemove": true,
"NetworkMode": "bridge",
"Devices": [],
"Sysctls": { "net.ipv4.ip_forward": "1" },
"Ulimits": [{}],
"LogConfig": { "Type": "json-file", "Config": {} },
"SecurityOpt": [],
@ -394,10 +395,17 @@ Create a container
- **StopSignal** - Signal to stop a container as a string or unsigned integer. `SIGTERM` by default.
- **HostConfig**
- **Binds** A list of volume bindings for this container. Each volume binding is a string in one of these forms:
+ `host_path:container_path` to bind-mount a host path into the container
+ `host_path:container_path:ro` to make the bind-mount read-only inside the container.
+ `volume_name:container_path` to bind-mount a volume managed by a volume plugin into the container.
+ `volume_name:container_path:ro` to make the bind mount read-only inside the container.
+ `host-src:container-dest` to bind-mount a host path into the
container. Both `host-src`, and `container-dest` must be an
_absolute_ path.
+ `host-src:container-dest:ro` to make the bind-mount read-only
inside the container. Both `host-src`, and `container-dest` must be
an _absolute_ path.
+ `volume-name:container-dest` to bind-mount a volume managed by a
volume driver into the container. `container-dest` must be an
_absolute_ path.
+ `volume-name:container-dest:ro` to mount the volume read-only
inside the container. `container-dest` must be an _absolute_ path.
- **Links** - A list of links for the container. Each link entry should be
in the form of `container_name:alias`.
- **Memory** - Memory limit in bytes.
@ -412,8 +420,8 @@ Create a container
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **MaximumIOps** - Maximum IO absolute rate in terms of IOps.
- **MaximumIOBps** - Maximum IO absolute rate in terms of bytes per second.
- **IOMaximumBandwidth** - Maximum IO absolute rate in terms of bytes per second.
- **IOMaximumIOps** - Maximum IO absolute rate in terms of IOps.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **BlkioWeightDevice** - Block IO weight (relative device weight) in the form of: `"BlkioWeightDevice": [{"Path": "device_path", "Weight": weight}]`
- **BlkioDeviceReadBps** - Limit read rate (bytes per second) from a device in the form of: `"BlkioDeviceReadBps": [{"Path": "device_path", "Rate": rate}]`, for example:
@ -563,8 +571,8 @@ Return low-level information on the container `id`
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"MaximumIOps": 0,
"MaximumIOBps": 0,
"IOMaximumBandwidth": 0,
"IOMaximumIOps": 0,
"BlkioWeight": 0,
"BlkioWeightDevice": [{}],
"BlkioDeviceReadBps": [{}],
@ -1306,7 +1314,7 @@ last four bytes (`uint32`).
It is encoded on the first eight bytes like this:
header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
`STREAM_TYPE` can be:
@ -1740,16 +1748,25 @@ Create an image either by pulling it from the registry or by importing it
**Example request**:
POST /images/create?fromImage=ubuntu HTTP/1.1
POST /images/create?fromImage=busybox&tag=latest HTTP/1.1
**Example response**:
HTTP/1.1 200 OK
Content-Type: application/json
{"status": "Pulling..."}
{"status": "Pulling", "progress": "1 B/ 100 B", "progressDetail": {"current": 1, "total": 100}}
{"error": "Invalid..."}
{"status":"Pulling from library/busybox","id":"latest"}
{"status":"Pulling fs layer","progressDetail":{},"id":"8ddc19f16526"}
{"status":"Downloading","progressDetail":{"current":15881,"total":667590},"progress":"[=\u003e ] 15.88 kB/667.6 kB","id":"8ddc19f16526"}
{"status":"Downloading","progressDetail":{"current":556269,"total":667590},"progress":"[=========================================\u003e ] 556.3 kB/667.6 kB","id":"8ddc19f16526"}
{"status":"Download complete","progressDetail":{},"id":"8ddc19f16526"}
{"status":"Extracting","progressDetail":{"current":32768,"total":667590},"progress":"[==\u003e ] 32.77 kB/667.6 kB","id":"8ddc19f16526"}
{"status":"Extracting","progressDetail":{"current":491520,"total":667590},"progress":"[====================================\u003e ] 491.5 kB/667.6 kB","id":"8ddc19f16526"}
{"status":"Extracting","progressDetail":{"current":667590,"total":667590},"progress":"[==================================================\u003e] 667.6 kB/667.6 kB","id":"8ddc19f16526"}
{"status":"Extracting","progressDetail":{"current":667590,"total":667590},"progress":"[==================================================\u003e] 667.6 kB/667.6 kB","id":"8ddc19f16526"}
{"status":"Pull complete","progressDetail":{},"id":"8ddc19f16526"}
{"status":"Digest: sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6"}
{"status":"Status: Downloaded newer image for busybox:latest"}
...
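A client reads this endpoint as a stream of JSON messages that arrive while the pull runs. A minimal Go sketch of that loop, assuming the daemon listens on tcp://localhost:2375:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

// pullMessage mirrors the progress messages shown in the example response.
type pullMessage struct {
	Status         string `json:"status"`
	ID             string `json:"id"`
	Progress       string `json:"progress"`
	ProgressDetail struct {
		Current int64 `json:"current"`
		Total   int64 `json:"total"`
	} `json:"progressDetail"`
	Error string `json:"error"`
}

func main() {
	// Assumption: the daemon is reachable on tcp://localhost:2375.
	resp, err := http.Post(
		"http://localhost:2375/images/create?fromImage=busybox&tag=latest",
		"application/json", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	// The body stays open while the pull runs; decode messages as they arrive.
	dec := json.NewDecoder(resp.Body)
	for {
		var m pullMessage
		if err := dec.Decode(&m); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if m.Error != "" {
			fmt.Fprintln(os.Stderr, "pull failed:", m.Error)
			os.Exit(1)
		}
		fmt.Println(m.Status, m.ID, m.Progress)
	}
}
```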
When using this endpoint to pull an image from the registry, the
@ -1767,7 +1784,8 @@ a base64-encoded AuthConfig object.
- **repo** Repository name given to an image when it is imported.
The repo may include a tag. This parameter may only be used when importing
an image.
- **tag** Tag or digest.
- **tag** Tag or digest. If empty when pulling an image, this causes all tags
for the given image to be pulled.
**Request Headers**:
@ -2748,16 +2766,16 @@ Sets up an exec instance in a running container `id`
POST /containers/e90e34656806/exec HTTP/1.1
Content-Type: application/json
{
"AttachStdin": false,
"AttachStdout": true,
"AttachStderr": true,
"DetachKeys": "ctrl-p,ctrl-q",
"Tty": false,
"Cmd": [
"date"
]
}
{
"AttachStdin": true,
"AttachStdout": true,
"AttachStderr": true,
"Cmd": ["sh"],
"DetachKeys": "ctrl-p,ctrl-q",
"Privileged": true,
"Tty": true,
"User": "123:456"
}
**Example response**:
@ -2779,7 +2797,10 @@ Sets up an exec instance in a running container `id`
where `<value>` is one of: `a-z`, `@`, `^`, `[`, `,` or `_`.
- **Tty** - Boolean value to allocate a pseudo-TTY.
- **Cmd** - Command to run specified as a string or an array of strings.
- **Privileged** - Boolean value, runs the exec process with extended privileges.
- **User** - A string value specifying the user, and optionally, group to run
the exec process inside the container. Format is one of: `"user"`,
`"user:group"`, `"uid"`, or `"uid:gid"`.
**Status codes**:
@ -2825,6 +2846,7 @@ interactive session with the `exec` command.
- **409** - container is paused
**Stream details**:
Similar to the stream behavior of `POST /containers/(id or name)/attach` API
### Exec Resize
@ -4168,7 +4190,7 @@ Inspect swarm
`POST /swarm/init`
Initialize a new swarm
Initialize a new swarm. The body of the HTTP response includes the node ID.
**Example request**:
@ -4190,8 +4212,12 @@ Initialize a new swarm
**Example response**:
HTTP/1.1 200 OK
Content-Length: 0
Content-Type: text/plain; charset=utf-8
Content-Length: 28
Content-Type: application/json
Date: Thu, 01 Sep 2016 21:49:13 GMT
Server: Docker/1.12.0 (linux)
"7v2t30z9blmxuhnyo6s4cpenp"
**Status codes**:
@ -4423,7 +4449,8 @@ List services
"Reservations": {}
},
"RestartPolicy": {
"Condition": "ANY"
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {}
},
@ -4433,26 +4460,36 @@ List services
}
},
"UpdateConfig": {
"Parallelism": 1
"Parallelism": 1,
"FailureAction": "pause"
},
"EndpointSpec": {
"Mode": "VIP",
"Ingress": "PUBLICPORT",
"ExposedPorts": [
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"Port": 6379
"TargetPort": 6379,
"PublishedPort": 30001
}
]
}
},
"Endpoint": {
"Spec": {},
"ExposedPorts": [
"Spec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 6379,
"PublishedPort": 30001
}
]
},
"Ports": [
{
"Protocol": "tcp",
"Port": 6379,
"PublicPort": 30000
"TargetPort": 6379,
"PublishedPort": 30001
}
],
"VirtualIPs": [
@ -4638,13 +4675,13 @@ image](#create-an-image) section for more details.
- **FailureAction** - Action to take if an updated task fails to run, or stops running during the
update. Values are `continue` and `pause`.
- **Networks** Array of network names or IDs to attach the service to.
- **Endpoint** Properties that can be configured to access and load balance a service.
- **Spec**
- **Mode** The mode of resolution to use for internal load balancing
between tasks (`vip` or `dnsrr`).
- **Ports** Exposed ports that this service is accessible on from the outside, in the form
of: `"Ports": { "<port>/<tcp|udp>: {}" }`
- **VirtualIPs**
- **EndpointSpec** Properties that can be configured to access and load balance a service.
- **Mode** The mode of resolution to use for internal load balancing
between tasks (`vip` or `dnsrr`). Defaults to `vip` if not provided.
- **Ports** List of exposed ports that this service is accessible on from
the outside, in the form of:
`{"Protocol": <"tcp"|"udp">, "PublishedPort": <port>, "TargetPort": <port>}`.
Ports can only be provided if `vip` resolution mode is used.
**Request Headers**:
@ -4697,7 +4734,7 @@ Return information on the service `id`.
"UpdatedAt": "2016-06-07T21:10:20.276301259Z",
"Spec": {
"Name": "redis",
"Task": {
"TaskTemplate": {
"ContainerSpec": {
"Image": "redis"
},
@ -4706,7 +4743,8 @@ Return information on the service `id`.
"Reservations": {}
},
"RestartPolicy": {
"Condition": "ANY"
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {}
},
@ -4716,26 +4754,36 @@ Return information on the service `id`.
}
},
"UpdateConfig": {
"Parallelism": 1
"Parallelism": 1,
"FailureAction": "pause"
},
"EndpointSpec": {
"Mode": "VIP",
"Ingress": "PUBLICPORT",
"ExposedPorts": [
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"Port": 6379
"TargetPort": 6379,
"PublishedPort": 30001
}
]
}
},
"Endpoint": {
"Spec": {},
"ExposedPorts": [
"Spec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 6379,
"PublishedPort": 30001
}
]
},
"Ports": [
{
"Protocol": "tcp",
"Port": 6379,
"PublicPort": 30001
"TargetPort": 6379,
"PublishedPort": 30001
}
],
"VirtualIPs": [
@ -4849,7 +4897,7 @@ image](#create-an-image) section for more details.
as part of this service.
- **Condition** Condition for restart (`none`, `on-failure`, or `any`).
- **Delay** Delay between restart attempts.
- **Attempts** Maximum attempts to restart a given container before giving up (default value
- **MaxAttempts** Maximum attempts to restart a given container before giving up (default value
is 0, which is ignored).
- **Window** The time window used to evaluate the restart policy (default value is
0, which is unbounded).
@ -4860,13 +4908,13 @@ image](#create-an-image) section for more details.
parallelism).
- **Delay** Amount of time between updates.
- **Networks** Array of network names or IDs to attach the service to.
- **Endpoint** Properties that can be configured to access and load balance a service.
- **Spec**
- **Mode** The mode of resolution to use for internal load balancing
between tasks (`vip` or `dnsrr`).
- **Ports** Exposed ports that this service is accessible on from the outside, in the form
of: `"Ports": { "<port>/<tcp|udp>: {}" }`
- **VirtualIPs**
- **EndpointSpec** Properties that can be configured to access and load balance a service.
- **Mode** The mode of resolution to use for internal load balancing
between tasks (`vip` or `dnsrr`). Defaults to `vip` if not provided.
- **Ports** List of exposed ports that this service is accessible on from
the outside, in the form of:
`{"Protocol": <"tcp"|"udp">, "PublishedPort": <port>, "TargetPort": <port>}`.
Ports can only be provided if `vip` resolution mode is used.
**Query parameters**:
@ -4911,7 +4959,6 @@ List tasks
},
"CreatedAt": "2016-06-07T21:07:31.171892745Z",
"UpdatedAt": "2016-06-07T21:07:31.376370513Z",
"Name": "hopeful_cori",
"Spec": {
"ContainerSpec": {
"Image": "redis"
@ -4921,21 +4968,24 @@ List tasks
"Reservations": {}
},
"RestartPolicy": {
"Condition": "ANY"
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {}
},
"ServiceID": "9mnpnzenvg8p8tdbtq4wvbkcz",
"Instance": 1,
"NodeID": "24ifsmvkjbyhk",
"ServiceAnnotations": {},
"Slot": 1,
"NodeID": "60gvrl6tm78dmak4yl7srz94v",
"Status": {
"Timestamp": "2016-06-07T21:07:31.290032978Z",
"State": "FAILED",
"Message": "execution failed",
"ContainerStatus": {}
"State": "running",
"Message": "started",
"ContainerStatus": {
"ContainerID": "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035",
"PID": 677
}
},
"DesiredState": "SHUTDOWN",
"DesiredState": "running",
"NetworksAttachments": [
{
"Network": {
@ -4951,12 +5001,12 @@ List tasks
"com.docker.swarm.internal": "true"
},
"DriverConfiguration": {},
"IPAM": {
"IPAMOptions": {
"Driver": {},
"Configs": [
{
"Family": "UNKNOWN",
"Subnet": "10.255.0.0/16"
"Subnet": "10.255.0.0/16",
"Gateway": "10.255.0.1"
}
]
}
@ -4967,14 +5017,14 @@ List tasks
"com.docker.network.driver.overlay.vxlanid_list": "256"
}
},
"IPAM": {
"IPAMOptions": {
"Driver": {
"Name": "default"
},
"Configs": [
{
"Family": "UNKNOWN",
"Subnet": "10.255.0.0/16"
"Subnet": "10.255.0.0/16",
"Gateway": "10.255.0.1"
}
]
}
@ -4984,26 +5034,6 @@ List tasks
]
}
],
"Endpoint": {
"Spec": {},
"ExposedPorts": [
{
"Protocol": "tcp",
"Port": 6379,
"PublicPort": 30000
}
],
"VirtualIPs": [
{
"NetworkID": "4qvuz4ko70xaltuqbt8956gd1",
"Addr": "10.255.0.2/16"
},
{
"NetworkID": "4qvuz4ko70xaltuqbt8956gd1",
"Addr": "10.255.0.3/16"
}
]
}
},
{
"ID": "1yljwbmlr8er2waf8orvqpwms",
@ -5022,21 +5052,23 @@ List tasks
"Reservations": {}
},
"RestartPolicy": {
"Condition": "ANY"
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {}
},
"ServiceID": "9mnpnzenvg8p8tdbtq4wvbkcz",
"Instance": 1,
"NodeID": "24ifsmvkjbyhk",
"ServiceAnnotations": {},
"Slot": 1,
"NodeID": "60gvrl6tm78dmak4yl7srz94v",
"Status": {
"Timestamp": "2016-06-07T21:07:30.202183143Z",
"State": "FAILED",
"Message": "execution failed",
"ContainerStatus": {}
"State": "shutdown",
"Message": "shutdown",
"ContainerStatus": {
"ContainerID": "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213"
}
},
"DesiredState": "SHUTDOWN",
"DesiredState": "shutdown",
"NetworksAttachments": [
{
"Network": {
@ -5052,12 +5084,12 @@ List tasks
"com.docker.swarm.internal": "true"
},
"DriverConfiguration": {},
"IPAM": {
"IPAMOptions": {
"Driver": {},
"Configs": [
{
"Family": "UNKNOWN",
"Subnet": "10.255.0.0/16"
"Subnet": "10.255.0.0/16",
"Gateway": "10.255.0.1"
}
]
}
@ -5068,14 +5100,14 @@ List tasks
"com.docker.network.driver.overlay.vxlanid_list": "256"
}
},
"IPAM": {
"IPAMOptions": {
"Driver": {
"Name": "default"
},
"Configs": [
{
"Family": "UNKNOWN",
"Subnet": "10.255.0.0/16"
"Subnet": "10.255.0.0/16",
"Gateway": "10.255.0.1"
}
]
}
@ -5084,27 +5116,7 @@ List tasks
"10.255.0.5/16"
]
}
],
"Endpoint": {
"Spec": {},
"ExposedPorts": [
{
"Protocol": "tcp",
"Port": 6379,
"PublicPort": 30000
}
],
"VirtualIPs": [
{
"NetworkID": "4qvuz4ko70xaltuqbt8956gd1",
"Addr": "10.255.0.2/16"
},
{
"NetworkID": "4qvuz4ko70xaltuqbt8956gd1",
"Addr": "10.255.0.3/16"
}
]
}
]
}
]
@ -5144,7 +5156,6 @@ Get details on a task
},
"CreatedAt": "2016-06-07T21:07:31.171892745Z",
"UpdatedAt": "2016-06-07T21:07:31.376370513Z",
"Name": "hopeful_cori",
"Spec": {
"ContainerSpec": {
"Image": "redis"
@ -5154,21 +5165,24 @@ Get details on a task
"Reservations": {}
},
"RestartPolicy": {
"Condition": "ANY"
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {}
},
"ServiceID": "9mnpnzenvg8p8tdbtq4wvbkcz",
"Instance": 1,
"NodeID": "24ifsmvkjbyhk",
"ServiceAnnotations": {},
"Slot": 1,
"NodeID": "60gvrl6tm78dmak4yl7srz94v",
"Status": {
"Timestamp": "2016-06-07T21:07:31.290032978Z",
"State": "FAILED",
"Message": "execution failed",
"ContainerStatus": {}
"State": "running",
"Message": "started",
"ContainerStatus": {
"ContainerID": "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035",
"PID": 677
}
},
"DesiredState": "SHUTDOWN",
"DesiredState": "running",
"NetworksAttachments": [
{
"Network": {
@ -5184,12 +5198,12 @@ Get details on a task
"com.docker.swarm.internal": "true"
},
"DriverConfiguration": {},
"IPAM": {
"IPAMOptions": {
"Driver": {},
"Configs": [
{
"Family": "UNKNOWN",
"Subnet": "10.255.0.0/16"
"Subnet": "10.255.0.0/16",
"Gateway": "10.255.0.1"
}
]
}
@ -5200,14 +5214,14 @@ Get details on a task
"com.docker.network.driver.overlay.vxlanid_list": "256"
}
},
"IPAM": {
"IPAMOptions": {
"Driver": {
"Name": "default"
},
"Configs": [
{
"Family": "UNKNOWN",
"Subnet": "10.255.0.0/16"
"Subnet": "10.255.0.0/16",
"Gateway": "10.255.0.1"
}
]
}
@ -5216,27 +5230,7 @@ Get details on a task
"10.255.0.10/16"
]
}
],
"Endpoint": {
"Spec": {},
"ExposedPorts": [
{
"Protocol": "tcp",
"Port": 6379,
"PublicPort": 30000
}
],
"VirtualIPs": [
{
"NetworkID": "4qvuz4ko70xaltuqbt8956gd1",
"Addr": "10.255.0.2/16"
},
{
"NetworkID": "4qvuz4ko70xaltuqbt8956gd1",
"Addr": "10.255.0.3/16"
}
]
}
]
}
**Status codes**:

View file

@ -107,9 +107,10 @@ Options:
'host': Use the Docker host user namespace
'': Use the Docker daemon user namespace specified by `--userns-remap` option.
--uts string UTS namespace to use
-v, --volume value Bind mount a volume (default []). The comma-delimited
`options` are [rw|ro], [z|Z],
[[r]shared|[r]slave|[r]private], and
-v, --volume value Bind mount a volume (default []). The format
is `[host-src:]container-dest[:<options>]`.
The comma-delimited `options` are [rw|ro],
[z|Z], [[r]shared|[r]slave|[r]private], and
[nocopy]. The 'host-src' is an absolute path
or a name value.
--volume-driver string Optional volume driver for the container

View file

@ -123,26 +123,32 @@ find examples of using Systemd socket activation with Docker and Systemd in the
You can configure the Docker daemon to listen to multiple sockets at the same
time using multiple `-H` options:
# listen using the default unix socket, and on 2 specific IP addresses on this host.
dockerd -H unix:///var/run/docker.sock -H tcp://192.168.59.106 -H tcp://10.10.10.2
```bash
# listen using the default unix socket, and on 2 specific IP addresses on this host.
$ sudo dockerd -H unix:///var/run/docker.sock -H tcp://192.168.59.106 -H tcp://10.10.10.2
```
The Docker client will honor the `DOCKER_HOST` environment variable to set the
`-H` flag for the client.
$ docker -H tcp://0.0.0.0:2375 ps
# or
$ export DOCKER_HOST="tcp://0.0.0.0:2375"
$ docker ps
# both are equal
```bash
$ docker -H tcp://0.0.0.0:2375 ps
# or
$ export DOCKER_HOST="tcp://0.0.0.0:2375"
$ docker ps
# both are equal
```
Setting the `DOCKER_TLS_VERIFY` environment variable to any value other than
the empty string is equivalent to setting the `--tlsverify` flag. The following
are equivalent:
$ docker --tlsverify ps
# or
$ export DOCKER_TLS_VERIFY=1
$ docker ps
```bash
$ docker --tlsverify ps
# or
$ export DOCKER_TLS_VERIFY=1
$ docker ps
```
The Docker client will honor the `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY`
environment variables (or the lowercase versions thereof). `HTTPS_PROXY` takes
@ -188,27 +194,31 @@ For example:
`-H`, when empty, will default to the same value as
when no `-H` was passed in.
`-H` also accepts short form for TCP bindings:
`host:` or `host:port` or `:port`
`-H` also accepts short form for TCP bindings: `host:` or `host:port` or `:port`
Run Docker in daemon mode:
$ sudo <path to>/dockerd -H 0.0.0.0:5555 &
```bash
$ sudo <path to>/dockerd -H 0.0.0.0:5555 &
```
Download an `ubuntu` image:
$ docker -H :5555 pull ubuntu
```bash
$ docker -H :5555 pull ubuntu
```
You can use multiple `-H`, for example, if you want to listen on both
TCP and a Unix socket
# Run docker in daemon mode
$ sudo <path to>/dockerd -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock &
# Download an ubuntu image, use default Unix socket
$ docker pull ubuntu
# OR use the TCP port
$ docker -H tcp://127.0.0.1:2375 pull ubuntu
```bash
# Run docker in daemon mode
$ sudo <path to>/dockerd -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock &
# Download an ubuntu image, use default Unix socket
$ docker pull ubuntu
# OR use the TCP port
$ docker -H tcp://127.0.0.1:2375 pull ubuntu
```
### Daemon storage-driver option
@ -272,29 +282,30 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
* `dm.thinpooldev`
Specifies a custom block storage device to use for the thin pool.
Specifies a custom block storage device to use for the thin pool.
If using a block device for device mapper storage, it is best to use `lvm`
to create and manage the thin-pool volume. This volume is then handed to Docker
to exclusively create snapshot volumes needed for images and containers.
If using a block device for device mapper storage, it is best to use `lvm`
to create and manage the thin-pool volume. This volume is then handed to Docker
to exclusively create snapshot volumes needed for images and containers.
Managing the thin-pool outside of Engine makes for the most feature-rich
method of having Docker utilize device mapper thin provisioning as the
backing storage for Docker containers. The highlights of the lvm-based
thin-pool management feature include: automatic or interactive thin-pool
resize support, dynamically changing thin-pool features, automatic thinp
metadata checking when lvm activates the thin-pool, etc.
Managing the thin-pool outside of Engine makes for the most feature-rich
method of having Docker utilize device mapper thin provisioning as the
backing storage for Docker containers. The highlights of the lvm-based
thin-pool management feature include: automatic or interactive thin-pool
resize support, dynamically changing thin-pool features, automatic thinp
metadata checking when lvm activates the thin-pool, etc.
As a fallback if no thin pool is provided, loopback files are
created. Loopback is very slow, but can be used without any
pre-configuration of storage. It is strongly recommended that you do
not use loopback in production. Ensure your Engine daemon has a
`--storage-opt dm.thinpooldev` argument provided.
As a fallback if no thin pool is provided, loopback files are
created. Loopback is very slow, but can be used without any
pre-configuration of storage. It is strongly recommended that you do
not use loopback in production. Ensure your Engine daemon has a
`--storage-opt dm.thinpooldev` argument provided.
Example use:
Example use:
$ dockerd \
--storage-opt dm.thinpooldev=/dev/mapper/thin-pool
```bash
$ sudo dockerd --storage-opt dm.thinpooldev=/dev/mapper/thin-pool
```
* `dm.basesize`
@ -310,7 +321,10 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
Example use:
$ dockerd --storage-opt dm.basesize=50G
```bash
$ sudo dockerd --storage-opt dm.basesize=50G
```
This will increase the base device size to 50G. The Docker daemon will throw an
error if existing base device size is larger than 50G. A user can use
@ -320,19 +334,23 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
that may already be initialized and inherited by pulled images. Typically,
a change to this value requires additional steps to take effect:
$ sudo service docker stop
$ sudo rm -rf /var/lib/docker
$ sudo service docker start
```bash
$ sudo service docker stop
$ sudo rm -rf /var/lib/docker
$ sudo service docker start
```
Example use:
$ dockerd --storage-opt dm.basesize=20G
```bash
$ sudo dockerd --storage-opt dm.basesize=20G
```
* `dm.loopdatasize`
> **Note**:
> This option configures devicemapper loopback, which should not
> be used in production.
> This option configures devicemapper loopback, which should not
> be used in production.
Specifies the size to use when creating the loopback file for the
"data" device which is used for the thin pool. The default size is
@ -341,7 +359,9 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
Example use:
$ dockerd --storage-opt dm.loopdatasize=200G
```bash
$ sudo dockerd --storage-opt dm.loopdatasize=200G
```
* `dm.loopmetadatasize`
@ -356,7 +376,9 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
Example use:
$ dockerd --storage-opt dm.loopmetadatasize=4G
```bash
$ sudo dockerd --storage-opt dm.loopmetadatasize=4G
```
* `dm.fs`
@ -365,7 +387,9 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
Example use:
$ dockerd --storage-opt dm.fs=ext4
```bash
$ sudo dockerd --storage-opt dm.fs=ext4
```
* `dm.mkfsarg`
@ -373,7 +397,9 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
Example use:
$ dockerd --storage-opt "dm.mkfsarg=-O ^has_journal"
```bash
$ sudo dockerd --storage-opt "dm.mkfsarg=-O ^has_journal"
```
* `dm.mountopt`
@ -381,7 +407,9 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
Example use:
$ dockerd --storage-opt dm.mountopt=nodiscard
```bash
$ sudo dockerd --storage-opt dm.mountopt=nodiscard
```
* `dm.datadev`
@ -395,9 +423,11 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
Example use:
$ dockerd \
--storage-opt dm.datadev=/dev/sdb1 \
--storage-opt dm.metadatadev=/dev/sdc1
```bash
$ sudo dockerd \
--storage-opt dm.datadev=/dev/sdb1 \
--storage-opt dm.metadatadev=/dev/sdc1
```
* `dm.metadatadev`
@ -411,13 +441,17 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
If setting up a new metadata pool it is required to be valid. This can be
achieved by zeroing the first 4k to indicate empty metadata, like this:
$ dd if=/dev/zero of=$metadata_dev bs=4096 count=1
```bash
$ dd if=/dev/zero of=$metadata_dev bs=4096 count=1
```
Example use:
$ dockerd \
--storage-opt dm.datadev=/dev/sdb1 \
--storage-opt dm.metadatadev=/dev/sdc1
```bash
$ sudo dockerd \
--storage-opt dm.datadev=/dev/sdb1 \
--storage-opt dm.metadatadev=/dev/sdc1
```
* `dm.blocksize`
@ -426,7 +460,9 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
Example use:
$ dockerd --storage-opt dm.blocksize=512K
```bash
$ sudo dockerd --storage-opt dm.blocksize=512K
```
* `dm.blkdiscard`
@ -440,7 +476,9 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
Example use:
$ dockerd --storage-opt dm.blkdiscard=false
```bash
$ sudo dockerd --storage-opt dm.blkdiscard=false
```
* `dm.override_udev_sync_check`
@ -450,10 +488,12 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
To view the `udev` sync support of a Docker daemon that is using the
`devicemapper` driver, run:
$ docker info
[...]
Udev Sync Supported: true
[...]
```bash
$ docker info
[...]
Udev Sync Supported: true
[...]
```
When `udev` sync support is `true`, then `devicemapper` and udev can
coordinate the activation and deactivation of devices for containers.
@ -466,7 +506,9 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
To allow the `docker` daemon to start, regardless of `udev` sync not being
supported, set `dm.override_udev_sync_check` to true:
$ dockerd --storage-opt dm.override_udev_sync_check=true
```bash
$ sudo dockerd --storage-opt dm.override_udev_sync_check=true
```
When this value is `true`, the `devicemapper` driver continues and simply warns
you that the errors are happening.
@ -496,7 +538,9 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
Example use:
$ dockerd --storage-opt dm.use_deferred_removal=true
```bash
$ sudo dockerd --storage-opt dm.use_deferred_removal=true
```
* `dm.use_deferred_deletion`
@ -510,9 +554,11 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
To avoid this failure, enable both deferred device deletion and deferred
device removal on the daemon.
$ dockerd \
--storage-opt dm.use_deferred_deletion=true \
--storage-opt dm.use_deferred_removal=true
```bash
$ sudo dockerd \
--storage-opt dm.use_deferred_deletion=true \
--storage-opt dm.use_deferred_removal=true
```
With these two options enabled, if a device is busy when the driver is
deleting a container, the driver marks the device as deleted. Later, when
@ -549,7 +595,7 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
Example use:
```bash
$ dockerd --storage-opt dm.min_free_space=10%
$ sudo dockerd --storage-opt dm.min_free_space=10%
```
#### ZFS options
@ -562,7 +608,9 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
Example use:
$ dockerd -s zfs --storage-opt zfs.fsname=zroot/docker
```bash
$ sudo dockerd -s zfs --storage-opt zfs.fsname=zroot/docker
```
#### Btrfs options
@ -574,7 +622,10 @@ options for `zfs` start with `zfs` and options for `btrfs` start with `btrfs`.
**size** cannot be smaller than **btrfs.min_space**.
Example use:
$ dockerd -s btrfs --storage-opt btrfs.min_space=10G
```bash
$ sudo dockerd -s btrfs --storage-opt btrfs.min_space=10G
```
#### Overlay2 options
@ -599,7 +650,7 @@ control `containerd` startup, manually start `containerd` and pass the path to
the `containerd` socket using the `--containerd` flag. For example:
```bash
$ dockerd --containerd /var/run/dev/docker-containerd.sock
$ sudo dockerd --containerd /var/run/dev/docker-containerd.sock
```
Runtimes can be registered with the daemon either via the
@ -623,9 +674,11 @@ The following is an example adding 2 runtimes via the configuration:
This is the same example via the command line:
$ sudo dockerd --add-runtime runc=runc --add-runtime custom=/usr/local/bin/my-runc-replacement
```bash
$ sudo dockerd --add-runtime runc=runc --add-runtime custom=/usr/local/bin/my-runc-replacement
```
**Note**: defining runtime arguments via the command line is not supported.
> **Note**: defining runtime arguments via the command line is not supported.
## Options for the runtime
@ -640,14 +693,18 @@ cgroups. You can specify only specify `cgroupfs` or `systemd`. If you specify
This example sets the `cgroupdriver` to `systemd`:
$ sudo dockerd --exec-opt native.cgroupdriver=systemd
```bash
$ sudo dockerd --exec-opt native.cgroupdriver=systemd
```
Setting this option applies to all containers the daemon launches.
Windows containers also make use of `--exec-opt` for a special purpose: specifying
the default container isolation technology. For example:
$ dockerd --exec-opt isolation=hyperv
```bash
$ sudo dockerd --exec-opt isolation=hyperv
```
This will make `hyperv` the default isolation technology on Windows. If no isolation
value is specified on daemon start, on Windows client, the default is
@ -655,11 +712,19 @@ value is specified on daemon start, on Windows client, the default is
## Daemon DNS options
To set the DNS server for all Docker containers, use
`dockerd --dns 8.8.8.8`.
To set the DNS server for all Docker containers, use:
```bash
$ sudo dockerd --dns 8.8.8.8
```
To set the DNS search domain for all Docker containers, use:
```bash
$ sudo dockerd --dns-search example.com
```
To set the DNS search domain for all Docker containers, use
`dockerd --dns-search example.com`.
## Insecure registries
@ -754,7 +819,7 @@ using the `--cluster-store-opt` flag, specifying the paths to PEM encoded
files. For example:
```bash
dockerd \
$ sudo dockerd \
--cluster-advertise 192.168.1.2:2376 \
--cluster-store etcd://192.168.1.2:2379 \
--cluster-store-opt kv.cacertfile=/path/to/ca.pem \
@ -804,7 +869,7 @@ authorization plugins when you start the Docker `daemon` using the
`--authorization-plugin=PLUGIN_ID` option.
```bash
dockerd --authorization-plugin=plugin1 --authorization-plugin=plugin2,...
$ sudo dockerd --authorization-plugin=plugin1 --authorization-plugin=plugin2,...
```
The `PLUGIN_ID` value is either the plugin's name or a path to its specification
@ -875,10 +940,10 @@ startup will fail with an error message.
> *before* the `--userns-remap` option is enabled. Once these files exist, the
> daemon can be (re)started and range assignment on user creation works properly.
*Example: starting with default Docker user management:*
**Example: starting with default Docker user management:**
```bash
$ dockerd --userns-remap=default
$ sudo dockerd --userns-remap=default
```
When `default` is provided, Docker will create - or find the existing - user and group
@ -1220,7 +1285,7 @@ The `--tls*` options enable use of specific certificates for individual daemons.
Example script for a separate “bootstrap” instance of the Docker daemon without network:
```bash
$ dockerd \
$ sudo dockerd \
-H unix:///var/run/docker-bootstrap.sock \
-p /var/run/docker-bootstrap.pid \
--iptables=false \

View file

@ -136,7 +136,7 @@ read the [`dockerd`](dockerd.md) reference page.
| [service create](service_create.md) | Create a new service |
| [service inspect](service_inspect.md) | Inspect a service |
| [service ls](service_ls.md) | List services in the swarm |
| [service rm](service_rm.md) | Reemove a swervice from the swarm |
| [service rm](service_rm.md) | Remove a service from the swarm |
| [service scale](service_scale.md) | Set the number of replicas for the desired state of the service |
| [service ps](service_ps.md) | List the tasks of a service |
| [service update](service_update.md) | Update the attributes of a service |

View file

@ -33,7 +33,7 @@ meta data regarding those images are stored. When run for the first time Docker
allocates a certain amount of data space and meta data space from the space
available on the volume where `/var/lib/docker` is mounted.
# EXAMPLES
# Examples
## Display Docker system information

View file

@ -24,11 +24,14 @@ Options:
-t, --timestamps Show timestamps
```
> **Note**: this command is available only for containers with `json-file` and
> `journald` logging drivers.
The `docker logs` command batch-retrieves logs present at the time of execution.
> **Note**: this command is only functional for containers that are started with
> the `json-file` or `journald` logging driver.
For more information about selecting and configuring logging drivers, refer to
[Configure logging drivers](../../admin/logging/overview.md).
The `docker logs --follow` command will continue streaming the new output from
the container's `STDOUT` and `STDERR`.
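As a minimal sketch (the container name `web` is purely illustrative):

```bash
# Stream new log output from a running container
$ docker logs --follow web
```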

View file

@ -136,8 +136,8 @@ $ docker network create -d overlay \
--gateway=192.168.0.100 \
--gateway=192.170.0.100 \
--ip-range=192.168.1.0/24 \
--aux-address a=192.168.1.5 --aux-address b=192.168.1.6 \
--aux-address a=192.170.1.5 --aux-address b=192.170.1.6 \
--aux-address="my-router=192.168.1.5" --aux-address="my-switch=192.168.1.6" \
--aux-address="my-printer=192.170.1.5" --aux-address="my-nas=192.170.1.6" \
my-multihost-network
```
@ -156,7 +156,7 @@ equivalent docker daemon flags used for docker0 bridge:
| `com.docker.network.bridge.enable_ip_masquerade` | `--ip-masq` | Enable IP masquerading |
| `com.docker.network.bridge.enable_icc` | `--icc` | Enable or Disable Inter Container Connectivity |
| `com.docker.network.bridge.host_binding_ipv4` | `--ip` | Default IP when binding container ports |
| `com.docker.network.mtu` | `--mtu` | Set the containers network MTU |
| `com.docker.network.driver.mtu` | `--mtu` | Set the containers network MTU |
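As a sketch (the network name `my-bridge` and the MTU value are illustrative), a driver option from this table can be passed at creation time with `-o`:

```bash
# Create a bridge network with a custom MTU via a driver option
$ docker network create -d bridge \
  -o "com.docker.network.driver.mtu"="1400" \
  my-bridge
```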
The following arguments can be passed to `docker network create` for any
network driver, again with their approximate equivalents to `docker daemon`.

View file

@ -22,9 +22,42 @@ Options:
Use `docker push` to share your images to the [Docker Hub](https://hub.docker.com)
registry or to a self-hosted one.
[Read more about valid image names and tags](tag.md).
Refer to the [`docker tag`](tag.md) reference for more information about valid
image and tag names.
Killing the `docker push` process, for example by pressing `CTRL-c` while it is
running in a terminal, will terminate the push operation.
running in a terminal, terminates the push operation.
Registry credentials are managed by [docker login](login.md).
## Examples
### Pushing a new image to a registry
First save the new image by finding the container ID (using [`docker ps`](ps.md))
and then committing it to a new image name. Note that only `a-z0-9-_.` are
allowed when naming images:
```bash
$ docker commit c16378f943fe rhel-httpd
```
Now, push the image to the registry using the image ID. In this example the
registry is on host named `registry-host` and listening on port `5000`. To do
this, tag the image with the host name or IP address, and the port of the
registry:
```bash
$ docker tag rhel-httpd registry-host:5000/myadmin/rhel-httpd
$ docker push registry-host:5000/myadmin/rhel-httpd
```
Check that this worked by running:
```bash
$ docker images
```
You should see both `rhel-httpd` and `registry-host:5000/myadmin/rhel-httpd`
listed.

View file

@ -115,9 +115,10 @@ Options:
'host': Use the Docker host user namespace
'': Use the Docker daemon user namespace specified by `--userns-remap` option.
--uts string UTS namespace to use
-v, --volume value Bind mount a volume (default []). The comma-delimited
`options` are [rw|ro], [z|Z],
[[r]shared|[r]slave|[r]private], and
-v, --volume value Bind mount a volume (default []). The format
is `[host-src:]container-dest[:<options>]`.
The comma-delimited `options` are [rw|ro],
[z|Z], [[r]shared|[r]slave|[r]private], and
[nocopy]. The 'host-src' is an absolute path
or a name value.
--volume-driver string Optional volume driver for the container
@ -239,6 +240,8 @@ binary (refer to [get the linux binary](
you give the container the full access to create and manipulate the host's
Docker daemon.
For in-depth information about volumes, refer to [manage data in containers](../../tutorials/dockervolumes.md)
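As a minimal sketch of the `-v` format described above, `[host-src:]container-dest[:<options>]` (the paths and image are illustrative):

```bash
# Bind mount /srv/data into the container read-only, with a shared SELinux label
$ docker run -d -v /srv/data:/data:ro,z nginx:alpine
```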
### Publish or expose port (-p, --expose)
$ docker run -p 127.0.0.1:80:8080 ubuntu bash
@ -633,14 +636,14 @@ On Microsoft Windows, can take any of these values:
| `hyperv` | Hyper-V hypervisor partition-based isolation. |
On Windows, the default isolation for client is `hyperv`, and for server is
`process`. Therefore when running on Windows server without a `daemon` option
`process`. Therefore when running on Windows server without a `daemon` option
set, these two commands are equivalent:
```
$ docker run -d --isolation default busybox top
$ docker run -d --isolation process busybox top
```
If you have set the `--exec-opt isolation=hyperv` option on the Docker `daemon`,
If you have set the `--exec-opt isolation=hyperv` option on the Docker `daemon`,
if running on Windows server, any of these commands also result in `hyperv` isolation:
```

View file

@ -138,13 +138,183 @@ $ docker service create \
For more information about labels, refer to [apply custom
metadata](../../userguide/labels-custom-metadata.md).
### Add bind-mounts or volumes
Docker supports two different kinds of mounts, which allow containers to read
from or write to files or directories on other containers or the host operating
system. These types are _data volumes_ (often referred to simply as volumes) and
_bind-mounts_.
A **bind-mount** makes a file or directory on the host available to the
container it is mounted within. A bind-mount may be either read-only or
read-write. For example, a container might share its host's DNS information by
means of a bind-mount of the host's `/etc/resolv.conf` or a container might
write logs to its host's `/var/log/myContainerLogs` directory. If you use
bind-mounts and your host and containers have different notions of permissions,
access controls, or other such details, you will run into portability issues.
A **named volume** is a mechanism for decoupling persistent data needed by your
container from the image used to create the container and from the host machine.
Named volumes are created and managed by Docker, and a named volume persists
even when no container is currently using it. Data in named volumes can be
shared between a container and the host machine, as well as between multiple
containers. Docker uses a _volume driver_ to create, manage, and mount volumes.
You can back up or restore volumes using Docker commands.
Consider a situation where your image starts a lightweight web server. You could
use that image as a base image, copy in your website's HTML files, and package
that into another image. Each time your website changed, you'd need to update
the new image and redeploy all of the containers serving your website. A better
solution is to store the website in a named volume which is attached to each of
your web server containers when they start. To update the website, you just
update the named volume.
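A minimal sketch (the volume name is illustrative) of creating such a named volume up front:

```bash
# Create a named volume that persists independently of any container
$ docker volume create --name website-content
```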
For more information about named volumes, see
[Data Volumes](https://docs.docker.com/engine/tutorials/dockervolumes/).
The following table describes options which apply to both bind-mounts and named
volumes in a service:
| Option | Required | Description
|:-----------------------------------------|:--------------------------|:-----------------------------------------------------------------------------------------
| **type** | | The type of mount, can be either `volume`, or `bind`. Defaults to `volume` if no type is specified.<ul><li>`volume`: mounts a [managed volume](volume_create.md) into the container.</li><li>`bind`: bind-mounts a directory or file from the host into the container.</li></ul>
| **src** or **source** | for `type=bind`&nbsp;only | <ul><li>`type=volume`: `src` is an optional way to specify the name of the volume (for example, `src=my-volume`). If the named volume does not exist, it is automatically created. If no `src` is specified, the volume is assigned a random name which is guaranteed to be unique on the host, but may not be unique cluster-wide. A randomly-named volume has the same lifecycle as its container and is destroyed when the *container* is destroyed (which is upon `service update`, or when scaling or re-balancing the service).</li><li>`type=bind`: `src` is required, and specifies an absolute path to the file or directory to bind-mount (for example, `src=/path/on/host/`). An error is produced if the file or directory does not exist.</li></ul>
| **dst** or **destination** or **target** | yes | Mount path inside the container, for example `/some/path/in/container/`. If the path does not exist in the container's filesystem, the Engine creates a directory at the specified location before mounting the volume or bind-mount.
| **readonly** or **ro** | | The Engine mounts binds and volumes `read-write` unless `readonly` option is given when mounting the bind or volume.<br /><br /><ul><li>`true` or `1` or no value: Mounts the bind or volume read-only.</li><li>`false` or `0`: Mounts the bind or volume read-write.</li></ul>
#### Bind Propagation
Bind propagation refers to whether or not mounts created within a given
bind-mount or named volume can be propagated to replicas of that mount. Consider
a mount point `/mnt`, which is also mounted on `/tmp`. The propagation settings
control whether a mount on `/tmp/a` would also be available on `/mnt/a`. Each
propagation setting has a recursive counterpoint. In the case of recursion,
consider that `/tmp/a` is also mounted as `/foo`. The propagation settings
control whether `/mnt/a` and/or `/tmp/a` would exist.
The `bind-propagation` option defaults to `rprivate` for both bind-mounts and
volume mounts, and is only configurable for bind-mounts. In other words, named
volumes do not support bind propagation.
- **`shared`**: Sub-mounts of the original mount are exposed to replica mounts,
and sub-mounts of replica mounts are also propagated to the
original mount.
- **`slave`**: similar to a shared mount, but only in one direction. If the
original mount exposes a sub-mount, the replica mount can see it.
However, if the replica mount exposes a sub-mount, the original
mount cannot see it.
- **`private`**: The mount is private. Sub-mounts within it are not exposed to
replica mounts, and sub-mounts of replica mounts are not
exposed to the original mount.
- **`rshared`**: The same as shared, but the propagation also extends to and from
mount points nested within any of the original or replica mount
points.
- **`rslave`**: The same as `slave`, but the propagation also extends to and from
mount points nested within any of the original or replica mount
points.
- **`rprivate`**: The default. The same as `private`, meaning that no mount points
anywhere within the original or replica mount points propagate
in either direction.
For more information about bind propagation, see the
[Linux kernel documentation for shared subtree](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt).
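As a hedged sketch (the paths and service name are illustrative), a propagation mode is set as part of a bind-mount in `--mount`:

```bash
# Bind-mount with recursive-slave (rslave) propagation
$ docker service create \
  --name my-service \
  --mount type=bind,source=/path/on/host,destination=/path/in/container,bind-propagation=rslave \
  nginx:alpine
```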
#### Options for Named Volumes
The following options can only be used for named volumes (`type=volume`):
| Option | Description
|:----------------------|:--------------------------------------------------------------------------------------------------------------------
| **volume-driver** | Name of the volume-driver plugin to use for the volume. Defaults to ``"local"``, to use the local volume driver to create the volume if the volume does not exist.
| **volume-label** | One or more custom metadata ("labels") to apply to the volume upon creation. For example, `volume-label=mylabel=hello-world,my-other-label=hello-mars`. For more information about labels, refer to [apply custom metadata](../../userguide/labels-custom-metadata.md).
| **volume-nocopy** | By default, if you attach an empty volume to a container, and files or directories already existed at the mount-path in the container (`dst`), the Engine copies those files and directories into the volume, allowing the host to access them. Set `volume-nocopy` to disable copying files from the container's filesystem to the volume and mount the empty volume.<br /><br />A value is optional:<ul><li>`true` or `1`: Default if you do not provide a value. Disables copying.</li><li>`false` or `0`: Enables copying.</li></ul>
| **volume-opt** | Options specific to a given volume driver, which will be passed to the driver when creating the volume. Options are provided as a comma-separated list of key/value pairs, for example, `volume-opt=some-option=some-value,some-other-option=some-other-value`. For available options for a given driver, refer to that driver's documentation.
#### Differences between "--mount" and "--volume"
The `--mount` flag supports most options that are supported by the `-v`
or `--volume` flag for `docker run`, with some important exceptions:
- The `--mount` flag allows you to specify a volume driver and volume driver
options *per volume*, without creating the volumes in advance. In contrast,
`docker run` allows you to specify a single volume driver which is shared
by all volumes, using the `--volume-driver` flag.
- The `--mount` flag allows you to specify custom metadata ("labels") for a volume,
before the volume is created.
- When you use `--mount` with `type=bind`, the host-path must refer to an *existing*
path on the host. The path will not be created for you and the service will fail
with an error if the path does not exist.
- The `--mount` flag does not allow you to relabel a volume with `Z` or `z` flags,
which are used for `selinux` labeling.
#### Create a service using a named volume
The following example creates a service that uses a named volume:
```bash
$ docker service create \
--name my-service \
--replicas 3 \
--mount type=volume,source=my-volume,destination=/path/in/container,volume-label="color=red",volume-label="shape=round" \
nginx:alpine
```
For each replica of the service, the engine requests a volume named "my-volume"
from the default ("local") volume driver where the task is deployed. If the
volume does not exist, the engine creates a new volume and applies the "color"
and "shape" labels.
When the task is started, the volume is mounted on `/path/in/container/` inside
the container.
Be aware that the default ("local") volume is a locally scoped volume driver.
This means that depending on where a task is deployed, either that task gets a
*new* volume named "my-volume", or shares the same "my-volume" with other tasks
of the same service. Multiple containers writing to a single shared volume can
cause data corruption if the software running inside the container is not
designed to handle concurrent processes writing to the same location. Also take
into account that containers can be re-scheduled by the Swarm orchestrator and
be deployed on a different node.
#### Create a service that uses an anonymous volume
The following command creates a service with three replicas with an anonymous
volume on `/path/in/container`:
```bash
$ docker service create \
--name my-service \
--replicas 3 \
--mount type=volume,destination=/path/in/container \
nginx:alpine
```
In this example, no name (`source`) is specified for the volume, so a new volume
is created for each task. This guarantees that each task gets its own volume,
and volumes are not shared between tasks. Anonymous volumes are removed after
the task using them is complete.
#### Create a service that uses a bind-mounted host directory
The following example bind-mounts a host directory at `/path/in/container` in
the containers backing the service:
```bash
$ docker service create \
--name my-service \
--mount type=bind,source=/path/on/host,destination=/path/in/container \
nginx:alpine
```
### Set service mode (--mode)
You can set the service mode to "replicated" (default) or to "global". A
replicated service runs the number of replica tasks you specify. A global
The service mode determines whether this is a _replicated_ service or a _global_
service. A replicated service runs as many tasks as specified, while a global
service runs on each active node in the swarm.
The following command creates a "global" service:
The following command creates a global service:
```bash
$ docker service create \
@ -160,13 +330,13 @@ constraint expressions. Multiple constraints find nodes that satisfy every
expression (AND match). Constraints can match node or Docker Engine labels as
follows:
| node attribute | matches | example |
|:------------- |:-------------| :---------------------------------------------|
| node.id | node ID | `node.id == 2ivku8v2gvtg4` |
| node.hostname | node hostname | `node.hostname != node-2` |
| node.role | node role: manager | `node.role == manager` |
| node.labels | user defined node labels | `node.labels.security == high` |
| engine.labels | Docker Engine's labels | `engine.labels.operatingsystem == ubuntu 14.04`|
| node attribute | matches | example |
|:----------------|:--------------------------|:------------------------------------------------|
| node.id | node ID | `node.id == 2ivku8v2gvtg4` |
| node.hostname | node hostname | `node.hostname != node-2` |
| node.role | node role: manager | `node.role == manager` |
| node.labels | user defined node labels | `node.labels.security == high` |
| engine.labels | Docker Engine's labels | `engine.labels.operatingsystem == ubuntu 14.04` |
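For example, a minimal sketch (the service name is illustrative) that uses one of these attributes as a placement constraint:

```bash
# Restrict tasks to manager nodes
$ docker service create \
  --name my-nginx \
  --constraint 'node.role == manager' \
  nginx:alpine
```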
`engine.labels` apply to Docker Engine labels like operating system,
drivers, etc. Swarm administrators add `node.labels` for operational purposes by
@ -201,6 +371,7 @@ access to the network.
When you create a service and pass the --network flag to attach the service to
the overlay network:
```bash
$ docker service create \
--replicas 3 \
--network my-network \
@ -208,6 +379,8 @@ $ docker service create \
nginx
716thylsndqma81j6kkkb5aus
```
The swarm extends my-network to each node running the service.
Containers on the same network can access each other using
@ -219,13 +392,13 @@ You can publish service ports to make them available externally to the swarm
using the `--publish` flag:
```bash
docker service create --publish <TARGET-PORT>:<SERVICE-PORT> nginx
$ docker service create --publish <TARGET-PORT>:<SERVICE-PORT> nginx
```
For example:
```bash
docker service create --name my_web --replicas 3 --publish 8080:80 nginx
$ docker service create --name my_web --replicas 3 --publish 8080:80 nginx
```
When you publish a service port, the swarm routing mesh makes the service
@ -241,3 +414,5 @@ the service running on the node. For more information refer to
* [service scale](service_scale.md)
* [service ps](service_ps.md)
* [service update](service_update.md)
<style>table tr > td:first-child { white-space: nowrap;}</style>

View file

@ -67,6 +67,46 @@ for further information.
$ docker service update --limit-cpu 2 redis
```
### Adding and removing mounts
Use the `--mount-add` or `--mount-rm` options to add or remove a service's bind-mounts
or volumes.
The following example creates a service which mounts the `test-data` volume to
`/somewhere`. The next step updates the service to also mount the `other-volume`
volume to `/somewhere-else`volume, The last step unmounts the `/somewhere` mount
point, effectively removing the `test-data` volume. Each command returns the
service name.
- The `--mount-add` flag takes the same parameters as the `--mount` flag on
`service create`. Refer to the [volumes and
bind-mounts](service_create.md#volumes-and-bind-mounts-mount) section in the
`service create` reference for details.
- The `--mount-rm` flag takes the `target` path of the mount.
```bash
$ docker service create \
    --name=myservice \
    --mount type=volume,source=test-data,target=/somewhere \
    nginx:alpine
myservice
$ docker service update \
--mount-add \
type=volume,source=other-volume,target=/somewhere-else \
myservice
myservice
$ docker service update --mount-rm /somewhere myservice
myservice
```
## Related information
* [service create](service_create.md)

View file

@ -45,7 +45,7 @@ To remove `worker2`, issue the following command from `worker2` itself:
$ docker swarm leave
Node left the default swarm.
```
To remove an inactive node, use the [`node rm`](swarm_rm.md) command instead.
To remove an inactive node, use the [`node rm`](node_rm.md) command instead.
## Related information

View file

@ -46,7 +46,7 @@ Another configuration you can change with this command is restart policy,
and the new restart policy takes effect instantly after you run `docker update`
on a container.
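As a minimal sketch (the container name is illustrative):

```bash
# Change the restart policy of an existing container in place
$ docker update --restart=on-failure:3 my-container
```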
## EXAMPLES
## Examples
The following sections illustrate ways to use this command.

View file

@ -1358,7 +1358,7 @@ If the operator uses `--link` when starting a new client container in the
default bridge network, then the client container can access the exposed
port via a private networking interface.
If `--link` is used when starting a container in a user-defined network as
described in [*Docker network overview*](../userguide/networking/index.md)),
described in [*Docker network overview*](../userguide/networking/index.md),
it will provide a named alias for the container being linked to.
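A minimal sketch, with network, container, and alias names that are purely illustrative:

```bash
# On a user-defined network, --link provides the alias "database" for the "db" container
$ docker network create my-net
$ docker run -d --name db --net my-net redis:alpine
$ docker run --rm --net my-net --link db:database alpine ping -c 1 database
```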
### ENV (environment variables)

View file

@ -136,7 +136,7 @@ Finally, if you run Docker on a server, it is recommended to run
exclusively Docker in the server, and move all other services within
containers controlled by Docker. Of course, it is fine to keep your
favorite admin tools (probably at least an SSH server), as well as
existing monitoring/supervision processes (e.g., NRPE, collectd, etc).
existing monitoring/supervision processes, such as NRPE and collectd.
## Linux kernel capabilities

View file

@ -18,7 +18,7 @@ in a swarm use mutual Transport Layer Security (TLS) to authenticate, authorize,
and encrypt the communications between themselves and other nodes in the swarm.
When you create a swarm by running `docker swarm init`, the Docker Engine
designates istself as a manager node. By default, the manager node generates
designates itself as a manager node. By default, the manager node generates
itself a new root Certificate Authority (CA) along with a key pair to secure
communications with other nodes that join the swarm. If you prefer, you can pass
the `--external-ca` flag to specify a root CA external to the swarm. Refer to

View file

@ -65,7 +65,7 @@ $ docker network ls
NETWORK ID NAME DRIVER SCOPE
f9145f09b38b bridge bridge local
..snip..
bd0befxwiva4 my-network overlay swarm
273d53261bcd my-network overlay swarm
```
The `swarm` scope indicates that the network is available for use with services
@ -123,7 +123,7 @@ $ docker network inspect my-network
[
{
"Name": "my-network",
"Id": "7m2rjx0a97n88wzr4nu8772r3",
"Id": "273d53261bcdfda5f198587974dae3827e947ccd7e74a41bf1f482ad17fa0d33",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,

View file

@ -34,7 +34,7 @@ If you are brand new to Docker, see [About Docker Engine](../../index.md).
To run this tutorial, you need the following:
* [three networked host machines](#three-networked-host-machines)
* [Docker Engine 1.12 or later installed](#docker-engine-1-12-or-later)
* [Docker Engine 1.12 or later installed](#docker-engine-1-12-or-newer)
* [the IP address of the manager machine](#the-ip-address-of-the-manager-machine)
* [open ports between the hosts](#open-ports-between-the-hosts)
@ -87,7 +87,7 @@ will serve as the single swarm node.
* Currently, you cannot use Docker for Mac or Windows alone to test a
_multi-node_ swarm. However, you can use the included version of [Docker
Machine](/machine/overview.md) to create the swarm nodes, then follow the
Machine](/machine/overview.md) to create the swarm nodes (see [Get started with Docker Machine and a local VM](/machine/get-started.md)), then follow the
tutorial for all multi-node features. For this scenario, you run commands from
a Docker for Mac or Docker for Windows host, but that Docker host itself is
_not_ participating in the swarm (i.e., it will not be `manager1`, `worker1`,

View file

@ -19,7 +19,7 @@ Docker is an open platform for developing, shipping, and running applications.
Docker enables you to separate your applications from your infrastructure so
you can deliver software quickly. With Docker, you can manage your infrastructure
in the same ways you manage your applications. By taking advantage of Docker's
methodoligies for shipping, testing, and deploying code quickly, you can
methodologies for shipping, testing, and deploying code quickly, you can
significantly reduce the delay between writing code and running it in production.
## What is the Docker platform?

View file

@ -134,6 +134,43 @@ image. We recommend the [Debian image](https://hub.docker.com/_/debian/)
since it's very tightly controlled and kept minimal (currently under 150 MB),
while still being a full distribution.
### LABEL
[Understanding object labels](../labels-custom-metadata.md)
You can add labels to your image to help organize images by project, record
licensing information, aid in automation, or for other reasons. For each
label, add a line beginning with `LABEL` and with one or more key-value pairs.
The following examples show the different acceptable formats. Explanatory comments
are included inline.
>**Note**: If your string contains spaces, it must be quoted **or** the spaces
must be escaped. If your string contains inner quote characters (`"`), escape
them as well.
```dockerfile
# Set one or more individual labels
LABEL com.example.version="0.0.1-beta"
LABEL vendor="ACME Incorporated"
LABEL com.example.release-date="2015-02-12"
LABEL com.example.version.is-production=""
# Set multiple labels on one line
LABEL com.example.version="0.0.1-beta" com.example.release-date="2015-02-12"
# Set multiple labels at once, using line-continuation characters to break long lines
LABEL vendor=ACME\ Incorporated \
com.example.is-beta= \
com.example.is-production="" \
com.example.version="0.0.1-beta" \
com.example.release-date="2015-02-12"
```
See [Understanding object labels](../labels-custom-metadata.md) for
guidelines about acceptable label keys and values. For information about
querying labels, refer to the items related to filtering in
[Managing labels on objects](../labels-custom-metadata.md#managing-labels-on-objects).
### RUN
[Dockerfile reference for the RUN instruction](../../reference/builder.md#run)
@ -142,7 +179,7 @@ As always, to make your `Dockerfile` more readable, understandable, and
maintainable, split long or complex `RUN` statements on multiple lines separated
with backslashes.
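For instance, a sketch (the package names are illustrative) of a long `RUN` split across lines:

```dockerfile
# Split a long RUN across lines with backslashes for readability
RUN apt-get update && apt-get install -y \
    curl \
    git \
 && rm -rf /var/lib/apt/lists/*
```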
### apt-get
#### apt-get
Probably the most common use-case for `RUN` is an application of `apt-get`. The
`RUN apt-get` command, because it installs packages, has several gotchas to look
@ -238,12 +275,12 @@ keep the image size down. Since the `RUN` statement starts with
The `CMD` instruction should be used to run the software contained by your
image, along with any arguments. `CMD` should almost always be used in the
form of `CMD ["executable", "param1", "param2"…]`. Thus, if the image is for a
service (Apache, Rails, etc.), you would run something like
service, such as Apache and Rails, you would run something like
`CMD ["apache2","-DFOREGROUND"]`. Indeed, this form of the instruction is
recommended for any service-based image.
In most other cases, `CMD` should be given an interactive shell (bash, python,
perl, etc), for example, `CMD ["perl", "-de0"]`, `CMD ["python"]`, or
In most other cases, `CMD` should be given an interactive shell, such as bash, python
and perl. For example, `CMD ["perl", "-de0"]`, `CMD ["python"]`, or
`CMD ["php", "-a"]`. Using this form means that when you execute something like
`docker run -it python`, you'll get dropped into a usable shell, ready to go.
`CMD` should rarely be used in the manner of `CMD ["param", "param"]` in

View file

@ -3,6 +3,7 @@
title = "Introduction"
description = "Introduction to user guide"
keywords = ["docker, introduction, documentation, about, technology, docker.io, user, guide, user's, manual, platform, framework, home, intro"]
identifier = "engine_guide_intro"
[menu.main]
parent="engine_guide"
+++
@ -64,6 +65,25 @@ learning how to manage data, volumes and mounts inside our containers.
Go to [Managing Data in Containers](../tutorials/dockervolumes.md).
## Managing metadata (labels) for Docker objects
Labels are a mechanism for applying metadata to Docker objects, including:
- Images
- Containers
- Local daemons
- Volumes
- Networks
- Swarm nodes
- Swarm services
You can use labels to organize your images, record licensing information, annotate
relationships between containers, volumes, and networks, or in any way that makes
sense for your business or application.
Go to [Managing Docker object labels](labels-custom-metadata.md).
## Docker products that complement Engine
Often, one powerful technology spawns many other inventions that make it easier to get to, easier to use, and more powerful. These spawned inventions share one common characteristic: they augment the central technology. The following Docker products expand on the core Docker Engine functions.

View file

@ -1,230 +1,116 @@
<!--[metadata]>
+++
title = "Apply custom metadata"
description = "Learn how to work with custom metadata in Docker, using labels."
keywords = ["Usage, user guide, labels, metadata, docker, documentation, examples, annotating"]
title = "Managing Docker object labels"
description = "Description of labels, which are used to manage metadata on Docker objects."
keywords = ["Usage, user guide, labels, metadata, docker, documentation, examples, annotating"]
[menu.main]
parent = "engine_guide"
parent = "engine_guide_intro"
weight=90
+++
<![end-metadata]-->
# Apply custom metadata
# About labels
You can apply metadata to your images, containers, volumes, networks, nodes, services or daemons via
labels. Labels serve a wide range of uses, such as adding notes or licensing
information to an image, or to identify a host.
Labels are a mechanism for applying metadata to Docker objects, including:
A label is a `<key>` / `<value>` pair. Docker stores the label values as
*strings*. You can specify multiple labels but each `<key>` must be
unique or the value will be overwritten. If you specify the same `key` several
times but with different values, newer labels overwrite previous labels. Docker
uses the last `key=value` you supply.
- Images
- Containers
- Local daemons
- Volumes
- Networks
- Swarm nodes
- Swarm services
>**Note:** Support for daemon-labels was added in Docker 1.4.1. Labels on
>containers and images were added in Docker 1.6.0
You can use labels to organize your images, record licensing information, annotate
relationships between containers, volumes, and networks, or in any way that makes
sense for your business or application.
## Label keys (namespaces)
# Label keys and values
Docker puts no hard restrictions on the `key` used for a label. However, using
simple keys can easily lead to conflicts. For example, you have chosen to
categorize your images by CPU architecture using "architecture" labels in
your Dockerfiles:
A label is a key-value pair, stored as a string. You can specify multiple labels
for an object, but each key-value pair must be unique within an object. If the
same key is given multiple values, the most-recently-written value overwrites
all previous values.
LABEL architecture="amd64"
## Key format recommendations
LABEL architecture="ARMv7"
A label _key_ is the left-hand side of the key-value pair. Keys are alphanumeric
strings which may contain periods (`.`) and hyphens (`-`). Most Docker users use
images created by other organizations, and the following guidelines help to
prevent inadvertent duplication of labels across objects, especially if you plan
to use labels as a mechanism for automation.
Another user may apply the same label based on a building's "architecture":
- Authors of third-party tools should prefix each label key with the
reverse DNS notation of a domain they own, such as `com.example.some-label`.
LABEL architecture="Art Nouveau"
To prevent naming conflicts, Docker recommends using namespaces to label keys
using reverse domain notation. Use the following guidelines to name your keys:
- All (third-party) tools should prefix their keys with the
reverse DNS notation of a domain controlled by the author. For
example, `com.example.some-label`.
- Do not use a domain in your label key without the domain owner's permission.
- The `com.docker.*`, `io.docker.*` and `org.dockerproject.*` namespaces are
reserved for Docker's internal use.
reserved by Docker for internal use.
- Keys should only consist of lower-cased alphanumeric characters,
dots and dashes (for example, `[a-z0-9-.]`).
- Label keys should begin and end with a lower-case letter and should only
contain lower-case alphanumeric characters, the period character (`.`), and
the hyphen character (`-`). Consecutive periods or hyphens are not allowed.
- Keys should start *and* end with an alpha numeric character.
- The period character (`.`) separates namespace "fields". Label keys without
namespaces are reserved for CLI use, allowing users of the CLI to interactively
label Docker objects using shorter typing-friendly strings.
- Keys may not contain consecutive dots or dashes.
These guidelines are not currently enforced and additional guidelines may apply
to specific use cases.
- Keys *without* namespace (dots) are reserved for CLI use. This allows end-
users to add metadata to their containers and images without having to type
cumbersome namespaces on the command-line.
## Value guidelines
Label values can contain any data type that can be represented as a string,
including (but not limited to) JSON, XML, CSV, or YAML. The only requirement is
that the value be serialized to a string first, using a mechanism specific to
the type of structure. For instance, to serialize JSON into a string, you might
use the `JSON.stringify()` JavaScript method.
Since Docker does not deserialize the value, you cannot treat a JSON or XML
document as a nested structure when querying or filtering by label value unless
you build this functionality into third-party tooling.
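For instance, a hedged sketch of a small JSON document serialized to a single string and stored under an illustrative key:

```dockerfile
# Docker stores the value as an opaque string; it does not parse the JSON
LABEL com.example.image-specs="{\"Description\":\"A containerized foobar\",\"Version\":\"0.0.1-beta\"}"
```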
# Managing labels on objects
Each type of object with support for labels has mechanisms for adding and
managing them and using them as they relate to that type of object. These links
provide a good place to start learning about how you can use labels in your
Docker deployments.
Labels on images, containers, local daemons, volumes, and networks are static for
the lifetime of the object. To change these labels you must recreate the object.
Labels on swarm nodes and services can be updated dynamically.
These are simply guidelines and Docker does not *enforce* them. However, for
the benefit of the community, you *should* use namespaces for your label keys.
- Images and containers
- [Adding labels to images](../reference/builder.md#label)
- [Overriding a container's labels at runtime](../reference/commandline/run.md#set-metadata-on-container-l-label-label-file)
- [Inspecting labels on images or containers](../reference/commandline/inspect.md)
- [Filtering images by label](../reference/commandline/inspect.md#filtering)
- [Filtering containers by label](../reference/commandline/ps.md#filtering)
- Local Docker daemons
- [Adding labels to a Docker daemon at runtime](../reference/commandline/dockerd.md)
- [Inspecting a Docker daemon's labels](../reference/commandline/info.md)
## Store structured data in labels
- Volumes
- [Adding labels to volumes](../reference/commandline/volume_create.md)
- [Inspecting a volume's labels](../reference/commandline/volume_inspect.md)
- [Filtering volumes by label](../reference/commandline/volume_ls.md#filtering)
Label values can contain any data type as long as it can be represented as a
string. For example, consider this JSON document:
- Networks
- [Adding labels to a network](../reference/commandline/network_create.md)
- [Inspecting a network's labels](../reference/commandline/network_inspect.md)
- [Filtering networks by label](../reference/commandline/network_ls.md#filtering)
- Swarm nodes
- [Adding or updating a swarm node's labels](../reference/commandline/node_update.md#add-label-metadata-to-a-node)
- [Inspecting a swarm node's labels](../reference/commandline/node_inspect.md)
- [Filtering swarm nodes by label](../reference/commandline/node_ls.md#filtering)
{
"Description": "A containerized foobar",
"Usage": "docker run --rm example/foobar [args]",
"License": "GPL",
"Version": "0.0.1-beta",
"aBoolean": true,
"aNumber" : 0.01234,
"aNestedArray": ["a", "b", "c"]
}
You can store this struct in a label by serializing it to a string first:
LABEL com.example.image-specs="{\"Description\":\"A containerized foobar\",\"Usage\":\"docker run --rm example\\/foobar [args]\",\"License\":\"GPL\",\"Version\":\"0.0.1-beta\",\"aBoolean\":true,\"aNumber\":0.01234,\"aNestedArray\":[\"a\",\"b\",\"c\"]}"
While it is *possible* to store structured data in label values, Docker treats
this data as a 'regular' string. This means that Docker doesn't offer ways to
query (filter) based on nested properties. If your tool needs to filter on
nested properties, the tool itself needs to implement this functionality.
## Add labels to images
To add labels to an image, use the `LABEL` instruction in your Dockerfile:
LABEL [<namespace>.]<key>=<value> ...
The `LABEL` instruction adds a label to your image. A `LABEL` consists of a `<key>`
and a `<value>`.
Use an empty string for labels that don't have a `<value>`,
Use surrounding quotes or backslashes for labels that contain
white space characters in the `<value>`:
LABEL vendor=ACME\ Incorporated
LABEL com.example.version.is-beta=
LABEL com.example.version.is-production=""
LABEL com.example.version="0.0.1-beta"
LABEL com.example.release-date="2015-02-12"
The `LABEL` instruction also supports setting multiple `<key>` / `<value>` pairs
in a single instruction:
LABEL com.example.version="0.0.1-beta" com.example.release-date="2015-02-12"
Long lines can be split up by using a backslash (`\`) as continuation marker:
LABEL vendor=ACME\ Incorporated \
com.example.is-beta= \
com.example.is-production="" \
com.example.version="0.0.1-beta" \
com.example.release-date="2015-02-12"
Docker recommends you add multiple labels in a single `LABEL` instruction. Using
individual instructions for each label can result in an inefficient image. This
is because each `LABEL` instruction in a Dockerfile produces a new IMAGE layer.
You can view the labels via the `docker inspect` command:
$ docker inspect 4fa6e0f0c678
...
"Labels": {
"vendor": "ACME Incorporated",
"com.example.is-beta": "",
"com.example.is-production": "",
"com.example.version": "0.0.1-beta",
"com.example.release-date": "2015-02-12"
}
...
# Inspect labels on container
$ docker inspect -f "{{json .Config.Labels }}" 4fa6e0f0c678
{"Vendor":"ACME Incorporated","com.example.is-beta":"", "com.example.is-production":"", "com.example.version":"0.0.1-beta","com.example.release-date":"2015-02-12"}
# Inspect labels on images
$ docker inspect -f "{{json .ContainerConfig.Labels }}" myimage
## Query labels
Besides storing metadata, you can filter images and containers by label. To list all
running containers that have the `com.example.is-beta` label:
# List all running containers that have a `com.example.is-beta` label
$ docker ps --filter "label=com.example.is-beta"
List all running containers with the label `color` that have a value `blue`:
$ docker ps --filter "label=color=blue"
List all images with the label `vendor` that have the value `ACME`:
$ docker images --filter "label=vendor=ACME"
## Container labels
docker run \
-d \
--label com.example.group="webservers" \
--label com.example.environment="production" \
busybox \
top
Please refer to the [Query labels](#query-labels) section above for information
on how to query labels set on a container.
## Daemon labels
dockerd \
--dns 8.8.8.8 \
--dns 8.8.4.4 \
-H unix:///var/run/docker.sock \
--label com.example.environment="production" \
--label com.example.storage="ssd"
These labels appear as part of the `docker info` output for the daemon:
$ docker -D info
Containers: 12
Running: 5
Paused: 2
Stopped: 5
Images: 672
Server Version: 1.9.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 697
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-22-generic
Operating System: Ubuntu 15.04
CPUs: 24
Total Memory: 62.86 GiB
Name: docker
ID: I54V:OLXT:HVMM:TPKO:JPHQ:CQCD:JNLC:O3BZ:4ZVJ:43XJ:PFHZ:6N2S
Debug mode (server): true
File Descriptors: 59
Goroutines: 159
System Time: 2015-09-23T14:04:20.699842089+08:00
EventsListeners: 0
Init SHA1:
Init Path: /usr/bin/docker
Docker Root Dir: /var/lib/docker
Http Proxy: http://test:test@localhost:8080
Https Proxy: https://test:test@localhost:8080
WARNING: No swap limit support
Username: svendowideit
Registry: [https://index.docker.io/v1/]
Labels:
com.example.environment=production
com.example.storage=ssd
- Swarm services
- [Adding labels when creating a swarm service](../reference/commandline/service_create.md#set-metadata-on-a-service-l-label)
- [Updating a swarm service's labels](../reference/commandline/service_update.md)
- [Inspecting a swarm service's labels](../reference/commandline/service_inspect.md)
- [Filtering swarm services by label](../reference/commandline/service_ls.md#filtering)

View file

@ -0,0 +1,265 @@
<!--[metadata]>
+++
title = "Get started with macvlan network driver"
description = "Use macvlan for container networking"
keywords = ["Examples, Usage, network, docker, documentation, user guide, macvlan, cluster"]
[menu.main]
parent = "smn_networking"
weight=-3
+++
<![end-metadata]-->
# Macvlan Network Driver
### Getting Started
The Macvlan driver is provided so that Docker can gather users' use cases and vet the implementation to ensure a hardened, production-ready driver. Libnetwork now gives users total control over both IPv4 and IPv6 addressing. The VLAN drivers build on top of that, giving operators complete control of layer 2 VLAN tagging for users interested in underlay network integration. For overlay deployments that abstract away physical constraints, see the [multi-host overlay](https://docs.docker.com/engine/userguide/networking/get-started-overlay/) driver.
Macvlan is a new twist on the tried and true network virtualization technique. The Linux implementations are extremely lightweight because rather than using the traditional Linux bridge for isolation, they are simply associated to a Linux Ethernet interface or sub-interface to enforce separation between networks and connectivity to the physical network.
Macvlan offers a number of unique features and plenty of room for further innovations with the various modes. Two high-level advantages of these approaches are the positive performance implications of bypassing the Linux bridge and the simplicity of having fewer moving parts. Removing the bridge that traditionally resides between the Docker host NIC and the container interface leaves a very simple setup consisting of container interfaces attached directly to the Docker host interface. The result is easy access for externally facing services, as there are no port mappings in these scenarios.
### Pre-Requisites
- The examples on this page are all single host and setup using Docker 1.12.0+
- All of the examples can be performed on a single host running Docker. Any examples using a sub-interface like `eth0.10` can be replaced with `eth0` or any other valid parent interface on the Docker host. Sub-interfaces with a `.` are created on the fly. The `-o parent` interface can also be left out of `docker network create` altogether, and the driver will create a `dummy` interface that enables local host connectivity to perform the examples.
- Kernel requirements:
- To check your current kernel version, use `uname -r` to display your kernel version
- Macvlan Linux kernel v3.9–3.19 and 4.0+
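For instance, a minimal sketch of the no-parent case mentioned above; the network name `mv_dummy`, the container name and the subnet are illustrative choices rather than values used elsewhere on this page:

```
# Omit -o parent= and the driver creates a dummy interface,
# giving local-host-only connectivity for trying out the examples
docker network create -d macvlan \
--subnet=172.28.0.0/24 \
mv_dummy

# Containers attached to this network can reach each other on the local host
docker run --net=mv_dummy -itd --name mv_dummy_test alpine /bin/sh
```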
### MacVlan Bridge Mode Example Usage
Macvlan Bridge mode has a unique MAC address per container used to track MAC to port mappings by the Docker host.
- Macvlan driver networks are attached to a parent Docker host interface. Examples are a physical interface such as `eth0`, a sub-interface for 802.1q VLAN tagging like `eth0.10` (`.10` representing VLAN `10`) or even bonded host adaptors which bundle two Ethernet interfaces into a single logical interface.
- The specified gateway is external to the host provided by the network infrastructure.
- Each Macvlan Bridge mode Docker network is isolated from the others and there can be only one network attached to a parent interface at a time. There is a theoretical limit of 4,094 sub-interfaces per host adaptor that a Docker network could be attached to.
- Any container inside the same subnet can talk to any other container in the same network without a gateway in `macvlan bridge` mode.
- The same `docker network` commands apply to the vlan drivers (a short example follows this list).
- In Macvlan mode, containers on separate networks cannot reach one another without an external process routing between the two networks/subnets. This also applies to multiple subnets within the same `docker network`.
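As noted in the list above, the standard `docker network` commands work unchanged against a macvlan network; the placeholders below follow the `<network_name or id>` convention used later on this page:

```
# List networks; macvlan networks show the macvlan driver in the DRIVER column
docker network ls

# Inspect a macvlan network to see its subnet, gateway and -o parent option
docker network inspect <network_name or id>

# Remove the network when it is no longer needed
docker network rm <network_name or id>
```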
In the following example, `eth0` on the Docker host has an IP on the `172.16.86.0/24` network and a default gateway of `172.16.86.1`. The gateway is an external router with an address of `172.16.86.1`. An IP address is not required on the Docker host interface `eth0` in `bridge` mode; it merely needs to be on the proper upstream network to get forwarded by a network switch or network router.
![Simple Macvlan Bridge Mode Example](images/macvlan_bridge_simple.png)
**Note:** For Macvlan bridge mode the subnet values need to match those of the Docker host's NIC interface. For example, use the same subnet and gateway as the Docker host Ethernet interface specified by the `-o parent=` option.
- The parent interface used in this example is `eth0` and it is on the subnet `172.16.86.0/24`. The containers in the `docker network` also need to be on this same subnet as the parent `-o parent=`. The gateway is an external router on the network, not IP masquerading or any other local proxy.
- The driver is specified with the `-d driver_name` option. In this case, `-d macvlan`.
- The parent interface `-o parent=eth0` is configured as follows:
```
ip addr show eth0
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 172.16.86.250/24 brd 172.16.86.255 scope global eth0
```
Create the macvlan network and run a couple of containers attached to it:
```
# Macvlan (-o macvlan_mode= Defaults to Bridge mode if not specified)
docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 pub_net
# Run a container on the new network specifying the --ip address.
docker run --net=pub_net --ip=172.16.86.10 -itd alpine /bin/sh
# Start a second container and ping the first
docker run --net=pub_net -it --rm alpine /bin/sh
ping -c 4 172.16.86.10
```
Take a look at the container's IP address and routing table:
```
ip a show eth0
eth0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 46:b2:6b:26:2f:69 brd ff:ff:ff:ff:ff:ff
inet 172.16.86.2/24 scope global eth0
ip route
default via 172.16.86.1 dev eth0
172.16.86.0/24 dev eth0 src 172.16.86.2
# NOTE: the containers can NOT ping the underlying host interfaces as
# they are intentionally filtered by Linux for additional isolation.
# In this case the containers cannot ping the -o parent=172.16.86.250
```
You can explicitly specify the `bridge` mode option with `-o macvlan_mode=bridge`. It is the default, so the network will be in `bridge` mode either way.
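Spelled out explicitly, the `pub_net` create from above is equivalent to the following; since only one macvlan network can be attached to a parent interface at a time, run it in place of (not alongside) the earlier command:

```
# Identical to the earlier pub_net create, with the default mode made explicit
docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 \
-o macvlan_mode=bridge pub_net
```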
While the `eth0` interface does not need to have an IP address in Macvlan Bridge mode, it is not uncommon to have an IP address on the interface. Addresses can be excluded from being handed out by the default built-in IPAM by using the `--aux-address=x.x.x.x` flag. This blacklists the specified address from being handed out to containers. The following is the same network example as above, but blocking the `-o parent=eth0` address from being handed out to a container.
```
docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
--aux-address="exclude_host=172.16.86.250" \
-o parent=eth0 pub_net
```
Another option for subpool IP address selection in a network provided by the default Docker IPAM driver is to use `--ip-range=`. This instructs the driver to allocate container addresses from this pool rather than from the broader range given by the `--subnet=` argument, as seen in the following example which allocates addresses beginning at `192.168.32.128` and increments upward from there.
```
docker network create -d macvlan \
--subnet=192.168.32.0/24 \
--ip-range=192.168.32.128/25 \
--gateway=192.168.32.254 \
-o parent=eth0 macnet32
# Start a container and verify the address is 192.168.32.128
docker run --net=macnet32 -it --rm alpine /bin/sh
```
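To double-check the allocation, you can inspect the network from the host and the interface from inside the container; these are standard Docker and iproute2 commands, shown here only as a quick verification sketch:

```
# On the Docker host: shows the 192.168.32.0/24 subnet, the 192.168.32.128/25
# ip-range and the containers connected to macnet32
docker network inspect macnet32

# Inside the container started above: the assigned address should fall within 192.168.32.128/25
ip addr show eth0
```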
The network can then be deleted with:
```
docker network rm <network_name or id>
```
- **Note:** In Macvlan you are not able to ping or communicate with the default namespace IP address. For example, if you create a container and try to ping the Docker host's `eth0` it will **not** work. That traffic is explicitly filtered by the kernel modules themselves to offer additional provider isolation and security.
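A quick way to see this isolation, reusing the `pub_net` network and the host address `172.16.86.250` from the example above:

```
# Expected to fail: traffic from a macvlan container to the Docker host's
# own parent interface is filtered by the kernel by design
docker run --net=pub_net -it --rm alpine ping -c 2 172.16.86.250
```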
For more on Docker networking commands see [Working with Docker network commands](https://docs.docker.com/engine/userguide/networking/work-with-networks/)
### Macvlan 802.1q Trunk Bridge Mode Example Usage
VLANs (Virtual Local Area Networks) have long been a primary means of virtualizing data center networks and are still in virtually all existing networks today. VLANs work by tagging a Layer-2 isolation domain with a 12-bit identifier, ranging from 1-4094, that is inserted into the packet header and enables a logical grouping of one or more subnets of both IPv4 and IPv6. It is very common for network operators to separate traffic using VLANs based on a subnet's function or security profile such as `web`, `db` or any other isolation need.
It is very common to have a compute host requirement of running multiple virtual networks concurrently on a host. Linux networking has long supported VLAN tagging, also known by its standard 802.1q, for maintaining datapath isolation between networks. The Ethernet link connected to a Docker host can be configured to support the 802.1q VLAN IDs, by creating Linux sub-interfaces, each one dedicated to a unique VLAN ID.
![Multi Tenant 802.1q Vlans](images/multi_tenant_8021q_vlans.png)
Trunking 802.1q to a Linux host is notoriously painful for many in operations. It requires configuration file changes in order to be persistent through a reboot. If a bridge is involved, a physical NIC needs to be moved into the bridge and the bridge then gets the IP address. This has led to many a stranded server, since the risk of cutting off access during that convoluted process is high.
Like all of the Docker network drivers, the overarching goal is to alleviate the operational pains of managing network resources. To that end, when a network receives a sub-interface as the parent that does not exist, the drivers create the VLAN tagged interfaces while creating the network.
In the case of a host reboot, instead of needing to modify often complex network configuration files, the driver will recreate all network links when the Docker daemon restarts. The driver tracks whether it created the VLAN tagged sub-interface originally with the network create, and will **only** recreate the sub-interface after a restart, or delete the link on `docker network rm`, if it created it in the first place with `docker network create`.
If the user doesn't want Docker to modify the `-o parent` sub-interface, the user simply needs to pass a link that already exists as the parent interface. Parent interfaces such as `eth0` are not deleted, only sub-interfaces that are not master links.
For the driver to add/delete the vlan sub-interfaces the format needs to be `interface_name.vlan_tag`.
For example: `eth0.50` denotes a parent interface of `eth0` with a slave of `eth0.50` tagged with vlan id `50`. The equivalent `ip link` command would be `ip link add link eth0 name eth0.50 type vlan id 50`.
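If you prefer to manage the sub-interface yourself rather than letting the driver create it, the equivalent iproute2 commands are roughly the following, using the interface name and VLAN ID from the example above:

```
# Manually create the 802.1q tagged sub-interface eth0.50 and bring it up
ip link add link eth0 name eth0.50 type vlan id 50
ip link set eth0.50 up

# Delete the sub-interface when it is no longer needed
ip link del eth0.50
```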
**Vlan ID 50**
In the first network, tagged and isolated by the Docker host, `eth0.50` is the parent interface tagged with vlan id `50` and specified with `-o parent=eth0.50`. Other naming formats can be used, but the links then need to be added and deleted manually using `ip link` or Linux configuration files. As long as the `-o parent` interface exists, anything compliant with Linux netlink can be used.
```
# now add networks and hosts as you would normally by attaching to the master (sub)interface that is tagged
docker network create -d macvlan \
--subnet=192.168.50.0/24 \
--gateway=192.168.50.1 \
-o parent=eth0.50 macvlan50
# In two separate terminals, start a Docker container and the containers can now ping one another.
docker run --net=macvlan50 -it --name macvlan_test5 --rm alpine /bin/sh
docker run --net=macvlan50 -it --name macvlan_test6 --rm alpine /bin/sh
```
**Vlan ID 60**
In the second network, tagged and isolated by the Docker host, `eth0.60` is the parent interface tagged with vlan id `60` specified with `-o parent=eth0.60`. The `macvlan_mode=` defaults to `macvlan_mode=bridge`. It can also be explicitly set with the same result as shown in the next example.
```
# now add networks and hosts as you would normally by attaching to the master (sub)interface that is tagged.
docker network create -d macvlan \
--subnet=192.168.60.0/24 \
--gateway=192.168.60.1 \
-o parent=eth0.60 \
-o macvlan_mode=bridge macvlan60
# In two separate terminals, start a Docker container and the containers can now ping one another.
docker run --net=macvlan60 -it --name macvlan_test7 --rm alpine /bin/sh
docker run --net=macvlan60 -it --name macvlan_test8 --rm alpine /bin/sh
```
**Example:** Multi-Subnet Macvlan 802.1q Trunking
The same as the example before, except there is an additional subnet bound to the network that the user can choose to provision containers on. In Macvlan bridge mode, containers can only ping one another if they are on the same subnet/broadcast domain, unless there is an external router that routes the traffic (answers ARP etc.) between the two subnets.
```
### Create multiple L2 subnets
docker network create -d macvlan \
--subnet=192.168.210.0/24 \
--subnet=192.168.212.0/24 \
--gateway=192.168.210.254 \
--gateway=192.168.212.254 \
-o parent=eth0.210 \
-o macvlan_mode=bridge macvlan210
# Test 192.168.210.0/24 connectivity between containers
docker run --net=macvlan210 --ip=192.168.210.10 -itd alpine /bin/sh
docker run --net=macvlan210 --ip=192.168.210.9 -it --rm alpine ping -c 2 192.168.210.10
# Test 192.168.212.0/24 connectivity between containers
docker run --net=macvlan210 --ip=192.168.212.10 -itd alpine /bin/sh
docker run --net=macvlan210 --ip=192.168.212.9 -it --rm alpine ping -c 2 192.168.212.10
```
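As the note above points out, the two subnets remain separate broadcast domains, so a cross-subnet ping is expected to fail unless an external router answers for the gateways. A quick check, using an otherwise unused address from the first subnet:

```
# Expected to fail without an external router routing between the two subnets
docker run --net=macvlan210 --ip=192.168.210.11 -it --rm alpine ping -c 2 192.168.212.10
```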
### Dual Stack IPv4 IPv6 Macvlan Bridge Mode
**Example:** Macvlan Bridge mode, 802.1q trunk, VLAN ID: 218, Multi-Subnet, Dual Stack
```
# Create multiple bridge subnets with a gateway of x.x.x.1:
docker network create -d macvlan \
--subnet=192.168.216.0/24 --subnet=192.168.218.0/24 \
--gateway=192.168.216.1 --gateway=192.168.218.1 \
--subnet=2001:db8:abc8::/64 --gateway=2001:db8:abc8::10 \
-o parent=eth0.218 \
-o macvlan_mode=bridge macvlan216
# Start a container on the first subnet 192.168.216.0/24
docker run --net=macvlan216 --name=macnet216_test --ip=192.168.216.10 -itd alpine /bin/sh
# Start a container on the second subnet 192.168.218.0/24
docker run --net=macvlan216 --name=macnet218_test --ip=192.168.218.10 -itd alpine /bin/sh
# Ping the first container started on the 192.168.216.0/24 subnet
docker run --net=macvlan216 --ip=192.168.216.11 -it --rm alpine /bin/sh
ping 192.168.216.10
# Ping the first container started on the 192.168.218.0/24 subnet
docker run --net=macvlan216 --ip=192.168.218.11 -it --rm alpine /bin/sh
ping 192.168.218.10
```
View the details of one of the containers:
```
docker run --net=macvlan216 --ip=192.168.216.11 -it --rm alpine /bin/sh
root@526f3060d759:/# ip a show eth0
eth0@if92: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether 8e:9a:99:25:b6:16 brd ff:ff:ff:ff:ff:ff
inet 192.168.216.11/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 2001:db8:abc4::8c9a:99ff:fe25:b616/64 scope link tentative
valid_lft forever preferred_lft forever
inet6 2001:db8:abc8::2/64 scope link nodad
valid_lft forever preferred_lft forever
# Specified v4 gateway of 192.168.216.1
root@526f3060d759:/# ip route
default via 192.168.216.1 dev eth0
192.168.216.0/24 dev eth0 proto kernel scope link src 192.168.216.11
# Specified v6 gateway of 2001:db8:abc8::10
root@526f3060d759:/# ip -6 route
2001:db8:abc4::/64 dev eth0 proto kernel metric 256
2001:db8:abc8::/64 dev eth0 proto kernel metric 256
default via 2001:db8:abc8::10 dev eth0 metric 1024
```

View file

@ -47,7 +47,7 @@ $ docker network create \
# Create an nginx service and extend the my-multi-host-network to nodes where
# the service's tasks run.
$ $ docker service create --replicas 2 --network my-multi-host-network --name my-web nginx
$ docker service create --replicas 2 --network my-multi-host-network --name my-web nginx
716thylsndqma81j6kkkb5aus
```

View file

(Image files changed: several binary images added or updated in this commit, with sizes of 14 KiB, 53 KiB, 22 KiB, 39 KiB, 18 KiB and 48 KiB; the image previews and the overly long file diffs are suppressed by the viewer.)

View file

@ -426,7 +426,7 @@ $ docker network create \
# Create an nginx service and extend the my-multi-host-network to nodes where
# the service's tasks run.
$ $ docker service create --replicas 2 --network my-multi-host-network --name my-web nginx
$ docker service create --replicas 2 --network my-multi-host-network --name my-web nginx
716thylsndqma81j6kkkb5aus
```

View file

@ -105,8 +105,8 @@ $ docker network create -d overlay \
--gateway=192.168.0.100 \
--gateway=192.170.0.100 \
--ip-range=192.168.1.0/24 \
--aux-address a=192.168.1.5 --aux-address b=192.168.1.6 \
--aux-address a=192.170.1.5 --aux-address b=192.170.1.6 \
--aux-address="my-router=192.168.1.5" --aux-address="my-switch=192.168.1.6" \
--aux-address="my-printer=192.170.1.5" --aux-address="my-nas=192.170.1.6" \
my-multihost-network
```
@ -123,7 +123,7 @@ equivalent docker daemon flags used for docker0 bridge:
| `com.docker.network.bridge.enable_ip_masquerade` | `--ip-masq` | Enable IP masquerading |
| `com.docker.network.bridge.enable_icc` | `--icc` | Enable or Disable Inter Container Connectivity |
| `com.docker.network.bridge.host_binding_ipv4` | `--ip` | Default IP when binding container ports |
| `com.docker.network.mtu` | `--mtu` | Set the containers network MTU |
| `com.docker.network.driver.mtu` | `--mtu` | Set the containers network MTU |
The following arguments can be passed to `docker network create` for any network driver.
@ -882,7 +882,7 @@ $ docker network disconnect isolated_nw container3
```
```bash
docker network inspect isolated_nw
$ docker network inspect isolated_nw
[
{

View file

@ -72,7 +72,7 @@ to build a Docker binary with the experimental features enabled:
## Current experimental features
* [External graphdriver plugins](plugins_graphdriver.md)
* [Macvlan and Ipvlan Network Drivers](vlan-networks.md)
* [Ipvlan Network Drivers](vlan-networks.md)
* [Docker Stacks and Distributed Application Bundles](docker-stacks-and-bundles.md)
## How to comment on an experimental feature

View file

@ -1,15 +1,12 @@
# Macvlan and Ipvlan Network Drivers
# Ipvlan Network Driver
### Getting Started
The Macvlan and Ipvlan drivers are currently in experimental mode in order to incubate Docker users use cases and vet the implementation to ensure a hardened, production ready driver in a future release. Libnetwork now gives users total control over both IPv4 and IPv6 addressing. The VLAN drivers build on top of that in giving operators complete control of layer 2 VLAN tagging and even Ipvlan L3 routing for users interested in underlay network integration. For overlay deployments that abstract away physical constraints see the [multi-host overlay ](https://docs.docker.com/engine/userguide/networking/get-started-overlay/) driver.
The Ipvlan driver is currently in experimental mode in order to incubate Docker users use cases and vet the implementation to ensure a hardened, production ready driver in a future release. Libnetwork now gives users total control over both IPv4 and IPv6 addressing. The VLAN driver builds on top of that in giving operators complete control of layer 2 VLAN tagging and even Ipvlan L3 routing for users interested in underlay network integration. For overlay deployments that abstract away physical constraints see the [multi-host overlay ](https://docs.docker.com/engine/userguide/networking/get-started-overlay/) driver.
Macvlan and Ipvlan are a new twist on the tried and true network virtualization technique. The Linux implementations are extremely lightweight because rather than using the traditional Linux bridge for isolation, they are simply associated to a Linux Ethernet interface or sub-interface to enforce separation between networks and connectivity to the physical network.
Macvlan and Ipvlan offer a number of unique features and plenty of room for further innovations with the various modes. Two high level advantages of these approaches are, the positive performance implications of bypassing the Linux bridge and the simplicity of having less moving parts. Removing the bridge that traditionally resides in between the Docker host NIC and container interface leaves a very simple setup consisting of container interfaces, attached directly to the Docker host interface. This result is easy access for external facing services as there is no port mappings in these scenarios.
Ipvlan is a new twist on the tried and true network virtualization technique. The Linux implementations are extremely lightweight because rather than using the traditional Linux bridge for isolation, they are simply associated to a Linux Ethernet interface or sub-interface to enforce separation between networks and connectivity to the physical network.
Ipvlan offers a number of unique features and plenty of room for further innovations with the various modes. Two high-level advantages of this approach are the positive performance implications of bypassing the Linux bridge and the simplicity of having fewer moving parts. Removing the bridge that traditionally resides between the Docker host NIC and the container interface leaves a very simple setup consisting of container interfaces attached directly to the Docker host interface. The result is easy access for external-facing services, as there are no port mappings in these scenarios.
### Pre-Requisites
@ -20,122 +17,11 @@ Macvlan and Ipvlan offer a number of unique features and plenty of room for furt
- Kernel requirements:
- To check your current kernel version, use `uname -r` to display your kernel version
- Macvlan Linux kernel v3.93.19 and 4.0+
- Ipvlan Linux kernel v4.2+ (support for earlier kernels exists but is buggy)
### MacVlan Bridge Mode Example Usage
Macvlan Bridge mode has a unique MAC address per container used to track MAC to port mappings by the Docker host. This is the largest difference from Ipvlan L2 mode which uses the same MAC address as the parent interface for each container `eth0` interface.
- Macvlan and Ipvlan driver networks are attached to a parent Docker host interface. Examples are a physical interface such as `eth0`, a sub-interface for 802.1q VLAN tagging like `eth0.10` (`.10` representing VLAN `10`) or even bonded host adaptors which bundle two Ethernet interfaces into a single logical interface.
- The specified gateway is external to the host provided by the network infrastructure.
- Each Macvlan Bridge mode Docker network is isolated from one another and there can be only one network attached to a parent interface at a time. There is a theoretical limit of 4,094 sub-interfaces per host adaptor that a Docker network could be attached to.
- It is not recommended to mix ipvlan and macvlan networks on the same `-o parent=` interface. Older kernel versions will throw uninformative netlink errors such as `device is busy`.
- Any container inside the same subnet can talk any other container in the same network without a gateway in both `macvlan bridge` mode and `ipvlan L2` modes.
- The same `docker network` commands apply to the vlan drivers. Some are irrelevant such as `-icc` or `--set-macaddress` for the Ipvlan driver.
- In Macvlan and Ipvlan L2 mode, containers on separate networks cannot reach one another without an external process routing between the two networks/subnets. This also applies to multiple subnets within the same `docker network`. See Ipvlan L3 mode for inter-subnet communications without a router.
In the following example, `eth0` on the docker host has an IP on the `172.16.86.0/24` network and a default gateway of `172.16.86.1`. The gateway is an external router with an address of `172.16.86.1`. An IP address is not required on the Docker host interface `eth0` in `bridge` mode, it merely needs to be on the proper upstream network to get forwarded by a network switch or network router.
![Simple Macvlan Bridge Mode Example](images/macvlan_bridge_simple.png)
**Note** For Macvlan bridge mode and Ipvlan L2 mode the subnet values need to match the NIC's interface of the Docker host. For example, Use the same subnet and gateway of the Docker host ethernet interface that is specified by the `-o parent=` option.
- The parent interface used in this example is `eth0` and it is on the subnet `172.16.86.0/24`. The containers in the `docker network` will also need to be on this same subnet as the parent `-o parent=`. The gateway is an external router on the network, not any ip masquerading or any other local proxy.
- The driver is specified with `-d driver_name` option. In this case `-d macvlan`
- The parent interface `-o parent=eth0` is configured as followed:
```
ip addr show eth0
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 172.16.86.250/24 brd 172.16.86.255 scope global eth0
```
Create the macvlan network and run a couple of containers attached to it:
```
# Macvlan (-o macvlan_mode= Defaults to Bridge mode if not specified)
docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 pub_net
# Run a container on the new network specifying the --ip address.
docker run --net=pub_net --ip=172.16.86.10 -itd alpine /bin/sh
# Start a second container and ping the first
docker run --net=pub_net -it --rm alpine /bin/sh
ping -c 4 172.16.86.10
```
Take a look at the containers ip and routing table:
```
ip a show eth0
eth0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 46:b2:6b:26:2f:69 brd ff:ff:ff:ff:ff:ff
inet 172.16.86.2/24 scope global eth0
ip route
default via 172.16.86.1 dev eth0
172.16.86.0/24 dev eth0 src 172.16.86.2
# NOTE: the containers can NOT ping the underlying host interfaces as
# they are intentionally filtered by Linux for additional isolation.
# In this case the containers cannot ping the -o parent=172.16.86.250
```
You can explicitly specify the `bridge` mode option `-o macvlan_mode=bridge`. It is the default so will be in `bridge` mode either way.
While the `eth0` interface does not need to have an IP address in Macvlan Bridge mode or Ipvlan L2 mode it is not uncommon to have an IP address on the interface. Addresses can be excluded from getting an address from the default built in IPAM by using the `--aux-address=x.x.x.x` flag. This will blacklist the specified address from being handed out to containers. The same network example above blocking the `-o parent=eth0` address from being handed out to a container.
```
docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
--aux-address="exclude_host=172.16.86.250" \
-o parent=eth0 pub_net
```
Another option for subpool IP address selection in a network provided by the default Docker IPAM driver is to use `--ip-range=`. This specifies the driver to allocate container addresses from this pool rather then the broader range from the `--subnet=` argument from a network create as seen in the following example that will allocate addresses beginning at `192.168.32.128` and increment upwards from there.
```
docker network create -d macvlan \
--subnet=192.168.32.0/24 \
--ip-range=192.168.32.128/25 \
--gateway=192.168.32.254 \
-o parent=eth0 macnet32
# Start a container and verify the address is 192.168.32.128
docker run --net=macnet32 -it --rm alpine /bin/sh
```
The network can then be deleted with:
```
docker network rm <network_name or id>
```
- **Note:** In both Macvlan and Ipvlan you are not able to ping or communicate with the default namespace IP address. For example, if you create a container and try to ping the Docker host's `eth0` it will **not** work. That traffic is explicitly filtered by the kernel modules themselves to offer additional provider isolation and security.
For more on Docker networking commands see [Working with Docker network commands](https://docs.docker.com/engine/userguide/networking/work-with-networks/)
### Ipvlan L2 Mode Example Usage
The ipvlan `L2` mode example is virtually identical to the macvlan `bridge` mode example. The driver is specified with `-d driver_name` option. In this case `-d ipvlan`
The ipvlan `L2` mode example is shown in the following image. The driver is specified with the `-d driver_name` option. In this case, `-d ipvlan`.
![Simple Ipvlan L2 Mode Example](images/ipvlan_l2_simple.png)
@ -166,11 +52,11 @@ docker run --net=db_net -it --rm alpine /bin/sh
# they are intentionally filtered by Linux for additional isolation.
```
The default mode for Ipvlan is `l2`. The default mode for Macvlan is `bridge`. If `-o ipvlan_mode=` or `-o macvlan_mode=` are left unspecified, the default modes will be used. Similarly, if the `--gateway` is left empty, the first usable address on the network will be set as the gateway. For example, if the subnet provided in the network create is `--subnet=192.168.1.0/24` then the gateway the container receives is `192.168.1.1`.
The default mode for Ipvlan is `l2`. If `-o ipvlan_mode=` is left unspecified, the default mode will be used. Similarly, if the `--gateway` is left empty, the first usable address on the network will be set as the gateway. For example, if the subnet provided in the network create is `--subnet=192.168.1.0/24` then the gateway the container receives is `192.168.1.1`.
To help understand how this mode interacts with other hosts, the following figure shows the same layer 2 segment between two Docker hosts that applies to both Macvlan Bride mode and Ipvlan L2 mode.
To help understand how this mode interacts with other hosts, the following figure shows the same layer 2 segment between two Docker hosts as it applies to Ipvlan L2 mode.
![Multiple Ipvlan and Macvlan Hosts](images/macvlan-bridge-ipvlan-l2.png)
![Multiple Ipvlan Hosts](images/macvlan-bridge-ipvlan-l2.png)
The following will create the exact same network as the network `db_net` created prior, with the driver defaults for `--gateway=192.168.1.1` and `-o ipvlan_mode=l2`.
@ -219,84 +105,6 @@ docker exec -it cid2 /bin/sh
docker exec -it cid3 /bin/sh
```
### Macvlan 802.1q Trunk Bridge Mode Example Usage
VLANs (Virtual Local Area Networks) have long been a primary means of virtualizing data center networks and are still in virtually all existing networks today. VLANs work by tagging a Layer-2 isolation domain with a 12-bit identifier ranging from 1-4094 that is inserted into a packet header that enables a logical grouping of a single or multiple subnets of both IPv4 and IPv6. It is very common for network operators to separate traffic using VLANs based on a subnet(s) function or security profile such as `web`, `db` or any other isolation needs.
It is very common to have a compute host requirement of running multiple virtual networks concurrently on a host. Linux networking has long supported VLAN tagging, also known by its standard 802.1q, for maintaining datapath isolation between networks. The Ethernet link connected to a Docker host can be configured to support the 802.1q VLAN IDs, by creating Linux sub-interfaces, each one dedicated to a unique VLAN ID.
![Simple Ipvlan L2 Mode Example](images/multi_tenant_8021q_vlans.png)
Trunking 802.1q to a Linux host is notoriously painful for many in operations. It requires configuration file changes in order to be persistent through a reboot. If a bridge is involved, a physical NIC needs to be moved into the bridge and the bridge then gets the IP address. This has lead to many a stranded servers since the risk of cutting off access during that convoluted process is high.
Like all of the Docker network drivers, the overarching goal is to alleviate the operational pains of managing network resources. To that end, when a network receives a sub-interface as the parent that does not exist, the drivers create the VLAN tagged interfaces while creating the network.
In the case of a host reboot, instead of needing to modify often complex network configuration files the driver will recreate all network links when the Docker daemon restarts. The driver tracks if it created the VLAN tagged sub-interface originally with the network create and will **only** recreate the sub-interface after a restart or delete `docker network rm` the link if it created it in the first place with `docker network create`.
If the user doesn't want Docker to modify the `-o parent` sub-interface, the user simply needs to pass an existing link that already exists as the parent interface. Parent interfaces such as `eth0` are not deleted, only sub-interfaces that are not master links.
For the driver to add/delete the vlan sub-interfaces the format needs to be `interface_name.vlan_tag`.
For example: `eth0.50` denotes a parent interface of `eth0` with a slave of `eth0.50` tagged with vlan id `50`. The equivalent `ip link` command would be `ip link add link eth0 name eth0.50 type vlan id 50`.
Replace the `macvlan` with `ipvlan` in the `-d` driver argument to create macvlan 802.1q trunks.
**Vlan ID 50**
In the first network tagged and isolated by the Docker host, `eth0.50` is the parent interface tagged with vlan id `50` specified with `-o parent=eth0.50`. Other naming formats can be used, but the links need to be added and deleted manually using `ip link` or Linux configuration files. As long as the `-o parent` exists anything can be used if compliant with Linux netlink.
```
# now add networks and hosts as you would normally by attaching to the master (sub)interface that is tagged
docker network create -d macvlan \
--subnet=192.168.50.0/24 \
--gateway=192.168.50.1 \
-o parent=eth0.50 macvlan50
# In two separate terminals, start a Docker container and the containers can now ping one another.
docker run --net=macvlan50 -it --name macvlan_test5 --rm alpine /bin/sh
docker run --net=macvlan50 -it --name macvlan_test6 --rm alpine /bin/sh
```
**Vlan ID 60**
In the second network, tagged and isolated by the Docker host, `eth0.60` is the parent interface tagged with vlan id `60` specified with `-o parent=eth0.60`. The `macvlan_mode=` defaults to `macvlan_mode=bridge`. It can also be explicitly set with the same result as shown in the next example.
```
# now add networks and hosts as you would normally by attaching to the master (sub)interface that is tagged.
docker network create -d macvlan \
--subnet=192.168.60.0/24 \
--gateway=192.168.60.1 \
-o parent=eth0.60 -o \
-o macvlan_mode=bridge macvlan60
# In two separate terminals, start a Docker container and the containers can now ping one another.
docker run --net=macvlan60 -it --name macvlan_test7 --rm alpine /bin/sh
docker run --net=macvlan60 -it --name macvlan_test8 --rm alpine /bin/sh
```
**Example:** Multi-Subnet Macvlan 802.1q Trunking
The same as the example before except there is an additional subnet bound to the network that the user can choose to provision containers on. In MacVlan/Bridge mode, containers can only ping one another if they are on the same subnet/broadcast domain unless there is an external router that routes the traffic (answers ARP etc) between the two subnets.
```
### Create multiple L2 subnets
docker network create -d ipvlan \
--subnet=192.168.210.0/24 \
--subnet=192.168.212.0/24 \
--gateway=192.168.210.254 \
--gateway=192.168.212.254 \
-o ipvlan_mode=l2 ipvlan210
# Test 192.168.210.0/24 connectivity between containers
docker run --net=ipvlan210 --ip=192.168.210.10 -itd alpine /bin/sh
docker run --net=ipvlan210 --ip=192.168.210.9 -it --rm alpine ping -c 2 192.168.210.10
# Test 192.168.212.0/24 connectivity between containers
docker run --net=ipvlan210 --ip=192.168.212.10 -itd alpine /bin/sh
docker run --net=ipvlan210 --ip=192.168.212.9 -it --rm alpine ping -c 2 192.168.212.10
```
### Ipvlan 802.1q Trunk L2 Mode Example Usage
Architecturally, Ipvlan L2 mode trunking is the same as Macvlan with regard to gateways and L2 path isolation. There are nuances that can be advantageous for CAM table pressure in ToR switches, one MAC per port and MAC exhaustion on a host's parent NIC to name a few. The 802.1q trunk scenario looks the same. Both modes adhere to tagging standards and have seamless integration with the physical network for underlay integration and hardware vendor plugin integrations.
@ -356,7 +164,7 @@ $ ip route
192.168.30.0/24 dev eth0 src 192.168.30.2
```
Example: Multi-Subnet Ipvlan L2 Mode starting two containers on the same subnet and pinging one another. In order for the `192.168.114.0/24` to reach `192.168.116.0/24` it requires an external router in L2 mode. L3 mode can route between subnets that share a common `-o parent=`. This same multi-subnet example is also valid for Macvlan `bridge` mode.
Example: Multi-Subnet Ipvlan L2 Mode starting two containers on the same subnet and pinging one another. In order for the `192.168.114.0/24` to reach `192.168.116.0/24` it requires an external router in L2 mode. L3 mode can route between subnets that share a common `-o parent=`.
Secondary addresses on network routers are common as an address space becomes exhausted to add another secondary to a L3 vlan interface or commonly referred to as a "switched virtual interface" (SVI).
@ -393,13 +201,13 @@ IPVlan will require routes to be distributed to each endpoint. The driver only b
![Docker Ipvlan L2 Mode](images/ipvlan-l3.png)
Ipvlan L3 mode drops all broadcast and multicast traffic. This reason alone makes Ipvlan L3 mode a prime candidate for those looking for massive scale and predictable network integrations. It is predictable and in turn will lead to greater uptimes because there is no bridging involved. Bridging loops have been responsible for high profile outages that can be hard to pinpoint depending on the size of the failure domain. This is due to the cascading nature of BPDUs (Bridge Port Data Units) that are flooded throughout a broadcast domain (VLAN) to find and block topology loops. Eliminating bridging domains, or at the least, keeping them isolated to a pair of ToRs (top of rack switches) will reduce hard to troubleshoot bridging instabilities. Macvlan Bridge and Ipvlan L2 modes are well suited for isolated VLANs only trunked into a pair of ToRs that can provide a loop-free non-blocking fabric. The next step further is to route at the edge via Ipvlan L3 mode that reduces a failure domain to a local host only.
Ipvlan L3 mode drops all broadcast and multicast traffic. This reason alone makes Ipvlan L3 mode a prime candidate for those looking for massive scale and predictable network integrations. It is predictable and in turn will lead to greater uptimes because there is no bridging involved. Bridging loops have been responsible for high profile outages that can be hard to pinpoint depending on the size of the failure domain. This is due to the cascading nature of BPDUs (Bridge Port Data Units) that are flooded throughout a broadcast domain (VLAN) to find and block topology loops. Eliminating bridging domains, or at the least, keeping them isolated to a pair of ToRs (top of rack switches) will reduce hard to troubleshoot bridging instabilities. Ipvlan L2 mode is well suited for isolated VLANs only trunked into a pair of ToRs that can provide a loop-free non-blocking fabric. The next step further is to route at the edge via Ipvlan L3 mode that reduces a failure domain to a local host only.
- L3 mode needs to be on a separate subnet as the default namespace since it requires a netlink route in the default namespace pointing to the Ipvlan parent interface.
- The parent interface used in this example is `eth0` and it is on the subnet `192.168.1.0/24`. Notice the `docker network` is **not** on the same subnet as `eth0`.
- Unlike macvlan bridge mode and ipvlan l2 modes, different subnets/networks can ping one another as long as they share the same parent interface `-o parent=`.
- Unlike ipvlan l2 mode, different subnets/networks can ping one another as long as they share the same parent interface `-o parent=`.
```
ip a show eth0
@ -444,61 +252,6 @@ $ ip route
In order to ping the containers from a remote Docker host or the container be able to ping a remote host, the remote host or the physical network in between need to have a route pointing to the host IP address of the container's Docker host eth interface. More on this as we evolve the Ipvlan `L3` story.
### Dual Stack IPv4 IPv6 Macvlan Bridge Mode
**Example:** Macvlan Bridge mode, 802.1q trunk, VLAN ID: 218, Multi-Subnet, Dual Stack
```
# Create multiple bridge subnets with a gateway of x.x.x.1:
docker network create -d macvlan \
--subnet=192.168.216.0/24 --subnet=192.168.218.0/24 \
--gateway=192.168.216.1 --gateway=192.168.218.1 \
--subnet=2001:db8:abc8::/64 --gateway=2001:db8:abc8::10 \
-o parent=eth0.218 \
-o macvlan_mode=bridge macvlan216
# Start a container on the first subnet 192.168.216.0/24
docker run --net=macvlan216 --name=macnet216_test --ip=192.168.216.10 -itd alpine /bin/sh
# Start a container on the second subnet 192.168.218.0/24
docker run --net=macvlan216 --name=macnet216_test --ip=192.168.218.10 -itd alpine /bin/sh
# Ping the first container started on the 192.168.216.0/24 subnet
docker run --net=macvlan216 --ip=192.168.216.11 -it --rm alpine /bin/sh
ping 192.168.216.10
# Ping the first container started on the 192.168.218.0/24 subnet
docker run --net=macvlan216 --ip=192.168.218.11 -it --rm alpine /bin/sh
ping 192.168.218.10
```
View the details of one of the containers:
```
docker run --net=macvlan216 --ip=192.168.216.11 -it --rm alpine /bin/sh
root@526f3060d759:/# ip a show eth0
eth0@if92: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether 8e:9a:99:25:b6:16 brd ff:ff:ff:ff:ff:ff
inet 192.168.216.11/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 2001:db8:abc4::8c9a:99ff:fe25:b616/64 scope link tentative
valid_lft forever preferred_lft forever
inet6 2001:db8:abc8::2/64 scope link nodad
valid_lft forever preferred_lft forever
# Specified v4 gateway of 192.168.216.1
root@526f3060d759:/# ip route
default via 192.168.216.1 dev eth0
192.168.216.0/24 dev eth0 proto kernel scope link src 192.168.216.11
# Specified v6 gateway of 2001:db8:abc8::10
root@526f3060d759:/# ip -6 route
2001:db8:abc4::/64 dev eth0 proto kernel metric 256
2001:db8:abc8::/64 dev eth0 proto kernel metric 256
default via 2001:db8:abc8::10 dev eth0 metric 1024
```
### Dual Stack IPv4 IPv6 Ipvlan L2 Mode
- Not only does Libnetwork give you complete control over IPv4 addressing, but it also gives you total control over IPv6 addressing as well as feature parity between the two address families.
@ -602,13 +355,10 @@ Start a second container with a specific `--ip4` address and ping the first host
docker run --net=ipvlan140 --ip=192.168.140.10 -it --rm alpine /bin/sh
```
**Note**: Different subnets on the same parent interface in both Ipvlan `L2` mode and Macvlan `bridge` mode cannot ping one another. That requires a router to proxy-arp the requests with a secondary subnet. However, Ipvlan `L3` will route the unicast traffic between disparate subnets as long as they share the same `-o parent` parent link.
**Note**: Different subnets on the same parent interface in Ipvlan `L2` mode cannot ping one another. That requires a router to proxy-arp the requests with a secondary subnet. However, Ipvlan `L3` will route the unicast traffic between disparate subnets as long as they share the same `-o parent` parent link.
### Dual Stack IPv4 IPv6 Ipvlan L3 Mode
**Example:** IpVlan L3 Mode Dual Stack IPv4/IPv6, Multi-Subnet w/ 802.1q Vlan Tag:118
As in all of the examples, a tagged VLAN interface does not have to be used. The sub-interfaces can be swapped with `eth0`, `eth1`, `bond0` or any other valid interface on the host other then the `lo` loopback.

View file

@ -586,7 +586,7 @@ func (s *DockerNetworkSuite) TestDockerNetworkIpamMultipleNetworks(c *check.C) {
"--gateway=192.168.0.100", "--gateway=192.170.0.100",
"--ip-range=192.168.1.0/24",
"--aux-address", "a=192.168.1.5", "--aux-address", "b=192.168.1.6",
"--aux-address", "a=192.170.1.5", "--aux-address", "b=192.170.1.6",
"--aux-address", "c=192.170.1.5", "--aux-address", "d=192.170.1.6",
"test7")
assertNwIsAvailable(c, "test7")

View file

@ -127,8 +127,8 @@ $ docker network create -d overlay \
--gateway=192.168.0.100 \
--gateway=192.170.0.100 \
--ip-range=192.168.1.0/24 \
--aux-address a=192.168.1.5 --aux-address b=192.168.1.6 \
--aux-address a=192.170.1.5 --aux-address b=192.170.1.6 \
--aux-address="my-router=192.168.1.5" --aux-address="my-switch=192.168.1.6" \
--aux-address="my-printer=192.170.1.5" --aux-address="my-nas=192.170.1.6" \
my-multihost-network
```

View file

@ -11,18 +11,28 @@ NAME[:TAG] | [REGISTRY_HOST[:REGISTRY_PORT]/]NAME[:TAG]
# DESCRIPTION
This command pushes an image or a repository to a registry. If you do not
specify a `REGISTRY_HOST`, the command uses Docker's public registry located at
`registry-1.docker.io` by default. Refer to **docker-tag(1)** for more
information about valid image and tag names.
Use `docker push` to share your images to the [Docker Hub](https://hub.docker.com)
registry or to a self-hosted one.
Refer to **docker-tag(1)** for more information about valid image and tag names.
Killing the **docker push** process, for example by pressing **CTRL-c** while it
is running in a terminal, terminates the push operation.
Registry credentials are managed by **docker-login(1)**.
# OPTIONS
**--disable-content-trust**
Skip image verification (default true)
**--help**
Print usage statement
# EXAMPLES
# Pushing a new image to a registry
## Pushing a new image to a registry
First save the new image by finding the container ID (using **docker ps**)
and then committing it to a new image name. Note that only a-z0-9-_. are
@ -45,8 +55,6 @@ Check that this worked by running:
You should see both `rhel-httpd` and `registry-host:5000/myadmin/rhel-httpd`
listed.
Registry credentials are managed by **docker-login(1)**.
# HISTORY
April 2014, Originally compiled by William Henry (whenry at redhat dot com)
based on docker.com source material and internal work.