Merge pull request #25632 from SvenDowideit/more-docs-1.12.1-cherry-picks

More docs 1.12.1 cherry picks
Tibor Vass 2016-08-12 00:14:30 -07:00 committed by GitHub
commit 5680192346
5 changed files with 532 additions and 134 deletions


@ -21,11 +21,13 @@ and network IO metrics.
The following is a sample output from the `docker stats` command
```bash
$ docker stats redis1 redis2
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O
redis1 0.07% 796 KB / 64 MB 1.21% 788 B / 648 B 3.568 MB / 512 KB
redis2 0.07% 2.746 MB / 64 MB 4.29% 1.266 KB / 648 B 12.4 MB / 0 B
```
The [docker stats](../reference/commandline/stats.md) reference page has
more details about the `docker stats` command.
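If you only need a point-in-time snapshot rather than a continuously updating stream, the `--no-stream` flag prints one set of figures and exits (same hypothetical container names as above):

```bash
$ docker stats --no-stream redis1 redis2
```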
@ -52,7 +54,9 @@ corresponding to existing containers.
To figure out where your control groups are mounted, you can run:
```bash
$ grep cgroup /proc/mounts
```
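If you are only interested in one controller, you can filter the list further; for example, to show just the memory hierarchy (assuming that controller is enabled on your kernel):

```bash
$ grep cgroup /proc/mounts | grep memory
```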
## Enumerating cgroups
@ -138,86 +142,19 @@ they represent occurrences of a specific event (e.g., pgfault, which
indicates the number of page faults which happened since the creation of
the cgroup; this number can never decrease).
<style>table tr > td:first-child { white-space: nowrap;}</style>
Metric | Description
--------------------------------------|-----------------------------------------------------------
**cache** | The amount of memory used by the processes of this control group that can be associated precisely with a block on a block device. When you read from and write to files on disk, this amount will increase. This will be the case if you use "conventional" I/O (`open`, `read`, `write` syscalls) as well as mapped files (with `mmap`). It also accounts for the memory used by `tmpfs` mounts, though the reasons are unclear.
**rss** | The amount of memory that *doesn't* correspond to anything on disk: stacks, heaps, and anonymous memory maps.
**mapped_file** | Indicates the amount of memory mapped by the processes in the control group. It doesn't give you information about *how much* memory is used; it rather tells you *how* it is used.
**pgfault**, **pgmajfault** | Indicate the number of times that a process of the cgroup triggered a "page fault" and a "major fault", respectively. A page fault happens when a process accesses a part of its virtual memory space which is nonexistent or protected. The former can happen if the process is buggy and tries to access an invalid address (it will then be sent a `SIGSEGV` signal, typically killing it with the famous `Segmentation fault` message). The latter can happen when the process reads from a memory zone which has been swapped out, or which corresponds to a mapped file: in that case, the kernel will load the page from disk, and let the CPU complete the memory access. It can also happen when the process writes to a copy-on-write memory zone: likewise, the kernel will preempt the process, duplicate the memory page, and resume the write operation on the process's own copy of the page. "Major" faults happen when the kernel actually has to read the data from disk. When it just has to duplicate an existing page, or allocate an empty page, it's a regular (or "minor") fault.
**swap** | The amount of swap currently used by the processes in this cgroup.
**active_anon**, **inactive_anon** | The amount of *anonymous* memory that has been identified as respectively *active* and *inactive* by the kernel. "Anonymous" memory is the memory that is *not* linked to disk pages. In other words, that's the equivalent of the rss counter described above. In fact, the very definition of the rss counter is **active_anon** + **inactive_anon** - **tmpfs** (where tmpfs is the amount of memory used up by `tmpfs` filesystems mounted by this control group). Now, what's the difference between "active" and "inactive"? Pages are initially "active"; and at regular intervals, the kernel sweeps over the memory, and tags some pages as "inactive". Whenever they are accessed again, they are immediately retagged "active". When the kernel is almost out of memory, and time comes to swap out to disk, the kernel will swap "inactive" pages.
**active_file**, **inactive_file** | Cache memory, with *active* and *inactive* similar to the *anon* memory above. The exact formula is **cache** = **active_file** + **inactive_file** + **tmpfs**. The exact rules used by the kernel to move memory pages between active and inactive sets are different from the ones used for anonymous memory, but the general principle is the same. Note that when the kernel needs to reclaim memory, it is cheaper to reclaim a clean (=non modified) page from this pool, since it can be reclaimed immediately (while anonymous pages and dirty/modified pages have to be written to disk first).
**unevictable** | The amount of memory that cannot be reclaimed; generally, it will account for memory that has been "locked" with `mlock`. It is often used by crypto frameworks to make sure that secret keys and other sensitive material never gets swapped out to disk.
**memory_limit**, **memsw_limit** | These are not really metrics, but a reminder of the limits applied to this cgroup. The first one indicates the maximum amount of physical memory that can be used by the processes of this control group; the second one indicates the maximum amount of RAM+swap.
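To see these counters for a single container, read its `memory.stat` pseudo-file directly. The snippet below is only a sketch: it assumes the cgroupfs layout used elsewhere on this page, with the memory controller mounted at `/sys/fs/cgroup/memory` and the container's cgroup under `docker/$CID` (systemd-based layouts differ):

```bash
# Pull out a few of the counters described in the table above
$ STAT=/sys/fs/cgroup/memory/docker/$CID/memory.stat
$ grep -E '^(cache|rss|mapped_file|swap|pgfault|pgmajfault) ' $STAT
```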
Accounting for memory in the page cache is very complex. If two
processes in different control groups both read the same file
@ -261,32 +198,12 @@ file in the kernel documentation, here is a short list of the most
relevant ones:
Metric | Description
----------------------------|-----------------------------------------------------------
**blkio.sectors** | Contains the number of 512-byte sectors read and written by the processes that are members of the cgroup, device by device. Reads and writes are merged in a single counter.
**blkio.io_service_bytes** | Indicates the number of bytes read and written by the cgroup. It has 4 counters per device, because for each device, it differentiates between synchronous vs. asynchronous I/O, and reads vs. writes.
**blkio.io_serviced** | The number of I/O operations performed, regardless of their size. It also has 4 counters per device.
**blkio.io_queued** | Indicates the number of I/O operations currently queued for this cgroup. If the cgroup isn't doing any I/O, this will be zero. Note that the opposite is not true: if there is no I/O queued, it does not mean that the cgroup is idle (I/O-wise). It could be doing purely synchronous reads on an otherwise quiescent device, which is therefore able to handle them immediately, without queuing. Also, while it is helpful to figure out which cgroup is putting stress on the I/O subsystem, keep in mind that it is a relative quantity. Even if a process group does not perform more I/O, its queue size can increase just because the device load increases because of other devices.
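As with the memory metrics, you can read these counters for a single container straight from the pseudo-files. A sketch, assuming the same cgroupfs layout used elsewhere on this page (`$CID` is a full container ID):

```bash
# Per-device byte counts, broken down into Read/Write and Sync/Async
$ cat /sys/fs/cgroup/blkio/docker/$CID/blkio.io_service_bytes
```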
## Network metrics
@ -313,7 +230,9 @@ an interface) can do some serious accounting.
For instance, you can setup a rule to account for the outbound HTTP
traffic on a web server:
```bash
$ iptables -I OUTPUT -p tcp --sport 80
```
There is no `-j` or `-g` flag,
so the rule will just count matched packets and go to the following
@ -321,7 +240,9 @@ rule.
Later, you can check the values of the counters, with:
```bash
$ iptables -nxvL OUTPUT
```
Technically, `-n` is not required, but it will
prevent iptables from doing DNS reverse lookups, which are probably
@ -363,11 +284,15 @@ though.
The exact format of the command is:
```bash
$ ip netns exec <nsname> <command...>
```
For example:
```bash
$ ip netns exec mycontainer netstat -i
```
`ip netns` finds the "mycontainer" container by
using namespaces pseudo-files. Each process belongs to one network
@ -388,7 +313,7 @@ container, we need to:
- Create a symlink from `/var/run/netns/<somename>` to `/proc/<thepid>/ns/net`
- Execute `ip netns exec <somename> ....`
Please review [Enumerating Cgroups](#enumerating-cgroups) to learn how to find
the cgroup of a process running in the container of which you want to
measure network usage. From there, you can examine the pseudo-file named
`tasks`, which contains the PIDs that are in the
@ -397,11 +322,13 @@ control group (i.e., in the container). Pick any one of them.
Putting everything together, if the "short ID" of a container is held in
the environment variable `$CID`, then you can do this:
```bash
$ TASKS=/sys/fs/cgroup/devices/docker/$CID*/tasks
$ PID=$(head -n 1 $TASKS)
$ mkdir -p /var/run/netns
$ ln -sf /proc/$PID/ns/net /var/run/netns/$CID
$ ip netns exec $CID netstat -i
```
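Once you have finished collecting metrics, you may want to remove the symlink again so that `ip netns` no longer lists the container; this cleanup step is a suggestion rather than part of the procedure above:

```bash
$ rm -f /var/run/netns/$CID
```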
## Tips for high-performance metric collection


@ -2906,7 +2906,9 @@ Return low-level information about the `exec` command `id`.
{
"Name": "tardis",
"Driver": "local",
"Mountpoint": "/var/lib/docker/volumes/tardis"
"Mountpoint": "/var/lib/docker/volumes/tardis",
"Labels": null,
"Scope": "local"
}
],
"Warnings": []
@ -2941,6 +2943,7 @@ Create a volume
"com.example.some-label": "some-value",
"com.example.some-other-label": "some-other-value"
},
"Driver": "custom"
}
**Example response**:
@ -2950,13 +2953,16 @@ Create a volume
{
"Name": "tardis",
"Driver": "local",
"Driver": "custom",
"Mountpoint": "/var/lib/docker/volumes/tardis",
"Status": null,
"Status": {
"hello": "world"
},
"Labels": {
"com.example.some-label": "some-value",
"com.example.some-other-label": "some-other-value"
},
"Scope": "local"
}
**Status codes**:
@ -2970,8 +2976,13 @@ Create a volume
- **Driver** - Name of the volume driver to use. Defaults to `local`.
- **DriverOpts** - A mapping of driver options and values. These options are
passed directly to the driver and are driver specific.
- **Labels** - Labels to set on the volume, specified as a map: `{"key":"value","key2":"value2"}`
**JSON fields in response**:
Refer to the [inspect a volume](#inspect-a-volume) section for details about the
JSON fields returned in the response.
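For reference, the create call can be issued with any HTTP client; the following is a sketch using `curl` against the daemon's default Unix socket (requires a `curl` build with `--unix-socket` support; the volume name and label are taken from the example request above):

```bash
$ curl --unix-socket /var/run/docker.sock \
    -H "Content-Type: application/json" \
    -d '{"Name": "tardis", "Labels": {"com.example.some-label": "some-value"}}' \
    -X POST http://localhost/volumes/create
```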
### Inspect a volume
`GET /volumes/(name)`
@ -2989,12 +3000,16 @@ Return low-level information on the volume `name`
{
"Name": "tardis",
"Driver": "local",
"Driver": "custom",
"Mountpoint": "/var/lib/docker/volumes/tardis/_data",
"Status": {
"hello": "world"
},
"Labels": {
"com.example.some-label": "some-value",
"com.example.some-other-label": "some-other-value"
},
"Scope": "local"
}
**Status codes**:
@ -3003,6 +3018,23 @@ Return low-level information on the volume `name`
- **404** - no such volume
- **500** - server error
**JSON fields in response**:
The following fields can be returned in the API response. Empty fields, or
fields that are not supported by the volume's driver, may be omitted in the
response.
- **Name** - Name of the volume.
- **Driver** - Name of the volume driver used by the volume.
- **Mountpoint** - Mount path of the volume on the host.
- **Status** - Low-level details about the volume, provided by the volume driver.
Details are returned as a map with key/value pairs: `{"key":"value","key2":"value2"}`.
The `Status` field is optional, and is omitted if the volume driver does not
support this feature.
- **Labels** - Labels set on the volume, specified as a map: `{"key":"value","key2":"value2"}`.
- **Scope** - Describes the level at which the volume exists; it can be one of
  `global` for cluster-wide or `local` for machine level. The default is `local`.
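A volume can then be inspected the same way; a hedged `curl` sketch over the Unix socket, using the `tardis` volume from the examples above:

```bash
$ curl --unix-socket /var/run/docker.sock http://localhost/volumes/tardis
```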
### Remove a volume
`DELETE /volumes/(name)`
@ -3350,7 +3382,446 @@ Instruct the driver to remove the network (`id`).
- **404** - no such network
- **500** - server error
## 3.6 Plugins (experimental)
### List plugins
`GET /plugins`
Returns information about installed plugins.
**Example request**:
```
GET /plugins HTTP/1.1
```
**Example response**:
```
HTTP/1.1 200 OK
Content-Type: application/json
[
{
"Id": "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078",
"Name": "tiborvass/no-remove",
"Tag": "latest",
"Active": true,
"Config": {
"Mounts": [
{
"Name": "",
"Description": "",
"Settable": null,
"Source": "/data",
"Destination": "/data",
"Type": "bind",
"Options": [
"shared",
"rbind"
]
},
{
"Name": "",
"Description": "",
"Settable": null,
"Source": null,
"Destination": "/foobar",
"Type": "tmpfs",
"Options": null
}
],
"Env": [
"DEBUG=1"
],
"Args": null,
"Devices": null
},
"Manifest": {
"ManifestVersion": "v0",
"Description": "A test plugin for Docker",
"Documentation": "https://docs.docker.com/engine/extend/plugins/",
"Interface": {
"Types": [
"docker.volumedriver/1.0"
],
"Socket": "plugins.sock"
},
"Entrypoint": [
"plugin-no-remove",
"/data"
],
"Workdir": "",
"User": {
},
"Network": {
"Type": "host"
},
"Capabilities": null,
"Mounts": [
{
"Name": "",
"Description": "",
"Settable": null,
"Source": "/data",
"Destination": "/data",
"Type": "bind",
"Options": [
"shared",
"rbind"
]
},
{
"Name": "",
"Description": "",
"Settable": null,
"Source": null,
"Destination": "/foobar",
"Type": "tmpfs",
"Options": null
}
],
"Devices": [
{
"Name": "device",
"Description": "a host device to mount",
"Settable": null,
"Path": "/dev/cpu_dma_latency"
}
],
"Env": [
{
"Name": "DEBUG",
"Description": "If set, prints debug messages",
"Settable": null,
"Value": "1"
}
],
"Args": {
"Name": "args",
"Description": "command line arguments",
"Settable": null,
"Value": [
]
}
}
}
]
```
**Status codes**:
- **200** - no error
- **500** - server error
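As an illustration, the list can be retrieved with `curl` over the daemon's Unix socket (a sketch; plugin endpoints are experimental, so the daemon must be running with experimental features enabled):

```bash
$ curl --unix-socket /var/run/docker.sock http://localhost/plugins
```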
### Install a plugin
`POST /plugins/pull?name=<plugin name>`
Pulls and installs a plugin. After the plugin is installed, it can be enabled
using the [`POST /plugins/(plugin name)/enable` endpoint](#enable-a-plugin).
**Example request**:
```
POST /plugins/pull?name=tiborvass/no-remove:latest HTTP/1.1
```
The `:latest` tag is optional, and is used as default if omitted. When using
this endpoint to pull a plugin from the registry, the `X-Registry-Auth` header
can be used to include a base64-encoded AuthConfig object. Refer to the [create
an image](#create-an-image) section for more details.
**Example response**:
```
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 175
[
{
"Name": "network",
"Description": "",
"Value": [
"host"
]
},
{
"Name": "mount",
"Description": "",
"Value": [
"/data"
]
},
{
"Name": "device",
"Description": "",
"Value": [
"/dev/cpu_dma_latency"
]
}
]
```
**Query parameters**:
- **name** - Name of the plugin to pull. The name may include a tag or digest.
This parameter is required.
**Status codes**:
- **200** - no error
- **500** - error parsing reference / not a valid repository/tag: repository
name must have at least one component
- **500** - plugin already exists
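A hedged `curl` sketch that pulls the example plugin and then enables it via the endpoint documented further below (plugin name taken from the examples in this section):

```bash
$ curl --unix-socket /var/run/docker.sock \
    -X POST "http://localhost/plugins/pull?name=tiborvass/no-remove:latest"
$ curl --unix-socket /var/run/docker.sock \
    -X POST "http://localhost/plugins/tiborvass/no-remove:latest/enable"
```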
### Inspect a plugin
`GET /plugins/(plugin name)`
Returns detailed information about an installed plugin.
**Example request**:
```
GET /plugins/tiborvass/no-remove:latest HTTP/1.1
```
The `:latest` tag is optional, and is used as default if omitted.
**Example response**:
```
HTTP/1.1 200 OK
Content-Type: application/json
{
"Id": "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078",
"Name": "tiborvass/no-remove",
"Tag": "latest",
"Active": false,
"Config": {
"Mounts": [
{
"Name": "",
"Description": "",
"Settable": null,
"Source": "/data",
"Destination": "/data",
"Type": "bind",
"Options": [
"shared",
"rbind"
]
},
{
"Name": "",
"Description": "",
"Settable": null,
"Source": null,
"Destination": "/foobar",
"Type": "tmpfs",
"Options": null
}
],
"Env": [
"DEBUG=1"
],
"Args": null,
"Devices": null
},
"Manifest": {
"ManifestVersion": "v0",
"Description": "A test plugin for Docker",
"Documentation": "https://docs.docker.com/engine/extend/plugins/",
"Interface": {
"Types": [
"docker.volumedriver/1.0"
],
"Socket": "plugins.sock"
},
"Entrypoint": [
"plugin-no-remove",
"/data"
],
"Workdir": "",
"User": {
},
"Network": {
"Type": "host"
},
"Capabilities": null,
"Mounts": [
{
"Name": "",
"Description": "",
"Settable": null,
"Source": "/data",
"Destination": "/data",
"Type": "bind",
"Options": [
"shared",
"rbind"
]
},
{
"Name": "",
"Description": "",
"Settable": null,
"Source": null,
"Destination": "/foobar",
"Type": "tmpfs",
"Options": null
}
],
"Devices": [
{
"Name": "device",
"Description": "a host device to mount",
"Settable": null,
"Path": "/dev/cpu_dma_latency"
}
],
"Env": [
{
"Name": "DEBUG",
"Description": "If set, prints debug messages",
"Settable": null,
"Value": "1"
}
],
"Args": {
"Name": "args",
"Description": "command line arguments",
"Settable": null,
"Value": [
]
}
}
}
```
**Status codes**:
- **200** - no error
- **404** - plugin not installed
### Enable a plugin
`POST /plugins/(plugin name)/enable`
Enables a plugin.
**Example request**:
```
POST /plugins/tiborvass/no-remove:latest/enable HTTP/1.1
```
The `:latest` tag is optional, and is used as default if omitted.
**Example response**:
```
HTTP/1.1 200 OK
Content-Length: 0
Content-Type: text/plain; charset=utf-8
```
**Status codes**:
- **200** - no error
- **500** - plugin is already enabled
### Disable a plugin
`POST /plugins/(plugin name)/disable`
Disables a plugin.
**Example request**:
```
POST /plugins/tiborvass/no-remove:latest/disable HTTP/1.1
```
The `:latest` tag is optional, and is used as default if omitted.
**Example response**:
```
HTTP/1.1 200 OK
Content-Length: 0
Content-Type: text/plain; charset=utf-8
```
**Status codes**:
- **200** - no error
- **500** - plugin is already disabled
### Remove a plugin
`DELETE /plugins/(plugin name)`
Removes a plugin.
**Example request**:
```
DELETE /plugins/tiborvass/no-remove:latest HTTP/1.1
```
The `:latest` tag is optional, and is used as default if omitted.
**Example response**:
```
HTTP/1.1 200 OK
Content-Length: 0
Content-Type: text/plain; charset=utf-8
```
**Status codes**:
- **200** - no error
- **404** - plugin not installed
- **500** - plugin is active
<!-- TODO Document "docker plugin push" endpoint once we have "plugin build"
### Push a plugin
`POST /plugins/tiborvass/(plugin name)/push HTTP/1.1`
Pushes a plugin to the registry.
**Example request**:
```
POST /plugins/tiborvass/no-remove:latest HTTP/1.1
```
The `:latest` tag is optional, and is used as default if omitted. When using
this endpoint to push a plugin to the registry, the `X-Registry-Auth` header
can be used to include a base64-encoded AuthConfig object. Refer to the [create
an image](#create-an-image) section for more details.
**Example response**:
**Status codes**:
- **200** - no error
- **404** - plugin not installed
-->
## 3.7 Nodes
**Note**: Node operations require the engine to be part of a swarm.
@ -3611,7 +4082,7 @@ JSON Parameters:
- **404** no such node
- **500** server error
## 3.8 Swarm
### Initialize a new swarm
@ -3830,7 +4301,7 @@ JSON Parameters:
- **Worker** - Token to use for joining as a worker.
- **Manager** - Token to use for joining as a manager.
## 3.9 Services
**Note**: Service operations require the engine to first be part of a swarm.
@ -4315,7 +4786,7 @@ Update the service `id`.
- **404** no such service
- **500** server error
## 3.10 Tasks
**Note**: Task operations require the engine to be part of a swarm.


@ -1027,7 +1027,7 @@ This is a full example of the allowed configuration options on Linux:
"labels": [],
"live-restore": true,
"log-driver": "",
"log-opts": [],
"log-opts": {},
"mtu": 0,
"pidfile": "",
"graph": "",


@ -42,12 +42,12 @@ manager. If the manager in a single-manager swarm fails, your services will
continue to run, but you will need to create a new cluster to recover.
To take advantage of swarm mode's fault-tolerance features, Docker recommends
you implement an odd number of nodes according to your organization's
high-availability requirements. When you have multiple managers you can recover
from the failure of a manager node without downtime.
* A three-manager swarm tolerates a maximum loss of one manager.
* A five-manager swarm tolerates a maximum simultaneous loss of two
manager nodes.
* An `N` manager cluster will tolerate the loss of at most
`(N-1)/2` managers.


@ -46,7 +46,7 @@ node. For example, the tutorial uses a machine named `manager1`.
to access the manager at the IP address.
The output includes the commands to join new nodes to the swarm. Nodes will
join as managers or workers depending on the value for the `--token`
flag.
2. Run `docker info` to view the current state of the swarm: