Merge pull request #22392 from thaJeztah/docs-cherry-picks-3
documentation cherry-picks
commit 290e0ea54c
30 changed files with 577 additions and 7749 deletions

contrib/docker-device-tool/README.md (new file, 14 lines)
@@ -0,0 +1,14 @@
Docker device tool for devicemapper storage driver backend
===========================================================

The ./contrib/docker-device-tool directory contains a tool to manipulate the devicemapper thin-pool.

Compile
=======

    $ make shell
    ## inside build container
    $ go build contrib/docker-device-tool/device_tool.go

    # if the devicemapper version is old and compilation fails, compile with the `libdm_no_deferred_remove` tag
    $ go build -tags libdm_no_deferred_remove contrib/docker-device-tool/device_tool.go
@@ -1,150 +0,0 @@
<!--[metadata]>
+++
aliases = ["/engine/articles/cfengine/"]
title = "Process management with CFEngine"
description = "Managing containerized processes with CFEngine"
keywords = ["cfengine, process, management, usage, docker, documentation"]
[menu.main]
parent = "engine_admin"
+++
<![end-metadata]-->

# Process management with CFEngine

Create Docker containers with managed processes.

Docker monitors one process in each running container and the container
lives or dies with that process. By introducing CFEngine inside Docker
containers, we can alleviate a few of the issues that may arise:

- It is possible to easily start multiple processes within a
  container, all of which will be managed automatically, with the
  normal `docker run` command.
- If a managed process dies or crashes, CFEngine will start it again
  within 1 minute.
- The container itself will live as long as the CFEngine scheduling
  daemon (cf-execd) lives. With CFEngine, we are able to decouple the
  life of the container from the uptime of the service it provides.

## How it works

CFEngine, together with the cfe-docker integration policies, is
installed as part of the Dockerfile. This builds CFEngine into our
Docker image.

The Dockerfile's `ENTRYPOINT` takes an arbitrary
number of commands (with any desired arguments) as parameters. When we
run the Docker container these parameters get written to CFEngine
policies and CFEngine takes over to ensure that the desired processes
are running in the container.

CFEngine scans the process table for the `basename` of the commands given
to the `ENTRYPOINT` and runs the command to start the process if the `basename`
is not found. For example, if we start the container with
`docker run "/path/to/my/application parameters"`, CFEngine will look for a
process named `application` and run the command. If an entry for `application`
is not found in the process table at any point in time, CFEngine will execute
`/path/to/my/application parameters` to start the application once again. The
check on the process table happens every minute.

Note that it is therefore important that the command to start your
application leaves a process with the basename of the command. This can
be made more flexible by making some minor adjustments to the CFEngine
policies, if desired.

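In effect, the policy behaves like a once-a-minute check-and-restart loop keyed on the command's basename. The following is only an illustrative sketch of that idea (it is not the actual CFEngine policy, and `/path/to/my/application parameters` is a placeholder):

```bash
#!/bin/sh
# Illustrative sketch of the check the CFEngine policy performs every minute:
# if no process with the command's basename is running, start the command again.
CMD="/path/to/my/application parameters"            # placeholder command
NAME=$(basename "$(echo "$CMD" | awk '{print $1}')")  # e.g. "application"

if ! pgrep -x "$NAME" > /dev/null; then
    # No process with that basename found; restart the application.
    $CMD &
fi
```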
## Usage

This example assumes you have Docker installed and working. We will
install and manage `apache2` and `sshd`
in a single container.

There are three steps:

1. Install CFEngine into the container.
2. Copy the CFEngine Docker process management policy into the
   containerized CFEngine installation.
3. Start your application processes as part of the `docker run` command.

### Building the image

The first two steps can be done as part of a Dockerfile, as follows.

    FROM ubuntu
    MAINTAINER Eystein Måløy Stenberg <eytein.stenberg@gmail.com>

    RUN apt-get update && apt-get install -y wget lsb-release unzip ca-certificates

    # install latest CFEngine
    RUN wget -qO- http://cfengine.com/pub/gpg.key | apt-key add -
    RUN echo "deb http://cfengine.com/pub/apt $(lsb_release -cs) main" > /etc/apt/sources.list.d/cfengine-community.list
    RUN apt-get update && apt-get install -y cfengine-community

    # install cfe-docker process management policy
    RUN wget https://github.com/estenberg/cfe-docker/archive/master.zip -P /tmp/ && unzip /tmp/master.zip -d /tmp/
    RUN cp /tmp/cfe-docker-master/cfengine/bin/* /var/cfengine/bin/
    RUN cp /tmp/cfe-docker-master/cfengine/inputs/* /var/cfengine/inputs/
    RUN rm -rf /tmp/cfe-docker-master /tmp/master.zip

    # apache2 and openssh are just for testing purposes, install your own apps here
    RUN apt-get update && apt-get install -y openssh-server apache2
    RUN mkdir -p /var/run/sshd
    RUN echo "root:password" | chpasswd  # need a password for ssh

    ENTRYPOINT ["/var/cfengine/bin/docker_processes_run.sh"]

By saving this file as `Dockerfile` in a working directory, you can then build
your image with the docker build command, e.g.,
`docker build -t managed_image .`.

### Testing the container

Start the container with `apache2` and `sshd` running and managed, forwarding
a port to our SSH instance:

    $ docker run -p 127.0.0.1:222:22 -d managed_image "/usr/sbin/sshd" "/etc/init.d/apache2 start"

We now clearly see one of the benefits of the cfe-docker integration: it
allows you to start several processes as part of a normal `docker run` command.

We can now log in to our new container and see that both `apache2` and `sshd`
are running. We have set the root password to "password" in the Dockerfile
above and can use that to log in with ssh:

    ssh -p222 root@127.0.0.1

    ps -ef
    UID        PID  PPID  C STIME TTY          TIME CMD
    root         1     0  0 07:48 ?        00:00:00 /bin/bash /var/cfengine/bin/docker_processes_run.sh /usr/sbin/sshd /etc/init.d/apache2 start
    root        18     1  0 07:48 ?        00:00:00 /var/cfengine/bin/cf-execd -F
    root        20     1  0 07:48 ?        00:00:00 /usr/sbin/sshd
    root        32     1  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
    www-data    34    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
    www-data    35    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
    www-data    36    32  0 07:48 ?        00:00:00 /usr/sbin/apache2 -k start
    root        93    20  0 07:48 ?        00:00:00 sshd: root@pts/0
    root       105    93  0 07:48 pts/0    00:00:00 -bash
    root       112   105  0 07:49 pts/0    00:00:00 ps -ef

If we stop apache2, it will be started again within a minute by
CFEngine.

    service apache2 status
     Apache2 is running (pid 32).
    service apache2 stop
     * Stopping web server apache2 ... waiting    [ OK ]
    service apache2 status
     Apache2 is NOT running.
    # ... wait up to 1 minute...
    service apache2 status
     Apache2 is running (pid 173).

## Adapting to your applications

To make sure your applications get managed in the same manner, there are
just two things you need to adjust from the above example:

- In the Dockerfile used above, install your applications instead of
  `apache2` and `sshd`.
- When you start the container with `docker run`,
  specify the command line arguments to your applications rather than
  `apache2` and `sshd` (see the sketch below).
@@ -17,7 +17,6 @@ This section contains the following:

* [Dockerizing MongoDB](mongodb.md)
* [Dockerizing PostgreSQL](postgresql_service.md)
* [Dockerizing a CouchDB service](couchdb_data_volumes.md)
* [Dockerizing a Node.js web app](nodejs_web_app.md)
* [Dockerizing a Redis service](running_redis_service.md)
* [Dockerizing an apt-cacher-ng service](apt-cacher-ng.md)
* [Dockerizing applications: A 'Hello world'](../userguide/containers/dockerizing.md)

@@ -1,199 +0,0 @@
<!--[metadata]>
+++
title = "Dockerizing a Node.js web app"
description = "Installing and running a Node.js app with Docker"
keywords = ["docker, example, package installation, node, centos"]
[menu.main]
parent = "engine_dockerize"
+++
<![end-metadata]-->

# Dockerizing a Node.js web app

> **Note**:
> - **If you don't like sudo** then see [*Giving non-root
>   access*](../installation/binaries.md#giving-non-root-access)

The goal of this example is to show you how you can build your own
Docker images from a parent image using a `Dockerfile`. We will do that
by making a simple Node.js hello world web
application running on CentOS. You can get the full source code at
[https://github.com/enokd/docker-node-hello/](https://github.com/enokd/docker-node-hello/).

## Create Node.js app

First, create a directory `src` where all the files
will live. Then create a `package.json` file that
describes your app and its dependencies:

    {
      "name": "docker-centos-hello",
      "private": true,
      "version": "0.0.1",
      "description": "Node.js Hello world app on CentOS using docker",
      "author": "Daniel Gasienica <daniel@gasienica.ch>",
      "dependencies": {
        "express": "3.2.4"
      }
    }

Then, create an `index.js` file that defines a web
app using the [Express.js](http://expressjs.com/) framework:

    var express = require('express');

    // Constants
    var PORT = 8080;

    // App
    var app = express();
    app.get('/', function (req, res) {
      res.send('Hello world\n');
    });

    app.listen(PORT);
    console.log('Running on http://localhost:' + PORT);

In the next steps, we'll look at how you can run this app inside a
CentOS container using Docker. First, you'll need to build a Docker
image of your app.

## Creating a Dockerfile

Create an empty file called `Dockerfile`:

    touch Dockerfile

Open the `Dockerfile` in your favorite text editor.

Define the parent image you want to use to build your own image on
top of. Here, we'll use
[CentOS](https://hub.docker.com/_/centos/) (tag: `centos6`)
available on the [Docker Hub](https://hub.docker.com/):

    FROM centos:centos6

Since we're building a Node.js app, you'll have to install Node.js as
well as npm on your CentOS image. Node.js is required to run your app
and npm is required to install your app's dependencies defined in
`package.json`. To install the right package for
CentOS, we'll use the instructions from the
[Node.js wiki](https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager#rhelcentosscientific-linux-6):

    # Enable Extra Packages for Enterprise Linux (EPEL) for CentOS
    RUN yum install -y epel-release
    # Install Node.js and npm
    RUN yum install -y nodejs npm

Install your app dependencies using the `npm` binary:

    # Install app dependencies
    COPY package.json /src/package.json
    RUN cd /src; npm install --production

To bundle your app's source code inside the Docker image, use the `COPY`
instruction:

    # Bundle app source
    COPY . /src

Your app binds to port `8080` so you'll use the `EXPOSE` instruction to have
it mapped by the `docker` daemon:

    EXPOSE 8080

Last but not least, define the command to run your app using `CMD`, which
specifies your runtime, i.e. `node`, and the path to our app, i.e. `src/index.js`
(see the step where we added the source to the container):

    CMD ["node", "/src/index.js"]

Your `Dockerfile` should now look like this:

    FROM centos:centos6

    # Enable Extra Packages for Enterprise Linux (EPEL) for CentOS
    RUN yum install -y epel-release
    # Install Node.js and npm
    RUN yum install -y nodejs npm

    # Install app dependencies
    COPY package.json /src/package.json
    RUN cd /src; npm install --production

    # Bundle app source
    COPY . /src

    EXPOSE 8080
    CMD ["node", "/src/index.js"]

## Building your image

Go to the directory that has your `Dockerfile` and run the following command
to build a Docker image. The `-t` flag lets you tag your image so it's easier
to find later using the `docker images` command:

    $ docker build -t <your username>/centos-node-hello .

Your image will now be listed by Docker:

    $ docker images

    # Example
    REPOSITORY                          TAG        ID              CREATED
    centos                              centos6    539c0211cd76    8 weeks ago
    <your username>/centos-node-hello   latest     d64d3505b0d2    2 hours ago

## Run the image

Running your image with `-d` runs the container in detached mode, leaving the
container running in the background. The `-p` flag redirects a public port to
a private port in the container. Run the image you previously built:

    $ docker run -p 49160:8080 -d <your username>/centos-node-hello

Print the output of your app:

    # Get container ID
    $ docker ps

    # Print app output
    $ docker logs <container id>

    # Example
    Running on http://localhost:8080

## Test

To test your app, get the port of your app that Docker mapped:

    $ docker ps

    # Example
    ID            IMAGE                                      COMMAND              PORTS
    ecce33b30ebf  <your username>/centos-node-hello:latest   node /src/index.js   49160->8080

In the example above, Docker mapped the `8080` port of the container to `49160`.

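You can also query the mapping for a specific container port with `docker port`; the container ID below is taken from the example output and will differ on your machine:

```bash
$ docker port ecce33b30ebf 8080

# Example output (the host port matches whatever you passed to -p)
0.0.0.0:49160
```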
Now you can call your app using `curl` (install it if needed via
`sudo apt-get install curl`):

    $ curl -i localhost:49160

    HTTP/1.1 200 OK
    X-Powered-By: Express
    Content-Type: text/html; charset=utf-8
    Content-Length: 12
    Date: Sun, 02 Jun 2013 03:53:22 GMT
    Connection: keep-alive

    Hello world

If you use Docker Machine on OS X, the port is actually mapped to the Docker
host VM, and you should use the following command:

    $ curl $(docker-machine ip VM_NAME):49160

We hope this tutorial helped you get up and running with Node.js and
CentOS on Docker. You can get the full source code at
[https://github.com/enokd/docker-node-hello/](https://github.com/enokd/docker-node-hello/).

@@ -31,78 +31,40 @@ Follow the instructions in the plugin's documentation.

## Finding a plugin

The following plugins exist:
The sections below provide an inexhaustive overview of available plugins.

* The [Blockbridge plugin](https://github.com/blockbridge/blockbridge-docker-volume)
  is a volume plugin that provides access to an extensible set of
  container-based persistent storage options. It supports single and multi-host Docker
  environments with features that include tenant isolation, automated
  provisioning, encryption, secure deletion, snapshots and QoS.
<style>
#content tr td:first-child { white-space: nowrap;}
</style>

* The [Convoy plugin](https://github.com/rancher/convoy) is a volume plugin for a
  variety of storage back-ends including device mapper and NFS. It's a simple standalone
  executable written in Go and provides the framework to support vendor-specific extensions
  such as snapshots, backups and restore.
### Network plugins

* The [Flocker plugin](https://clusterhq.com/docker-plugin/) is a volume plugin
  which provides multi-host portable volumes for Docker, enabling you to run
  databases and other stateful containers and move them around across a cluster
  of machines.
Plugin | Description
------ | -----------
[Contiv Networking](https://github.com/contiv/netplugin) | An open source network plugin to provide infrastructure and security policies for a multi-tenant micro services deployment, while providing an integration to physical network for non-container workload. Contiv Networking implements the remote driver and IPAM APIs available in Docker 1.9 onwards.
[Kuryr Network Plugin](https://github.com/openstack/kuryr) | A network plugin developed as part of the OpenStack Kuryr project that implements the Docker networking (libnetwork) remote driver API by utilizing Neutron, the OpenStack networking service. It includes an IPAM driver as well.
[Weave Network Plugin](http://docs.weave.works/weave/latest_release/plugin.html) | A network plugin that creates a virtual network that connects your Docker containers - across multiple hosts or clouds and enables automatic discovery of applications. Weave networks are resilient, partition tolerant, secure and work in partially connected networks, and other adverse environments - all configured with delightful simplicity.

* The [GlusterFS plugin](https://github.com/calavera/docker-volume-glusterfs) is
  another volume plugin that provides multi-host volumes management for Docker
  using GlusterFS.
### Volume plugins

* The [Horcrux Volume Plugin](https://github.com/muthu-r/horcrux) allows on-demand,
  version controlled access to your data. Horcrux is an open-source plugin,
  written in Go, and supports SCP, [Minio](https://www.minio.io) and Amazon S3.
Plugin | Description
------ | -----------
[Blockbridge plugin](https://github.com/blockbridge/blockbridge-docker-volume) | A volume plugin that provides access to an extensible set of container-based persistent storage options. It supports single and multi-host Docker environments with features that include tenant isolation, automated provisioning, encryption, secure deletion, snapshots and QoS.
[Contiv Volume Plugin](https://github.com/contiv/volplugin) | An open source volume plugin that provides multi-tenant, persistent, distributed storage with intent based consumption using ceph underneath.
[Convoy plugin](https://github.com/rancher/convoy) | A volume plugin for a variety of storage back-ends including device mapper and NFS. It's a simple standalone executable written in Go and provides the framework to support vendor-specific extensions such as snapshots, backups and restore.
[Flocker plugin](https://clusterhq.com/docker-plugin/) | A volume plugin that provides multi-host portable volumes for Docker, enabling you to run databases and other stateful containers and move them around across a cluster of machines.
[gce-docker plugin](https://github.com/mcuadros/gce-docker) | A volume plugin able to attach, format and mount Google Compute [persistent-disks](https://cloud.google.com/compute/docs/disks/persistent-disks).
[GlusterFS plugin](https://github.com/calavera/docker-volume-glusterfs) | A volume plugin that provides multi-host volumes management for Docker using GlusterFS.
[Horcrux Volume Plugin](https://github.com/muthu-r/horcrux) | A volume plugin that allows on-demand, version controlled access to your data. Horcrux is an open-source plugin, written in Go, and supports SCP, [Minio](https://www.minio.io) and Amazon S3.
[IPFS Volume Plugin](http://github.com/vdemeester/docker-volume-ipfs) | An open source volume plugin that allows using an [ipfs](https://ipfs.io/) filesystem as a volume.
[Keywhiz plugin](https://github.com/calavera/docker-volume-keywhiz) | A plugin that provides credentials and secret management using Keywhiz as a central repository.
[Local Persist Plugin](https://github.com/CWSpear/local-persist) | A volume plugin that extends the default `local` driver's functionality by allowing you to specify a mountpoint anywhere on the host, which enables the files to *always persist*, even if the volume is removed via `docker volume rm`.
[NetApp Plugin](https://github.com/NetApp/netappdvp) (nDVP) | A volume plugin that provides direct integration with the Docker ecosystem for the NetApp storage portfolio. The nDVP package supports the provisioning and management of storage resources from the storage platform to Docker hosts, with a robust framework for adding additional platforms in the future.
[Netshare plugin](https://github.com/gondor/docker-volume-netshare) | A volume plugin that provides volume management for NFS 3/4, AWS EFS and CIFS file systems.
[OpenStorage Plugin](https://github.com/libopenstorage/openstorage) | A cluster-aware volume plugin that provides volume management for file and block storage solutions. It implements a vendor neutral specification for implementing extensions such as CoS, encryption, and snapshots. It has example drivers based on FUSE, NFS, NBD and EBS to name a few.
[Quobyte Volume Plugin](https://github.com/quobyte/docker-volume) | A volume plugin that connects Docker to [Quobyte](http://www.quobyte.com/containers)'s data center file system, a general-purpose scalable and fault-tolerant storage platform.
[REX-Ray plugin](https://github.com/emccode/rexray) | A volume plugin which is written in Go and provides advanced storage functionality for many platforms including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC.

* The [IPFS Volume Plugin](http://github.com/vdemeester/docker-volume-ipfs)
  is an open source volume plugin that allows using an
  [ipfs](https://ipfs.io/) filesystem as a volume.

* The [Keywhiz plugin](https://github.com/calavera/docker-volume-keywhiz) is
  a plugin that provides credentials and secret management using Keywhiz as
  a central repository.

* The [Netshare plugin](https://github.com/gondor/docker-volume-netshare) is a volume plugin
  that provides volume management for NFS 3/4, AWS EFS and CIFS file systems.

* The [OpenStorage Plugin](https://github.com/libopenstorage/openstorage) is a cluster-aware volume plugin that provides volume management for file and block storage solutions. It implements a vendor neutral specification for implementing extensions such as CoS, encryption, and snapshots. It has example drivers based on FUSE, NFS, NBD and EBS to name a few.

* The [Quobyte Volume Plugin](https://github.com/quobyte/docker-volume) connects Docker to [Quobyte](http://www.quobyte.com/containers)'s data center file system, a general-purpose scalable and fault-tolerant storage platform.

* The [REX-Ray plugin](https://github.com/emccode/rexray) is a volume plugin
  which is written in Go and provides advanced storage functionality for many
  platforms including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC.

* The [Contiv Volume Plugin](https://github.com/contiv/volplugin) is an open
  source volume plugin that provides multi-tenant, persistent, distributed storage
  with intent based consumption using ceph underneath.

* The [Contiv Networking](https://github.com/contiv/netplugin) plugin is an open source
  libnetwork plugin to provide infrastructure and security policies for a
  multi-tenant micro services deployment, while providing an integration to
  physical network for non-container workload. Contiv Networking implements the
  remote driver and IPAM APIs available in Docker 1.9 onwards.

* The [Weave Network Plugin](http://docs.weave.works/weave/latest_release/plugin.html)
  creates a virtual network that connects your Docker containers -
  across multiple hosts or clouds and enables automatic discovery of
  applications. Weave networks are resilient, partition tolerant,
  secure and work in partially connected networks, and other adverse
  environments - all configured with delightful simplicity.

* The [Kuryr Network Plugin](https://github.com/openstack/kuryr) is
  developed as part of the OpenStack Kuryr project and implements the
  Docker networking (libnetwork) remote driver API by utilizing
  Neutron, the OpenStack networking service. It includes an IPAM
  driver as well.

* The [Local Persist Plugin](https://github.com/CWSpear/local-persist)
  extends the default `local` driver's functionality by allowing you to specify
  a mountpoint anywhere on the host, which enables the files to *always persist*,
  even if the volume is removed via `docker volume rm`.

## Troubleshooting a plugin

@@ -13,7 +13,7 @@ parent = "engine_extend"

Docker Engine network plugins enable Engine deployments to be extended to
support a wide range of networking technologies, such as VXLAN, IPVLAN, MACVLAN
or something completely different. Network driver plugins are supported via the
LibNetwork project. Each plugin is implemented asa "remote driver" for
LibNetwork project. Each plugin is implemented as a "remote driver" for
LibNetwork, which shares plugin infrastructure with Engine. Effectively, network
driver plugins are activated in the same way as other plugins, and use the same
kind of protocol.

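As a rough, hedged illustration of that activation model: an out-of-process driver can be discovered through a plugin spec file that points at the socket it listens on, and then selected by name. The plugin name `mynetdriver` and the socket path below are placeholders, not a shipped plugin:

```bash
# Hypothetical discovery file for a network driver plugin named "mynetdriver".
# The daemon reads spec files like this when the plugin is first referenced.
$ cat /etc/docker/plugins/mynetdriver.spec
unix:///run/docker/plugins/mynetdriver.sock

# The driver can then be selected by name when creating a network:
$ docker network create --driver mynetdriver mynet
```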
@@ -49,10 +49,6 @@ Docker version | API version | Changes

1.8.x | [1.20](docker_remote_api_v1.20.md) | [API changes](docker_remote_api.md#v1-20-api-changes)
1.7.x | [1.19](docker_remote_api_v1.19.md) | [API changes](docker_remote_api.md#v1-19-api-changes)
1.6.x | [1.18](docker_remote_api_v1.18.md) | [API changes](docker_remote_api.md#v1-18-api-changes)
1.5.x | [1.17](docker_remote_api_v1.17.md) | [API changes](docker_remote_api.md#v1-17-api-changes)
1.4.x | [1.16](docker_remote_api_v1.16.md) | [API changes](docker_remote_api.md#v1-16-api-changes)
1.3.x | [1.15](docker_remote_api_v1.15.md) | [API changes](docker_remote_api.md#v1-15-api-changes)
1.2.x | [1.14](docker_remote_api_v1.14.md) | [API changes](docker_remote_api.md#v1-14-api-changes)

Refer to the [GitHub repository](https://github.com/docker/docker/tree/master/docs/reference/api) for

@@ -60,12 +56,12 @@ older releases.

## Authentication

Since API version 1.2, the auth configuration is now handled client side, so the
Authentication configuration is handled client side, so the
client has to send the `authConfig` as a `POST` in `/images/(name)/push`. The
`authConfig`, set as the `X-Registry-Auth` header, is currently a Base64 encoded
(JSON) string with the following structure:

```
```JSON
{"username": "string", "password": "string", "email": "string",
 "serveraddress" : "string", "auth": ""}
```

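As a hedged sketch of how a client might assemble that header (the credentials are placeholders, `--unix-socket` requires curl 7.40 or newer, and the exact endpoint path depends on the image name and API version):

```bash
# Build the authConfig JSON and Base64-encode it (placeholder credentials).
AUTH=$(echo -n '{"username":"jdoe","password":"secret","email":"jdoe@example.com","serveraddress":"https://index.docker.io/v1/","auth":""}' | base64 -w 0)

# Send it as the X-Registry-Auth header on an image push.
curl --unix-socket /var/run/docker.sock \
     -X POST \
     -H "X-Registry-Auth: $AUTH" \
     "http://localhost/images/jdoe/myimage/push?tag=latest"
```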
@@ -239,53 +235,3 @@ end point now returns the new boolean fields `CpuCfsPeriod`, `CpuCfsQuota`, and

* `POST /build` closing the HTTP request cancels the build
* `POST /containers/(id)/exec` includes `Warnings` field to response.

### v1.17 API changes

[Docker Remote API v1.17](docker_remote_api_v1.17.md) documentation

* The build supports the `LABEL` command. Use this to add metadata to an image. For
  example you could add data describing the content of an image. `LABEL
  "com.example.vendor"="ACME Incorporated"`
* `POST /containers/(id)/attach` and `POST /exec/(id)/start`
* The Docker client now hints potential proxies about connection hijacking using HTTP Upgrade headers.
* `POST /containers/create` sets labels on container create describing the container.
* `GET /containers/json` returns the labels associated with the containers (`Labels`).
* `GET /containers/(id)/json` returns the list of current execs associated with the
  container (`ExecIDs`). This endpoint now returns the container labels
  (`Config.Labels`).
* `POST /containers/(id)/rename` renames a container `id` to a new name.
* `POST /containers/create` and `POST /containers/(id)/start` callers can pass
  `ReadonlyRootfs` in the host config to mount the container's root filesystem as
  read only.
* `GET /containers/(id)/stats` returns a live stream of a container's resource usage statistics.
* `GET /images/json` returns the labels associated with each image (`Labels`).

### v1.16 API changes

[Docker Remote API v1.16](docker_remote_api_v1.16.md)

* `GET /info` returns the number of CPUs available on the machine (`NCPU`),
  total memory available (`MemTotal`), a user-friendly name describing the running Docker daemon (`Name`), a unique ID identifying the daemon (`ID`), and
  a list of daemon labels (`Labels`).
* `POST /containers/create` callers can set the new container's MAC address explicitly.
* Volumes are now initialized when the container is created.
* `POST /containers/(id)/copy` copies data which is contained in a volume.

### v1.15 API changes

[Docker Remote API v1.15](docker_remote_api_v1.15.md) documentation

`POST /containers/create` you can set a container's `HostConfig` when creating a
container. Previously this was only available when starting a container.

### v1.14 API changes

[Docker Remote API v1.14](docker_remote_api_v1.14.md) documentation

* `DELETE /containers/(id)` when using `force`, the container will be immediately killed with SIGKILL.
* `POST /containers/(id)/start` the `HostConfig` option accepts the field `CapAdd`, which specifies a list of capabilities
  to add, and the field `CapDrop`, which specifies a list of capabilities to drop.
* `POST /images/create` the `fromImage` and `repo` parameters support the
  `repo:tag` format. Consequently, the `tag` parameter is now obsolete. Using the
  new format and the `tag` parameter at the same time will return an error.

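As a hedged illustration of the v1.14 `repo:tag` change described above (the image name is only an example, and `--unix-socket` requires curl 7.40 or newer):

```bash
# Pull ubuntu:14.04 through the remote API; the tag rides along inside
# fromImage instead of being passed as a separate "tag" query parameter.
curl --unix-socket /var/run/docker.sock \
     -X POST \
     "http://localhost/images/create?fromImage=ubuntu:14.04"
```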
4 file diffs suppressed because they are too large
@@ -206,13 +206,6 @@ Json Parameters:

- **Domainname** - A string value containing the desired domain name to use
  for the container.
- **User** - A string value containing the user to use inside the container.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
  You must use this with `memory` and make the swap value larger than `memory`.
- **CpuShares** - An integer value containing the CPU Shares for container
  (ie. the relative weight vs other containers).
- **Cpuset** - The same as CpusetCpus, but deprecated, please don't use.
- **CpusetCpus** - String value containing the cgroups CpusetCpus to use.
- **AttachStdin** - Boolean value, attaches to stdin.
- **AttachStdout** - Boolean value, attaches to stdout.
- **AttachStderr** - Boolean value, attaches to stderr.

@@ -243,6 +236,12 @@ Json Parameters:

  in the form of `container_name:alias`.
- **LxcConf** - LXC specific configurations. These configurations will only
  work when using the `lxc` execution driver.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
  You must use this with `memory` and make the swap value larger than `memory`.
- **CpuShares** - An integer value containing the CPU Shares for container
  (ie. the relative weight vs other containers).
- **CpusetCpus** - String value containing the cgroups CpusetCpus to use.
- **PortBindings** - A map of exposed container ports and the host port they
  should map to. It should be specified in the form
  `{ <port>/<protocol>: [{ "HostPort": "<port>" }] }`

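These hunks mainly move the resource-limit fields between the create and start sections, so where a given field lives (top level of the create body versus `HostConfig`) depends on the API version you target. As a rough, hedged sketch of how the parameters above combine in a create request (placeholder image and container name; `--unix-socket` requires curl 7.40 or newer):

```bash
# Create a container with a memory limit, CPU shares and a port binding.
# Depending on the API version, Memory/CpuShares may instead belong inside
# HostConfig, as the surrounding diff illustrates.
curl --unix-socket /var/run/docker.sock \
     -X POST -H "Content-Type: application/json" \
     -d '{
           "Image": "ubuntu",
           "Cmd": ["sleep", "3600"],
           "Memory": 268435456,
           "CpuShares": 512,
           "HostConfig": {
             "PortBindings": { "22/tcp": [{ "HostPort": "11022" }] }
           }
         }' \
     "http://localhost/containers/create?name=limited_example"
```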
@ -1252,7 +1251,6 @@ Query Parameters:
|
|||
can be retrieved or `-` to read the image from the request body.
|
||||
- **repo** – repository
|
||||
- **tag** – tag
|
||||
- **registry** – the registry to pull from
|
||||
|
||||
Request Headers:
|
||||
|
||||
|
|
|
@@ -213,18 +213,6 @@ Json Parameters:

- **Domainname** - A string value containing the domain name to use
  for the container.
- **User** - A string value specifying the user inside the container.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
  You must use this with `memory` and make the swap value larger than `memory`.
- **CpuShares** - An integer value containing the container's CPU Shares
  (ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **Cpuset** - Deprecated please don't use. Use `CpusetCpus` instead.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **AttachStdin** - Boolean value, attaches to `stdin`.
- **AttachStdout** - Boolean value, attaches to `stdout`.
- **AttachStderr** - Boolean value, attaches to `stderr`.

@@ -254,6 +242,17 @@ Json Parameters:

  in the form of `container_name:alias`.
- **LxcConf** - LXC specific configurations. These configurations only
  work when using the `lxc` execution driver.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
  You must use this with `memory` and make the swap value larger than `memory`.
- **CpuShares** - An integer value containing the container's CPU Shares
  (ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **PortBindings** - A map of exposed container ports and the host port they
  should map to. A JSON object in the form
  `{ <port>/<protocol>: [{ "HostPort": "<port>" }] }`

@@ -1301,7 +1300,6 @@ Query Parameters:

  can be retrieved or `-` to read the image from the request body.
- **repo** – Repository name.
- **tag** – Tag.
- **registry** – The registry to pull from.

Request Headers:

@@ -215,19 +215,6 @@ Json Parameters:

- **Domainname** - A string value containing the domain name to use
  for the container.
- **User** - A string value specifying the user inside the container.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
  You must use this with `memory` and make the swap value larger than `memory`.
- **CpuShares** - An integer value containing the container's CPU Shares
  (ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **Cpuset** - Deprecated please don't use. Use `CpusetCpus` instead.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **AttachStdin** - Boolean value, attaches to `stdin`.
- **AttachStdout** - Boolean value, attaches to `stdout`.
- **AttachStderr** - Boolean value, attaches to `stderr`.

@@ -257,6 +244,18 @@ Json Parameters:

  in the form of `container_name:alias`.
- **LxcConf** - LXC specific configurations. These configurations only
  work when using the `lxc` execution driver.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
  You must use this with `memory` and make the swap value larger than `memory`.
- **CpuShares** - An integer value containing the container's CPU Shares
  (ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **PortBindings** - A map of exposed container ports and the host port they
  should map to. A JSON object in the form
  `{ <port>/<protocol>: [{ "HostPort": "<port>" }] }`

@@ -1398,14 +1397,14 @@ Query Parameters:

        }
    }

    This object maps the hostname of a registry to an object containing the
    "username" and "password" for that registry. Multiple registries may
    be specified as the build may be based on an image requiring
    authentication to pull from any arbitrary registry. Only the registry
    domain name (and port if not the default "443") are required. However
    (for legacy reasons) the "official" Docker, Inc. hosted registry must
    be specified with both a "https://" prefix and a "/v1/" suffix even
    though Docker will prefer to use the v2 registry API.

Status Codes:

@@ -224,21 +224,6 @@ Json Parameters:

- **Domainname** - A string value containing the domain name to use
  for the container.
- **User** - A string value specifying the user inside the container.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
  You must use this with `memory` and make the swap value larger than `memory`.
- **MemoryReservation** - Memory soft limit in bytes.
- **KernelMemory** - Kernel memory limit in bytes.
- **CpuShares** - An integer value containing the container's CPU Shares
  (ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **Cpuset** - Deprecated please don't use. Use `CpusetCpus` instead.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **AttachStdin** - Boolean value, attaches to `stdin`.
- **AttachStdout** - Boolean value, attaches to `stdout`.
- **AttachStderr** - Boolean value, attaches to `stderr`.

@@ -271,6 +256,20 @@ Json Parameters:

  in the form of `container_name:alias`.
- **LxcConf** - LXC specific configurations. These configurations only
  work when using the `lxc` execution driver.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
  You must use this with `memory` and make the swap value larger than `memory`.
- **MemoryReservation** - Memory soft limit in bytes.
- **KernelMemory** - Kernel memory limit in bytes.
- **CpuShares** - An integer value containing the container's CPU Shares
  (ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **PortBindings** - A map of exposed container ports and the host port they
  should map to. A JSON object in the form
  `{ <port>/<protocol>: [{ "HostPort": "<port>" }] }`

@@ -1487,14 +1486,14 @@ Query Parameters:

        }
    }

    This object maps the hostname of a registry to an object containing the
    "username" and "password" for that registry. Multiple registries may
    be specified as the build may be based on an image requiring
    authentication to pull from any arbitrary registry. Only the registry
    domain name (and port if not the default "443") are required. However
    (for legacy reasons) the "official" Docker, Inc. hosted registry must
    be specified with both a "https://" prefix and a "/v1/" suffix even
    though Docker will prefer to use the v2 registry API.

Status Codes:

@@ -329,31 +329,6 @@ Json Parameters:

- **Domainname** - A string value containing the domain name to use
  for the container.
- **User** - A string value specifying the user inside the container.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
  You must use this with `memory` and make the swap value larger than `memory`.
- **MemoryReservation** - Memory soft limit in bytes.
- **KernelMemory** - Kernel memory limit in bytes.
- **CpuShares** - An integer value containing the container's CPU Shares
  (ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **Cpuset** - Deprecated please don't use. Use `CpusetCpus` instead.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **BlkioWeightDevice** - Block IO weight (relative device weight) in the form of: `"BlkioWeightDevice": [{"Path": "device_path", "Weight": weight}]`
- **BlkioDeviceReadBps** - Limit read rate (bytes per second) from a device in the form of: `"BlkioDeviceReadBps": [{"Path": "device_path", "Rate": rate}]`, for example:
  `"BlkioDeviceReadBps": [{"Path": "/dev/sda", "Rate": "1024"}]`
- **BlkioDeviceWriteBps** - Limit write rate (bytes per second) to a device in the form of: `"BlkioDeviceWriteBps": [{"Path": "device_path", "Rate": rate}]`, for example:
  `"BlkioDeviceWriteBps": [{"Path": "/dev/sda", "Rate": "1024"}]`
- **BlkioDeviceReadIOps** - Limit read rate (IO per second) from a device in the form of: `"BlkioDeviceReadIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
  `"BlkioDeviceReadIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
- **BlkioDeviceWriteIOps** - Limit write rate (IO per second) to a device in the form of: `"BlkioDeviceWriteIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
  `"BlkioDeviceWriteIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **OomScoreAdj** - An integer value containing the score given to the container in order to tune OOM killer preferences.
- **AttachStdin** - Boolean value, attaches to `stdin`.
- **AttachStdout** - Boolean value, attaches to `stdout`.
- **AttachStderr** - Boolean value, attaches to `stderr`.

@@ -383,6 +358,30 @@ Json Parameters:

  + `volume_name:container_path:ro` to make the bind mount read-only inside the container.
- **Links** - A list of links for the container. Each link entry should be
  in the form of `container_name:alias`.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
  You must use this with `memory` and make the swap value larger than `memory`.
- **MemoryReservation** - Memory soft limit in bytes.
- **KernelMemory** - Kernel memory limit in bytes.
- **CpuShares** - An integer value containing the container's CPU Shares
  (ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **BlkioWeightDevice** - Block IO weight (relative device weight) in the form of: `"BlkioWeightDevice": [{"Path": "device_path", "Weight": weight}]`
- **BlkioDeviceReadBps** - Limit read rate (bytes per second) from a device in the form of: `"BlkioDeviceReadBps": [{"Path": "device_path", "Rate": rate}]`, for example:
  `"BlkioDeviceReadBps": [{"Path": "/dev/sda", "Rate": "1024"}]`
- **BlkioDeviceWriteBps** - Limit write rate (bytes per second) to a device in the form of: `"BlkioDeviceWriteBps": [{"Path": "device_path", "Rate": rate}]`, for example:
  `"BlkioDeviceWriteBps": [{"Path": "/dev/sda", "Rate": "1024"}]`
- **BlkioDeviceReadIOps** - Limit read rate (IO per second) from a device in the form of: `"BlkioDeviceReadIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
  `"BlkioDeviceReadIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
- **BlkioDeviceWriteIOps** - Limit write rate (IO per second) to a device in the form of: `"BlkioDeviceWriteIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
  `"BlkioDeviceWriteIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **OomScoreAdj** - An integer value containing the score given to the container in order to tune OOM killer preferences.
- **PortBindings** - A map of exposed container ports and the host port they
  should map to. A JSON object in the form
  `{ <port>/<protocol>: [{ "HostPort": "<port>" }] }`

@@ -1667,14 +1666,14 @@ Query Parameters:

        }
    }

    This object maps the hostname of a registry to an object containing the
    "username" and "password" for that registry. Multiple registries may
    be specified as the build may be based on an image requiring
    authentication to pull from any arbitrary registry. Only the registry
    domain name (and port if not the default "443") are required. However
    (for legacy reasons) the "official" Docker, Inc. hosted registry must
    be specified with both a "https://" prefix and a "/v1/" suffix even
    though Docker will prefer to use the v2 registry API.

Status Codes:

@@ -296,6 +296,7 @@ Create a container

        "MemorySwappiness": 60,
        "OomKillDisable": false,
        "OomScoreAdj": 500,
        "PidsLimit": -1,
        "PortBindings": { "22/tcp": [{ "HostPort": "11022" }] },
        "PublishAllPorts": false,
        "Privileged": false,

@ -348,32 +349,6 @@ Json Parameters:
|
|||
- **Domainname** - A string value containing the domain name to use
|
||||
for the container.
|
||||
- **User** - A string value specifying the user inside the container.
|
||||
- **Memory** - Memory limit in bytes.
|
||||
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
|
||||
You must use this with `memory` and make the swap value larger than `memory`.
|
||||
- **MemoryReservation** - Memory soft limit in bytes.
|
||||
- **KernelMemory** - Kernel memory limit in bytes.
|
||||
- **CpuShares** - An integer value containing the container's CPU Shares
|
||||
(ie. the relative weight vs other containers).
|
||||
- **CpuPeriod** - The length of a CPU period in microseconds.
|
||||
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
|
||||
- **Cpuset** - Deprecated please don't use. Use `CpusetCpus` instead.
|
||||
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
|
||||
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
|
||||
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
|
||||
- **BlkioWeightDevice** - Block IO weight (relative device weight) in the form of: `"BlkioWeightDevice": [{"Path": "device_path", "Weight": weight}]`
|
||||
- **BlkioDeviceReadBps** - Limit read rate (bytes per second) from a device in the form of: `"BlkioDeviceReadBps": [{"Path": "device_path", "Rate": rate}]`, for example:
|
||||
`"BlkioDeviceReadBps": [{"Path": "/dev/sda", "Rate": "1024"}]"`
|
||||
- **BlkioDeviceWriteBps** - Limit write rate (bytes per second) to a device in the form of: `"BlkioDeviceWriteBps": [{"Path": "device_path", "Rate": rate}]`, for example:
|
||||
`"BlkioDeviceWriteBps": [{"Path": "/dev/sda", "Rate": "1024"}]"`
|
||||
- **BlkioDeviceReadIOps** - Limit read rate (IO per second) from a device in the form of: `"BlkioDeviceReadIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
|
||||
`"BlkioDeviceReadIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
|
||||
- **BlkioDeviceWiiteIOps** - Limit write rate (IO per second) to a device in the form of: `"BlkioDeviceWriteIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
|
||||
`"BlkioDeviceWriteIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
|
||||
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
|
||||
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
|
||||
- **OomScoreAdj** - An integer value containing the score given to the container in order to tune OOM killer preferences.
|
||||
- **PidsLimit** - Tune a container's pids limit. Set -1 for unlimited.
|
||||
- **AttachStdin** - Boolean value, attaches to `stdin`.
|
||||
- **AttachStdout** - Boolean value, attaches to `stdout`.
|
||||
- **AttachStderr** - Boolean value, attaches to `stderr`.
|
||||
|
@@ -403,6 +378,31 @@ Json Parameters:

       + `volume_name:container_path:ro` to make the bind mount read-only inside the container.
- **Links** - A list of links for the container. Each link entry should be
  in the form of `container_name:alias`.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
  You must use this with `memory` and make the swap value larger than `memory`.
- **MemoryReservation** - Memory soft limit in bytes.
- **KernelMemory** - Kernel memory limit in bytes.
- **CpuShares** - An integer value containing the container's CPU Shares
  (ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **BlkioWeightDevice** - Block IO weight (relative device weight) in the form of: `"BlkioWeightDevice": [{"Path": "device_path", "Weight": weight}]`
- **BlkioDeviceReadBps** - Limit read rate (bytes per second) from a device in the form of: `"BlkioDeviceReadBps": [{"Path": "device_path", "Rate": rate}]`, for example:
  `"BlkioDeviceReadBps": [{"Path": "/dev/sda", "Rate": "1024"}]`
- **BlkioDeviceWriteBps** - Limit write rate (bytes per second) to a device in the form of: `"BlkioDeviceWriteBps": [{"Path": "device_path", "Rate": rate}]`, for example:
  `"BlkioDeviceWriteBps": [{"Path": "/dev/sda", "Rate": "1024"}]`
- **BlkioDeviceReadIOps** - Limit read rate (IO per second) from a device in the form of: `"BlkioDeviceReadIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
  `"BlkioDeviceReadIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
- **BlkioDeviceWriteIOps** - Limit write rate (IO per second) to a device in the form of: `"BlkioDeviceWriteIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
  `"BlkioDeviceWriteIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **OomScoreAdj** - An integer value containing the score given to the container in order to tune OOM killer preferences.
- **PidsLimit** - Tune a container's pids limit. Set -1 for unlimited.
- **PortBindings** - A map of exposed container ports and the host port they
  should map to. A JSON object in the form
  `{ <port>/<protocol>: [{ "HostPort": "<port>" }] }`
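
For orientation, here is a hedged sketch of a create request that sets a few of the parameters above under `HostConfig` (the image, command, port numbers, and resource values are placeholders; the daemon is assumed to listen on the default Unix socket, and the API version prefix is omitted):

```bash
# Illustrative only: create a container with a memory limit, CPU shares,
# a pids limit, and one published port, all under HostConfig.
# Requires curl >= 7.40 for --unix-socket; values are placeholders.
curl --unix-socket /var/run/docker.sock \
  -H "Content-Type: application/json" \
  -X POST "http://localhost/containers/create?name=limited-example" \
  -d '{
        "Image": "ubuntu",
        "Cmd": ["sleep", "3600"],
        "ExposedPorts": { "22/tcp": {} },
        "HostConfig": {
          "Memory": 268435456,
          "CpuShares": 512,
          "PidsLimit": 64,
          "PortBindings": { "22/tcp": [{ "HostPort": "11022" }] }
        }
      }'
```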

@@ -517,8 +517,8 @@ Return low-level information on the container `id`

    "Tty": false,
    "User": "",
    "Volumes": {
        "/volumes/data": {}
    },
    "WorkingDir": "",
    "StopSignal": "SIGTERM"
},

@@ -1660,7 +1660,7 @@ Query Parameters:

  You can provide one or more `t` parameters.
- **remote** – A Git repository URI or HTTP/HTTPS URI build source. If the
  URI specifies a filename, the file's contents are placed into a file
  called `Dockerfile`.
- **q** – Suppress verbose build output.
- **nocache** – Do not use the cache when building the image.
- **pull** - Attempt to pull the image even if an older image exists locally.

@@ -1678,6 +1678,7 @@ Query Parameters:

  variable expansion in other Dockerfile instructions. This is not meant for
  passing secret values. [Read more about the buildargs instruction](../../reference/builder.md#arg)
- **shmsize** - Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB.
- **labels** – JSON map of string pairs for labels to set on the image.

Request Headers:

@@ -1696,14 +1697,14 @@ Query Parameters:

        }
    }

    This object maps the hostname of a registry to an object containing the
    "username" and "password" for that registry. Multiple registries may
    be specified as the build may be based on an image requiring
    authentication to pull from any arbitrary registry. Only the registry
    domain name (and port if not the default "443") are required. However
    (for legacy reasons) the "official" Docker, Inc. hosted registry must
    be specified with both a "https://" prefix and a "/v1/" suffix even
    though Docker will prefer to use the v2 registry API.

Status Codes:

@@ -2639,7 +2640,7 @@ interactive session with the `exec` command.

**Example response**:

    HTTP/1.1 200 OK
    Content-Type: vnd.docker.raw-stream
    Content-Type: application/vnd.docker.raw-stream

    {{ STREAM }}

@@ -2774,7 +2775,11 @@ Create a volume

    Content-Type: application/json

    {
      "Name": "tardis"
      "Name": "tardis",
      "Labels": {
        "com.example.some-label": "some-value",
        "com.example.some-other-label": "some-other-value"
      },
    }

**Example response**:

@@ -2785,7 +2790,11 @@ Create a volume

    {
      "Name": "tardis",
      "Driver": "local",
      "Mountpoint": "/var/lib/docker/volumes/tardis"
      "Mountpoint": "/var/lib/docker/volumes/tardis",
      "Labels": {
        "com.example.some-label": "some-value",
        "com.example.some-other-label": "some-other-value"
      },
    }

Status Codes:

@@ -2799,6 +2808,7 @@ JSON Parameters:

- **Driver** - Name of the volume driver to use. Defaults to `local` for the name.
- **DriverOpts** - A mapping of driver options and values. These options are
  passed directly to the driver and are driver specific.
- **Labels** - Labels to set on the volume, specified as a map: `{"key":"value" [,"key2":"value2"]}`
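
As a hedged illustration, the request above can be issued with `curl` against the default Unix socket (the volume name and label values are placeholders):

```bash
# Illustrative only: create a labelled local volume through the remote API.
curl --unix-socket /var/run/docker.sock \
  -H "Content-Type: application/json" \
  -X POST http://localhost/volumes/create \
  -d '{
        "Name": "tardis",
        "Driver": "local",
        "Labels": { "com.example.some-label": "some-value" }
      }'
```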

### Inspect a volume

@@ -2816,9 +2826,13 @@ Return low-level information on the volume `name`

    Content-Type: application/json

    {
      "Name": "tardis",
      "Driver": "local",
      "Mountpoint": "/var/lib/docker/volumes/tardis"
      "Name": "tardis",
      "Driver": "local",
      "Mountpoint": "/var/lib/docker/volumes/tardis/_data",
      "Labels": {
        "com.example.some-label": "some-value",
        "com.example.some-other-label": "some-other-value"
      }
    }

Status Codes:

@@ -2989,6 +3003,10 @@ Content-Type: application/json

        "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
        "com.docker.network.bridge.name": "docker0",
        "com.docker.network.driver.mtu": "1500"
    },
    "Labels": {
        "com.example.some-label": "some-value",
        "com.example.some-other-label": "some-other-value"
    }
}
```

@@ -3012,6 +3030,7 @@ Content-Type: application/json

{
  "Name":"isolated_nw",
  "CheckDuplicate":false,
  "Driver":"bridge",
  "EnableIPv6": true,
  "IPAM":{

@@ -3030,7 +3049,19 @@ Content-Type: application/json

      "foo": "bar"
    }
  },
  "Internal":true
  "Internal":true,
  "Options": {
    "com.docker.network.bridge.default_bridge": "true",
    "com.docker.network.bridge.enable_icc": "true",
    "com.docker.network.bridge.enable_ip_masquerade": "true",
    "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
    "com.docker.network.bridge.name": "docker0",
    "com.docker.network.driver.mtu": "1500"
  },
  "Labels": {
    "com.example.some-label": "some-value",
    "com.example.some-other-label": "some-other-value"
  }
}
```

@@ -3055,12 +3086,13 @@ Status Codes:

JSON Parameters:

- **Name** - The new network's name. This is a mandatory field.
- **CheckDuplicate** - Requests daemon to check for networks with same name
- **Driver** - Name of the network driver plugin to use. Defaults to `bridge` driver
- **Internal** - Restrict external access to the network
- **IPAM** - Optional custom IP scheme for the network
- **EnableIPv6** - Enable IPv6 on the network
- **Options** - Network specific options to be used by the drivers
- **CheckDuplicate** - Requests daemon to check for networks with same name
- **Labels** - Labels to set on the network, specified as a map: `{"key":"value" [,"key2":"value2"]}`
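
As a hedged illustration of these parameters together (the network name and label values are placeholders; the daemon is assumed to listen on the default Unix socket):

```bash
# Illustrative only: create an internal, labelled bridge network via the API.
curl --unix-socket /var/run/docker.sock \
  -H "Content-Type: application/json" \
  -X POST http://localhost/networks/create \
  -d '{
        "Name": "isolated_nw",
        "Driver": "bridge",
        "Internal": true,
        "Labels": { "com.example.some-label": "some-value" }
      }'
```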

### Connect a container to a network

@@ -888,7 +888,7 @@ This is a full example of the allowed configuration options in the file:

    "exec-opts": [],
    "exec-root": "",
    "storage-driver": "",
    "storage-opts": "",
    "storage-opts": [],
    "labels": [],
    "log-driver": "",
    "log-opts": [],

@@ -291,7 +291,8 @@ you can override this with `--dns`.

By default, the MAC address is generated using the IP address allocated to the
container. You can set the container's MAC address explicitly by providing a
MAC address via the `--mac-address` parameter (format: `12:34:56:78:9a:bc`).
MAC address via the `--mac-address` parameter (format: `12:34:56:78:9a:bc`). Be
aware that Docker does not check if manually specified MAC addresses are unique.
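
A quick, hedged illustration (the image and address are arbitrary): start a container with an explicit MAC address and read back what was assigned:

```bash
# Illustrative only: set the MAC address explicitly and print the address the
# kernel assigned to eth0 inside the container.
docker run --rm --mac-address 12:34:56:78:9a:bc busybox \
  cat /sys/class/net/eth0/address
```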

Supported networks:

@@ -562,20 +563,18 @@ the exit codes follow the `chroot` standard, see below:

**_126_** if the **_contained command_** cannot be invoked

    $ docker run busybox /etc; echo $?
    # exec: "/etc": permission denied
    docker: Error response from daemon: Contained command could not be invoked
    # docker: Error response from daemon: Container command '/etc' could not be invoked.
    126

**_127_** if the **_contained command_** cannot be found

    $ docker run busybox foo; echo $?
    # exec: "foo": executable file not found in $PATH
    docker: Error response from daemon: Contained command not found or does not exist
    # docker: Error response from daemon: Container command 'foo' not found or does not exist.
    127

**_Exit code_** of **_contained command_** otherwise

    $ docker run busybox /bin/sh -c 'exit 3'
    $ docker run busybox /bin/sh -c 'exit 3'; echo $?
    # 3

## Clean up (--rm)

@@ -1084,7 +1083,7 @@ By default, Docker containers are "unprivileged" and cannot, for

example, run a Docker daemon inside a Docker container. This is because
by default a container is not allowed to access any devices, but a
"privileged" container is given access to all devices (see
the documentation on [cgroups devices](https://www.kernel.org/doc/Documentation/cgroups/devices.txt)).
the documentation on [cgroups devices](https://www.kernel.org/doc/Documentation/cgroup-v1/devices.txt)).

When the operator executes `docker run --privileged`, Docker will enable
access to all devices on the host as well as set some configuration

@@ -20,8 +20,8 @@ Docker automatically loads container profiles. The Docker binary installs

a `docker-default` profile in the `/etc/apparmor.d/docker` file. This profile
is used on containers, _not_ on the Docker Daemon.

A profile for the Docker Engine Daemon exists but it is not currently installed
with the deb packages. If you are interested in the source for the Daemon
A profile for the Docker Engine daemon exists but it is not currently installed
with the `deb` packages. If you are interested in the source for the daemon
profile, it is located in
[contrib/apparmor](https://github.com/docker/docker/tree/master/contrib/apparmor)
in the Docker Engine source repository.

@@ -72,15 +72,15 @@ explicitly specifies the default policy:

$ docker run --rm -it --security-opt apparmor=docker-default hello-world
```

## Loading and Unloading Profiles
## Load and unload profiles

To load a new profile into AppArmor, for use with containers:
To load a new profile into AppArmor for use with containers:

```bash
$ apparmor_parser -r -W /path/to/your_profile
```

Then you can run the custom profile with `--security-opt` like so:
Then, run the custom profile with `--security-opt` like so:

```bash
$ docker run --rm -it --security-opt apparmor=your_profile hello-world

@@ -97,39 +97,174 @@ $ apparmor_parser -R /path/to/profile

$ /etc/init.d/apparmor start
```

## Debugging AppArmor
### Resources for writing profiles

### Using `dmesg`
The syntax for file globbing in AppArmor is a bit different than some other
globbing implementations. It is highly suggested you take a look at some of the
below resources with regard to AppArmor profile syntax.

- [Quick Profile Language](http://wiki.apparmor.net/index.php/QuickProfileLanguage)
- [Globbing Syntax](http://wiki.apparmor.net/index.php/AppArmor_Core_Policy_Reference#AppArmor_globbing_syntax)

## Nginx example profile

In this example, you create a custom AppArmor profile for Nginx. Below is the
custom profile.

```
#include <tunables/global>


profile docker-nginx flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  network inet tcp,
  network inet udp,
  network inet icmp,

  deny network raw,

  deny network packet,

  file,
  umount,

  deny /bin/** wl,
  deny /boot/** wl,
  deny /dev/** wl,
  deny /etc/** wl,
  deny /home/** wl,
  deny /lib/** wl,
  deny /lib64/** wl,
  deny /media/** wl,
  deny /mnt/** wl,
  deny /opt/** wl,
  deny /proc/** wl,
  deny /root/** wl,
  deny /sbin/** wl,
  deny /srv/** wl,
  deny /tmp/** wl,
  deny /sys/** wl,
  deny /usr/** wl,

  audit /** w,

  /var/run/nginx.pid w,

  /usr/sbin/nginx ix,

  deny /bin/dash mrwklx,
  deny /bin/sh mrwklx,
  deny /usr/bin/top mrwklx,


  capability chown,
  capability dac_override,
  capability setuid,
  capability setgid,
  capability net_bind_service,

  deny @{PROC}/{*,**^[0-9*],sys/kernel/shm*} wkx,
  deny @{PROC}/sysrq-trigger rwklx,
  deny @{PROC}/mem rwklx,
  deny @{PROC}/kmem rwklx,
  deny @{PROC}/kcore rwklx,
  deny mount,
  deny /sys/[^f]*/** wklx,
  deny /sys/f[^s]*/** wklx,
  deny /sys/fs/[^c]*/** wklx,
  deny /sys/fs/c[^g]*/** wklx,
  deny /sys/fs/cg[^r]*/** wklx,
  deny /sys/firmware/efi/efivars/** rwklx,
  deny /sys/kernel/security/** rwklx,
}
```

1. Save the custom profile to disk in the
   `/etc/apparmor.d/containers/docker-nginx` file.

   The file path in this example is not a requirement. In production, you could
   use another.

2. Load the profile.

   ```bash
   $ sudo apparmor_parser -r -W /etc/apparmor.d/containers/docker-nginx
   ```

3. Run a container with the profile.

   To run nginx in detached mode:

   ```bash
   $ docker run --security-opt "apparmor=docker-nginx" \
       -p 80:80 -d --name apparmor-nginx nginx
   ```

4. Exec into the running container.

   ```bash
   $ docker exec -it apparmor-nginx bash
   ```

5. Try some operations to test the profile.

   ```bash
   root@6da5a2a930b9:~# ping 8.8.8.8
   ping: Lacking privilege for raw socket.

   root@6da5a2a930b9:/# top
   bash: /usr/bin/top: Permission denied

   root@6da5a2a930b9:~# touch ~/thing
   touch: cannot touch 'thing': Permission denied

   root@6da5a2a930b9:/# sh
   bash: /bin/sh: Permission denied

   root@6da5a2a930b9:/# dash
   bash: /bin/dash: Permission denied
   ```

Congrats! You just deployed a container secured with a custom apparmor profile!

## Debug AppArmor

You can use `dmesg` to debug problems and `aa-status` to check the loaded profiles.

### Use dmesg

Here are some helpful tips for debugging any problems you might be facing with
regard to AppArmor.

AppArmor sends quite verbose messaging to `dmesg`. Usually an AppArmor line
will look like the following:
looks like the following:

```
[ 5442.864673] audit: type=1400 audit(1453830992.845:37): apparmor="ALLOWED" operation="open" profile="/usr/bin/docker" name="/home/jessie/docker/man/man1/docker-attach.1" pid=10923 comm="docker" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0
```

In the above example, the you can see `profile=/usr/bin/docker`. This means the
In the above example, you can see `profile=/usr/bin/docker`. This means the
user has the `docker-engine` (Docker Engine Daemon) profile loaded.

> **Note:** On versions of Ubuntu > 14.04 this is all fine and well, but Trusty
> users might run into some issues when trying to `docker exec`.

Let's look at another log line:
Look at another log line:

```
[ 3256.689120] type=1400 audit(1405454041.341:73): apparmor="DENIED" operation="ptrace" profile="docker-default" pid=17651 comm="docker" requested_mask="receive" denied_mask="receive"
```

This time the profile is `docker-default`, which is run on containers by
default unless in `privileged` mode. It is telling us, that apparmor has denied
`ptrace` in the container. This is great.
default unless in `privileged` mode. This line shows that apparmor has denied
`ptrace` in the container. This is exactly as expected.
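
If you only want the denials, filtering `dmesg` for the `apparmor="DENIED"` marker shown above works well (a convenience, not something the original walkthrough prescribes):

```bash
# Illustrative only: show recent AppArmor denial messages from the kernel log.
sudo dmesg | grep 'apparmor="DENIED"'
```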

### Using `aa-status`
### Use aa-status

If you need to check which profiles are loaded you can use `aa-status`. The
If you need to check which profiles are loaded, you can use `aa-status`. The
output looks like:

```bash

@@ -162,17 +297,17 @@ apparmor module is loaded.

   0 processes are unconfined but have a profile defined.
```

In the above output you can tell that the `docker-default` profile running on
various container PIDs is in `enforce` mode. This means AppArmor will actively
block and audit in `dmesg` anything outside the bounds of the `docker-default`
The above output shows that the `docker-default` profile running on various
container PIDs is in `enforce` mode. This means AppArmor is actively blocking
and auditing in `dmesg` anything outside the bounds of the `docker-default`
profile.

The output above also shows the `/usr/bin/docker` (Docker Engine Daemon)
profile is running in `complain` mode. This means AppArmor will _only_ log to
`dmesg` activity outside the bounds of the profile. (Except in the case of
Ubuntu Trusty, where we have seen some interesting behaviors being enforced.)
The output above also shows the `/usr/bin/docker` (Docker Engine daemon) profile
is running in `complain` mode. This means AppArmor _only_ logs to `dmesg`
activity outside the bounds of the profile. (Except in the case of Ubuntu
Trusty, where some interesting behaviors are enforced.)

## Contributing to AppArmor code in Docker
## Contribute Docker's AppArmor code

Advanced users and package managers can find a profile for `/usr/bin/docker`
(Docker Engine Daemon) underneath

@@ -106,7 +106,7 @@ arbitrary containers.

For this reason, the REST API endpoint (used by the Docker CLI to
communicate with the Docker daemon) changed in Docker 0.5.2, and now
uses a UNIX socket instead of a TCP socket bound on 127.0.0.1 (the
latter being prone to cross-site-scripting attacks if you happen to run
latter being prone to cross-site request forgery attacks if you happen to run
Docker directly on your local machine, outside of a VM). You can then
use traditional UNIX permission checks to limit access to the control
socket.
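
A small, hedged illustration of those permission checks (the user name is a placeholder, and the group owning the socket can differ between distributions):

```bash
# Illustrative only: the socket is typically owned by root:docker, so group
# membership is what grants access to the control socket.
ls -l /var/run/docker.sock
# Grant a user access by adding them to the docker group (name may differ):
sudo usermod -aG docker alice
```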

@@ -13,15 +13,11 @@ weight=-1

When transferring data among networked systems, *trust* is a central concern. In
particular, when communicating over an untrusted medium such as the internet, it
is critical to ensure the integrity and publisher of all the data a system
operates on. You use Docker to push and pull images (data) to a registry. Content trust
gives you the ability to both verify the integrity and the publisher of all the
is critical to ensure the integrity and the publisher of all the data a system
operates on. You use Docker Engine to push and pull images (data) to a public or private registry. Content trust
gives you the ability to verify both the integrity and the publisher of all the
data received from a registry over any channel.

Content trust is currently only available for users of the public Docker Hub. It
is currently not available for the Docker Trusted Registry or for private
registries.

## Understand trust in Docker

Content trust allows operations with a remote Docker registry to enforce

@@ -82,7 +78,7 @@ desirable, unsigned image tags are "invisible" to them.

![Two image tags exist that have no content trust](images/trust_view.png)

To the consumer who does not enabled content trust, nothing about how they
To the consumer who has not enabled content trust, nothing about how they
work with Docker images changes. Every image is visible regardless of whether it
is signed or not.

@@ -127,7 +123,7 @@ The following image depicts the various signing keys and their relationships:

>tag from this repository prior to the loss.

You should backup the root key somewhere safe. Given that it is only required
to create new repositories, it is a good idea to store it offline.
to create new repositories, it is a good idea to store it offline in hardware.
For details on securing, and backing up your keys, make sure you
read how to [manage keys for content trust](trust_key_mng.md).

@@ -198,11 +194,12 @@ When you push your first tagged image with content trust enabled, the `docker`

client recognizes this is your first push and:

- alerts you that it will create a new root key
- requests a passphrase for the key
- requests a passphrase for the root key
- generates a root key in the `~/.docker/trust` directory
- requests a passphrase for the repository key
- generates a repository key in the `~/.docker/trust` directory

The passphrase you chose for both the root key and your content key-pair
The passphrase you chose for both the root key and your repository key-pair
should be randomly generated and stored in a *password manager*.

> **NOTE**: If you omit the `latest` tag, content trust is skipped. This is true

@@ -267,7 +264,7 @@ Because the tag `docker/cliffs:latest` is not trusted, the `pull` fails.

### Disable content trust for specific operations

A user that wants to disable content trust for a particular operation can use the
A user who wants to disable content trust for a particular operation can use the
`--disable-content-trust` flag. **Warning: this flag disables content trust for
this operation**. With this flag, Docker will ignore content-trust and allow all
operations to be done without verifying any signatures. If we wanted the

@@ -21,7 +21,7 @@ The easiest way to deploy Notary Server is by using Docker Compose. To follow th

        docker-compose up -d

    For more detailed documentation about how to deploy Notary Server see https://github.com/docker/notary.
    For more detailed documentation about how to deploy Notary Server see the [instructions to run a Notary service](/notary/running_a_service.md) as well as https://github.com/docker/notary for more information.

3. Make sure that your Docker or Notary client trusts Notary Server's certificate before you try to interact with the Notary server.

    See the instructions for [Docker](../../reference/commandline/cli.md#notary) or

@@ -10,7 +10,7 @@ parent= "smn_content_trust"

# Automation with content trust

Your automation systems that pull or build images can also work with trust. Any automation environment must set `DOCKER_TRUST_ENABLED` either manually or in in a scripted fashion before processing images.
Your automation systems that pull or build images can also work with trust. Any automation environment must set `DOCKER_TRUST_ENABLED` either manually or in a scripted fashion before processing images.
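
A minimal sketch of what that looks like in a CI shell script, using the `DOCKER_CONTENT_TRUST` variable that the build example further below relies on (the image tag is taken from that example; the rest is illustrative):

```bash
#!/bin/sh
# Illustrative only: enable content trust for every docker command in this
# script, then pull an image; the pull fails if the tag is not signed.
export DOCKER_CONTENT_TRUST=1
docker pull docker/trusttest:latest
```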

## Bypass requests for passphrases

@@ -43,7 +43,7 @@ Signing and pushing trust metadata

## Building with content trust

You can also build with content trust. Before running the `docker build` command, you should set the environment variable `DOCKER_CONTENT_TRUST` either manually or in in a scripted fashion. Consider the simple Dockerfile below.
You can also build with content trust. Before running the `docker build` command, you should set the environment variable `DOCKER_CONTENT_TRUST` either manually or in a scripted fashion. Consider the simple Dockerfile below.

```Dockerfile
FROM docker/trusttest:latest

@@ -18,7 +18,7 @@ sharing your repository key (a combination of your targets and snapshot keys -

please see "[Manage keys for content trust](trust_key_mng.md)" for more information).
A collaborator can keep their own delegation key private.

The `targest/releases` delegation is currently an optional feature - in order
The `targets/releases` delegation is currently an optional feature - in order
to set up delegations, you must use the Notary CLI:

1. [Download the client](https://github.com/docker/notary/releases) and ensure that it is

@@ -40,7 +40,7 @@ available on your path

For more detailed information about how to use Notary outside of the default
Docker Content Trust use cases, please refer to
[the Notary CLI documentation](https://docs.docker.com/notary/getting_started/).
[the Notary CLI documentation](/notary/getting_started.md).

Note that when publishing and listing delegation changes using the Notary client,
your Docker Hub credentials are required.

@@ -37,7 +37,7 @@ workflow. They need to be

Note: Prior to Docker Engine 1.11, the snapshot key was also generated and stored
locally client-side. [Use the Notary CLI to manage your snapshot key locally
again](https://docs.docker.com/notary/advanced_usage/#rotate-keys) for
again](/notary/advanced_usage.md#rotate-keys) for
repositories created with newer versions of Docker.

## Choosing a passphrase

@@ -64,6 +64,16 @@ Before backing them up, you should `tar` them into an archive:

$ umask 077; tar -zcvf private_keys_backup.tar.gz ~/.docker/trust/private; umask 022
```

## Hardware storage and signing

Docker Content Trust can store and sign with root keys from a Yubikey 4. The
Yubikey is prioritized over keys stored in the filesystem. When you initialize a
new repository with content trust, Docker Engine looks for a root key locally. If a
key is not found and the Yubikey 4 exists, Docker Engine creates a root key in the
Yubikey 4. Please consult the [Notary documentation](/notary/advanced_usage.md#use-a-yubikey) for more details.

Prior to Docker Engine 1.11, this feature was only in the experimental branch.

## Lost keys

If a publisher loses keys it means losing the ability to sign trusted content for

@@ -214,8 +214,7 @@ In order, Docker Engine does the following:

- **Pulls the `ubuntu` image:** Docker Engine checks for the presence of the `ubuntu`
  image. If the image already exists, then Docker Engine uses it for the new container.
  If it doesn't exist locally on the host, then Docker Engine pulls it from
  [Docker Hub](https://hub.docker.com). If the image already exists, then Docker Engine
  uses it for the new container.
  [Docker Hub](https://hub.docker.com).
- **Creates a new container:** Once Docker Engine has the image, it uses it to create a
  container.
- **Allocates a filesystem and mounts a read-write _layer_:** The container is created in

@@ -18,7 +18,7 @@ applications run on. Docker container networks give you that control.

This section provides an overview of the default networking behavior that Docker
Engine delivers natively. It describes the type of networks created by default
and how to create your own, user--defined networks. It also describes the
and how to create your own, user-defined networks. It also describes the
resources required to create networks on a single host or across a cluster of
hosts.

@@ -138,7 +138,7 @@ $ docker run -itd --name=container2 busybox

94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c
```

Inspecting the `bridge` network again after starting two containers shows both newly launched containers in the network. Their ids show up in the container
Inspecting the `bridge` network again after starting two containers shows both newly launched containers in the network. Their ids show up in the "Containers" section of `docker network inspect`:

```
$ docker network inspect bridge

@@ -293,7 +293,9 @@ specifications.

You can create multiple networks. You can add containers to more than one
network. Containers can only communicate within networks but not across
networks. A container attached to two networks can communicate with member
containers in either network.
containers in either network. When a container is connected to multiple
networks, its external connectivity is provided via the first non-internal
network, in lexical order.
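
A small sketch of that setup (network and container names are arbitrary):

```bash
# Illustrative only: attach one container to two user-defined networks.
docker network create net-a
docker network create net-b
docker run -itd --name=multi-homed busybox
docker network connect net-a multi-homed
docker network connect net-b multi-homed
# The container can now reach members of both net-a and net-b; external
# traffic leaves via the first non-internal network in lexical order (net-a).
docker network inspect net-a
```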

The next few sections describe each of Docker's built-in network drivers in
greater detail.

@@ -397,6 +397,172 @@ there are two key directories. The `/var/lib/docker/devicemapper/mnt` directory

image layer and container snapshot. The files contain metadata about each
snapshot in JSON format.

## Increase capacity on a running device

You can increase the capacity of the pool on a running thin-pool device. This is
useful if the data's logical volume is full and the volume group is at full
capacity.

### For a loop-lvm configuration

In this scenario, the thin pool is configured to use `loop-lvm` mode. To show the specifics of the existing configuration use `docker info`:

```bash
$ sudo docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 2
Server Version: 1.11.0-rc2
Storage Driver: devicemapper
 Pool Name: docker-8:1-123141-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: ext4
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 1.202 GB
 Data Space Total: 107.4 GB
 Data Space Available: 4.506 GB
 Metadata Space Used: 1.729 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.146 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 WARNING: Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.90 (2014-09-01)
Logging Driver: json-file
[...]
```

The `Data Space` values show that the pool is 100GiB total. This example extends the pool to 200GiB.

1. List the sizes of the devices.

   ```bash
   $ sudo ls -lh /var/lib/docker/devicemapper/devicemapper/
   total 1.2G
   -rw------- 1 root root 100G Apr 14 08:47 data
   -rw------- 1 root root 2.0G Apr 19 13:27 metadata
   ```

2. Truncate `data` file to 200GiB.

   ```bash
   $ sudo truncate -s 214748364800 /var/lib/docker/devicemapper/devicemapper/data
   ```

3. Verify the file size changed.

   ```bash
   $ sudo ls -lh /var/lib/docker/devicemapper/devicemapper/
   total 1.2G
   -rw------- 1 root root 200G Apr 14 08:47 data
   -rw------- 1 root root 2.0G Apr 19 13:27 metadata
   ```

4. Reload data loop device

   ```bash
   $ sudo blockdev --getsize64 /dev/loop0
   107374182400
   $ sudo losetup -c /dev/loop0
   $ sudo blockdev --getsize64 /dev/loop0
   214748364800
   ```

5. Reload devicemapper thin pool.

   a. Get the pool name first.

       $ sudo dmsetup status | grep pool
       docker-8:1-123141-pool: 0 209715200 thin-pool 91 422/524288 18338/1638400 - rw discard_passdown queue_if_no_space -

   The name is the string before the colon.

   b. Dump the device mapper table first.

       $ sudo dmsetup table docker-8:1-123141-pool
       0 209715200 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing

   c. Calculate the real total sectors of the thin pool now.

   Change the second number of the table info (i.e. the number of sectors) to reflect the new number of 512 byte sectors in the disk. For example, as the new loop size is 200GiB, change the second number to 419430400.

   d. Reload the thin pool with the new sector number

       $ sudo dmsetup suspend docker-8:1-123141-pool && sudo dmsetup reload docker-8:1-123141-pool --table '0 419430400 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing' && sudo dmsetup resume docker-8:1-123141-pool
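
As a quick sanity check (not part of the original procedure), confirm that the daemon now reports the larger pool:

```bash
# Illustrative only: the devicemapper section of `docker info` should now show
# the new total data space for the pool.
sudo docker info | grep 'Data Space Total'
```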

#### The device_tool

The Docker project's `contrib` directory contains tools that are not part of the
core distribution. These tools are often useful but can also be out-of-date. <a
href="https://goo.gl/wNfDTi">This directory contains `device_tool.go`</a>,
which you can also use to resize the loop-lvm thin pool.

To use the tool, compile it first. Then, do the following to resize the pool:

```bash
$ ./device_tool resize 200GB
```

### For a direct-lvm mode configuration

In this example, you extend the capacity of a running device that uses the
`direct-lvm` configuration. This example assumes you are using the `/dev/sdh1`
disk partition.

1. Extend the volume group (VG) `vg-docker`.

   ```bash
   $ sudo vgextend vg-docker /dev/sdh1
   Volume group "vg-docker" successfully extended
   ```

   Your volume group may use a different name.

2. Extend the `data` logical volume (LV) `vg-docker/data`.

   ```bash
   $ sudo lvextend -l+100%FREE -n vg-docker/data
   Extending logical volume data to 200 GiB
   Logical volume data successfully resized
   ```

3. Reload devicemapper thin pool.

   a. Get the pool name.

       $ sudo dmsetup status | grep pool
       docker-253:17-1835016-pool: 0 96460800 thin-pool 51593 6270/1048576 701943/753600 - rw no_discard_passdown queue_if_no_space

   The name is the string before the colon.

   b. Dump the device mapper table.

       $ sudo dmsetup table docker-253:17-1835016-pool
       0 96460800 thin-pool 252:0 252:1 128 32768 1 skip_block_zeroing

   c. Calculate the real total sectors of the thin pool now. We can use `blockdev` to get the real size of the data LV.

   Change the second number of the table info (i.e. the number of sectors) to
   reflect the new number of 512 byte sectors in the disk. For example, as the
   new data `lv` size is `264132100096` bytes, change the second number to
   `515883008`.

       $ sudo blockdev --getsize64 /dev/vg-docker/data
       264132100096

   d. Then reload the thin pool with the new sector number.

       $ sudo dmsetup suspend docker-253:17-1835016-pool && sudo dmsetup reload docker-253:17-1835016-pool --table '0 515883008 thin-pool 252:0 252:1 128 32768 1 skip_block_zeroing' && sudo dmsetup resume docker-253:17-1835016-pool
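
To double-check the extended logical volume itself (again, a convenience step rather than part of the original walkthrough):

```bash
# Illustrative only: list the logical volumes in vg-docker and confirm the
# new size of the data LV.
sudo lvs vg-docker
```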

## Device Mapper and Docker performance

It is important to understand the impact that allocate-on-demand and

@@ -70,7 +70,7 @@ first time you start an updated Docker daemon. After the migration is complete,

all images and tags will have brand new secure IDs.

Although the migration is automatic and transparent, it is computationally
intensive. This means it and can take time if you have lots of image data.
intensive. This means it can take time if you have lots of image data.
During this time your Docker daemon will not respond to other requests.

A migration tool exists that allows you to migrate existing images to the new