Device Mapper technology works at the block level rather than the file level.
This means that the `devicemapper` storage driver's thin provisioning and
copy-on-write operations work with blocks rather than entire files.

>**Note**: Snapshots are also referred to as *thin devices* or *virtual
>devices*. They all mean the same thing in the context of the `devicemapper`
>storage driver.
With `devicemapper` the high level process for creating images is as follows:

1. The `devicemapper` storage driver creates a thin pool.

    The pool is created from block devices or loop mounted sparse files (more
on this later).

2. Next it creates a *base device*.

    A base device is a thin device with a filesystem. You can see which
filesystem is in use by running the `docker info` command and checking the
`Backing filesystem` value.

3. Each new image (and image layer) is a snapshot of this base device.

    These are thin provisioned copy-on-write snapshots. This means that they
are initially empty and only consume space from the pool when data is written
to them.

With `devicemapper`, container layers are snapshots of the image they are
created from. Just as with images, container snapshots are thin provisioned
copy-on-write snapshots. The container snapshot stores all updates to the
container. The `devicemapper` driver allocates space to them on-demand from
the pool as and when data is written to the container.
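On a host that is already using the `devicemapper` driver, you can see these Device Mapper objects directly. The commands below are a sketch: the pool name (`docker-202:1-1032-pool`) is taken from the `lsblk` example later in this article and will differ on your system.

```shell
# List all Device Mapper devices. Docker's thin pool and the thin
# devices backing each image and container layer appear in this list.
$ sudo dmsetup ls

# Show the status of the thin pool (substitute your own pool name).
$ sudo dmsetup status docker-202:1-1032-pool
```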
The high level diagram below shows a thin pool with a base device and two
images.

![](https://raw.githubusercontent.com/docker/docker/master/docs/userguide/storagedriver/images/base_device.jpg)

If you look closely at the diagram you'll see that it's snapshots all the way
down. Each image layer is a snapshot of the layer below it. The lowest layer of
each image is a snapshot of the base device that exists in the pool. This
base device is a `Device Mapper` artifact and not a Docker image layer.

A container is a snapshot of the image it is created from. The diagram below
shows two containers - one based on the Ubuntu image and the other based on the
Busybox image.

## Reads with the devicemapper

Let's look at how reads and writes occur using the `devicemapper` storage
driver. The diagram below shows the high level process for reading a single
block (`0x44f`) in an example container.

|
|

|
|
|
|
|
|
1. An application makes a read request for block `0x44f` in the container.
|
|
1. An application makes a read request for block `0x44f` in the container.
|
|
|
|
|
|
- Because the container is a thin snapshot of an image it does not have the
|
|
|
|
-data. Instead, it has a pointer (PTR) to where the data is stored in the image
|
|
|
|
|
|
+ Because the container is a thin snapshot of an image it does not have the
|
|
|
|
+data. Instead, it has a pointer (PTR) to where the data is stored in the image
|
|
snapshot lower down in the image stack.
|
|
snapshot lower down in the image stack.
|
|
|
|
|
|
-2. The storage driver follows the pointer to block `0xf33` in the snapshot
|
|
|
|
|
|
+2. The storage driver follows the pointer to block `0xf33` in the snapshot
|
|
relating to image layer `a005...`.
|
|
relating to image layer `a005...`.
|
|
|
|
|
|
-3. The `devicemapper` copies the contents of block `0xf33` from the image
|
|
|
|
|
|
+3. The `devicemapper` copies the contents of block `0xf33` from the image
|
|
snapshot to memory in the container.
|
|
snapshot to memory in the container.
|
|
|
|
|
|
4. The storage driver returns the data to the requesting application.
|
|
4. The storage driver returns the data to the requesting application.
|
|

### Write examples

With the `devicemapper` driver, writing new data to a container is accomplished
by an *allocate-on-demand* operation. Updating existing data uses a
copy-on-write operation. Because Device Mapper is a block-based technology,
these operations occur at the block level.

For example, when making a small change to a large file in a container, the
`devicemapper` storage driver does not copy the entire file. It only copies the
blocks to be modified. Each block is 64KB.

To write 56KB of new data to a container:

1. An application makes a request to write 56KB of new data to the container.

2. The allocate-on-demand operation allocates a single new 64KB block to the
container's snapshot.

    If the write operation is larger than 64KB, multiple new blocks are
allocated to the container's snapshot.

3. The data is written to the newly allocated block.
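The allocation arithmetic above can be sketched in shell; `blocks_needed` is a hypothetical helper, not part of Docker, that does ceiling division by the 64KB block size:

```shell
# Number of 64KB blocks allocated for a write of a given size in bytes.
blocks_needed() {
  echo $(( ($1 + 65535) / 65536 ))
}

blocks_needed $(( 56 * 1024 ))     # 56KB write  -> 1 block
blocks_needed $(( 200 * 1024 ))    # 200KB write -> 4 blocks
```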

To modify existing data for the first time:

1. An application makes a request to modify some data in the container.

2. A copy-on-write operation locates the blocks that need updating.

3. The operation allocates new empty blocks to the container snapshot and
copies the data into those blocks.

4. The modified data is written into the newly allocated blocks.
to the application's read and write operations.

## Configuring Docker with Device Mapper

The `devicemapper` is the default Docker storage driver on some Linux
distributions. This includes RHEL and most of its forks. Currently, the
following distributions support the driver:

* RHEL/CentOS/Fedora
* Ubuntu 12.04
* Ubuntu 14.04
* Debian

Docker hosts running the `devicemapper` storage driver default to a
configuration mode known as `loop-lvm`. This mode uses sparse files to build
the thin pool used by image and container snapshots. The mode is designed to
work out-of-the-box with no additional configuration. However, production
deployments should not run under `loop-lvm` mode.

You can detect the mode by viewing the output of the `docker info` command:

    Library Version: 1.02.93-RHEL7 (2015-01-28)
    ...

The output above shows a Docker host running with the `devicemapper` storage
driver operating in `loop-lvm` mode. This is indicated by the fact that the
`Data loop file` and the `Metadata loop file` are on files under
`/var/lib/docker/devicemapper/devicemapper`. These are loopback mounted sparse
files.
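You can demonstrate how sparse files behave with a quick experiment (no Docker required): create a file with a large apparent size and compare that with its actual disk usage.

```shell
# Create a sparse file with an apparent size of 100MB but no allocated data.
dd if=/dev/zero of=sparse.img bs=1 count=0 seek=100M

ls -lh sparse.img    # reports the apparent size (100M)
du -h sparse.img     # reports the actual disk usage (~0)

rm sparse.img
```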

### Configure direct-lvm mode for production

The preferred configuration for production deployments is `direct-lvm`. This
mode uses block devices to create the thin pool. The following procedure shows
you how to configure a Docker host to use the `devicemapper` storage driver in
a `direct-lvm` configuration.

> **Caution:** If you have already run the Docker daemon on your Docker host
> and have images you want to keep, `push` them to Docker Hub or your private
> Docker Trusted Registry before attempting this procedure.
The procedure below will create a 90GB data volume and a 4GB metadata volume to
use as backing for the storage pool. It assumes that you have a spare block
device at `/dev/xvdf` with enough free space to complete the task. The device
identifier and volume sizes may be different in your environment and you
should substitute your own values throughout the procedure. The procedure also
assumes that the Docker daemon is in the `stopped` state.

1. Log in to the Docker host you want to configure and stop the Docker daemon.

2. If it exists, delete your existing image store by removing the
`/var/lib/docker` directory.

        $ sudo rm -rf /var/lib/docker

3. Create an LVM physical volume (PV) on your spare block device using the
`pvcreate` command.

        $ sudo pvcreate /dev/xvdf
        Physical volume `/dev/xvdf` successfully created

    The device identifier may be different on your system. Remember to
substitute your value in the command above.

4. Create a new volume group (VG) called `vg-docker` using the PV created in
the previous step.

        $ sudo vgcreate vg-docker /dev/xvdf
        Volume group `vg-docker` successfully created

5. Create a new 90GB logical volume (LV) called `data` from space in the
`vg-docker` volume group.

        $ sudo lvcreate -L 90G -n data vg-docker
        Logical volume `data` created.

    The command creates an LVM logical volume called `data` and an associated
block device file at `/dev/vg-docker/data`. In a later step, you instruct the
`devicemapper` storage driver to use this block device to store image and
container data.

    If you receive a signature detection warning, make sure you are working on
the correct devices before continuing. Signature warnings indicate that the
device you're working on is currently in use by LVM or has been used by LVM in
the past.

6. Create a new 4GB logical volume (LV) called `metadata` from space in the
`vg-docker` volume group.

        $ sudo lvcreate -L 4G -n metadata vg-docker
        Logical volume `metadata` created.

    This creates an LVM logical volume called `metadata` and an associated
block device file at `/dev/vg-docker/metadata`. In the next step you instruct
the `devicemapper` storage driver to use this block device to store image and
container metadata.

7. Start the Docker daemon with the `devicemapper` storage driver and the
`--storage-opt` flags.

    The `data` and `metadata` devices that you pass to the `--storage-opt`
options were created in the previous steps.

        $ sudo docker daemon --storage-driver=devicemapper --storage-opt dm.datadev=/dev/vg-docker/data --storage-opt dm.metadatadev=/dev/vg-docker/metadata &
        INFO[0027] Option DefaultNetwork: bridge
        <output truncated>
        INFO[0027] Daemon has completed initialization
        INFO[0027] Docker daemon commit=0a8c2e3 execdriver=native-0.2 graphdriver=devicemapper version=1.8.2

    It is also possible to set the `--storage-driver` and `--storage-opt` flags
in the Docker config file and start the daemon normally using the `service` or
`systemd` commands.
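As a sketch of that approach: on Debian and Ubuntu systems of this era the daemon flags typically live in `/etc/default/docker`. The file location and variable name vary by distribution, so verify them against your own init configuration before relying on this.

```shell
# /etc/default/docker (Debian/Ubuntu convention; check your distro)
DOCKER_OPTS="--storage-driver=devicemapper \
  --storage-opt dm.datadev=/dev/vg-docker/data \
  --storage-opt dm.metadatadev=/dev/vg-docker/metadata"
```

After saving the file, start the daemon normally, for example with `sudo service docker start`.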

8. Use the `docker info` command to verify that the daemon is using the
`data` and `metadata` devices you created.

        $ sudo docker info
        [...]

    The output of the command above shows the storage driver as `devicemapper`.
The last two lines also confirm that the correct devices are being used for
the `Data file` and the `Metadata file`.

### Examine devicemapper structures on the host

You can use the `lsblk` command to see the device files created above and the
`pool` that the `devicemapper` storage driver creates on top of them.

    $ sudo lsblk
    └─vg--docker-metadata      253:1    0     4G  0 lvm
      └─docker-202:1-1032-pool 253:2    0    10G  0 dm

The diagram below shows the image from prior examples updated with the detail
from the `lsblk` command above.

![](http://farm1.staticflickr.com/703/23116692163_5727270617_b.jpg)

    Docker-MAJ:MIN-INO-pool

`MAJ`, `MIN` and `INO` refer to the major and minor device numbers and inode.

Because Device Mapper operates at the block level it is more difficult to see
diffs between image layers and containers. Docker 1.10 and later no longer
matches image layer IDs with directory names in `/var/lib/docker`. However,
there are two key directories. The `/var/lib/docker/devicemapper/mnt` directory
contains the mount points for image and container layers. The
`/var/lib/docker/devicemapper/metadata` directory contains one file for every
image layer and container snapshot. The files contain metadata about each
snapshot in JSON format.
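For example, on a host running the `devicemapper` driver you could explore these directories as follows. The IDs differ per host, and `<snapshot-id>` is a placeholder for one of the file names you see listed:

```shell
$ sudo ls /var/lib/docker/devicemapper/mnt
$ sudo ls /var/lib/docker/devicemapper/metadata

# Pretty-print the JSON metadata for one snapshot.
$ sudo cat /var/lib/docker/devicemapper/metadata/<snapshot-id> | python -m json.tool
```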

## Device Mapper and Docker performance

It is important to understand the impact that allocate-on-demand and
copy-on-write operations can have on overall container performance.

### Allocate-on-demand performance impact

The `devicemapper` storage driver allocates new blocks to a container via an
allocate-on-demand operation. This means that each time an app writes to
somewhere new inside a container, one or more empty blocks have to be located
in the pool and mapped into the container.

All blocks are 64KB. A write that uses less than 64KB still results in a single
64KB block being allocated. Writing more than 64KB of data uses multiple 64KB
blocks. This can impact container performance, especially in containers that
perform lots of small writes. However, once a block is allocated to a container,
subsequent reads and writes can operate directly on that block.

### Copy-on-write performance impact

Each time a container updates existing data for the first time, the
`devicemapper` storage driver has to perform a copy-on-write operation. This
copies the data from the image snapshot to the container's snapshot. This
process can have a noticeable impact on container performance.

All copy-on-write operations have a 64KB granularity. As a result, updating
32KB of a 1GB file causes the driver to copy a single 64KB block into the
container's snapshot. This has obvious performance advantages over file-level
copy-on-write operations, which would require copying the entire 1GB file into
the container layer.
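The 64KB granularity makes the saving easy to quantify: a 1GB file spans 16,384 such blocks, so rewriting 32KB copies just one block rather than all of them.

```shell
# Number of 64KB blocks in a 1GB file.
echo $(( 1024 * 1024 * 1024 / 65536 ))    # 16384
```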
In practice, however, containers that perform lots of small block writes
(<64KB) can perform worse with `devicemapper` than with AUFS.

### Other device mapper performance considerations

There are several other things that impact the performance of the
`devicemapper` storage driver.

- **The mode.** The default mode for Docker running the `devicemapper` storage
driver is `loop-lvm`. This mode uses sparse files and suffers from poor
performance. It is **not recommended for production**. The recommended mode for
production environments is `direct-lvm`, where the storage driver writes
directly to raw block devices.

- **High speed storage.** For best performance you should place the `Data file`
and `Metadata file` on high speed storage such as SSD. This can be direct
attached storage or from a SAN or NAS array.

- **Memory usage.** `devicemapper` is not the most memory efficient Docker
storage driver. Launching *n* copies of the same container loads *n* copies of
its files into memory. This can have a memory impact on your Docker host. As a
result, the `devicemapper` storage driver may not be the best choice for PaaS
and other high density use cases.
One final point: data volumes provide the best and most predictable
performance. This is because they bypass the storage driver and do not incur
any of the potential overheads introduced by thin provisioning and
copy-on-write. For this reason, you should place heavy write workloads on
data volumes.

## Related Information