These functions are not part of the graphdriver.Driver
interface and should therefore be private.
Also, remove comments added by commit e27c904 as they are
* pretty obvious
* no longer required by golint
Signed-off-by: Kir Kolyshkin <kir@openvz.org>
The ploop graph driver provides its own ext4 filesystem to every
container. It so happens that ext4 root comes with lost+found
directory, causing failures from DriverTestCreateEmpty() and
DriverTestCreateBase() tests on ploop.
While I am not yet ready to submit the ploop graph driver for review,
this change looks simple enough to push.
Note that filtering is done without any additional allocations,
as described in https://github.com/golang/go/wiki/SliceTricks.
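For reference, the in-place filter looks roughly like this (a minimal sketch; the function name is illustrative, not the driver's actual code):
```go
// filterLostFound drops the ext4 "lost+found" entry in place,
// reusing the slice's backing array so no new allocation is made.
func filterLostFound(entries []string) []string {
	out := entries[:0]
	for _, e := range entries {
		if e != "lost+found" {
			out = append(out, e)
		}
	}
	return out
}
```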
[v2: added a comment about lost+found]
Signed-off-by: Kir Kolyshkin <kir@openvz.org>
TL;DR: checking for IsExist(err) after a failed MkdirAll() is both
redundant and wrong -- so two reasons to remove it.
Quoting MkdirAll documentation:
> MkdirAll creates a directory named path, along with any necessary
> parents, and returns nil, or else returns an error. If path
> is already a directory, MkdirAll does nothing and returns nil.
This means two things:
1. If a directory to be created already exists, no error is returned.
2. If the error returned is IsExist (EEXIST), it means there exists
a non-directory with the same name as MkdirAll needs to use for a
directory. Example: we want to MkdirAll("a/b"), but file "a"
(or "a/b") already exists, so MkdirAll fails.
The above is a theory, based on quoted documentation and my UNIX
knowledge.
3. In practice, though, current MkdirAll implementation [1] returns
ENOTDIR in most of the cases described in #2, except when
there is a race between MkdirAll and someone else creating the
last component of the MkdirAll argument as a file. In this very case
MkdirAll() will indeed return EEXIST.
Because of #1, an IsExist check after MkdirAll is not needed.
Because of #2 and #3, ignoring an IsExist error is just plain wrong,
as the directory we require has not been created. It's cleaner to report
the error now.
Note this error is all over the tree, I guess due to copy-paste,
or trying to follow the same usage pattern as for Mkdir(),
or some not quite correct examples on the Internet.
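A minimal sketch of the two patterns (names are placeholders):
```go
import "os"

// Wrong: the extra os.IsExist check only masks the case where a
// non-directory blocks the path, leaving the directory missing.
func ensureDirWrong(dir string) error {
	if err := os.MkdirAll(dir, 0700); err != nil && !os.IsExist(err) {
		return err
	}
	return nil
}

// Right: MkdirAll already returns nil when dir exists as a directory,
// so any error is a real failure and should be reported.
func ensureDir(dir string) error {
	return os.MkdirAll(dir, 0700)
}
```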
[v2: a separate aufs commit is merged into this one]
[1] https://github.com/golang/go/blob/f9ed2f75/src/os/path.go
Signed-off-by: Kir Kolyshkin <kir@openvz.org>
The `ApplyDiff` function takes a tar archive stream that is
automagically decompressed later. This was causing a double
decompression, and when the layer was empty, that caused an early EOF.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
The ZFS driver should raise proper errors when the ZFS utility is
missing or when there's no zfs partition active on the system. Raising the
proper errors makes it possible to silently ignore the ZFS storage
driver when no default storage driver is specified.
Prior to this commit, it was not possible to start the
docker daemon in this way:
docker -d --storage-opt dm.loopdatasize=2GB
The above command resulted in an exit error because the ZFS driver
tried to use the storage options.
Signed-off-by: Flavio Castelli <fcastelli@suse.com>
Current default basesize is 10G. Change it to 100G. Reason being that for
some people 10G is turning out to be too small and we don't have capabilities
to grow it dynamically.
This is just overcommitting and no real space is allocated till the container
actually writes data. And this is no different from fs-based graphdrivers,
where the virtual size of a container root is unlimited.
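For example, anyone who prefers the old default can still set it explicitly
(illustrative invocation, using the existing dm.basesize storage option):
docker -d --storage-opt dm.basesize=10G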
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Replaced github.com/docker/libcontainer with
github.com/opencontainers/runc/libcontainer.
Also I moved AppArmor profile generation to docker.
The main idea of this update is to fix mounting cgroups inside containers.
After updating docker on CI we can even remove dind.
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
The ability to save and verify base device UUID (#13896) introduced a
situation where the initialization would panic when removing the device
returns EBUSY.
Functions `verifyBaseDeviceUUID` and `saveBaseDeviceUUID` now take the
lock on the `DeviceSet`, which solves the problem as `removeDevice`
assumes it owns the lock.
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
Often it happens that docker is not able to shutdown/remove the thin
pool it created because some device has leaked into some mount
namespace. That means the device is in use, and that means the pool can't
be removed. Docker will leave the pool as it is and exit. Later, when the
user starts docker, it finds the pool is already there and uses it. But
docker does not know it is the same pool that is using the loop devices,
so docker thinks the loop devices are not in use. That means it does not
display the data correctly in "docker info", giving the user wrong information.
This patch tries to detect whether the loop devices created by docker are
being used for the pool, and fills in the right details in "docker info".
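One plausible way to detect a loop device's backing file is sysfs; a minimal sketch (not the exact driver code), assuming the kernel's /sys/block/loopN/loop/backing_file attribute:
```go
import (
	"io/ioutil"
	"strings"
)

// loopBackingFile returns the file backing a loop device such as
// "loop0", or an error if the device has no backing file attached.
func loopBackingFile(dev string) (string, error) {
	data, err := ioutil.ReadFile("/sys/block/" + dev + "/loop/backing_file")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}
```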
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
DeviceMapper must be explicitly selected because the Docker binary might not be linked to the right devmapper library.
With this change, Docker fails fast if the driver detection finds the devicemapper directory but the driver is not the default option.
The option `override_udev_sync_check` doesn't make sense anymore, since the user must be explicit in selecting devicemapper, so it's being removed.
Docker refuses to use devicemapper only when it has been built statically, unless the option was explicit.
Signed-off-by: David Calavera <david.calavera@gmail.com>
Export metadata for container and image in docker-inspect when the overlay
graphdriver is in use. Right now this is done only for the devicemapper graph
driver.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
It is easy for one to use docker for a while, shut it down, and restart
docker with a different set of storage options for the device mapper driver,
which will effectively change the thin pool. That means any of the
metadata stored in /var/lib/docker/devicemapper/metadata/ is not valid
for the new pool, and the user will run into various kinds of issues, like
containers not being found in the pool, etc.
Users think that their images or containers are lost, but it might just
be a configuration issue. People might be using the wrong metadata
with the wrong pool.
To detect such situations, save the UUID of the base image, and when docker
starts later, query the UUID of the base image and compare it with the
stored one. If they don't match, fail initialization with an
error saying the UUID failed to match.
That way the user will be forced to clean up the /var/lib/docker/ directory
and start docker again.
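A hedged sketch of the verification idea, shelling out to blkid (the helper name is illustrative; the real code may query the UUID differently):
```go
import (
	"os/exec"
	"strings"
)

// getDeviceUUID asks blkid for just the UUID value of a device.
func getDeviceUUID(device string) (string, error) {
	out, err := exec.Command("blkid", "-s", "UUID", "-o", "value", device).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}
```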
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Export image/container metadata stored in graph driver. Right now 3 fields
DeviceId, DeviceSize and DeviceName are being exported from devicemapper.
Other graph drivers can export fields as they see fit.
This data can be used to mount the thin device outside of docker and tools
can look into image/container and do some kind of inspection.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Previously the cache was only updated once on startup, because the graph
code only checks for filesystems on startup. However, this breaks the API as it
is supposed to work, and the unit tests with it.
Fixes #13142
The docker graph calls driver.Exists() on initialisation for each filesystem in
the graph. This results in a lot of `zfs get all` commands. To reduce
this, retrieve all descendant filesystems at startup and cache them for later checks.
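A sketch of the caching idea (names illustrative, not the driver's exact code): list all descendant filesystems once, then answer Exists() from memory:
```go
import (
	"os/exec"
	"strings"
	"sync"
)

type fsCache struct {
	sync.Mutex
	filesystems map[string]bool
}

// newFsCache lists every descendant filesystem of the pool once
// (-r recursive, -H no headers, -o name names only) and caches them.
func newFsCache(pool string) (*fsCache, error) {
	out, err := exec.Command("zfs", "list", "-rH", "-t", "filesystem", "-o", "name", pool).Output()
	if err != nil {
		return nil, err
	}
	c := &fsCache{filesystems: make(map[string]bool)}
	for _, name := range strings.Fields(string(out)) {
		c.filesystems[name] = true
	}
	return c, nil
}

func (c *fsCache) exists(name string) bool {
	c.Lock()
	defer c.Unlock()
	return c.filesystems[name]
}
```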
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
Instead of letting zfs automatically mount datasets, mount them on demand using mount(2).
This speeds up this graph driver in 2 ways:
- fewer zfs processes are needed to start a container
- /proc/mounts gets smaller, so zfs userspace tools have less to read (which can
be a significant amount of data as the number of layers grows)
This way it can also be ensured that the correct mountpoint is always used.
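The on-demand mount then boils down to a single syscall (minimal sketch; assumes the caller manages the mountpoint):
```go
import "syscall"

// mountDataset mounts a zfs dataset via mount(2) directly,
// avoiding a `zfs mount` process per layer.
func mountDataset(dataset, mountpoint string) error {
	return syscall.Mount(dataset, mountpoint, "zfs", 0, "")
}
```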
Signed-off-by: Jörg Thalheim <joerg@higgsboson.tk>
Right now devicemapper mounts the thin device using online discards by default
and passes the mount option "discard". Generally people discourage usage of
online discards as they can be a drain on performance. Instead it is
recommended to run fstrim once in a while to reclaim the space.
In the case of containers, we recommend keeping data volumes separate. So
there might not be a lot of rm/unlink operations going on, and there might
not be a lot of space being freed by containers. So it might not matter
much if we don't reclaim that free space in the pool.
Users can still pass the mount option explicitly using dm.mountopt=discard to
enable discards if they would like to.
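For example (illustrative invocation):
docker -d --storage-opt dm.mountopt=discard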
So this is more like configuring containers for better performance by default,
instead of better space efficiency in the pool. And users can change the
behavior if they don't like the default.
Reported-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
If a device is being reactivated before it could go away while deferred
deactivation is scheduled on it, cancel the deferred deactivation.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
This will help with debugging, as one could just do "docker info" and figure
out if deferred removal is enabled or not.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Provide a new command line knob dm.deferred_device_removal which will enable
deferred device deactivation if the driver and library support it.
This patch also checks for library support and driver version.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Before this, a storage driver would be defaulted to based on the
priority list, and only a warning would be printed if there was state from
other drivers.
This meant a reordering of the priority list would "break" users in an
upgrade of docker, such that their images in the prior driver's state
were now invisible.
With this change, prior state is scanned, and if present that driver is
preferred.
As such, we can reorder the priority list, and after an upgrade,
existing installs with prior drivers can have a continuous experience,
while fresh installs may default to a driver in the new priority list.
Ref: https://github.com/docker/docker/pull/11962#issuecomment-88274858
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Signed-off-by: Megan Kostick <mkostick@us.ibm.com>
Alphabetize the FSMagic list to make it more human-readable.
Signed-off-by: Megan Kostick <mkostick@us.ibm.com>
This provides an override for forcing the daemon to still attempt
running the devicemapper driver even when udev sync is not supported.
Intended to be a very clear impairment for those choosing to use it. If
udev sync is false, there will still be an error in the daemon logs,
even when the override is in place. The docs have an explicit WARNING.
A link to the docs is included for users that encounter this daemon error
during an upgrade.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Right now we try device removal at an interval of 10ms and keep on trying
till either the device is removed or 10 seconds are over. That means if a
device is busy, we will try 1000 times in those 10 seconds.
That sounds like too high a retry frequency for device removal; the logs fill
up easily. I think it is a good idea to slow down a bit and retry at an
interval of 100ms instead of 10ms.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
During device removal, we first wait for the device to close() in a tight
loop for 10 seconds. I am not sure why we need it. First of all, we come
here once the umount() is successful, so the device should be free. If for
some reason the device is temporarily busy, then the removeDevice() logic
retries device removal in a loop for 10 seconds, and that should cover it. I
can't see why one more 10 second loop is required before attempting device
removal.
One loop should be able to cover all the temporary device-busy conditions, and
if the condition is not temporary then a 10 second loop is not going to help
anyway.
So instead of two loops of 10 seconds each, I am converting it to a single
loop of 20 seconds. Maybe a 10 second loop is good enough, but for now I am
keeping it at 20 seconds to avoid any regressions.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Currently in the device removal path (device deactivation), we wait in
waitRemove() for 10 seconds for the device to actually go away.
In the current code this is not required. If the dm removal task has completed
and one has done the wait on the udev cookie, then the device is gone and there
is no need for another loop to wait for device removal.
This patch removes the waitRemove() which waits for 10 seconds after
device removal. This seems unnecessary.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
The devmapper graph driver retries device removal 1000 times in case of
failure, and this fills up the console with 1000 messages (when the daemon is
running in debug mode). So remove these debug messages.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
There are issues with libdm logging. Right now if docker daemon is run
in debug mode, logging by libdm is too verbose. And if a device can't
be removed, thousands of messages fill the console and one cannot see
what's going on.
This patch removes the devicemapper.LogInitVerbose() call, as that call will
only work if docker is not registering its own log handler with libdm.
For some reason docker registers one with libdm and libdm hands over
all the messages to docker (including debug ones). And now it is up to
devmapper backend to figure out which ones should go to console and
which ones should not.
So by default log only fatal messages from libdm. One can easily modify
the code to change it for debugging purposes.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
It's about time to let folks not hit 'vfs' when 'overlay' is supported
on their kernel. Especially now that v3.18.y is a long-term kernel.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Automatically detect support for aufs `dirperm1` option and apply it.
`dirperm1` tells aufs to check the permission bits of the directory on the
topmost branch and ignore the permission bits on all lower branches.
It can be used to fix aufs' permission bug (i.e., the upper layer having a
broader mask than the lower layer).
More information about the bug can be found at https://github.com/docker/docker/issues/783
`dirperm1` man page is at: http://aufs.sourceforge.net/aufs3/man.html
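A heavily hedged sketch of the detection idea: attempt a throwaway mount that includes dirperm1 and fall back if the kernel's aufs rejects it (paths and option layout are illustrative, not the driver's exact code):
```go
import (
	"fmt"
	"syscall"
)

// supportsDirperm1 probes aufs by mounting a scratch branch stack
// with the dirperm1 option; old aufs versions fail the mount.
func supportsDirperm1(upper, lower, mnt string) bool {
	opts := fmt.Sprintf("br:%s=rw:%s=ro+wh,dirperm1", upper, lower)
	if err := syscall.Mount("none", mnt, "aufs", 0, opts); err != nil {
		return false
	}
	syscall.Unmount(mnt, 0)
	return true
}
```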
Signed-off-by: Daniel, Dao Quang Minh <dqminh89@gmail.com>
We removed it, because upstream removed it. But now it will be coming
back, so work with it either way.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
They say we should only use the BTRFS_LIB_VERSION.
They will no longer support this, since it had to be managed manually.
Docker-DCO-1.1-Signed-off-by: Dan Walsh <dwalsh@redhat.com> (github: rhatdan)
In several cases graphdrivers were just returning the low-level syscall
error, and that was making it all the way up to the daemon logs, where in
many cases it was difficult to tell it was even coming from the graphdriver
at all.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
daemon/volumes.go
This SetFileCon call made no sense: it was changing the label of any
directory mounted into the container to the container's SELinux label. If it
came from me, then I apologize, since it is a huge bug.
The Volumes Mount code should optionally do this, but it should not always
happen, and should never happen on a --privileged container.
The change to
daemon/graphdriver/vfs/driver.go is a simplification, since this is not
a relabel; it is only setting the shared label for docker volumes.
Docker-DCO-1.1-Signed-off-by: Dan Walsh <dwalsh@redhat.com> (github: rhatdan)
when initializing the devmapper driver, attempt to sync udev and device
mapper. If udev sync is not supported, print a warning. Eventually we'll
likely bail here to avoid unpredictable behavior for users.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Fixes #9960
This adds the output of a "Backing Filesystem:" entry to `docker info`
to overlay, aufs, and devicemapper graphdrivers. The default list
includes a fairly complete list of common filesystem names from
linux/include/uapi/linux/magic.h, but if the backing filesystem is not
recognized, the code will simply show "<unknown>".
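The detection itself is essentially one statfs(2) call; a minimal sketch (the magic constants come from linux/include/uapi/linux/magic.h):
```go
import "syscall"

// backingFsMagic returns the filesystem magic number of the
// filesystem backing the given path, for lookup in a
// magic-number -> name table.
func backingFsMagic(root string) (int64, error) {
	var buf syscall.Statfs_t
	if err := syscall.Statfs(root, &buf); err != nil {
		return 0, err
	}
	return int64(buf.Type), nil
}
```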
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com>
There are a couple of drivers that swallow errors that may occur in
their Put() implementation.
This changes the signature of (*Driver).Put for all the drivers implemented.
Signed-off-by: Vincent Batts <vbatts@hashbangbash.com>
Presently the "Data file:" shows either the loopback _file_ or the block device.
With this, the "Data file:" will always show the device, and if it is a
loopback, then there will additionally be a "Data loop file:".
(Same for "Metadata file:")
Signed-off-by: Vincent Batts <vbatts@redhat.com>
On i686, `uint64(buf.Type)` is ffffffff9123683e due to sign extension, so it cannot be compared with `FsMagic(0x9123683E)`.
Signed-off-by: Andrii Melnykov <andy.melnikov@gmail.com>
If .dockerignore mentions either the Dockerfile or .dockerignore itself, then
the client will still send them to the daemon, but the daemon will erase them
after the Dockerfile has been parsed, to simulate them never being sent in the
first place.
An events test kept failing for me, so I tried to fix that too.
Closes #8330
Signed-off-by: Doug Davis <dug@us.ibm.com>
syscall.Unmount failed sometimes when the user interrupted exporting,
for example with a Ctrl-C, or by piping to commands which closed the pipe
early, like "docker export <container_name> | file -". This syscall.Unmount
could sometimes return EBUSY and didn't actually umount the filesystem,
which would cause a following export command to fail to mount.
Changing to a lazy Unmount with MNT_DETACH fixes the problem; this is
the same behavior as in Shutdown.
```text
time="2015-01-03T21:27:26Z" level=error msg="Warning: error unmounting device
34a3e77cdbca17ceffd0636aee0415bb412996adb12360bfe2585ce30467fa8e: device or resource busy"
```
```
$ docker export thirsty_ardinghelli | file -
/dev/stdin: POSIX tar archive
time="2015-01-03T21:58:17Z" level=fatal msg="write /dev/stdout: broken pipe"
$ docker export thirsty_ardinghelli
time="2015-01-03T21:54:33Z" level=fatal msg="Error: thirsty_ardinghelli: Error getting container
34a3e77cdbca17ceffd0636aee0415bb412996adb12360bfe2585ce30467fa8e from driver devicemapper:
Error mounting '/dev/mapper/docker-253:0-3148372-34a3e77cdbca17ceffd0636aee0415bb412996adb12360bfe2585ce30467fa8e'
on '/var/lib/docker/devicemapper/mnt/34a3e77cdbca17ceffd0636aee0415bb412996adb12360bfe2585ce30467fa8e': device or resource busy"
```
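The fix is essentially the lazy-unmount pattern (sketch):
```go
import "syscall"

// lazyUnmount detaches the mount point immediately; the kernel
// completes the unmount once the filesystem is no longer busy,
// so an interrupted export cannot leave an EBUSY mount behind.
func lazyUnmount(target string) error {
	return syscall.Unmount(target, syscall.MNT_DETACH)
}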
Signed-off-by: Derek Che <drc@yahoo-inc.com>
To avoid an expensive call to archive.ChangesDirs() which walks two directory
trees and compares every entry, archive.ApplyLayer() has been extended to
also return the size of the layer changes.
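Callers can then pick the size up directly from the apply step; a sketch assuming the extended signature returns (size, error), with the import path as in this tree:
```go
import (
	"io"

	"github.com/docker/docker/pkg/archive"
)

// applyAndSize applies a layer tar stream onto dest and reports how
// many bytes the layer changed, with no second directory walk.
func applyAndSize(dest string, layer io.Reader) (int64, error) {
	return archive.ApplyLayer(dest, layer)
}
```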
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
Use transaction logic during device deletion and do rollback if transaction
is not complete. Following is the sequence of events.
- Open transaction and save to metafile
- Delete device from pool
- Delete device metadata file from disk
- Close Transaction
If docker crashes without closing transaction then rollback will take
place upon next docker start.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Finally this patch uses the notion of transaction for device or snapshot
device creation.
Following is the sequence of events.
- Open a transaction and save details in a file.
- Create a new device/snapshot device
- If a new device id is used, refresh transaction with new device id details.
- Create device metadata file
- Close transaction.
If docker crashes anywhere in between without closing transaction, then
upon next start, docker will figure out that there was a pending transaction
and it will roll back the transaction. That is, it will do the following.
- Delete Device from pool
- Delete device metadata file
- Remove transaction file to mark no transaction is pending.
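Condensed, the forward path of such a transaction looks like this (a sketch against an illustrative interface, not the driver's exact API); if docker dies before closeTransaction(), the startup rollback above undoes the work:
```go
// txDeviceSet is an illustrative stand-in for the devmapper DeviceSet.
type txDeviceSet interface {
	openTransaction(deviceId int) error // persist intent to the metafile
	createDevice(deviceId int) error    // touch the pool
	saveMetadata(deviceId int) error    // write per-device metadata
	closeTransaction() error            // mark the transaction complete
}

// createDeviceTransactional records intent before touching the pool and
// erases the record only after every step has completed.
func createDeviceTransactional(ds txDeviceSet, deviceId int) error {
	if err := ds.openTransaction(deviceId); err != nil {
		return err
	}
	if err := ds.createDevice(deviceId); err != nil {
		return err
	}
	if err := ds.saveMetadata(deviceId); err != nil {
		return err
	}
	return ds.closeTransaction()
}
```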
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Finally, we seem to have all the bits to keep track of all used device
Ids and find a free device Id to use when creating a new device. Start
using it.
Ideally we should completely move away from the retry logic when the pool
returns -EEXIST. For now I have retained that logic and I simply output a warning.
When things are stable, we should be able to get rid of it.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Open code createDevice() and createSnapDevice() and move all the logic
in the caller.
This is a sheer code reorganization so that all device Id allocation
logic is in one function. That way, in case of errors, one can easily
clean up and mark the device Id free again. (Later patches benefit from
it.)
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Right now we are accessing devices.NextDeviceId directly and also
incrementing it at various places.
Instead, provide a helper function which is responsible for
incrementing NextDeviceId and returning the next deviceId.
This is just code structuring. This will help later once we
convert this function to find a free device Id and it goes
through a bitmap of used/free device Ids.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
When docker starts, build a used/free Device Id map from the per
device meta files we already have. These meta files record which
device Ids are in use. Parse these files and mark the devices as
used in the map.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Currently the devicemapper backend does not keep track of used device Ids in
the pool. It tries a device Id, and if that device Id exists in the pool, it
tries with a different Id, and keeps on doing this in a loop till it succeeds.
This worked fine so far, but now we are moving to transaction-based
device creation and deletion. We will keep the deviceId information in the
transaction, which will be rolled back if docker crashes before the
transaction is complete.
If we store a deviceId in the transaction, later figure out it already
existed in the pool, and docker crashed, then we would roll back and remove
that existing device Id from the pool (which we should not do).
That means we should know a free device Id in the pool in advance, before
we put that device Id in the transaction.
Hence this patch creates a bitmap (one bit for each deviceId), and
sets the bit if the device Id is used, otherwise clears it. This patch
is just preparing the ground right now. Actual usage will follow
in later patches.
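The bitmap itself can be as simple as this sketch (one bit per device Id; names illustrative):
```go
// deviceIdMap holds one bit per device Id: set = used, clear = free.
type deviceIdMap []byte

func newDeviceIdMap(maxIds int) deviceIdMap {
	return make(deviceIdMap, (maxIds+7)/8)
}

func (m deviceIdMap) markUsed(id int) { m[id/8] |= 1 << uint(id%8) }
func (m deviceIdMap) markFree(id int) { m[id/8] &^= 1 << uint(id%8) }
func (m deviceIdMap) isUsed(id int) bool {
	return m[id/8]&(1<<uint(id%8)) != 0
}
```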
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Right now setupBaseImage() uses deleteDevice() to delete the uninitialized
base image, while the rest of the code uses DeleteDevice(). Change it and
use a common function everywhere for the sake of uniformity.
I can't see what harm can be done by the little extra locking done
by DeleteDevice().
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Very soon we will have the notion of an open transaction and keep its
details in a metafile.
When a new transaction is opened, we allocate a new transaction Id,
do the device creation/deletion and then we will close the transaction.
I thought that OpenTransactionId better represents the semantics of
transaction Id associated with an open transaction instead of NewTransactionId.
This patch just does the renaming. No functionality change.
I have also introduced a structure "Transaction" which will keep all
the details associated with a transaction. Later patches will add more
fields in this structure.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Currently new transaction Id is created using allocateTransactionId()
function. This function takes the current NewTransactionId and bumps it up
by one to create the next NewTransactionId.
I think ideally we should be bumping up devices.TransactionId by 1
to come up with NewTransactionId, because the idea is that devices.TransactionId
contains the current pool transaction Id, and to come up with a new
transaction Id we bump it up by one.
The current code is not wrong, as we are keeping NewTransactionId and
TransactionId in sync. But it will be more direct if we look at
devices.TransactionId to come up with NewTransactionId. That way
we don't even have to initialize NewTransactionId during startup,
as the first time somebody wants to do a transaction, it will be
allocated fresh.
So simplify the code a bit. No functionality change.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Currently updatePoolTransactionId() checks that NewTransactionId and
TransactionId are not the same, and only then updates the transaction Id in the
pool. This check is redundant. Currently we call updatePoolTransactionId() only
from two places, and both of these first allocate a new transaction Id.
Also, updatePoolTransactionId() should only be called after allocating a
new transaction Id; otherwise it does not make any sense.
Remove the redundant check and reduce confusion.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Create two new helper functions for device and snap device creation. These
functions will not only create the device but also register it.
Again, makes the code structure better and keeps all transaction logic
contained to functions instead of spilling over into functions like
setupBaseImage or AddDevice().
Just the code reorganization. No functionality change.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Currently registerDevice() adds a device to the in-memory table, saves metadata
and also updates the pool transaction Id.
Now move the transaction Id update out of registerDevice() and provide a new
function unregisterDevice() which does the reverse of registerDevice().
This will simplify some code down the line and make it more structured.
This is just code reorganization and should not change functionality.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Currently devicemapper CreateDevice and CreateSnapDevice keep on retrying
device creation till a suitable device id is found.
With the new transaction mechanism we need to store the device id in the
transaction before the device has been created.
So change the logic in such a way that the caller decides the device Id to
use. If that device Id is not available, the caller bumps up the device Id
and retries.
That way the caller can update the transaction too when it tries a new Id. Transaction
related patches will come later in the series.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
When we are deleting a device, we also delete the associated metadata file. If
that file removal fails, we add the device back to the in-memory
table. I really can't see the point. When the next lookup takes place,
it will be loaded automatically if need be. Remove that code.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Right now initMetaData() first queries the pool for the current transaction Id
and then it migrates the old metafile.
Move the pool transaction Id query and file migration into separate functions
for better code reuse and organization.
Given we have removed the device transaction Id dependency from saveMetaData(),
we don't have to query the pool transaction Id before migrating files.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Right now saveMetaData() is kind of an overloaded function. It is
supposed to save file metadata to disk. But in addition, if the user has
bumped up NewTransactionId before calling saveMetaData(), then it will
also update the transaction Id in the pool.
Keep saveMetaData() simple and let it just save the file. Any update
of the pool transaction Id is done inline in the code which needs it.
Also create a helper function updatePoolTransactionId() to update the pool
transaction Id.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Remove the call to allocateTransactionId() during device removal. It seems to
be unnecessary, and it is not clear what this call is doing.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Again, just because device transaction id is greater than pool transaction
id, it does not guarantee that the device is in the pool. So do not check
for this during loading of device metadata.
Docker needs to deal with it, and device activation will fail when we try
to activate a device for which a metafile is present but there is no device
in the pool.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
The current code associates a transaction id with each device, and if the pool
transaction id is greater than that value, the current code assumes that the
device is there in the pool.
The transaction id of the pool is a mechanism so that during device creation
and removal one can define a transaction, and during startup figure out if the
transaction was complete or not. I think we are using the transaction id
throughout the code a little inappropriately.
For example, if a device is being deleted, it is possible that we deleted
the device from the pool, but docker crashed before we could delete the
metafile. When docker comes back it will think that the device is in the pool
(due to the device transaction id being less than the pool transaction id),
but the device is not in the pool.
Similarly, it could happen that some data in the pool is corrupted and
during pool repair some devices are lost (without docker knowing about
it). In that case the pool transaction id will be higher than the device
transaction id, and there are no guarantees that the device is actually in
the pool.
So move away from this model where we think that a device is in the pool if
the pool transaction id is greater than the device transaction Id. The
per-device transaction Id just says that after device creation this should be
the pool's transaction Id, and nothing more.
The transaction id is a per-pool property (as opposed to a per-device
property) and will be used internally to figure out if the last transaction
was complete or not, and to recover from failure during docker startup.
If for some reason a metafile is present but the device is not in the pool,
then device activation will fail later.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Since Linux 3.18-rc6, overlayfs has been renamed overlay.
This change was introduced by the following commit in linux.git:
ef94b1864d1ed5be54376404bb23d22ed0481feb ovl: rename filesystem type to "overlay"
Signed-off-by: Lénaïc Huard <lhuard@amadeus.com>
TreeSize uses syscall.Stat_t which is not available on Windows.
It's called only on the daemon path; therefore, extract it to the daemon
with the build tag 'daemon'.
Signed-off-by: Ahmet Alp Balkan <ahmetb@microsoft.com>
Fixes #1171 Fixes #6465
Data passed to mount(2) is clipped to PAGE_SIZE if it is bigger. The previous
implementation checked if an error was returned and then started to append
layers one by one. But if the PAGE_SIZE clipping appeared in between the
paths, in the permission sections, or in the xino definition, the call would
not error and the remaining layers would just be skipped (or some other
unknown situation would occur).
This also optimizes system calls, as it tries to mount as much as possible
with the first mount.
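A rough sketch of the chunked-mount idea (the branch and option strings are illustrative, not the driver's exact format): never hand mount(2) more data than fits in one page, and append the remaining branches by remount:
```go
import "syscall"

// mountAufs mounts as many branches as fit within one page of mount
// data, then appends the rest one by one via remount, so no branch
// can ever be silently clipped.
func mountAufs(target string, branches []string) error {
	if len(branches) == 0 {
		return nil
	}
	pageSize := syscall.Getpagesize()
	data, i := "br:"+branches[0], 1
	for ; i < len(branches); i++ {
		next := data + ":" + branches[i]
		if len(next) >= pageSize {
			break
		}
		data = next
	}
	if err := syscall.Mount("none", target, "aufs", 0, data); err != nil {
		return err
	}
	for ; i < len(branches); i++ {
		opt := "remount,append:" + branches[i]
		if err := syscall.Mount("none", target, "aufs", syscall.MS_REMOUNT, opt); err != nil {
			return err
		}
	}
	return nil
}
```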
Signed-off-by: Tõnis Tiigi <tonistiigi@gmail.com> (github: tonistiigi)
Ideally lvm2 would be used to create/manage the thin-pool volume that is
then handed to docker to exclusively create/manage the thin and thin
snapshot volumes needed for its containers. Managing the thin-pool
outside of docker makes for the most feature-rich method of having
docker utilize device mapper thin provisioning as the backing storage
for docker's containers. lvm2-based thin-pool management feature
highlights include: automatic or interactive thin-pool resize support,
dynamically change thin-pool features, automatic thinp metadata checking
when lvm2 activates the thin-pool, etc.
Docker will not activate/deactivate the specified thin-pool device but
it will exclusively manage/create thin and thin snapshot volumes in it.
Docker will not take ownership of the specified thin-pool device unless
it has 0 data blocks used and a transaction id of 0. This should help
guard against using a thin-pool that is already in use.
Also fix typos in setupBaseImage() relative to the thin volume type of
the base image.
Docker-DCO-1.1-Signed-off-by: Mike Snitzer <snitzer@redhat.com> (github: snitm)
Took care of some review comments from crosbymichael.
v2:
- Return "err = nil" if the deviceset-metadata file does not exist.
- Use json.Decoder() interface for loading deviceset metadata.
v3:
- Reverted back to json marshal interface in loadDeviceSetMetaData().
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
In the previous patch I had introduced json:"-" tags, to be on the safer side,
to make sure certain fields are not marshalled/unmarshalled. But struct fields
starting with a lowercase letter are not exported, so they will not be
marshalled anyway. So remove the json:"-" tags from there.
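A self-contained demonstration of why the tags were redundant (struct and field names are illustrative):
```go
package main

import (
	"encoding/json"
	"fmt"
)

type devInfo struct {
	Hash     string `json:"hash"`
	lockRefs int    // unexported: encoding/json never marshals it
}

func main() {
	b, _ := json.Marshal(devInfo{Hash: "abc", lockRefs: 3})
	fmt.Println(string(b)) // prints {"hash":"abc"}; lockRefs is absent
}
```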
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
My pull request failed the build due to gofmt issues. I have run gofmt
on the specified files, and this commit fixes it.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
The way thin-pool right now is designed, user space is supposed to keep
track of what device ids have already been used. If user space tries to
create a new thin/snap device and the device id has already been used, the
thin pool returns -EEXIST.
Upon receiving -EEXIST, the current docker implementation simply tries
NextDeviceId++ and keeps on doing this till it finds a free device id.
This approach has two issues.
- It is a little suboptimal.
- If a device id already exists, the current kernel implementation spits out
a message on the console.
[17991.140135] device-mapper: thin: Creation of new snapshot 33 of device 3 failed.
Here the kernel is trying to tell the user that device id 33 has already been
used. And this shows up for every device id docker tries till it reaches a
point where device ids are not used. So if there are thousands of containers
and one is trying to create a new container after a fresh docker start, expect
thousands of such warnings to flood the console.
This patch saves the NextDeviceId in a file in
/var/lib/docker/devmapper/metadata/deviceset-metadata and reads it back
when docker starts. This way we don't retry lots of device ids which
have already been used.
There might be some device ids which are free but we will get back to them
once device numbers wrap around (24bit limit on device ids).
This patch should cut down on the number of kernel warnings.
Notice that I am creating a deviceset metadata file which is a global file
for this pool. So down the line if we need to save more data we should be
able to do that.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
I was trying to save nextDeviceId to a file but it would not work;
json.Marshal() would do nothing. Then some searching showed that I need to
make the first letter of the struct field capital, exporting the field, and
now json.Marshal() works.
This is a preparatory patch for the next one.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Currently we save device metadata, and have a helper function saveMetadata()
which converts the data to json format as well as saves it to a file. To
convert the data to json format, one needs to know what is being saved.
Break this function down into two functions. One function only has file-write
capability and takes a byte array of json data as an argument.
Now this function does not have to know what data is being saved. It
only knows that a stream of json data is being saved to a file.
This allows me to reuse this function to save a different type of
metadata. In this case I am planning to save NextDeviceId, so that
docker can use this device Id upon the next restart. Otherwise docker
starts from 0, which is suboptimal.
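A sketch of the split (names illustrative; the atomic rename-into-place detail is an assumption of this sketch): the writer knows only about bytes and a path, and callers marshal whatever structure they are saving:
```go
import (
	"io/ioutil"
	"os"
)

// writeMetaFile persists an opaque blob of json data to path.
func writeMetaFile(jsonData []byte, path string) error {
	tmp := path + ".tmp"
	if err := ioutil.WriteFile(tmp, jsonData, 0600); err != nil {
		return err
	}
	return os.Rename(tmp, path) // replace the old file in one step
}
```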
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
In an effort to make layer content 'stable' between import
and export from two different graph drivers, we must resolve
an issue where AUFS produces metadata files in its layers
which other drivers explicitly ignore when importing.
The issue presents itself like this:
- Generate a layer using AUFS
- On commit of that container, the new stored layer contains
AUFS metadata files/dirs. The stored layer content has some
tarsum value: '1234567'
- `docker save` that image to a USB drive and `docker load`
into another docker engine instance which uses another
graph driver, say 'btrfs'
- On load, this graph driver explicitly ignores any AUFS metadata
that it encounters. The stored layer content now has some
different tarsum value: 'abcdefg'.
The only (apparently) useful aufs metadata to keep are the pseudo-link
files located at `/.wh..wh.plink/`. These files hold information at the
RW layer about hard-linked files between this layer and another layer.
The other graph drivers make sure to copy up these pseudo-linked files,
but I've tested out a few different situations and it seems that this
is unnecessary (in my test, AUFS already copies up the other hard-linked
files to the RW layer).
This changeset adds explicit exclusion of the AUFS metadata files and
directories (NOTE: not the whiteout files!) on commit of a container
using the AUFS storage driver.
Also included is a change to the archive package. It now explicitly
excludes the root directory from the resulting tar archive,
for 2 reasons: 1) it's unnecessary; 2) it's another difference from
what other graph drivers produce when exporting a layer to a tar archive.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
This backend uses the overlayfs union filesystem for containers
plus hard link file sharing for images.
Each container/image can have a "root" subdirectory which is a plain
filesystem hierarchy, or they can use overlayfs.
If they use overlayfs there is an "upper" directory and a "lower-id"
file, as well as "merged" and "work" directories. The "upper"
directory has the upper layer of the overlay, and "lower-id" contains
the id of the parent whose "root" directory shall be used as the lower
layer in the overlay. The overlay itself is mounted in the "merged"
directory, and the "work" dir is needed for overlayfs to work.
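At mount time the pieces fit together roughly like this (a sketch; note the fs type was still spelled "overlayfs" before the kernel rename mentioned earlier in this log):
```go
import (
	"fmt"
	"syscall"
)

// mountOverlay assembles the lowerdir/upperdir/workdir options and
// mounts the union at the "merged" directory.
func mountOverlay(lowerDir, upperDir, workDir, mergedDir string) error {
	opts := fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s", lowerDir, upperDir, workDir)
	return syscall.Mount("overlay", mergedDir, "overlay", 0, opts)
}
```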
When an overlay layer is created there are two cases: either the
parent has a "root" dir, in which case we start out with an empty "upper"
directory overlaid on the parent's root. This is typically the
case with the init layer of a container which is based on an image.
If there is no "root" in the parent, we inherit the lower-id from
the parent and start by making a copy of the parent's "upper" dir.
This is typically the case for a container layer which copies
its parent -init upper layer.
Additionally we also have a custom implementation of ApplyLayer
which makes a recursive copy of the parent "root" layer using
hardlinks to share file data, and then applies the layer on top
of that. This means all child images share file (but not directory)
data with the parent.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
The vfs storage driver currently shells out to the `cp` binary on the host
system to perform an 'archive' copy of the base image to a new directory.
The archive option preserves the modified time of the files which are created
but there was an issue where it was unable to preserve the modified time of
copied symbolic links on some host systems with an outdated version of `cp`.
This change no longer relies on the host system implementation and instead
utilizes the `CopyWithTar` function found in `pkg/archive` which is used
to copy from source to destination directory using a Tar archive, which
should correctly preserve file attributes.
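In code the replacement is a single call (sketch, using the pkg/archive helper the text names):
```go
import "github.com/docker/docker/pkg/archive"

// copyBase copies the base image through a tar pipe, preserving file
// attributes (including symlink mtimes) regardless of the host's cp.
func copyBase(src, dst string) error {
	return archive.CopyWithTar(src, dst)
}
```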
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
Now that the archive package does not depend on any docker-specific
packages, only those in pkg and vendor, it can be safely moved into pkg.
Signed-off-by: Rafe Colton <rafael.colton@gmail.com>
If `--storage-opt dm.datadev=/dev/loop0 --storage-opt
dm.metadatadev=/dev/loop1 ` were provided, the information was not
reflected in the information output.
Closes: #7137
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Some graphdrivers are Differs and type assertions are made
in various places throughout the project. Differ offers some
convenience in generating/applying diffs of filesystem layers
but for most graphdrivers another code path is taken.
This patch brings all of the logic related to filesystem
diffs in one place, and simplifies the implementation of some
common types like Image, Daemon, and Container.
Signed-off-by: Josh Hawn <josh.hawn@docker.com>
Since these will be shared between containers we want to label
them as svirt_sandbox_file_t:s0. That will allow multiple containers
to write to them.
Currently we are allowing container domains to read/write all content in
/var/lib/docker because of container volumes. This is a big security hole
in our SELinux story.
This patch will allow us to tighten up the security of docker containers.
Docker-DCO-1.1-Signed-off-by: Dan Walsh <dwalsh@redhat.com> (github: rhatdan)
functions to pkg/parsers/kernel, and parsing filters to
pkg/parsers/filter. Adjust imports and package references.
Docker-DCO-1.1-Signed-off-by: Erik Hollensbe <github@hollensbe.org> (github: erikh)
Commit 09ee269d ("devmapper: Add option for specifying the thin pool
blocksize") also switched the default dm-thin-pool blocksize from 64K to
512K. That change unfortunately breaks the activation of dm-thin-pool
devices that were previously created using a 64K blocksize. Here is an
example of the dm-thin-pool activation failure users may experience:
device-mapper: thin: 253:4: pool target (204800 blocks) too small: expected 1638400
device-mapper: table: 253:4: thin-pool: preresume failed, error = -22
The reason for this is docker is passing 512K as the blocksize for a
dm-thin-pool that was previously created using a 64K blocksize. Docker
doesn't record the blocksize that is used when it creates a dm-thin-pool.
Until now it never had a need to do so because the blocksize was always
hardcoded. The dm-thin-pool blocksize must be the same every time a
dm-thin-pool is activated.
As a stop-gap fix, revert to using 64K for the default blocksize.
But we do need a proper fix for this now that 'dm.blocksize' is exposed
as a proper storage option. One possible fix would be to record the
blocksize for each dm-thin-pool that docker creates and to pass that
recorded blocksize down in the dmsetup table load each time the
dm-thin-pool is activated (this would be comparable to what lvm2 does).
Docker-DCO-1.1-Signed-off-by: Mike Snitzer <snitzer@redhat.com> (github: snitm)
Add dm.blocksize option that you can use with --storage-opt to set a
specific blocksize for the thin provisioning pool.
Also change the default dm-thin-pool blocksize from 64K to 512K. This
strikes a balance between the desire to have smaller blocksize given
docker's use of snapshots versus the desire to have more performance
that comes with using a larger blocksize. But if very small files will
be used on average the user is encouraged to override this default.
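For example (illustrative invocation):
docker -d --storage-opt dm.blocksize=64K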
Docker-DCO-1.1-Signed-off-by: Mike Snitzer <snitzer@redhat.com> (github: snitm)
Device Mapper needs device sizes in binary (1024) multiples. Otherwise
kernel checks can find that the specified thin-pool device sizes aren't
a multiple of the specified thin-pool blocksize.
The name for "RAMInBytes" is likely too narrow given the new consumers
but... Also add "tebibyte" support to RAMInBytes.
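Expected behavior, sketched (values are 1024-based; import path as in the current tree):
```go
import (
	"fmt"

	"github.com/docker/docker/pkg/units"
)

func main() {
	size, _ := units.RAMInBytes("100g") // 100 * 1024^3
	tib, _ := units.RAMInBytes("1t")    // tebibyte support added here
	fmt.Println(size, tib)              // 107374182400 1099511627776
}
```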
Docker-DCO-1.1-Signed-off-by: Mike Snitzer <snitzer@redhat.com> (github: snitm)
createPool() and reloadPool() should be consistent with the thin-pool
table params they use.
Since createPool() specifies '1 skip_block_zeroing' reloadPool() should
too. Otherwise, if the pool is reloaded (as is done when resizing
loopback devices) block zeroing will be enabled after the reload
completes.
Docker-DCO-1.1-Signed-off-by: Mike Snitzer <snitzer@redhat.com> (github: snitm)
If this is at the root directory for the daemon, you could unmount someone's
filesystem when you stop docker; this is actually only needed for the places
where the graph drivers mount the container's root filesystems.
Docker-DCO-1.1-Signed-off-by: Michael Crosby <michael@crosbymichael.com> (github: crosbymichael)
The blkdiscard hack we do on container/image delete is pretty slow, but
required to restore space to the "host" root filesystem. However, it
is pretty useless on raw devices, and you may not need it in development
either.
In a simple test of the devicemapper backend on loopback the time to
delete 20 containers went from 11 seconds to 0.4 seconds with
--storage-opt blkdiscard=false.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
This adds dm.datadev and dm.metadatadev options that you can use with
--storage-opt to set to specific devices to use for the thin
provisioning pool.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
This adds the following --storage-opts for the daemon:
dm.fs: The filesystem to use for the base image
dm.mkfsarg: Add an argument to the mkfs command for the base image
dm.mountopt: Add a mount option for devicemapper mount
Currently supported filesystems are xfs and ext4.
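For example (illustrative invocation combining these):
docker -d --storage-opt dm.fs=xfs --storage-opt dm.mountopt=noatime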
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
This allows setting these settings to be passed:
dm.basesize
dm.loopdatasize
dm.loopmetadatasize
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
If we can't even get the current device mapper driver version, then
we cleanly fail the devmapper driver as not supported and fall back
on the next one.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
There are two cases where we can't use a graphdriver:
1) the graphdriver itself isn't supported by the system
2) the graphdriver is supported, but some configuration/prerequisites are
missing
This introduces a new error for the 2) case and uses it when trying to
run docker with btrfs backend on a non-btrfs filesystem.
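A sketch of how case 2) is distinguished for btrfs (the magic value comes from linux/magic.h; the error name follows the graphdriver package, the rest is illustrative, with the import path as in the modern tree):
```go
import (
	"syscall"

	"github.com/docker/docker/daemon/graphdriver"
)

const btrfsSuperMagic = 0x9123683E

// checkBtrfs returns ErrPrerequisites when the backing filesystem is
// not btrfs; the uint32 conversion avoids the i686 sign-extension
// pitfall noted elsewhere in this log.
func checkBtrfs(root string) error {
	var buf syscall.Statfs_t
	if err := syscall.Statfs(root, &buf); err != nil {
		return err
	}
	if uint32(buf.Type) != btrfsSuperMagic {
		return graphdriver.ErrPrerequisites
	}
	return nil
}
```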
Docker-DCO-1.1-Signed-off-by: Johannes 'fish' Ziemke <github@freigeist.org> (github: discordianfish)
There is no reason to do discard during mkfs, as the filesystem
is on a newly allocated device anyway. Discard is a slow operation,
so this may help initial startup a bit, especially if you use a larger
thin pool.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
Now that we have the generic graphtest tests that actually test
the driver, we can remove the old mock-using tests. Almost all of
these tests were disabled anyway, and the four remaining ones
didn't really test much while at the same time being really
fragile and making the rest of the code more complex due to
the mocking setup.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
For now this means the btrfs backend is skipped when run
inside make test. You can however run it manually if you want.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
If a graphdriver fails initialization due to ErrNotSupported we ignore
that and keep trying the next. But if some driver has a different
error (for instance if you specified an unknown option for it) we fail
the daemon startup, printing the error, rather than falling back to an
unexpected driver (typically vfs) which may not match what you have run
earlier.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
This adds daemon/graphdriver/graphtest/graphtest which has a few
generic tests for all graph drivers, and then uses these
from the btrfs, devicemapper and vfs backends.
I've not yet added the aufs backend, because I can't test that here
atm. It should work though.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
Currently the tests that mock or deny functions leave this state
around for the next test. This is no good if we want to actually
test the devicemapper code in later tests.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
This has every container using the docker daemon's pid for the process
label, so it does not work correctly.
Docker-DCO-1.1-Signed-off-by: Michael Crosby <michael@crosbymichael.com> (github: crosbymichael)
This allows multiple instances of the backend in different containers
to access devices (although generally only one can modify/create them).
Any old metadata is converted on the first run.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
Instead of globally keeping track of the free device ids, we just
start from 0 each run, handle the EEXIST error, and try the next one.
This way we don't need any global state for the device ids, which
means we can read device metadata lazily. This is important for
multi-process use of the backend.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
This moves the EBUSY detection to devmapper.go, and then returns
a real ErrBusy that deviceset uses.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
We used to mount in Create() to be able to create a few files that
need to be in each device. However, this mount is problematic for
selinux, as we need to set the mount label at mount-time, and it
is not known at the time of Create().
This change just moves the file creation to the first Get() call and
drops the mount from Create(). Additionally, this lets us remove
some complexities we had in place to avoid an extra unmount+mount cycle.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)