When converting an opaque directory, always keep the original
directory tar entry to ensure the directory is created with the correct
permissions on restore.
Closes #27298
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
The `archive` package defines aliases for `io.ReadCloser` and
`io.Reader`. These don't seem to provide any benefit other than type
decoration. With this change, several unnecessary type casts were
removed.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
If we are running in a user namespace, don't try to mknod as
it won't be allowed. libcontainer will bind-mount the host's
devices over files in the container anyway, so it's not needed.
The chrootarchive package does a chroot (without mounting /proc) before
doing its work, so we cannot check /proc/self/uid_map at that point;
instead, compute it in advance and pass it along with the tar options.
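A minimal sketch of such an up-front check, reading /proc/self/uid_map before the chroot happens (the helper name and exact parsing here are illustrative, not necessarily what the patch uses):
```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// runningInUserNS reports whether the current process appears to be inside a
// user namespace: an identity first line ("0 0 4294967295") in
// /proc/self/uid_map means no remapping is in effect.
func runningInUserNS() bool {
	data, err := os.ReadFile("/proc/self/uid_map")
	if err != nil {
		// No uid_map (very old kernel): assume no user namespace.
		return false
	}
	fields := strings.Fields(strings.SplitN(string(data), "\n", 2)[0])
	if len(fields) != 3 {
		return true
	}
	return !(fields[0] == "0" && fields[1] == "0" && fields[2] == "4294967295")
}

func main() {
	fmt.Println("running in user namespace:", runningInUserNS())
}
```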
Signed-off-by: Serge Hallyn <serge.hallyn@ubuntu.com>
Currently, when overlay creates a whiteout file and the overlay2 layer is then archived,
the correct tar header is created for the whiteout file, but the tar logic then attempts to open the file, causing a failure.
When tar encounters such a failure, the file is skipped and excluded from the archive, causing the whiteout to be ignored.
By skipping the copy of empty files, no open attempt is made on whiteout files.
Fixes #23863
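A minimal sketch of the rule this change applies when writing tar entries (names are illustrative; the real logic lives in the archive package's tar writer):
```go
package archive

import (
	"archive/tar"
	"io"
	"os"
)

// addTarFile writes the header and copies the file body only when there is
// real content to copy; zero-size entries (such as overlay whiteout device
// files) are recorded by header alone, so no open(2) is ever attempted.
func addTarFile(tw *tar.Writer, path string, hdr *tar.Header) error {
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if hdr.Typeflag != tar.TypeReg || hdr.Size == 0 {
		return nil
	}
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(tw, f)
	return err
}
```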
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
This fix addresses logrus formatting by removing the `f` suffix from
`logrus.[Error|Warn|Debug|Fatal|Panic|Info]f` calls when no formatting
string is present.
This fixes #23459.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
There might be other (valid) reasons for setxattr(2) to fail, so only
ignore the error when it is a not-supported error (ENOTSUP). Otherwise, bail.
Signed-off-by: Aleksa Sarai <asarai@suse.de>
Since certain filesystems don't support extended attributes, ignore
the errors produced (emitting a warning) when attempting to apply extended
attributes to a file.
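A hedged sketch of the behaviour described by these two changes, using present-day import paths (the helper name is illustrative; the code of the time used `syscall` directly):
```go
package archive

import (
	"github.com/sirupsen/logrus"
	"golang.org/x/sys/unix"
)

// applyXattrs sets each extended attribute on path. A filesystem that does
// not support xattrs (ENOTSUP) only produces a warning; any other failure is
// returned to the caller.
func applyXattrs(path string, xattrs map[string]string) error {
	for key, value := range xattrs {
		if err := unix.Lsetxattr(path, key, []byte(value), 0); err != nil {
			if err == unix.ENOTSUP {
				logrus.Warnf("ignoring ENOTSUP on setxattr %q for %s", key, path)
				continue
			}
			return err
		}
	}
	return nil
}
```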
Signed-off-by: Aleksa Sarai <asarai@suse.de>
If the destination does not exist, it needs to be created with ownership
mapped to the remapped uid/gid ranges when user namespaces are enabled.
This fixes ADD operations, similar to the prior fixes for COPY and WORKDIR.
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
Closes #20470
Before this PR we used to scan the entire build context when there were
exclusions in the .dockerignore file (paths that started with `!`). Now we
only traverse into subdirectories when one of those entries starts with that
directory's path.
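A minimal sketch of that traversal rule (the helper name and matching are illustrative simplifications of the context-scanning code):
```go
package build

import "strings"

// shouldDescend reports whether a directory that is otherwise excluded must
// still be walked, because some exception pattern ("!pattern" entry in
// .dockerignore) starts with that directory's path and could re-include a
// file underneath it.
func shouldDescend(dir string, exceptions []string) bool {
	for _, pattern := range exceptions {
		if strings.HasPrefix(pattern, dir) {
			return true
		}
	}
	return false
}
```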
Signed-off-by: Doug Davis <dug@us.ibm.com>
During "COPY" or other tar unpack operations, a target/destination
parent dir might not exist and should be created with ownership of the
root in the right context (including remapped root when user namespaces
are enabled)
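A hedged sketch of the idea (the helper name is illustrative and the real implementation in idtools is more careful about races): create any missing parents, then chown only the directories that were actually created, so pre-existing ones keep their ownership.
```go
package idtools

import (
	"os"
	"path/filepath"
)

// mkdirAllAs creates path and any missing parents, chowning only the newly
// created directories to the (possibly remapped) root uid/gid.
func mkdirAllAs(path string, mode os.FileMode, uid, gid int) error {
	var created []string
	for p := filepath.Clean(path); p != "/" && p != "."; p = filepath.Dir(p) {
		if _, err := os.Stat(p); os.IsNotExist(err) {
			created = append(created, p)
		}
	}
	if err := os.MkdirAll(path, mode); err != nil {
		return err
	}
	for _, p := range created {
		if err := os.Chown(p, uid, gid); err != nil {
			return err
		}
	}
	return nil
}
```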
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
When executing `docker export -o path xxx`, where `path` is a directory
docker has no privilege to write to, the daemon prints lots of error logs,
most of them duplicated and redundant.
This removes the unnecessary error logs and prints the error only once.
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
Save was failing file-integrity checksums due to bugs in both
Windows and Docker. This commit includes the fixes to file time handling
in tarexport and system.chtimes that are necessary, along with
the Windows platform fixes, to correctly support save. With this
change, sysfile_backups for the windowsfilter driver are no longer
needed, so that code is removed.
Signed-off-by: Stefan J. Wernli <swernli@microsoft.com>
The aufs kernel module creates whiteout files on upper-layer deletes (and
in other situations), and those files are already 'translated' regarding
ownership in host terms (e.g. they are already owned by "0:0"), so when
these layers are copied around with pkg/archive we don't want to try to
translate their ownership again.
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
Moved a defer up to a better spot.
Fixed TestUntarPathWithInvalidDest to actually fail for the right reason.
Closes #18170
Signed-off-by: Doug Davis <dug@us.ibm.com>
Fixes #16555
Originally, docker `cp` always copied the symbolic link itself instead of
its target; now we provide the '-L' option to allow docker to follow the
symbolic link to the real target.
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
Fixes #17766
Previously, opaque directory whiteouts on non-native
graphdrivers depended on file order, meaning
files added in the same layer before the whiteout
file `.wh..wh..opq` were also removed.
If such a file happened to have subdirectories, then calling
chtimes on those directories after unpacking would fail the pull.
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
The race is between pools.Put, which calls buf.Reset, and exec.Cmd
doing io.Copy from the buffer; it caused a runtime crash, as
described in #16924:
``` docker-daemon cat the-tarball.xz | xz -d -c -q | docker-untar /path/to/... (aufs ) ```
When the docker-untar side fails (e.g. trying to set an xattr on aufs, or a broken
tar), invokeUnpack is responsible for exhausting all input; otherwise
`xz` will be blocked on a pending write forever.
This change adds a receive-only channel to cmdStream and closes it
to signal that it is now safe to close the input stream.
In CmdStream, the change to use Stdin / Stdout / Stderr keeps the
code simple: os/exec.Cmd will spawn the goroutines and call io.Copy automatically.
Since CmdStream is only called within this file, it is renamed to
lowercase to mark it as private.
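A sketch along the lines the commit describes (simplified; error wrapping and naming differ from the real code):
```go
package archive

import (
	"bytes"
	"fmt"
	"io"
	"os/exec"
)

// cmdStream runs cmd with its stdin wired to input and returns its stdout as
// a stream, plus a channel that is closed once the command has exited, i.e.
// once it is safe for the caller to close or drain the input.
func cmdStream(cmd *exec.Cmd, input io.Reader) (io.ReadCloser, <-chan struct{}, error) {
	cmd.Stdin = input
	pipeR, pipeW := io.Pipe()
	cmd.Stdout = pipeW
	var errBuf bytes.Buffer
	cmd.Stderr = &errBuf

	done := make(chan struct{})
	if err := cmd.Start(); err != nil {
		return nil, nil, err
	}
	go func() {
		// os/exec copies Stdin/Stdout/Stderr for us; we only wait and then
		// propagate success or failure through the pipe.
		if err := cmd.Wait(); err != nil {
			pipeW.CloseWithError(fmt.Errorf("%v: %s", err, errBuf.String()))
		} else {
			pipeW.Close()
		}
		close(done) // now safe to close the input stream
	}()
	return pipeR, done, nil
}
```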
[...]
INFO[0000] Docker daemon commit=0a8c2e3 execdriver=native-0.2 graphdriver=aufs version=1.8.2
DEBU[0006] Calling POST /build
INFO[0006] POST /v1.20/build?cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&memory=0&memswap=0&rm=1&t=gentoo-x32&ulimits=null
DEBU[0008] [BUILDER] Cache miss
DEBU[0009] Couldn't untar /home/lib-docker-v1.8.2-tmp/tmp/docker-build316710953/stage3-x32-20151004.tar.xz to /home/lib-docker-v1.8.2-tmp/aufs/mnt/d909abb87150463939c13e8a349b889a72d9b14f0cfcab42a8711979be285537: Untar re-exec error: exit status 1: output: operation not supported
DEBU[0009] CopyFileWithTar(/home/lib-docker-v1.8.2-tmp/tmp/docker-build316710953/stage3-x32-20151004.tar.xz, /home/lib-docker-v1.8.2-tmp/aufs/mnt/d909abb87150463939c13e8a349b889a72d9b14f0cfcab42a8711979be285537/)
panic: runtime error: slice bounds out of range
goroutine 42 [running]:
bufio.(*Reader).fill(0xc208187800)
/usr/local/go/src/bufio/bufio.go:86 +0x2db
bufio.(*Reader).WriteTo(0xc208187800, 0x7ff39602d150, 0xc2083f11a0, 0x508000, 0x0, 0x0)
/usr/local/go/src/bufio/bufio.go:449 +0x27e
io.Copy(0x7ff39602d150, 0xc2083f11a0, 0x7ff3960261f8, 0xc208187800, 0x0, 0x0, 0x0)
/usr/local/go/src/io/io.go:354 +0xb2
github.com/docker/docker/pkg/archive.func·006()
/go/src/github.com/docker/docker/pkg/archive/archive.go:817 +0x71
created by github.com/docker/docker/pkg/archive.CmdStream
/go/src/github.com/docker/docker/pkg/archive/archive.go:819 +0x1ec
goroutine 1 [chan receive]:
main.(*DaemonCli).CmdDaemon(0xc20809da30, 0xc20800a020, 0xd, 0xd, 0x0, 0x0)
/go/src/github.com/docker/docker/docker/daemon.go:289 +0x1781
reflect.callMethod(0xc208140090, 0xc20828fce0)
/usr/local/go/src/reflect/value.go:605 +0x179
reflect.methodValueCall(0xc20800a020, 0xd, 0xd, 0x1, 0xc208140090, 0x0, 0x0, 0xc208140090, 0x0, 0x45343f, ...)
/usr/local/go/src/reflect/asm_amd64.s:29 +0x36
github.com/docker/docker/cli.(*Cli).Run(0xc208129fb0, 0xc20800a010, 0xe, 0xe, 0x0, 0x0)
/go/src/github.com/docker/docker/cli/cli.go:89 +0x38e
main.main()
/go/src/github.com/docker/docker/docker/docker.go:69 +0x428
goroutine 5 [syscall]:
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:21 +0x1f
created by os/signal.init·1
/usr/local/go/src/os/signal/signal_unix.go:27 +0x35
Signed-off-by: Derek Ch <denc716@gmail.com>
All the go-lint work forced any existing "Uid" to become "UID", but it
seems the same rule was not applied to "Gid", so the stat package now has
calls UID() and Gid().
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
Adds support for the daemon to handle user namespace maps as a
per-daemon setting.
Support for handling uid/gid mapping is added to the builder, the
archive/unarchive packages and functions, and all graphdrivers (except
Windows); the test suite is updated to handle user namespace daemon
rootgraph changes.
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
This fixes the case where a directory is removed in
aufs and then the same layer is imported into a
different graphdriver.
Currently, when you do `rm -rf /foo && mkdir /foo`
in a layer in aufs, the files under `foo` would
only be hidden on aufs.
The problems with this fix:
1) When a new diff is recreated from a non-aufs driver,
the `opq` files would not be there. This should not
mean layer differences for the user, but it does mean
different content in the tar (one would have a single
`opq` file, the others would have `.wh.*` for every
file inside that folder). This difference also only
happens if the tar-split file isn't stored for the
layer.
2) New files whose names sort before `.wh..wh..opq`
do not get picked up by non-aufs graphdrivers.
Fixing this would require a bigger refactoring that
is planned for the future.
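For reference, a hedged sketch of aufs-style whiteout handling during unpack (a simplification, not the exact daemon code): an opaque marker clears whatever has already been unpacked into its directory, while a plain `.wh.<name>` marker deletes just that one path.
```go
package archive

import (
	"os"
	"path/filepath"
	"strings"
)

const (
	whiteoutPrefix = ".wh."
	whiteoutOpaque = ".wh..wh..opq"
)

// applyWhiteout handles a whiteout entry named name (relative to the layer
// root) against the already-unpacked tree under dest.
func applyWhiteout(dest, name string) error {
	dir, base := filepath.Split(name)
	if base == whiteoutOpaque {
		// Opaque directory: remove existing children, keep the directory.
		parent := filepath.Join(dest, dir)
		entries, err := os.ReadDir(parent)
		if err != nil {
			return err
		}
		for _, e := range entries {
			if err := os.RemoveAll(filepath.Join(parent, e.Name())); err != nil {
				return err
			}
		}
		return nil
	}
	if strings.HasPrefix(base, whiteoutPrefix) {
		return os.RemoveAll(filepath.Join(dest, dir, strings.TrimPrefix(base, whiteoutPrefix)))
	}
	return nil
}
```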
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
[pkg/archive] Update archive/copy path handling
- Remove unused TarOptions.Name field.
- Add new TarOptions.RebaseNames field.
- Update some of the logic around path dir/base splitting.
- Update some of the logic behind archive entry name rebasing.
[api/types] Add LinkTarget field to PathStat
[daemon] Fix stat, archive, extract of symlinks
These operations *should* resolve symlinks that are in the path but if the
resource itself is a symlink then it *should not* be resolved. This patch
puts this logic into a common function `resolvePath` which resolves symlinks
of the path's dir in scope of the container rootfs but does not resolve the
final element of the path. Now archive, extract, and stat operations will
return symlinks if the path is indeed a symlink.
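A hedged sketch of that rule (the real resolvePath scopes resolution to the container's rootfs; this simplified version uses plain filepath.EvalSymlinks): resolve symlinks in the directory portion of the path, but never in the final element.
```go
package daemon

import "path/filepath"

// resolvePath follows symlinks in the parent directory of path but leaves the
// last path element untouched, so a trailing symlink can itself be stat'd,
// archived, or extracted onto.
func resolvePath(path string) (string, error) {
	dir, base := filepath.Split(filepath.Clean(path))
	if dir == "" {
		dir = "."
	}
	resolvedDir, err := filepath.EvalSymlinks(dir)
	if err != nil {
		return "", err
	}
	return filepath.Join(resolvedDir, base), nil
}
```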
[api/client] Update cp path handling
[docs/reference/api] Update description of stat
Add the linkTarget field to the header of the archive endpoint.
Remove path field.
[integration-cli] Fix/Add cp symlink test cases
Copying a symlink should do just that: copy the symlink, NOT
the target of the symlink. Also, the resulting file from
the copy should have the name of the symlink, NOT the name of
the target file.
Copying to a symlink should copy to the symlink target and not
modify the symlink itself.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
TL;DR: checking for IsExist(err) after a failed MkdirAll() is both
redundant and wrong -- so two reasons to remove it.
Quoting MkdirAll documentation:
> MkdirAll creates a directory named path, along with any necessary
> parents, and returns nil, or else returns an error. If path
> is already a directory, MkdirAll does nothing and returns nil.
This means two things:
1. If a directory to be created already exists, no error is returned.
2. If the error returned is IsExist (EEXIST), it means there exists
a non-directory with the same name as the one MkdirAll needs to use
for a directory. Example: we want to MkdirAll("a/b"), but file "a"
(or "a/b") already exists, so MkdirAll fails.
The above is a theory, based on quoted documentation and my UNIX
knowledge.
3. In practice, though, the current MkdirAll implementation [1] returns
ENOTDIR in most of the cases described in #2, except when there
is a race between MkdirAll and someone else creating the
last component of the MkdirAll argument as a file. In this very case
MkdirAll() will indeed return EEXIST.
Because of #1, the IsExist check after MkdirAll is not needed.
Because of #2 and #3, ignoring an IsExist error is just plain wrong,
as the directory we require has not been created. It's cleaner to report
the error right away.
Note this pattern is all over the tree, I guess due to copy-paste,
or trying to follow the same usage pattern as for Mkdir(),
or some not-quite-correct examples on the Internet.
[v2: a separate aufs commit is merged into this one]
[1] https://github.com/golang/go/blob/f9ed2f75/src/os/path.go
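A minimal illustration of the pattern being removed and its replacement:
```go
package example

import "os"

// The pattern being removed looked like:
//
//	if err := os.MkdirAll(dir, 0755); err != nil && !os.IsExist(err) {
//		return err
//	}
//
// MkdirAll already returns nil when the directory exists, so the IsExist
// check is redundant; and when MkdirAll does return an EEXIST-style error, a
// non-directory is blocking the path, so swallowing the error hides a real
// failure. The corrected form simply reports whatever MkdirAll returns:
func ensureDir(dir string) error {
	return os.MkdirAll(dir, 0755)
}
```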
Signed-off-by: Kir Kolyshkin <kir@openvz.org>
In `ApplyLayer` and `Untar`, the stream is magically decompressed. Since
this cannot be toggled, rather than break this ./pkg/ API, add
`ApplyUncompressedLayer` and `UntarUncompressed`, which do not
magically decompress the layer stream.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Adds TarResource and CopyTo functions to be used for creating
archives for use with the new `docker cp` behavior.
Adds multiple test cases for the CopyFrom and CopyTo
functions in the pkg/archive package.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
The "TestChangesWithChanges" case randomlly fails on my development
VM with the following errors:
```
--- FAIL: TestChangesWithChanges (0.00s)
changes_test.go:201: no change for expected change C /dir1/subfolder != A /dir1/subfolder/newFile
```
If I apply the following patch to changes_test.go, the test passes.
```diff
diff --git a/pkg/archive/changes_test.go b/pkg/archive/changes_test.go
index 290b2dd..ba1aca0 100644
--- a/pkg/archive/changes_test.go
+++ b/pkg/archive/changes_test.go
@@ -156,6 +156,7 @@ func TestChangesWithChanges(t *testing.T) {
}
defer os.RemoveAll(layer)
createSampleDir(t, layer)
+ time.Sleep(5 * time.Millisecond)
os.MkdirAll(path.Join(layer, "dir1/subfolder"), 0740)
// Let's modify modtime for dir1 to be sure it's the same for the two layer (to not having false positive)
```
It seems that if a file is created immediately after the directory is created,
the `archive.Changes` function cannot recognize that the parent directory of
the new file has been modified.
Perhaps the problem only reproduces on machines with low time precision?
I had successfully reproduced the failure on my development VM as well as
a VM on DigitalOcean.
Signed-off-by: Shijiang Wei <mountkin@gmail.com>
This makes the "Buffering to disk" part of `docker push` 70% faster in
my use-case (having already applied #12833).
fsync'ing here serves no valuable purpose: if the drive's operation is
interrupted, so is the program's, and this archive has no value other
than the immediate and transient one.
Signed-off-by: Burke Libbey <burke.libbey@shopify.com>
If we tear through a few layers of abstraction, we can get at the inodes
contained in a directory without having to stat all the files. This
allows us to eliminate identical files much earlier in the changelist
generation process.
Signed-off-by: Burke Libbey <burke@libbey.me>
Add tests on:
- changes.go
- archive.go
- wrap.go
Should fix #11603 as the coverage is now 81.2% on the `pkg/archive`
package. There is still room for improvement, though :).
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
On overlay fs, the mtime of directories changes in a container where new
files are added in an upper layer (e.g. '/etc'). This flags the
directory as a change where there was none.
Closes #9874
Signed-off-by: Vincent Batts <vbatts@redhat.com>
This change modifies the chmod bits of build context archives built on
Windows to preserve the execute bit and remove the r/w bits from
grp/others.
Also adjusted the integ-cli tests to verify permissions based on the
platform the tests are running on.
Fixes #11047.
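A hedged sketch of such a normalization (the function name is hypothetical; the real helper lives in pkg/archive's Windows-specific code):
```go
package archive

import "os"

// normalizeWindowsPerm keeps the execute bits but drops read/write for group
// and others, since NTFS ACLs don't map meaningfully onto those Unix bits.
func normalizeWindowsPerm(perm os.FileMode) os.FileMode {
	return perm &^ 0066
}
```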
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
Currently TestBuildRenamedDockerfile fails since passing
custom dockerfile paths like:
docker build -f dir/file .
fails on Windows because those are Unix paths. Instead, on
Windows accept Windows-style paths like:
docker build -f dir\file .
and convert them to Unix-style paths using the helper we
have in `pkg/archive` so that the daemon can correctly locate
the path in the context.
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
Currently pkg/archive stores nested Windows files with
backslashes (e.g. `dir\`, `dir\file.txt`), and this causes the
tar to not be extracted correctly on a Linux daemon.
This change ensures we canonicalize all paths to Unix
paths and add them to the tar under that name, independent of
platform (see the sketch after the test list below).
Fixes the following test cases for Windows CI:
- TestBuildAddFileWithWhitespace
- TestBuildCopyFileWithWhitespace
- TestBuildAddDirContentToRoot
- TestBuildAddDirContentToExistingDir
- TestBuildCopyDirContentToRoot
- TestBuildCopyDirContentToExistDir
- TestBuildDockerignore
- TestBuildEnvUsage
- TestBuildEnvUsage2
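A hedged sketch of the canonicalization described above (the helper name is illustrative):
```go
package archive

import (
	"path/filepath"
	"strings"
)

// canonicalTarName makes tar entries platform-independent: names always use
// forward slashes, and directories keep a trailing slash, regardless of the
// platform the build context was created on.
func canonicalTarName(name string, isDir bool) string {
	name = filepath.ToSlash(name)
	if isDir && !strings.HasSuffix(name, "/") {
		name += "/"
	}
	return name
}
```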
Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
Sort the changes found and exported.
Sorting the files before appending them to the tar archive
gives a dependable ordering for types like hardlinks.
Also, combine the sort logic used.
Signed-off-by: Vincent Batts <vbatts@redhat.com>
If .dockerignore mentions either of these files (the Dockerfile or the
.dockerignore file itself), the client will still send them to the daemon,
but the daemon will erase them after the Dockerfile has been parsed,
to simulate them never having been sent in the first place.
An events test kept failing for me, so I tried to fix that too.
Closes #8330
Signed-off-by: Doug Davis <dug@us.ibm.com>
To avoid an expensive call to archive.ChangesDirs() which walks two directory
trees and compares every entry, archive.ApplyLayer() has been extended to
also return the size of the layer changes.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
With 32ba6ab from #9261, TempArchive now closes the underlying file and
cleans it up as soon as the file's contents have been read. When pushing
an image, PushImageLayerRegistry attempts to call Close() on the layer,
which is a TempArchive that has already been closed. In this situation,
Close() returns an "invalid argument" error.
Add a Close method to TempArchive that does a no-op if the underlying
file has already been closed.
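A hedged sketch of an idempotent Close (the field layout here is illustrative, not the package's exact type):
```go
package archive

import "os"

// TempArchive wraps a temporary file whose Close may be called more than
// once: the first call closes and removes the file, later calls are no-ops.
type TempArchive struct {
	*os.File
	closed bool
}

func (a *TempArchive) Close() error {
	if a.closed {
		return nil
	}
	a.closed = true
	name := a.File.Name()
	err := a.File.Close()
	os.Remove(name)
	return err
}
```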
Signed-off-by: Andy Goldstein <agoldste@redhat.com>
Signed-off-by: Tibor Vass <teabee89@gmail.com>
Conflicts:
pkg/archive/archive.go
fixed a conflict which git couldn't resolve because of the added BreakoutError
Conflicts:
pkg/archive/archive_test.go
fixed a conflict in imports
This fixes the removal of TempArchives which can be read with only one
read. Such archives weren't getting removed because EOF wasn't being
triggered.
Docker-DCO-1.1-Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com> (github: unclejack)
pkg/archive contains code invoked both from the cli (cross-platform) and
the daemon (Linux only), and Unix-specific dependencies break compilation on
Windows. We extracted those stat-related funcs into platform-specific
implementations in pkg/system and added unit tests.
Signed-off-by: Ahmet Alp Balkan <ahmetb@microsoft.com>
Some parts of pkg/archive are called from both client and daemon code. To get
them compiling on Windows, these funcs are extracted into files with
build tags.
Signed-off-by: Ahmet Alp Balkan <ahmetb@microsoft.com>
By default this is a demo of file differences, but it can be used to create a
tar of the changes between an old and a new path.
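A hedged usage sketch of the demo mode, assuming the `ChangesDirs`/`Change` API and an import path matching wherever the archive package lives at this point in history:
```go
package main

import (
	"fmt"
	"log"
	"os"

	// Import path is an assumption; adjust to the package's actual location.
	"github.com/docker/docker/pkg/archive"
)

func main() {
	if len(os.Args) != 3 {
		log.Fatalf("usage: %s <new-path> <old-path>", os.Args[0])
	}
	// ChangesDirs compares the two trees; each Change renders like
	// "C /etc", "A /newfile" or "D /removed".
	changes, err := archive.ChangesDirs(os.Args[1], os.Args[2])
	if err != nil {
		log.Fatal(err)
	}
	for _, change := range changes {
		fmt.Println(change.String())
	}
}
```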
Signed-off-by: Vincent Batts <vbatts@redhat.com>
Signed-off-by: Vincent Batts <vbatts@hashbangbash.com>
In an effort to make layer content 'stable' between import
and export from two different graph drivers, we must resolve
an issue where AUFS produces metadata files in its layers
which other drivers explicitly ignore when importing.
The issue presents itself like this:
- Generate a layer using AUFS
- On commit of that container, the new stored layer contains
AUFS metadata files/dirs. The stored layer content has some
tarsum value: '1234567'
- `docker save` that image to a USB drive and `docker load`
into another docker engine instance which uses another
graph driver, say 'btrfs'
- On load, this graph driver explicitly ignores any AUFS metadata
that it encounters. The stored layer content now has some
different tarsum value: 'abcdefg'.
The only (apparently) useful aufs metadata to keep are the pseudo-link
files located at `/.wh..wh.plink/`. These files hold information at the
RW layer about hard-linked files between this layer and another layer.
The other graph drivers make sure to copy up these pseudo-linked files,
but I've tested out a few different situations and it seems that this
is unnecessary (in my test, AUFS already copies up the other hard-linked
files to the RW layer).
This changeset adds explicit exclusion of the AUFS metadata files and
directories (NOTE: not the whiteout files!) on commit of a container
using the AUFS storage driver.
Also included is a change to the archive package. It now explicitly
excludes the root directory from the resulting tar archive, for two
reasons: 1) it's unnecessary; 2) it's another difference from what
other graph drivers produce when exporting a layer to a tar archive.
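A hedged sketch of the exclusion check (the helper name is illustrative): AUFS bookkeeping entries carry a `.wh..wh.` prefix, while ordinary whiteout markers are `.wh.<name>` and must be kept.
```go
package aufs

import (
	"path/filepath"
	"strings"
)

// isAufsMetadata reports whether a tar entry path is AUFS bookkeeping
// (".wh..wh.aufs", the orphan and pseudo-link directories, etc.) rather than
// a regular whiteout marker (".wh.<name>"), which must be preserved.
func isAufsMetadata(name string) bool {
	for _, elem := range strings.Split(filepath.ToSlash(name), "/") {
		if strings.HasPrefix(elem, ".wh..wh.") {
			return true
		}
	}
	return false
}
```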
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
Fixes #1992
Right now when you `docker cp` a path which is in a volume, the cp
itself works; however, you end up getting the files that are in the
container's fs rather than the files in the volume (which is not in the
container's fs).
This makes it so when you `docker cp` a path that is in a volume it
follows the volume to the real path on the host.
archive.go has been modified so that when you do `docker cp mydata:/foo
.`, and /foo is the volume, the output folder is called "foo" instead
of the volume ID (because we are telling it to tar up
`/var/lib/docker/vfs/dir/<some id>` and not "foo", but the user would be
expecting "foo", not the ID).
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Now that the archive package does not depend on any docker-specific
packages, only those in pkg and vendor, it can be safely moved into pkg.
Signed-off-by: Rafe Colton <rafael.colton@gmail.com>