In de2447c, the creation of the 'lower' file was changed from using
os.Create to using ioutils.AtomicWriteFile, which ignores the system's
umask. This means that even though the requested permission in the
source code was always 0666, prior to de2447c the file was actually
created as 0644 on systems with the default umask of 0022, so the move
to AtomicWriteFile potentially increased the file's permissions.
This is not a security issue because the parent directory does not
allow writes into the file, but it can confuse security scanners on
Linux-based systems into giving false positives.
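As a rough standalone illustration (not the actual ioutils code) of why the
two approaches can yield different modes: a plain create applies the process
umask to the requested 0666, while an explicit chmod, as atomic-write helpers
typically do, does not.
```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// os.Create asks for 0666, but the kernel applies the process umask,
	// so with the common 0022 umask the file ends up 0644.
	f, err := os.Create("lower-via-create")
	if err != nil {
		panic(err)
	}
	f.Close()

	// An atomic-write helper that chmods the file to the requested mode
	// after creating it bypasses the umask, leaving it 0666.
	if err := os.WriteFile("lower-explicit", []byte("data"), 0o666); err != nil {
		panic(err)
	}
	if err := os.Chmod("lower-explicit", 0o666); err != nil { // chmod is not subject to umask
		panic(err)
	}

	for _, name := range []string{"lower-via-create", "lower-explicit"} {
		fi, err := os.Stat(name)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %v\n", name, fi.Mode().Perm())
	}
}
```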
Signed-off-by: Jaroslav Jindrak <dzejrou@gmail.com>
Moby imports Swarmkit; Swarmkit no longer imports Moby. In order to
accomplish this feat, Swarmkit has introduced a new plugin.Getter
interface so it could stop importing our pkg/plugingetter package. This
new interface is not entirely compatible with our
plugingetter.PluginGetter interface, necessitating a thin adapter.
Swarmkit had to jettison the CNM network allocator to stop having to
import libnetwork as the cnmallocator package is deeply tied to
libnetwork. Move the CNM network allocator into libnetwork, where it
belongs. The package had a short and uninteresting Git history in the
Swarmkit repository, so no effort was made to retain it.
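For illustration, a thin adapter of this kind looks roughly like the sketch
below; the interfaces here are simplified, hypothetical stand-ins, not the
real plugingetter.PluginGetter or Swarmkit plugin.Getter signatures.
```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the two incompatible interfaces;
// the real plugingetter.PluginGetter and Swarmkit's plugin.Getter have more
// methods and different signatures.
type mobyGetter interface {
	Get(name, capability string, mode int) (string, error)
}

type swarmGetter interface {
	Get(name, capability string) (string, error)
}

// adapter lets a Moby-style getter satisfy the Swarmkit-style interface.
type adapter struct {
	inner mobyGetter
}

func (a adapter) Get(name, capability string) (string, error) {
	// Supply the argument the Swarmkit-style interface doesn't carry.
	return a.inner.Get(name, capability, 0)
}

type fakeMobyGetter struct{}

func (fakeMobyGetter) Get(name, capability string, mode int) (string, error) {
	return fmt.Sprintf("plugin %s (capability %s)", name, capability), nil
}

func main() {
	var g swarmGetter = adapter{inner: fakeMobyGetter{}}
	p, err := g.Get("my-driver", "networkdriver")
	if err != nil {
		panic(err)
	}
	fmt.Println(p)
}
```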
Signed-off-by: Cory Snider <csnider@mirantis.com>
This patch disables pulling legacy images (Docker Image Format v1 and
Docker Image manifest version 2, schema 1) by default.
A `DOCKER_ENABLE_DEPRECATED_PULL_SCHEMA_1_IMAGE` environment-variable is
introduced to allow re-enabling this feature, aligning with the environment
variable used in containerd 2.0 (`CONTAINERD_ENABLE_DEPRECATED_PULL_SCHEMA_1_IMAGE`).
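Conceptually, the gate amounts to something like the following sketch;
`allowSchema1Pull` is a hypothetical helper for illustration, not the
daemon's actual code path.
```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// allowSchema1Pull gates deprecated schema 1 pulls behind the env-var,
// mirroring containerd 2.0's CONTAINERD_ENABLE_DEPRECATED_PULL_SCHEMA_1_IMAGE.
// (Hypothetical helper for illustration, not the daemon's actual code.)
func allowSchema1Pull() error {
	if os.Getenv("DOCKER_ENABLE_DEPRECATED_PULL_SCHEMA_1_IMAGE") != "" {
		return nil // deprecated behavior explicitly re-enabled
	}
	return errors.New("[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default ...")
}

func main() {
	if err := allowSchema1Pull(); err != nil {
		fmt.Println(err)
	}
}
```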
With this patch, attempts to pull a legacy image produce an error:
With graphdrivers:
docker pull docker:1.0
1.0: Pulling from library/docker
[DEPRECATION NOTICE] Docker Image Format v1, and Docker Image manifest version 2, schema 1 support will be removed in an upcoming release. Suggest the author of docker.io/library/docker:1.0 to upgrade the image to the OCI Format, or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
With the containerd image store enabled, output is slightly different
as it returns the error before printing the `1.0: pulling ...`:
docker pull docker:1.0
Error response from daemon: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of docker.io/library/docker:1.0 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
Using the "distribution" endpoint to resolve the digest for an image also
produces an error:
curl -v --unix-socket /var/run/docker.sock http://foo/distribution/docker.io/library/docker:1.0/json
* Trying /var/run/docker.sock:0...
* Connected to foo (/var/run/docker.sock) port 80 (#0)
> GET /distribution/docker.io/library/docker:1.0/json HTTP/1.1
> Host: foo
> User-Agent: curl/7.88.1
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< Api-Version: 1.45
< Content-Type: application/json
< Docker-Experimental: false
< Ostype: linux
< Server: Docker/dev (linux)
< Date: Tue, 27 Feb 2024 16:09:42 GMT
< Content-Length: 354
<
{"message":"[DEPRECATION NOTICE] Docker Image Format v1, and Docker Image manifest version 2, schema 1 support will be removed in an upcoming release. Suggest the author of docker.io/library/docker:1.0 to upgrade the image to the OCI Format, or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"}
* Connection #0 to host foo left intact
Starting the daemon with the `DOCKER_ENABLE_DEPRECATED_PULL_SCHEMA_1_IMAGE`
env-var set to a non-empty value allows pulling the image:
docker pull docker:1.0
[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of docker.io/library/docker:1.0 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
b0a0e6710d13: Already exists
d193ad713811: Already exists
ba7268c3149b: Already exists
c862d82a67a2: Already exists
Digest: sha256:5e7081837926c7a40e58881bbebc52044a95a62a2ea52fb240db3fc539212fe5
Status: Image is up to date for docker:1.0
docker.io/library/docker:1.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
All other progress updates are emitted with a truncated ID.
```diff
$ docker pull --platform linux/amd64 alpine
Using default tag: latest
latest: Pulling from library/alpine
-sha256:4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8: Pulling fs layer
+4abcf2066143: Download complete
Digest: sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b
Status: Image is up to date for alpine:latest
docker.io/library/alpine:latest
```
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Keep the same behavior for older clients; don't change it for them.
Otherwise clients can't opt out (because `ReadOnlyNonRecursive` is
unsupported before API v1.44).
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
This will return a single entry for each name/value pair, and for now
all the "image specific" metadata (labels, config, size) should be
either "default platform" or "first platform we have locally" (which
then matches the logic for commands like `docker image inspect`, etc)
with everything else (just ID, maybe?) coming from the manifest
list/index.
That leaves room for the longer-term implementation to add new fields to
describe the _other_ images that are part of the manifest list/index.
Co-authored-by: Tianon Gravi <admwiggin@gmail.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
DNS names were only set up for user-defined networks. On Linux, none
of the built-in networks (bridge/host/none) have built-in DNS, so they
don't need DNS names.
But, on Windows, the default network is "nat" and it does need the DNS
names.
Signed-off-by: Rob Murray <rob.murray@docker.com>
Non-swarm networks created before network-creation-time validation
was added in 25.0.0 continued working, because the checks are not
re-run.
But, swarm creates networks when needed (with 'agent=true'), to
ensure they exist on each agent - ignoring the NetworkNameError
that says the network already existed.
By ignoring validation errors on creation of a network with
agent=true, pre-existing swarm networks with IPAM config that would
fail the new checks will continue to work too.
New swarm (overlay) networks are still validated, because they are
initially created with 'agent=false'.
Signed-off-by: Rob Murray <rob.murray@docker.com>
Replace regex matching/replacement and re-reading of generated files
with a simple parser, and a struct to remember and manipulate the file
content.
Annotate the generated file with a header comment saying the file is
generated, but can be modified, and a trailing comment describing how
the file was generated and listing external nameservers.
Always start with the host's resolv.conf file, whether generating config
for host networking, or with/without an internal resolver - rather than
editing a file previously generated for a different use-case.
Resolves an issue where rewrites of the generated file resulted in
default IPv6 nameservers being unnecessarily added to the config.
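A minimal sketch of the parser-plus-struct idea (illustrative only; the
daemon's actual resolv.conf code differs):
```go
package main

import (
	"fmt"
	"strings"
)

// resolvConf is a minimal sketch of a parsed resolv.conf: it remembers the
// directives it understands and can be manipulated and re-serialized,
// instead of regex-editing a previously generated file.
type resolvConf struct {
	nameservers []string
	search      []string
	options     []string
	other       []string // unrecognized lines, preserved verbatim
}

func parse(content string) resolvConf {
	var rc resolvConf
	for _, line := range strings.Split(content, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 0 || strings.HasPrefix(fields[0], "#") {
			continue
		}
		switch fields[0] {
		case "nameserver":
			rc.nameservers = append(rc.nameservers, fields[1:]...)
		case "search":
			rc.search = append(rc.search, fields[1:]...)
		case "options":
			rc.options = append(rc.options, fields[1:]...)
		default:
			rc.other = append(rc.other, line)
		}
	}
	return rc
}

// generate re-serializes the config with a header and trailing comment.
func (rc resolvConf) generate(header, trailer string) string {
	var b strings.Builder
	b.WriteString("# " + header + "\n")
	for _, ns := range rc.nameservers {
		b.WriteString("nameserver " + ns + "\n")
	}
	if len(rc.search) > 0 {
		b.WriteString("search " + strings.Join(rc.search, " ") + "\n")
	}
	if len(rc.options) > 0 {
		b.WriteString("options " + strings.Join(rc.options, " ") + "\n")
	}
	b.WriteString("# " + trailer + "\n")
	return b.String()
}

func main() {
	host := "nameserver 10.0.0.1\nsearch example.com\noptions ndots:1\n"
	rc := parse(host) // always start from the host's resolv.conf
	rc.nameservers = []string{"127.0.0.11"} // point at the internal resolver
	fmt.Print(rc.generate("Generated file (sketch); edits are preserved on regeneration",
		"Based on the host file; external nameservers: 10.0.0.1"))
}
```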
Signed-off-by: Rob Murray <rob.murray@docker.com>
This const contains the minimum API version that can be supported by the
API server. The daemon is currently configured to use the same version,
but we may increment the _configured_ minimum version when deprecating
old API versions in future.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Commit 08e4e88482 (Docker Engine v25.0.0)
deprecated API version v1.23 and lower, but older API versions could be
enabled through the DOCKER_MIN_API_VERSION environment variable.
This patch removes all support for API versions < v1.24.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
API v1.20 (Docker Engine v1.8.0) and older allowed a HostConfig to be passed
when starting a container. This feature was deprecated in API v1.21 (Docker
Engine v1.10.0) in 3e7405aea8, and removed in
API v1.23 (Docker Engine v1.12.0) in commit 0a8386c8be.
API v1.23 and older are deprecated, and this patch removes the feature.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This endpoint was deprecated in API v1.20 (Docker Engine v1.8.0) in
commit db9cc91a9e, in favor of the
`PUT /containers/{id}/archive` and `HEAD /containers/{id}/archive`
endpoints, and disabled in API v1.24 (Docker Engine v1.12.0) through
commit 428328908d.
This patch removes the endpoint, and the associated `daemon.ContainerCopy`
method in the backend.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
API v1.23 and older are deprecated, so we can remove the code to adjust
responses for API v1.20 and lower.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
API v1.23 and older are deprecated, so we can remove the code to adjust
responses for API v1.19 and lower.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
API versions before 1.19 allowed CpuShares that were greater than the maximum
or less than the minimum supported by the kernel, and relied on the kernel to
do the right thing.
Commit ed39fbeb2a introduced code to adjust the
CPU shares to be within the accepted range when using API version 1.18 or
lower.
API v1.23 and older are deprecated, so we can remove support for this
functionality.
Currently, there's no validation for CPU shares to be within an acceptable
range; a TODO was added to add validation for this option, and to use the
`linuxMinCPUShares` and `linuxMaxCPUShares` consts for this.
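A sketch of what that validation could look like; the `linuxMinCPUShares` and
`linuxMaxCPUShares` names come from the commit, but the `validateCPUShares`
function itself is hypothetical, since the commit only adds a TODO.
```go
package main

import "fmt"

// Const names per the commit; the values reflect the kernel's accepted
// range for cpu.shares.
const (
	linuxMinCPUShares = 2
	linuxMaxCPUShares = 262144
)

// validateCPUShares is a hypothetical validator sketching the TODO.
func validateCPUShares(shares int64) error {
	if shares == 0 {
		return nil // 0 means "unset"; the kernel default applies
	}
	if shares < linuxMinCPUShares || shares > linuxMaxCPUShares {
		return fmt.Errorf("invalid CPU shares (%d): must be between %d and %d",
			shares, linuxMinCPUShares, linuxMaxCPUShares)
	}
	return nil
}

func main() {
	fmt.Println(validateCPUShares(1024)) // <nil>
	fmt.Println(validateCPUShares(1))    // invalid CPU shares ...
}
```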
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Inspect and history used two different ways to find the present images.
This made history fail in some cases where image inspect would work (if
a configuration of a manifest wasn't found in the content store).
With this change we now use the same logic for both inspect and history.
Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
Do not set 'Config.MacAddress' in inspect output unless the MAC address
is configured.
Also, make sure it is filled in for a configured address on the default
network before the container is started (by translating the network name
from 'default' to 'bridge' so that the address lookup works).
Signed-off-by: Rob Murray <rob.murray@docker.com>
The API's EndpointConfig struct has a MacAddress field that's used for
both the configured address, and the current address (which may be generated).
A configured address must be restored when a container is restarted, but a
generated address must not.
The previous attempt to differentiate between the two, without adding a field
to the API's EndpointConfig that would show up in 'inspect' output, was a
field in the daemon's version of EndpointSettings, MACOperational. It did
not work: MACOperational was set to true even when a configured address was
used. So, while it ensured addresses were regenerated, it failed to preserve
a configured address.
So, this change removes that code, and adds DesiredMacAddress to the wrapped
version of EndpointSettings, where it is persisted but does not appear in
'inspect' results. Its value is copied from MacAddress (the API field) when
a container is created.
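A simplified sketch of the wrapping idea, with reduced, illustrative struct
shapes rather than the daemon's actual types:
```go
package main

import (
	"encoding/json"
	"fmt"
)

// EndpointSettings stands in for the API struct that shows up in `inspect`.
type EndpointSettings struct {
	MacAddress string // configured or generated address
}

// daemonEndpointSettings stands in for the daemon's wrapped version: extra
// fields are persisted but never exposed in `inspect` output.
type daemonEndpointSettings struct {
	*EndpointSettings
	DesiredMacAddress string // only a user-configured address; empty if generated
}

func main() {
	api := &EndpointSettings{MacAddress: "02:42:ac:11:00:02"}
	wrapped := daemonEndpointSettings{
		EndpointSettings:  api,
		DesiredMacAddress: api.MacAddress, // copied at container-create time
	}
	persisted, _ := json.Marshal(wrapped)
	inspect, _ := json.Marshal(api)
	fmt.Println("persisted:", string(persisted))
	fmt.Println("inspect:  ", string(inspect))
}
```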
Signed-off-by: Rob Murray <rob.murray@docker.com>
The bbolt library wants exclusive access to the boltdb file and uses
file locking to assure that is the case. The controller and each network
driver that needs persistent storage instantiates its own unique
datastore instance, backed by the same boltdb file. The boltdb kvstore
implementation works around multiple access to the same boltdb file by
aggressively closing the boltdb file between each transaction. This is
very inefficient. Have the controller pass its datastore instance into
the drivers and enable the PersistConnection option to disable closing
the boltdb between transactions.
Set data-dir in unit tests which instantiate libnetwork controllers so
they don't hang trying to lock the default boltdb database file.
Signed-off-by: Cory Snider <csnider@mirantis.com>
If a reader has caught up to the logger and is waiting for the next
message, it should stop waiting when the logger is closed. Otherwise
the reader will unnecessarily wait the full closedDrainTimeout for no
log messages to arrive.
This case was overlooked when the journald reader was recently
overhauled to be compatible with systemd 255, and the reader tests only
failed when a logical race happened to settle in such a way to exercise
the bugged code path. It was only after implicit flushing on close was
added to the journald test harness that the Follow tests would
repeatably fail due to this bug. (No new regression tests are needed.)
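Conceptually, the fix adds a "logger closed" case to the reader's wait, as in
this standalone sketch (names and channels are illustrative, not the journald
reader's actual code):
```go
package main

import (
	"fmt"
	"time"
)

// waitForNext sketches the fix: a caught-up reader should wake up as soon as
// the logger is closed, instead of sleeping out the full drain timeout
// waiting for messages that will never arrive.
func waitForNext(msgs <-chan string, closed <-chan struct{}, drainTimeout time.Duration) (string, bool) {
	select {
	case m := <-msgs:
		return m, true
	case <-closed:
		return "", false // logger closed: stop following immediately
	case <-time.After(drainTimeout):
		return "", false
	}
}

func main() {
	msgs := make(chan string)
	closed := make(chan struct{})
	go func() {
		time.Sleep(10 * time.Millisecond)
		close(closed) // logger is closed; no more messages are coming
	}()
	start := time.Now()
	_, ok := waitForNext(msgs, closed, 5*time.Second)
	fmt.Printf("got message: %v, waited %v (not the full 5s timeout)\n",
		ok, time.Since(start).Round(time.Millisecond))
}
```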
Signed-off-by: Cory Snider <csnider@mirantis.com>
The journald reader test harness injects an artificial asynchronous
delay into the logging pipeline: a logged message won't be written to
the journal until at least 150ms after the Log() call returns. If a test
returns while log messages are still in flight to be written, the logs
may attempt to be written after the TempDir has been cleaned up, leading
to spurious errors.
The logger read tests which interleave writing and reading have to
include explicit synchronization points to work reliably with this delay
in place. On the other hand, tests should not be required to sync the
logger explicitly before returning. Override the Close() method in the
test harness wrapper to wait for in-flight logs to be flushed to disk.
Signed-off-by: Cory Snider <csnider@mirantis.com>
- Check the return value when logging messages
- Log the stream (stdout/stderr) and list of messages that were not read
- Wait until the logger is closed before returning early (panic/fatal)
Signed-off-by: Cory Snider <csnider@mirantis.com>
Writing the systemd-journal-remote command output directly to os.Stdout
and os.Stderr makes it nearly impossible to tell which test case the
output is related to when the tests are not run in verbose mode. Extend
the journald sender fake to redirect output to the test log so they
interleave with the rest of the test output.
Signed-off-by: Cory Snider <csnider@mirantis.com>
The Go race detector was detecting a data race when running the
TestLogRead/Follow/Concurrent test against the journald logging driver.
The race was in the test harness, specifically syncLogger. The waitOn
field would be reassigned each time a log entry is sent to the journal,
which is not concurrency-safe. Make it concurrency-safe using the same
patterns that are used in the log follower implementation to synchronize
with the logger.
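A minimal sketch of making such a reassigned field safe for concurrent use;
this is only an illustration of the idea, while the real fix mirrors the
follower's own synchronization pattern:
```go
package main

import (
	"fmt"
	"sync"
)

// syncLogger guards the reassigned field with a mutex instead of racing on a
// plain assignment (illustrative stand-in for the test harness type).
type syncLogger struct {
	mu     sync.Mutex
	waitOn chan struct{}
}

func (l *syncLogger) setWait(ch chan struct{}) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.waitOn = ch
}

func (l *syncLogger) currentWait() chan struct{} {
	l.mu.Lock()
	defer l.mu.Unlock()
	return l.waitOn
}

func main() {
	l := &syncLogger{}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			l.setWait(make(chan struct{})) // concurrent reassignments no longer race
			_ = l.currentWait()
		}()
	}
	wg.Wait()
	fmt.Println("no data race under -race")
}
```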
Signed-off-by: Cory Snider <csnider@mirantis.com>
When saving an image, treat `image@sha256:abcdef...` the same as
`abcdef...`. This makes it:
- Not export the digested tag as the image name
- Not try to export all tags from the image repository
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
errDrainDone is a sentinel error which is never supposed to escape the
package. Consequently, it needs to be filtered out of returns all over
the place, adding boilerplate. Forgetting to filter out these errors
would be a logic bug which the compiler would not help us catch. Replace
it with boolean multi-valued returns as they can't be accidentally
ignored or propagated.
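The before/after pattern, sketched with hypothetical names:
```go
package main

import "fmt"

// Before: a sentinel error that must never escape the package forces
// boilerplate at every call site, e.g.
//
//	msg, err := drain()
//	if err == errDrainDone {
//		err = nil // forgetting this filter would be a silent logic bug
//	}
//
// After: a boolean multi-valued return can't be accidentally propagated.

func drain() (msg string, done bool) {
	// ... read the next entry; report completion explicitly ...
	return "", true
}

func main() {
	if msg, done := drain(); done {
		fmt.Println("drain finished")
	} else {
		fmt.Println("got:", msg)
	}
}
```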
Signed-off-by: Cory Snider <csnider@mirantis.com>
While it doesn't really matter if the reader waits for an extra
arbitrary period beyond an arbitrary hardcoded timeout, it's also
trivial and cheap to implement, and nice to have.
Signed-off-by: Cory Snider <csnider@mirantis.com>
The journald reader uses a timer to set an upper bound on how long to
wait for the final log message of a stopped container. However, the
timer channel is only received from in non-blocking select statements!
There isn't enough benefit of using a timer to offset the cost of having
to manage the timer resource. Setting a deadline and comparing the
current time is just as effective, without having to manage the
lifecycle of any runtime resources.
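A sketch of the deadline-based approach (illustrative, not the actual reader
loop):
```go
package main

import (
	"fmt"
	"time"
)

// drainUntil polls for the final message of a stopped container until a
// deadline passes, comparing against the clock instead of managing a
// *time.Timer whose channel was only ever checked non-blockingly.
func drainUntil(deadline time.Time, tryReadMessage func() bool) {
	for {
		if tryReadMessage() {
			return // got the final message
		}
		if time.Now().After(deadline) {
			return // give up waiting for the stopped container's last message
		}
		time.Sleep(10 * time.Millisecond)
	}
}

func main() {
	calls := 0
	drainUntil(time.Now().Add(50*time.Millisecond), func() bool {
		calls++
		return false
	})
	fmt.Printf("polled %d times before hitting the deadline\n", calls)
}
```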
Signed-off-by: Cory Snider <csnider@mirantis.com>
Synthesize a boot ID for journal entries fed into
systemd-journal-remote, as required by systemd 255.
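For reference, journal boot IDs are 128-bit values rendered as 32 hex digits;
a test fixture could synthesize one roughly as in this sketch (not the actual
harness code):
```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newBootID synthesizes a random 128-bit boot ID, formatted as 32 lowercase
// hex digits, suitable for a _BOOT_ID field on fabricated journal entries.
func newBootID() (string, error) {
	var b [16]byte
	if _, err := rand.Read(b[:]); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", b), nil
}

func main() {
	id, err := newBootID()
	if err != nil {
		panic(err)
	}
	fmt.Println("_BOOT_ID=" + id)
}
```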
Signed-off-by: Cory Snider <csnider@mirantis.com>
Following logs with a non-negative tail when the container log is empty
is broken on the journald driver when used with systemd 255. Add tests
which cover this edge case to our loggertest suite.
Signed-off-by: Cory Snider <csnider@mirantis.com>
Store an additional image property which makes it possible to distinguish
whether an image was built locally.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
This is a follow-up to 2cf230951f, adding
more directives to adjust for some new code added since:
Before this patch:
make -C ./internal/gocompat/
GO111MODULE=off go generate .
GO111MODULE=on go mod tidy
GO111MODULE=on go test -v
# github.com/docker/docker/internal/sliceutil
internal/sliceutil/sliceutil.go:3:12: type parameter requires go1.18 or later (-lang was set to go1.16; check go.mod)
internal/sliceutil/sliceutil.go:3:14: predeclared comparable requires go1.18 or later (-lang was set to go1.16; check go.mod)
internal/sliceutil/sliceutil.go:4:19: invalid map key type T (missing comparable constraint)
# github.com/docker/docker/libnetwork
libnetwork/endpoint.go:252:17: implicit function instantiation requires go1.18 or later (-lang was set to go1.16; check go.mod)
# github.com/docker/docker/daemon
daemon/container_operations.go:682:9: implicit function instantiation requires go1.18 or later (-lang was set to go1.16; check go.mod)
daemon/inspect.go:42:18: implicit function instantiation requires go1.18 or later (-lang was set to go1.16; check go.mod)
With this patch:
make -C ./internal/gocompat/
GO111MODULE=off go generate .
GO111MODULE=on go mod tidy
GO111MODULE=on go test -v
=== RUN TestModuleCompatibllity
main_test.go:321: all packages have the correct go version specified through //go:build
--- PASS: TestModuleCompatibllity (0.00s)
PASS
ok gocompat 0.031s
make: Leaving directory '/go/src/github.com/docker/docker/internal/gocompat'
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
We need to isolate the images that we are remapping to a userns; we
can't mix them with "normal" images. In the graph driver case this means
we create a new root directory where we store the images and everything
else; in the containerd case we can use a new namespace.
Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
To prevent a circular import between api/types and api/types/image,
the RequestPrivilegeFunc reference was not moved, but defined as
part of the PullOptions / PushOptions.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Commit 8b7af1d0f added some code to update the DNSNames of all
endpoints attached to a sandbox by loading a new instance of each
affected endpoint from the datastore through a call to
`Network.EndpointByID()`.
This method then calls `Network.getEndpointFromStore()`, that in
turn calls `store.GetObject()`, which then calls `cache.get()`,
which calls `o.CopyTo(kvObject)`. This effectively creates a fresh
new instance of an Endpoint. However, endpoints are already kept in
memory by Sandbox, meaning we now have two in-memory instances of
the same Endpoint.
As it turns out, libnetwork is built around the idea that no two objects
representing the same thing should live in-memory, otherwise breaking
mutex locking and optimistic locking (as both instances will have a drifting
version tracking ID -- dbIndex in libnetwork parlance).
In this specific case, this bug materializes as container rename failing
when applied a second time to a given container. An integration test is
added to make sure this won't happen again.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
Prior to 7a9b680a, the container short ID was added to the network
aliases only for custom networks. However, this logic wasn't preserved
in 6a2542d and now the cid is always added to the list of network
aliases.
This commit reintroduces the old logic.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
Commit 21e50b89c9 added a label on the buildkit
worker to advertise the host-gateway-ip. This option can be either set by the
user in the daemon config, or otherwise defaults to the gateway-ip.
If no value is set by the user, discovery of the gateway-ip happens when
initializing the network-controller (`NewDaemon`, `daemon.restore()`).
However d222bf097c changed how we handle the
daemon config. As a result, the `cli.Config` used when initializing the
builder only holds configuration information from the daemon config
(user-specified or defaults), but is not updated with information set
by `NewDaemon`.
This patch adds an accessor on the daemon to get the current daemon config.
An alternative could be to have `NewDaemon` return the config (which should
likely be a _copy_ of the config).
Before this patch:
docker buildx inspect default
Name: default
Driver: docker
Nodes:
Name: default
Endpoint: default
Status: running
Buildkit: v0.12.4+3b6880d2a00f
Platforms: linux/arm64, linux/amd64, linux/amd64/v2, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
Labels:
org.mobyproject.buildkit.worker.moby.host-gateway-ip: <nil>
After this patch:
docker buildx inspect default
Name: default
Driver: docker
Nodes:
Name: default
Endpoint: default
Status: running
Buildkit: v0.12.4+3b6880d2a00f
Platforms: linux/arm64, linux/amd64, linux/amd64/v2, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
Labels:
org.mobyproject.buildkit.worker.moby.host-gateway-ip: 172.18.0.1
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The `GetImageOpts` struct is used for options to be passed to the backend,
and is not used in client code. This struct is currently intended for internal
use only.
This patch moves the `GetImageOpts` struct to the backend package to prevent
it being imported in the client, and to make it more clear that this is part
of internal APIs, and not public-facing.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The MAC address of a running container was stored in the same place as
the configured address for a container.
When starting a stopped container, a generated address was treated as a
configured address. If that generated address (based on an IPAM-assigned
IP address) had been reused, the containers ended up with duplicate MAC
addresses.
So, remember whether the MAC address was explicitly configured, and
clear it if not.
Signed-off-by: Rob Murray <rob.murray@docker.com>
With containerd snapshotters enabled `docker run` currently fails when
creating a container from an image that doesn't have the default host
platform without an explicit `--platform` selection:
```
$ docker run asdf:amd64
Unable to find image 'asdf:amd64' locally
docker: Error response from daemon: pull access denied for asdf, repository does not exist or may require 'docker login'.
See 'docker run --help'.
```
This is confusing and the graphdriver behavior is much better here,
because it runs whatever platform the image has, but prints a warning:
```
$ docker run asdf:amd64
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
```
This commit changes the containerd snapshotter behavior to be the same
as the graphdriver's. This doesn't affect container creation when a platform
is specified explicitly.
```
$ docker run --rm --platform linux/arm64 asdf:amd64
Unable to find image 'asdf:amd64' locally
docker: Error response from daemon: pull access denied for asdf, repository does not exist or may require 'docker login'.
See 'docker run --help'.
```
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Since v25.0 (commit ff50388), we validate endpoint settings when
containers are created, instead of doing so when containers are started.
However, a container created prior to that release would still trigger a
validation error at start time. In such a case, the API returns a 500
status code because the Go error isn't wrapped into an InvalidParameter
error. This is now fixed.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
This matcher was only used internally in the containerd implementation of
the image store. Un-export it, and make it a local utility in that package
to prevent external use.
This package was introduced in 1616a09b61
(v24.0), and there are no known external consumers of this package, so there
should be no need to deprecate / alias the old location.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This flag was marked deprecated in commit 5a922dc16 (released in v24.0)
and was slated for removal in the next release.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
`VolumeOptions` now has a `Subpath` field which allows specifying a path
relative to the volume that should be mounted as a destination.
Symlinks are supported, but they cannot escape the base volume
directory.
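A sketch of the containment rule; the daemon's actual implementation may
resolve and verify the path differently:
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// safeSubpath resolves symlinks in the requested subpath and refuses
// anything that escapes the volume root (illustrative sketch).
func safeSubpath(volumeRoot, subpath string) (string, error) {
	resolved, err := filepath.EvalSymlinks(filepath.Join(volumeRoot, subpath))
	if err != nil {
		return "", err
	}
	root, err := filepath.EvalSymlinks(volumeRoot)
	if err != nil {
		return "", err
	}
	if resolved != root && !strings.HasPrefix(resolved, root+string(filepath.Separator)) {
		return "", fmt.Errorf("subpath %q escapes the volume root", subpath)
	}
	return resolved, nil
}

func main() {
	vol, _ := os.MkdirTemp("", "vol")
	defer os.RemoveAll(vol)
	os.MkdirAll(filepath.Join(vol, "data"), 0o755)
	os.Symlink("/etc", filepath.Join(vol, "escape")) // symlink pointing outside the volume

	fmt.Println(safeSubpath(vol, "data"))   // resolved path, no error
	fmt.Println(safeSubpath(vol, "escape")) // error: escapes the volume root
}
```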
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
We constructed a "function level" logger, which was used once "as-is", but
also added additional Fields in a loop (for each resource), effectively
overwriting the previous one for each iteration. Adding additional
fields can result in some overhead, so let's construct a "logger" only for
inside the loop.
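A hypothetical example of the pattern, using logrus directly for
illustration:
```go
package main

import "github.com/sirupsen/logrus"

func main() {
	// Function-level fields stay on a base entry.
	base := logrus.WithField("method", "(*Daemon).cleanupResources") // hypothetical field values

	resources := []string{"a", "b", "c"}
	for _, res := range resources {
		// Derive the per-resource logger inside the loop from the base,
		// instead of repeatedly adding fields to one shared entry and
		// overwriting its previous fields on every iteration.
		logger := base.WithField("resource", res)
		logger.Info("releasing resource")
	}
}
```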
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
We have many "image" packages, so these vars easily conflict/shadow
imports. Let's rename them (and in some cases use a const) to
prevent that.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
All commonly used filesystems should use the ref-counted mounter, so make it
the default instead of having to whitelist them.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Prior to this commit, a container running with `--net=host` had
`{"type":"network","path":"/var/run/docker/netns/default"}` in
the `.linux.namespaces` field of the OCI Runtime Config,
but this wasn't needed.
Close issue 47100
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
The actual divergence is due to differences in the snapshotter and
graphdriver mount behaviour on Windows, but the snapshotter behaviour is
better, so we deal with it here rather than changing the snapshotter
behaviour.
We're relying on the internals of containerd's Windows mount
implementation here. Unless this code flow is replaced, future work is
to move getBackingDeviceForContainerdMount into containerd's mount
implementation.
Signed-off-by: Paul "TBBle" Hampson <Paul.Hampson@Pobox.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
The existing API ImageService.GetLayerFolders didn't have access to the
ID of the container, and once we have that, the snapshotter Mounts API
provides all the information we need here.
Signed-off-by: Paul "TBBle" Hampson <Paul.Hampson@Pobox.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Needed for Diff on Windows. Don't remount it afterwards as the layer is
going to be released anyway.
Signed-off-by: Paul "TBBle" Hampson <Paul.Hampson@Pobox.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
This is consistent with layerStore's CreateRWLayer behaviour.
Potentially this can be refactored to avoid creating the -init layer,
but as noted in layerStore's initMount, this name may be special, and
should be cleared-out all-at-once.
Signed-off-by: Paul "TBBle" Hampson <Paul.Hampson@Pobox.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Consider only images that were built `FROM scratch` as valid candidates
for the `FROM scratch` + INSTRUCTION build step.
The images are marked as `FROM scratch`-based by the classic builder
with a special label. It must be a new label instead of an empty parent
label, because empty label values are not persisted.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
In order for the cache in the classic builder to work we need to:
- use the same comparison function as the graph driver implementation
- save the container config when committing the image
- use all images to search for a `FROM scratch` image
- load all images if `cacheFrom` is empty
Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>