Compare commits


211 commits
master ... 25.0

Author SHA1 Message Date
Paweł Gronowski
aee8b332bf
Merge pull request #47681 from vvoland/v25.0-47423
[25.0 backport] ci: Require changelog description
2024-04-09 13:54:14 +02:00
Sebastiaan van Stijn
12c4e03288
Merge pull request #47697 from vvoland/v25.0-47658
[25.0 backport] Fix cases where we are wrapping a nil error
2024-04-09 13:30:27 +02:00
Brian Goff
e2e670299f
Fix cases where we are wrapping a nil error
This was using `errors.Wrap` when there was no error to wrap; we are
supposed to be creating a new error instead.

Found this while investigating some log corruption issues and
unexpectedly getting a nil reader and a nil error from `getTailReader`.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 0a48d26fbc)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-04-09 10:20:24 +02:00
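
For illustration, a minimal Go sketch (not taken from the commit; the message text is illustrative) of the pitfall being fixed: `errors.Wrap` from `github.com/pkg/errors` returns nil when the wrapped error is nil, so a new error must be created instead.

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

func main() {
	var err error // nil, e.g. getTailReader returned no error

	// errors.Wrap returns nil when err is nil, so the caller sees no error.
	wrapped := errors.Wrap(err, "could not find start of logs")
	fmt.Println(wrapped == nil) // true

	// Creating a new error is what was intended here.
	created := errors.New("could not find start of logs")
	fmt.Println(created == nil) // false
}
```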
Sebastiaan van Stijn
f42f65b464
Merge pull request #47695 from cpuguy83/25_oci_tar_no_platform
[25.0] save: Remove platform from config descriptor
2024-04-09 09:46:07 +02:00
Brian Goff
935787c19c save: Remove platform from config descriptor
This was brought up by bmitch: it's not expected to have a platform
object in the config descriptor.
Also checked with tianon, who agreed: it's not _wrong_, but it is unexpected
and doesn't necessarily make sense to have it there.

Also, ECR throws an error when it sees this, even though doing so is
technically incorrect.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 9160b9fda6)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2024-04-08 17:12:31 +00:00
Paweł Gronowski
bd19301d9e
ci: Require changelog description
Any PR that is labeled with any `impact/*` label should have a
description for the changelog and an `area/*` label.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 1d473549e8)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-04-05 12:08:32 +02:00
Sebastiaan van Stijn
79c31c12fc
Merge pull request #47672 from vvoland/v25.0-47670
[25.0 backport] update to go1.21.9
2024-04-04 14:30:45 +02:00
Paweł Gronowski
50bd133ad3
update to go1.21.9
go1.21.9 (released 2024-04-03) includes a security fix to the net/http
package, as well as bug fixes to the linker, and the go/types and
net/http packages. See the [Go 1.21.9 milestone](https://github.com/golang/go/issues?q=milestone%3AGo1.21.9+label%3ACherryPickApproved)
for more details.

These minor releases include 1 security fix following the security policy:

- http2: close connections when receiving too many headers

Maintaining HPACK state requires that we parse and process all HEADERS
and CONTINUATION frames on a connection. When a request's headers exceed
MaxHeaderBytes, we don't allocate memory to store the excess headers but
we do parse them. This permits an attacker to cause an HTTP/2 endpoint
to read arbitrary amounts of header data, all associated with a request
which is going to be rejected. These headers can include Huffman-encoded
data which is significantly more expensive for the receiver to decode
than for an attacker to send.

Set a limit on the amount of excess header frames we will process before
closing a connection.

Thanks to Bartek Nowotarski (https://nowotarski.info/) for reporting this issue.

This is CVE-2023-45288 and Go issue https://go.dev/issue/65051.

View the release notes for more information:
https://go.dev/doc/devel/release#go1.22.2

- https://github.com/golang/go/issues?q=milestone%3AGo1.21.9+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.21.8...go1.21.9

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 329d403e20)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-04-04 10:17:28 +02:00
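
The `MaxHeaderBytes` limit mentioned above is a setting on Go's `net/http` server; a minimal, illustrative sketch of configuring it (not moby code, and the values are arbitrary):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	srv := &http.Server{
		Addr: ":8080",
		// Requests whose headers exceed this limit are rejected; the
		// go1.21.9 fix additionally bounds how much excess header data
		// is parsed before the connection is closed.
		MaxHeaderBytes: 1 << 20, // 1 MiB
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			_, _ = w.Write([]byte("ok"))
		}),
	}
	log.Fatal(srv.ListenAndServe())
}
```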
Paweł Gronowski
e63daec867
Merge pull request #47589 from vvoland/v25.0-47538
[25.0 backport] libnet: Don't forward to upstream resolvers on internal nw
2024-03-19 15:12:29 +01:00
Paweł Gronowski
817bccb1c6
Merge pull request #47588 from vvoland/v25.0-47558
[25.0 backport] plugin: fix mounting /etc/hosts when running in UserNS
2024-03-19 15:12:15 +01:00
Bjorn Neergaard
2a0601e84e
Merge pull request #47587 from vvoland/v25.0-47559
[25.0 backport] rootless: fix `open /etc/docker/plugins: permission denied`
2024-03-19 07:24:36 -06:00
Sebastiaan van Stijn
9df9ccc06f
Merge pull request #47586 from vvoland/v25.0-47569
[25.0 backport] Makefile: generate-files: fix check for empty TMP_OUT
2024-03-19 12:45:46 +01:00
Albin Kerouanton
a987bc5ad0 libnet: Don't forward to upstream resolvers on internal nw
Commit cbc2a71c2 makes the `connect` syscall fail fast when a container is
only attached to an internal network. Thanks to that, if such a
container tries to resolve an "external" domain, the embedded resolver
returns an error immediately instead of waiting for a timeout.

This commit makes sure the embedded resolver doesn't even try to forward
to upstream servers.

Co-authored-by: Albin Kerouanton <albinker@gmail.com>
Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 790c3039d0)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-19 11:21:46 +00:00
Rob Murray
20c205fd3a Environment variable to override resolv.conf path.
If env var DOCKER_TEST_RESOLV_CONF_PATH is set, treat it as an override
for the 'resolv.conf' path.

Added as part of resolv.conf refactoring, but needed by back-ported test
TestInternalNetworkDNS.

Signed-off-by: Rob Murray <rob.murray@docker.com>
2024-03-19 11:21:46 +00:00
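
A hedged sketch of the override described above; the environment variable name comes from the commit, while the function and default path shown here are illustrative:

```go
package main

import (
	"fmt"
	"os"
)

const defaultResolvConfPath = "/etc/resolv.conf"

// resolvConfPath returns the resolv.conf location, honouring the test-only
// DOCKER_TEST_RESOLV_CONF_PATH override described in the commit above.
func resolvConfPath() string {
	if p := os.Getenv("DOCKER_TEST_RESOLV_CONF_PATH"); p != "" {
		return p
	}
	return defaultResolvConfPath
}

func main() {
	fmt.Println(resolvConfPath())
}
```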
Sebastiaan van Stijn
4be97233cc
daemon: move getUnprivilegedMountFlags to internal package
This code is currently only used in the daemon, but is also needed in other
places. We should consider moving this code to github.com/moby/sys, so that
BuildKit can also use the same implementation instead of maintaining a fork;
moving it to internal allows us to reuse this code inside the repository, but
does not allow external consumers to depend on it (which we don't want as
it's not a permanent location).

As our code only uses this in linux files, I did not add a stub for other
platforms (but we may decide to do that in the moby/sys repository).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7b414f5703)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-19 10:46:49 +01:00
Akihiro Suda
7ed7e6caf6
plugin: fix mounting /etc/hosts when running in UserNS
Fix `error mounting "/etc/hosts" to rootfs at "/etc/hosts": mount
/etc/hosts:/etc/hosts (via /proc/self/fd/6), flags: 0x5021: operation
not permitted`.

This error was introduced in 7d08d84b03
(`dockerd-rootless.sh: set rootlesskit --state-dir=DIR`), which changed
the filesystem of the state dir from /tmp to /run (in a typical setup).

Fix issue 47248

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 762ec4b60c)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-19 10:46:48 +01:00
Akihiro Suda
81ad7062f0
rootless: fix open /etc/docker/plugins: permission denied
Fix issue 47436

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit d742659877)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-19 10:12:38 +01:00
Sebastiaan van Stijn
02d4ee3f9a
Makefile: generate-files: fix check for empty TMP_OUT
commit c655b7dc78 added a check to make sure
the TMP_OUT variable was not set to an empty value, as such a situation would
perform an `rm -rf /**` during cleanup.

However, it was a bit too eager, because Makefile conditionals (`ifeq`) are
evaluated when parsing the Makefile, which happens _before_ the make target
is executed.

As a result, `$@_TMP_OUT` was always empty when the `ifeq` was evaluated,
making it impossible to execute the `generate-files` target.

This patch changes the check to use a shell command to evaluate if the var
is set to an empty value.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 25c9e6e8df)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-19 10:11:47 +01:00
Sebastiaan van Stijn
5901652edd
Merge pull request #47533 from vvoland/v25.0-47530
[25.0 backport] volume: Don't decrement refcount below 0
2024-03-08 14:00:37 +01:00
Paweł Gronowski
478f6b097d
volume: Don't decrement refcount below 0
With both rootless and live restore enabled, there's some race condition
which causes the container to be `Unmount`ed before the refcount is
restored.

This makes sure we don't underflow the refcount (uint64) when
decrementing it.

The root cause of this race condition still needs to be investigated and
fixed, but at least this de-flakes `TestLiveRestore`.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 294fc9762e)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-08 12:51:45 +01:00
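
A minimal sketch of the underflow guard described above (the types and names are illustrative, not the actual volume code):

```go
package main

import (
	"fmt"
	"sync"
)

// volumeRef is an illustrative stand-in for a volume's reference counter.
type volumeRef struct {
	mu    sync.Mutex
	count uint64
}

func (v *volumeRef) decrement() {
	v.mu.Lock()
	defer v.mu.Unlock()
	if v.count == 0 {
		// Decrementing a uint64 that is already 0 would wrap around to
		// math.MaxUint64, so guard against the underflow instead.
		return
	}
	v.count--
}

func main() {
	v := &volumeRef{}
	v.decrement() // already 0: no underflow
	fmt.Println(v.count)
}
```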
Bjorn Neergaard
98b171fd4d
Merge pull request #47527 from vvoland/v25.0-47523
[25.0 backport] builder-next: fix missing lock in ensurelayer
2024-03-07 07:08:59 -07:00
Tonis Tiigi
d250e13945
builder-next: fix missing lock in ensurelayer
When this was called concurrently from the moby image
exporter there could be a data race where a layer was
written to the refs map when it was already there.

In that case the reference count got mixed up and on
release only one of these layers was actually released.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 37545cc644)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-07 12:18:07 +01:00
Paweł Gronowski
061aa95809
Merge pull request #47513 from vvoland/v25.0-47498
[25.0 backport] daemon: overlay2: remove world writable permission from the lower file
2024-03-06 14:58:50 +01:00
Jaroslav Jindrak
d0d85f6438
daemon: overlay2: remove world writable permission from the lower file
In de2447c, the creation of the 'lower' file was changed from using
os.Create to using ioutils.AtomicWriteFile, which ignores the system's
umask. This means that even though the requested permission in the
source code was always 0666, it was 0644 on systems with default
umask of 0022 prior to de2447c, so the move to AtomicFile potentially
increased the file's permissions.

This is not a security issue because the parent directory does not
allow writes into the file, but it can confuse security scanners on
Linux-based systems into giving false positives.

Signed-off-by: Jaroslav Jindrak <dzejrou@gmail.com>
(cherry picked from commit cadb124ab6)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-06 13:11:41 +01:00
Paweł Gronowski
5d6679345c
Merge pull request #47508 from vvoland/v25.0-47504
[25.0 backport] update RootlessKit to 2.0.2
2024-03-06 12:55:15 +01:00
Paweł Gronowski
ef1fa235cd
Merge pull request #47510 from akerouanton/25.0-47441_mac_addr_config_migration
[25.0 backport] Don't create endpoint config for MAC addr config migration
2024-03-06 12:15:50 +01:00
Rob Murray
0451b287dc Don't create endpoint config for MAC addr config migration
In a container-create API request, HostConfig.NetworkMode (the identity
of the "main" network) may be a name, id or short-id.

The configuration for that network, including preferred IP address etc,
may be keyed on network name or id - it need not match the NetworkMode.

So, when migrating the old container-wide MAC address to the new
per-endpoint field - it is not safe to create a new EndpointSettings
entry unless there is no possibility that it will duplicate settings
intended for the same network (because one of the duplicates will be
discarded later, dropping the settings it contains).

This change introduces a new API restriction: if the deprecated container-wide
field is used in the new API and EndpointsConfig is provided for
any network, the NetworkMode and the key under which the EndpointsConfig is
stored must be the same - no mixing of ids and names.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit a580544d82)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-06 11:20:10 +01:00
Akihiro Suda
d27fe2558d
dockerd-rootless-setuptool.sh: check RootlessKit functionality
RootlessKit will print hints if something is still unsatisfied.

e.g., `kernel.apparmor_restrict_unprivileged_userns` constraint
rootless-containers/rootlesskit@33c3e7ca6c

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit b32cfc3b3a)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-06 10:38:17 +01:00
Akihiro Suda
77de535364
Dockerfile: update RootlessKit to v2.0.2
https://github.com/rootless-containers/rootlesskit/compare/v2.0.1...v2.0.2

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 49fd8df9b9)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-06 10:38:15 +01:00
Paweł Gronowski
9e526bc394
Merge pull request #47503 from vvoland/v25.0-47502
[25.0 backport] update to go1.21.8
2024-03-05 21:58:50 +01:00
Paweł Gronowski
2d347024d1 update to go1.21.8
go1.21.8 (released 2024-03-05) includes 5 security fixes

- crypto/x509: Verify panics on certificates with an unknown public key algorithm (CVE-2024-24783, https://go.dev/issue/65390)
- net/http: memory exhaustion in Request.ParseMultipartForm (CVE-2023-45290, https://go.dev/issue/65383)
- net/http, net/http/cookiejar: incorrect forwarding of sensitive headers and cookies on HTTP redirect (CVE-2023-45289, https://go.dev/issue/65065)
- html/template: errors returned from MarshalJSON methods may break template escaping (CVE-2024-24785, https://go.dev/issue/65697)
- net/mail: comments in display names are incorrectly handled (CVE-2024-24784, https://go.dev/issue/65083)

View the release notes for more information:
https://go.dev/doc/devel/release#go1.22.1

- https://github.com/golang/go/issues?q=milestone%3AGo1.21.8+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.21.7...go1.21.8

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 57b7ffa7f6)
2024-03-05 19:23:00 +01:00
Albin Kerouanton
51e876cd96
Merge pull request #47493 from akerouanton/25.0-47370_windows_natnw_dns_test
[25.0 backport] Test DNS on Windows 'nat' networks
2024-03-01 17:02:36 +01:00
Sebastiaan van Stijn
3fa0cedce3
Merge pull request #47484 from akerouanton/25.0-c8d-pull-fslayer
[25.0 backport] c8d/pull: Progress fixes
2024-03-01 14:06:52 +01:00
Sebastiaan van Stijn
4e7d8531ed
Merge pull request #47491 from akerouanton/25.0-c8d-skip-last-windows-tests
[25.0 backport] c8d/windows: Temporarily skip two failing tests
2024-03-01 14:04:41 +01:00
Rob Murray
f66b5f642e Test DNS on Windows 'nat' networks
Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 9083c2f10d)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 13:51:20 +01:00
Albin Kerouanton
41fde13f64
Merge pull request #47490 from akerouanton/25.0-47370_windows_nat_network_dns
[25.0 backport] Set up DNS names for Windows default network
2024-03-01 13:13:02 +01:00
Sebastiaan van Stijn
0db1c6d8bb
Merge pull request #47488 from akerouanton/25.0-run_macvlan_ipvlan_tests
[25.0 backport] Run the macvlan/ipvlan integration tests
2024-03-01 12:59:32 +01:00
Paweł Gronowski
33a29c0135
Merge pull request #47489 from akerouanton/25.0-ci-codecov-token
[25.0 backport] ci: set codecov token
2024-03-01 12:44:46 +01:00
Albin Kerouanton
30545de83e
Merge pull request #47393 from vvoland/rro-backwards-compatible-25
[25.0 backport] api/pre-1.44: Default `ReadOnlyNonRecursive` to true
2024-03-01 12:15:21 +01:00
Paweł Gronowski
fa4ea308f0 c8d/windows: Temporarily skip two failing tests
They're failing the CI and we have a tracking ticket: #47107

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 44167988c3)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 12:07:21 +01:00
Rob Murray
d66e0fb7b1 Set up DNS names for Windows default network
DNS names were only set up for user-defined networks. On Linux, none
of the built-in networks (bridge/host/none) have built-in DNS, so they
don't need DNS names.

But, on Windows, the default network is "nat" and it does need the DNS
names.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 443f56efb0)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 12:03:16 +01:00
Sebastiaan van Stijn
30ecc0ea8a
Merge pull request #47482 from akerouanton/25.0-swarm-ipam-validation
[25.0 backport] Don't enforce new validation rules for existing networks
2024-03-01 11:58:18 +01:00
Sebastiaan van Stijn
06767446fe
Merge pull request #47481 from akerouanton/25.0-internal-bridge
[25.0 backport] Make 'internal' bridge networks accessible from host
2024-03-01 11:57:38 +01:00
Sebastiaan van Stijn
7048a63686
Merge pull request #47486 from akerouanton/25.0-go-1.21.7
[25.0 backport] update to go1.21.7
2024-03-01 11:56:23 +01:00
Paweł Gronowski
81fb7f9986
Merge pull request #47487 from akerouanton/25.0-integration-testdaemonproxy-reset-otel
[25.0 backport] integration: Reset `OTEL_EXPORTER_OTLP_ENDPOINT` for sub-daemons
2024-03-01 11:46:42 +01:00
Sebastiaan van Stijn
b77bb69f87
Merge pull request #47483 from akerouanton/25.0-best-effort-xattrs-classic-builder
[25.0 backport] builder/dockerfile: ADD with best-effort xattrs
2024-03-01 11:03:34 +01:00
CrazyMax
7a4abb8c77 ci: set codecov token
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 38827ba290)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:38:53 +01:00
Rob Murray
81a83f0544 Simplify macvlan/ipvlan integration test structure
Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 9faf4855d5)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:34:52 +01:00
Rob Murray
abcd6f8a46 Run the macvlan/ipvlan integration tests
The problem was accidentally introduced in:
  e8dc902781

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 4eb95d01bc)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:34:52 +01:00
Paweł Gronowski
f7be6dcba6 integration: Reset OTEL_EXPORTER_OTLP_ENDPOINT for sub-daemons
When creating a new daemon in `TestDaemonProxy`, reset
`OTEL_EXPORTER_OTLP_ENDPOINT` to an empty value to disable OTEL
collection so it doesn't hit the proxy.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 5fe96e234d)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:30:13 +01:00
Sebastiaan van Stijn
10609544e5 update to go1.21.7
go1.21.7 (released 2024-02-06) includes fixes to the compiler, the go command,
the runtime, and the crypto/x509 package. See the Go 1.21.7 milestone on our
issue tracker for details:

- https://github.com/golang/go/issues?q=milestone%3AGo1.21.7+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.21.6...go1.21.7

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7c2975d2df)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:20:49 +01:00
Paweł Gronowski
be59afce2d c8d/pull: Output truncated id for Pulling fs layer
All other progress updates are emitted with a truncated ID.

```diff
$ docker pull --platform linux/amd64 alpine
Using default tag: latest
latest: Pulling from library/alpine
-sha256:4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8: Pulling fs layer
+4abcf2066143: Download complete
Digest: sha256:c5b1261d6d3e43071626931fc004f70149baeba2c8ec672bd4f27761f8e1ad6b
Status: Image is up to date for alpine:latest
docker.io/library/alpine:latest
```

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 16aa7dd67f)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:10:04 +01:00
Paweł Gronowski
97951c39fb c8d/pull: Don't emit Downloading with 0 progress
To align with the graphdrivers' behavior and avoid sending unnecessary
progress messages.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 14df52b709)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:09:41 +01:00
Paweł Gronowski
2001813571 c8d/pull: Emit Pulling fs layer
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit ff5f780f2b)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:09:37 +01:00
Paweł Gronowski
8e3bcf1974 pkg/streamformatter: Make progressOutput concurrency safe
Sync access to the underlying `io.Writer` with a mutex.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 5689dabfb3)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:09:32 +01:00
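
A small illustrative sketch (not the actual streamformatter code) of the approach: serialize writes to the shared `io.Writer` with a mutex so concurrent progress updates don't interleave.

```go
package main

import (
	"io"
	"os"
	"sync"
)

// syncWriter wraps an io.Writer and synchronizes access with a mutex.
type syncWriter struct {
	mu sync.Mutex
	w  io.Writer
}

func (sw *syncWriter) Write(p []byte) (int, error) {
	sw.mu.Lock()
	defer sw.mu.Unlock()
	return sw.w.Write(p)
}

func main() {
	out := &syncWriter{w: os.Stdout}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_, _ = out.Write([]byte("progress update\n"))
		}()
	}
	wg.Wait()
}
```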
Cory Snider
27f36f42a4 builder/dockerfile: ADD with best-effort xattrs
Archives being unpacked by Dockerfiles may have been created on other
OSes with different conventions and semantics for xattrs, making them
impossible to apply when extracting. Restore the old best-effort xattr
behaviour users have come to depend on in the classic builder.

The (archive.Archiver).UntarPath function does not allow the options
passed to Untar to be customized. It also happens to be a trivial
wrapper around the Untar function. Inline the function body and add the
option.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 5bcd2f6860)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 10:01:09 +01:00
Rob Murray
1ae019fca2 Don't enforce new validation rules for existing networks
Non-swarm networks created before network-creation-time validation
was added in 25.0.0 continued working, because the checks are not
re-run.

But, swarm creates networks when needed (with 'agent=true'), to
ensure they exist on each agent - ignoring the NetworkNameError
that says the network already existed.

By ignoring validation errors on creation of a network with
agent=true, pre-existing swarm networks with IPAM config that would
fail the new checks will continue to work too.

New swarm (overlay) networks are still validated, because they are
initially created with 'agent=false'.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 571af915d5)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 09:54:46 +01:00
Rob Murray
c761353e7c Make 'internal' bridge networks accessible from host
Prior to release 25.0.0, the bridge in an internal network was assigned
an IP address - making the internal network accessible from the host,
giving containers on the network access to anything listening on the
bridge's address (or INADDR_ANY on the host).

This change restores that behaviour. It does not restore the default
route that was configured in the container, because packets sent outside
the internal network's subnet have always been dropped. So, a 'connect()'
to an address outside the subnet will still fail fast.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 419f5a6372)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-03-01 09:29:41 +01:00
Paweł Gronowski
00b2e1072b
Merge pull request #47476 from vvoland/ci-report-timeout-25
[25.0 backport] ci: Update `teststat` to v0.1.25
2024-02-29 20:40:06 +01:00
Paweł Gronowski
10bc347b03
ci: Update teststat to v0.1.25
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit fc0e5401f2)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-29 16:01:20 +01:00
Sebastiaan van Stijn
6d675b429e
Merge pull request #47471 from vvoland/ci-reports-better-find-25
[25.0 backport] ci: Make `find` for test reports more specific
2024-02-29 15:56:26 +01:00
Paweł Gronowski
9f1b47c597
Merge pull request #47470 from neersighted/backport/47440/25.0
[25.0 backport] client: fix connection-errors being shadowed by API version errors
2024-02-29 15:53:41 +01:00
Sebastiaan van Stijn
94137f6df5
client: fix connection-errors being shadowed by API version mismatch errors
Commit e6907243af applied a fix for situations
where the client was configured with API-version negotiation, but did not yet
negotiate a version.

However, the checkVersion() function that was implemented copied the semantics
of cli.NegotiateAPIVersion, which ignored connection failures with the
assumption that connection errors would still surface further down.

As a consequence, when using the result of a failed negotiation for NewVersionError,
an API version mismatch error would be produced, masking the actual connection
error.

This patch changes the signature of checkVersion to return unexpected errors,
including failures to connect to the API.

Before this patch:

    docker -H unix:///no/such/socket.sock secret ls
    "secret list" requires API version 1.25, but the Docker daemon API version is 1.24

With this patch applied:

    docker -H unix:///no/such/socket.sock secret ls
    Cannot connect to the Docker daemon at unix:///no/such/socket.sock. Is the docker daemon running?

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6aea26b431)
Conflicts: client/image_list.go
    client/image_list_test.go
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
2024-02-29 06:02:06 -07:00
Paweł Gronowski
dd5faa9d4f
ci: Make find for test reports more specific
Don't use all `*.json` files blindly, take only these that are likely to
be reports from go test.
Also, use `find ... -exec` instead of piping results to `xargs`.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit e4de4dea5c)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-29 10:07:44 +01:00
Sebastiaan van Stijn
012bfd33e5
client: doRequest: make sure we return a connection-error
This function has various errors that are returned when failing to make a
connection (due to permission issues, TLS mis-configuration, or failing to
resolve the TCP address).

The errConnectionFailed error is currently used as a special case when
processing Ping responses. The current code did not treat connection
errors consistently, and because of that could either absorb the error
or process the empty response.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 913478b428)
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
2024-02-29 01:17:13 -07:00
Sebastiaan van Stijn
3ec1946ce1
client: NegotiateAPIVersion: do not ignore (connection) errors from Ping
NegotiateAPIVersion was ignoring errors returned by Ping. The intent here
was to handle API responses from a daemon that may be in an unhealthy state,
however this case is already handled by Ping itself.

Ping only returns an error when either failing to connect to the API (daemon
not running or permissions errors), or when failing to parse the API response.

Neither of those should be ignored in this code, or considered a successful
"ping", so update the code to return the error.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 901b90593d)
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
2024-02-29 01:17:13 -07:00
Sebastiaan van Stijn
200a2c3576
client: fix TestPingWithError
This test was added in 27ef09a46f, which changed
the Ping handling to ignore internal server errors. That case is tested in
TestPingFail, which verifies that we accept the Ping response if a 500
status code was received.

The TestPingWithError test was added to verify behavior if a protocol
(connection) error occurred; however the mock-client returned both a
response, and an error; the error returned would only happen if a connection
error occurred, which means that the server would not provide a reply.

Running the test also shows that returning a response is unexpected, and
ignored:

    === RUN   TestPingWithError
    2024/02/23 14:16:49 RoundTripper returned a response & error; ignoring response
    2024/02/23 14:16:49 RoundTripper returned a response & error; ignoring response
    --- PASS: TestPingWithError (0.00s)
    PASS

This patch updates the test to remove the response.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 349abc64ed)
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
2024-02-29 01:17:09 -07:00
Paweł Gronowski
cb66214dfd
Merge pull request #47466 from huang-jl/25.0_backport_fix_restore_digest
[25.0 backport] libcontainerd: change the digest used when restoring
2024-02-28 17:42:18 +01:00
huang-jl
70c05fe10c libcontainerd: change the digest used when restoring
In the current implementation of Checkpoint/Restore (C/R) in docker, the
checkpoint is written to the content store. However, when restoring,
libcontainerd uses .Digest().Encoded(), which drops the algorithm
information, leading to an error.

Signed-off-by: huang-jl <1046678590@qq.com>
(cherry picked from commit da643c0b8a)
Signed-off-by: huang-jl <1046678590@qq.com>
2024-02-28 23:27:17 +08:00
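
For illustration, the difference between `Encoded()` and `String()` on a digest, using the go-digest package (not the libcontainerd code itself):

```go
package main

import (
	"fmt"

	"github.com/opencontainers/go-digest"
)

func main() {
	dgst := digest.FromString("checkpoint data")
	// Encoded() drops the algorithm prefix ("sha256:"), which is the
	// information loss described in the commit above; String() keeps it.
	fmt.Println(dgst.Encoded())
	fmt.Println(dgst.String())
}
```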
Paweł Gronowski
e85cef89fa
api/pre-1.44: Default ReadOnlyNonRecursive to true
Keep the same behavior for older clients.
Otherwise the client can't opt out (because `ReadOnlyNonRecursive` is
unsupported before API 1.44).

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 432390320e)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-28 10:00:22 +01:00
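
A hedged sketch of version-gating that default with the moby `versions` helpers; the struct and function here are illustrative stand-ins, not the real API types:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/versions"
)

// mountOptions is an illustrative stand-in for the real mount options type.
type mountOptions struct {
	ReadOnlyNonRecursive bool
}

// applyPre144Default keeps the old behavior for clients older than API 1.44,
// which cannot opt out because the field did not exist for them.
func applyPre144Default(apiVersion string, opts *mountOptions) {
	if versions.LessThan(apiVersion, "1.44") {
		opts.ReadOnlyNonRecursive = true
	}
}

func main() {
	opts := &mountOptions{}
	applyPre144Default("1.43", opts)
	fmt.Println(opts.ReadOnlyNonRecursive) // true
}
```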
Paweł Gronowski
a72294a668
mounts/validate: Don't check source exists with CreateMountpoint
Don't error out when the mount source doesn't exist and the mount has the
`CreateMountpoint` option enabled.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 05b883bdc8)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-27 23:31:36 +01:00
Paweł Gronowski
9ee331235a
integration: Add container.Output utility
Extracted from bfb810445c

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-27 23:31:35 +01:00
Bjorn Neergaard
6fb71a9764
Merge pull request #47451 from neersighted/image_created_omitempty_25.0
[25.0 backport] api: omit missing Created field from ImageInspect response
2024-02-26 15:09:06 -07:00
Bjorn Neergaard
5d9e13bc84
api: omit missing Created field from ImageInspect response
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
2024-02-26 10:44:35 -07:00
Bjorn Neergaard
36d02bf488
Merge pull request #47387 from neersighted/backport/47374/25.0
[25.0 backport] Set `Created` to `0001-01-01T00:00:00Z` on older API versions
2024-02-14 11:43:13 -07:00
Paweł Gronowski
bb66c3ca04
api/history: Mention empty Created
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 903412d0fc)
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
2024-02-14 09:11:28 -07:00
Tianon Gravi
fa3a64f2bc
Set Created to 0001-01-01T00:00:00Z on older API versions
This matches the prior behavior before 2a6ff3c24f.

This also updates the Swagger documentation for the current version to note that the field might be the empty string and what that means.

Signed-off-by: Tianon Gravi <admwiggin@gmail.com>
(cherry picked from commit b4fbe226e8)
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
2024-02-14 09:11:21 -07:00
Sebastiaan van Stijn
f417435e5f
Merge pull request #47348 from rumpl/25.0_backport-history-config
[25.0 backport]  c8d: Use the same logic to get the present images
2024-02-06 19:43:34 +01:00
Djordje Lukic
acd023d42b
c8d: Use the same logic to get the present images
Inspect and history used two different ways to find the present images.
This made history fail in some cases where image inspect would work (if
a configuration of a manifest wasn't found in the content store).

With this change we now use the same logic for both inspect and history.

Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
2024-02-06 17:17:51 +01:00
Sebastiaan van Stijn
7a075cacf9
Merge pull request #47344 from thaJeztah/25.0_backport_seccomp_updates
[25.0 backport] profiles/seccomp: add syscalls for kernel v5.17 - v6.6, match containerd's profile
2024-02-06 16:46:41 +01:00
Sebastiaan van Stijn
aff7177ee7
Merge pull request #47337 from vvoland/cache-fix-older-windows-25
[25.0 backport] image/cache: Ignore Build and Revision on Windows
2024-02-06 16:18:02 +01:00
Sebastiaan van Stijn
ed7c26339e
seccomp: add futex_wake syscall (kernel v6.7, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: 9f6c532f59

    futex: Add sys_futex_wake()

    To complement sys_futex_waitv() add sys_futex_wake(). This syscall
    implements what was previously known as FUTEX_WAKE_BITSET except it
    uses 'unsigned long' for the bitmask and takes FUTEX2 flags.

    The 'unsigned long' allows FUTEX2_SIZE_U64 on 64bit platforms.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d69729e053)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:25:59 +01:00
Sebastiaan van Stijn
74e3b4fb2e
seccomp: add futex_wait syscall (kernel v6.7, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: cb8c4312af

    futex: Add sys_futex_wait()

    To complement sys_futex_waitv()/wake(), add sys_futex_wait(). This
    syscall implements what was previously known as FUTEX_WAIT_BITSET
    except it uses 'unsigned long' for the value and bitmask arguments,
    takes timespec and clockid_t arguments for the absolute timeout and
    uses FUTEX2 flags.

    The 'unsigned long' allows FUTEX2_SIZE_U64 on 64bit platforms.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 10d344d176)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:25:59 +01:00
Sebastiaan van Stijn
4cc0416534
seccomp: add futex_requeue syscall (kernel v6.7, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: 0f4b5f9722

    futex: Add sys_futex_requeue()

    Finish off the 'simple' futex2 syscall group by adding
    sys_futex_requeue(). Unlike sys_futex_{wait,wake}() its arguments are
    too numerous to fit into a regular syscall. As such, use struct
    futex_waitv to pass the 'source' and 'destination' futexes to the
    syscall.

    This syscall implements what was previously known as FUTEX_CMP_REQUEUE
    and uses {val, uaddr, flags} for source and {uaddr, flags} for
    destination.

    This design explicitly allows requeueing between different types of
    futex by having a different flags word per uaddr.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit df57a080b6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:25:59 +01:00
Sebastiaan van Stijn
f9f9e7ff9a
seccomp: add map_shadow_stack syscall (kernel v6.6, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: c35559f94e

    x86/shstk: Introduce map_shadow_stack syscall

    When operating with shadow stacks enabled, the kernel will automatically
    allocate shadow stacks for new threads, however in some cases userspace
    will need additional shadow stacks. The main example of this is the
    ucontext family of functions, which require userspace allocating and
    pivoting to userspace managed stacks.

    Unlike most other user memory permissions, shadow stacks need to be
    provisioned with special data in order to be useful. They need to be setup
    with a restore token so that userspace can pivot to them via the RSTORSSP
    instruction. But, the security design of shadow stacks is that they
    should not be written to except in limited circumstances. This presents a
    problem for userspace, as to how userspace can provision this special
    data, without allowing for the shadow stack to be generally writable.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8826f402f9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:25:59 +01:00
Sebastiaan van Stijn
5fb4eb941d
seccomp: add fchmodat2 syscall (kernel v6.6, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: 09da082b07

    fs: Add fchmodat2()

    On the userspace side fchmodat(3) is implemented as a wrapper
    function which implements the POSIX-specified interface. This
    interface differs from the underlying kernel system call, which does not
    have a flags argument. Most implementations require procfs [1][2].

    There doesn't appear to be a good userspace workaround for this issue
    but the implementation in the kernel is pretty straight-forward.

    The new fchmodat2() syscall allows to pass the AT_SYMLINK_NOFOLLOW flag,
    unlike existing fchmodat.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6f242f1a28)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:25:59 +01:00
Sebastiaan van Stijn
67e9aa6d4d
seccomp: add cachestat syscall (kernel v6.5, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: cf264e1329

    NAME
        cachestat - query the page cache statistics of a file.

    SYNOPSIS
        #include <sys/mman.h>

        struct cachestat_range {
            __u64 off;
            __u64 len;
        };

        struct cachestat {
            __u64 nr_cache;
            __u64 nr_dirty;
            __u64 nr_writeback;
            __u64 nr_evicted;
            __u64 nr_recently_evicted;
        };

        int cachestat(unsigned int fd, struct cachestat_range *cstat_range,
            struct cachestat *cstat, unsigned int flags);

    DESCRIPTION
        cachestat() queries the number of cached pages, number of dirty
        pages, number of pages marked for writeback, number of evicted
        pages, number of recently evicted pages, in the bytes range given by
        `off` and `len`.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4d0d5ee10d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:25:58 +01:00
Sebastiaan van Stijn
61b82be580
seccomp: add set_mempolicy_home_node syscall (kernel v5.17, libseccomp v2.5.4)
This syscall is gated by CAP_SYS_NICE, matching the profile in containerd.

containerd: a6e52c74fa
libseccomp: d83cb7ac25
kernel: c6018b4b25

    mm/mempolicy: add set_mempolicy_home_node syscall
    This syscall can be used to set a home node for the MPOL_BIND and
    MPOL_PREFERRED_MANY memory policy.  Users should use this syscall after
    setting up a memory policy for the specified range as shown below.

      mbind(p, nr_pages * page_size, MPOL_BIND, new_nodes->maskp,
            new_nodes->size + 1, 0);
      sys_set_mempolicy_home_node((unsigned long)p, nr_pages * page_size,
                    home_node, 0);

    The syscall allows specifying a home node/preferred node from which
    kernel will fulfill memory allocation requests first.
    ...

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1251982cf7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:25:56 +01:00
Paweł Gronowski
0227d95f99
image/cache: Use Platform from ocispec
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 2c01d53d96)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-06 14:29:54 +01:00
Paweł Gronowski
fa9c5c55e1
image/cache: Ignore Build and Revision on Windows
The compatibility depends on whether `hyperv` or `process` container
isolation is used.
This fixes cache not being used when building images based on older
Windows versions on a newer Windows host.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 91ea04089b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-06 13:13:37 +01:00
Sebastiaan van Stijn
df96d8d0bd
Merge pull request #47334 from thaJeztah/25.0_backport_rootlesskit_binary_2.0.1
[25.0 backport] Dockerfile: update RootlessKit to v2.0.1
2024-02-06 13:00:20 +01:00
Akihiro Suda
1652559be4
Dockerfile: update RootlessKit to v2.0.1
https://github.com/rootless-containers/rootlesskit/releases/tag/v2.0.1

Fix issue 47327 (`rootless lxc-user-nic: /etc/resolv.conf missing ip`)

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 7f1b700227)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 09:09:01 +01:00
Sebastiaan van Stijn
ab29279200
Merge pull request #47294 from vvoland/fix-save-manifests-25
[25.0 backport] image/save: Fix untagged images not present in index.json
2024-02-05 19:59:57 +01:00
Paweł Gronowski
147b5388dd
integration/save: Add tests checking OCI archive output
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 2ef0b53e51)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-05 14:28:02 +01:00
Sebastiaan van Stijn
60103717bc
Merge pull request #47316 from thaJeztah/25.0_backport_update_dev_cli_compose
[25.0 backport] Dockerfile: update docker-cli to v25.0.2, docker compose v2.24.5
2024-02-05 11:09:34 +01:00
Sebastiaan van Stijn
45dede440e
Merge pull request #47323 from thaJeztah/25.0_backport_plugin-install-digest
[25.0 backport] plugins: Fix panic when fetching by digest
2024-02-05 10:42:20 +01:00
Laura Brehm
ba4a2dab16
plugins: fix panic installing from repo w/ digest
Only print the tag when the received reference has a tag; if
we can't cast the received reference to a `reference.Tagged`, then
skip printing the tag, as it's likely a digest.

Fixes panic when trying to install a plugin from a reference
with a digest such as
`vieux/sshfs@sha256:1d3c3e42c12138da5ef7873b97f7f32cf99fb6edde75fa4f0bcf9ed277855811`

Signed-off-by: Laura Brehm <laurabrehm@hey.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-05 09:40:57 +01:00
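
An illustrative sketch of the type-assertion guard described above, using the distribution reference package (not the plugin-install code itself):

```go
package main

import (
	"fmt"

	"github.com/distribution/reference"
)

func main() {
	ref, err := reference.ParseNormalizedNamed(
		"vieux/sshfs@sha256:1d3c3e42c12138da5ef7873b97f7f32cf99fb6edde75fa4f0bcf9ed277855811")
	if err != nil {
		panic(err)
	}
	// A digested reference has no tag, so the assertion fails and we skip
	// printing a tag instead of panicking.
	if tagged, ok := ref.(reference.Tagged); ok {
		fmt.Println("tag:", tagged.Tag())
	} else {
		fmt.Println("no tag; reference is digested:", ref.String())
	}
}
```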
Laura Brehm
51133117fb
tests: add plugin install test w/ digest
Adds a test case for installing a plugin from a remote in the form
of `plugin-content-trust@sha256:d98f2f8061...`, which is currently
causing the daemon to panic, as we found while running the CLI e2e
tests:

```
docker plugin install registry:5000/plugin-content-trust@sha256:d98f2f806144bf4ba62d4ecaf78fec2f2fe350df5a001f6e3b491c393326aedb
```

Signed-off-by: Laura Brehm <laurabrehm@hey.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-05 09:40:54 +01:00
Sebastiaan van Stijn
341a7978a5
Merge pull request #47313 from thaJeztah/25.0_backport_libc8d_fix_startup_data_race
[25.0 backport] libcontainerd/supervisor: fix data race
2024-02-03 14:37:57 +01:00
Sebastiaan van Stijn
10e3bfd0ac
Merge pull request #47220 from thaJeztah/25.0_backport_more_gocompat
[25.0 backport] add more //go:build directives to prevent downgrading to go1.16 language
2024-02-03 14:36:55 +01:00
Sebastiaan van Stijn
269a0d8feb
Dockerfile: update docker compose to v2.24.5
Update the version of compose used in CI to the latest version.

- full diff: https://github.com/docker/compose/compare/v2.24.3...v2.24.5
- release notes: https://github.com/docker/compose/releases/tag/v2.24.5

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 10d6f5213a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-03 13:54:14 +01:00
Sebastiaan van Stijn
876b1d1dcd
Dockerfile: update dev-shell version of the cli to v25.0.2
Update the docker CLI that's available for debugging in the dev-shell
to the v25 release.

full diff: https://github.com/docker/cli/compare/v25.0.1...v25.0.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9c92c07acf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-03 13:54:13 +01:00
Sebastiaan van Stijn
0bcd64689b
Dockerfile: update docker compose to v2.24.3
Update the version of compose used in CI to the latest version.

- full diff: https://github.com/docker/compose/compare/v2.24.2...v2.24.3
- release notes: https://github.com/docker/compose/releases/tag/v2.24.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 388ba9a69c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-03 13:54:13 +01:00
Sebastiaan van Stijn
8d454710cd
Dockerfile: update dev-shell version of the cli to v25.0.1
Update the docker CLI that's available for debugging in the dev-shell
to the v25 release.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3eb1527fdb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-03 13:54:13 +01:00
Sebastiaan van Stijn
6cf694fe70
Merge pull request #47243 from corhere/backport-25.0/fix-journald-logs-systemd-255
[25.0 backport] logger/journald: fix tailing logs with systemd 255
2024-02-03 13:29:42 +01:00
Cory Snider
c12bbf549b
libcontainerd/supervisor: fix data race
The monitorDaemon() goroutine calls startContainerd() then blocks on
<-daemonWaitCh to wait for it to exit. The startContainerd() function
would (re)initialize the daemonWaitCh so a restarted containerd could be
waited on. This implementation was race-free because startContainerd()
would synchronously initialize the daemonWaitCh before returning. When
the call to start the managed containerd process was moved into the
waiter goroutine, the code to initialize the daemonWaitCh struct field
was also moved into the goroutine. This introduced a race condition.

Move the daemonWaitCh initialization to guarantee that it happens before
the startContainerd() call returns.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit dd20bf4862)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-03 11:40:54 +01:00
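
A minimal, illustrative sketch of the race-free ordering described above (not the supervisor code): the wait channel is created synchronously, before the goroutine that waits on the process, so the caller can safely receive from it.

```go
package main

import (
	"fmt"
	"os/exec"
)

func startManaged(name string, args ...string) (<-chan error, error) {
	cmd := exec.Command(name, args...)
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	waitCh := make(chan error, 1) // initialized before the goroutine runs
	go func() {
		waitCh <- cmd.Wait()
	}()
	return waitCh, nil
}

func main() {
	waitCh, err := startManaged("true")
	if err != nil {
		panic(err)
	}
	fmt.Println("exited:", <-waitCh)
}
```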
Sebastiaan van Stijn
1ae115175c
Merge pull request #47311 from akerouanton/25.0-libnet-bridge-mtu-ignore-einval
[25.0 backport] libnet: bridge: ignore EINVAL when configuring bridge MTU
2024-02-03 11:36:47 +01:00
Sebastiaan van Stijn
a7f9907f5f
Merge pull request #47310 from akerouanton/25.0-revert-automatically-enable-ipv6
[25.0 backport] Revert "daemon: automatically set network EnableIPv6 if needed"
2024-02-03 11:31:11 +01:00
Cory Snider
9150d0115e d/logger/journald: quit waiting when logger closes
If a reader has caught up to the logger and is waiting for the next
message, it should stop waiting when the logger is closed. Otherwise
the reader will unnecessarily wait the full closedDrainTimeout for no
log messages to arrive.

This case was overlooked when the journald reader was recently
overhauled to be compatible with systemd 255, and the reader tests only
failed when a logical race happened to settle in such a way to exercise
the bugged code path. It was only after implicit flushing on close was
added to the journald test harness that the Follow tests would
repeatably fail due to this bug. (No new regression tests are needed.)

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 987fe37ed1)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-02-02 19:10:41 -05:00
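
A hedged sketch of the wait-or-quit pattern described above (channel names and the timeout value are illustrative): a reader blocked on the next message also watches a "logger closed" signal so it doesn't sit out the full drain timeout.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	messages := make(chan string)
	loggerClosed := make(chan struct{})
	closedDrainTimeout := 5 * time.Second

	// Simulate the logger being closed while a reader waits for more logs.
	go func() { close(loggerClosed) }()

	select {
	case msg := <-messages:
		fmt.Println("got:", msg)
	case <-loggerClosed:
		fmt.Println("logger closed; stop waiting") // no full drain-timeout wait
	case <-time.After(closedDrainTimeout):
		fmt.Println("drain timeout elapsed")
	}
}
```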
Cory Snider
9af7c8ec0a d/logger/journald: sync logger on close in tests
The journald reader test harness injects an artificial asynchronous
delay into the logging pipeline: a logged message won't be written to
the journal until at least 150ms after the Log() call returns. If a test
returns while log messages are still in flight to be written, the logs
may attempt to be written after the TempDir has been cleaned up, leading
to spurious errors.

The logger read tests which interleave writing and reading have to
include explicit synchronization points to work reliably with this delay
in place. On the other hand, tests should not be required to sync the
logger explicitly before returning. Override the Close() method in the
test harness wrapper to wait for in-flight logs to be flushed to disk.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit d53b7d7e46)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-02-02 19:10:41 -05:00
Cory Snider
3344c502da d/logger/loggertest: improve TestConcurrent
- Check the return value when logging messages
- Log the stream (stdout/stderr) and list of messages that were not read
- Wait until the logger is closed before returning early (panic/fatal)

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 39c5c16521)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-02-02 19:10:41 -05:00
Cory Snider
6c9fafdda7 d/logger/journald: log journal-remote cmd output
Writing the systemd-journal-remote command output directly to os.Stdout
and os.Stderr makes it nearly impossible to tell which test case the
output is related to when the tests are not run in verbose mode. Extend
the journald sender fake to redirect output to the test log so they
interleave with the rest of the test output.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 5792bf7ab3)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-02-02 19:10:41 -05:00
Cory Snider
f8a8cdaf9e d/logger/journald: fix data race in test harness
The Go race detector was detecting a data race when running the
TestLogRead/Follow/Concurrent test against the journald logging driver.
The race was in the test harness, specifically syncLogger. The waitOn
field would be reassigned each time a log entry is sent to the journal,
which is not concurrency-safe. Make it concurrency-safe using the same
patterns that are used in the log follower implementation to synchronize
with the logger.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 982e777d49)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-02-02 19:10:41 -05:00
Albin Kerouanton
7a659049b8 libnet: bridge: ignore EINVAL when configuring bridge MTU
Since 964ab7158c, we explicitly set the bridge MTU if it was specified.
Unfortunately, kernels <v4.17 have a check preventing us from manually setting
the MTU to anything greater than 1500 if no link is attached to the
bridge, which is how we do things -- create the bridge, set its MTU and
later on, attach veths to it.

Relevant kernel commit: 804b854d37

As we still have to support CentOS/RHEL 7 (and their old v3.10 kernels)
for a few more months, we need to ignore EINVAL if the MTU is > 1500
(but <= 65535).

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit 89470a7114)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-02-02 19:51:11 +01:00
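
An illustrative sketch of tolerating that EINVAL (the helper and the stub are made up for the example; the real code drives netlink):

```go
package main

import (
	"errors"
	"fmt"

	"golang.org/x/sys/unix"
)

// setBridgeMTU ignores EINVAL for MTUs in (1500, 65535], matching the
// old-kernel behavior described in the commit above.
func setBridgeMTU(setMTU func(mtu int) error, mtu int) error {
	err := setMTU(mtu)
	if errors.Is(err, unix.EINVAL) && mtu > 1500 && mtu <= 65535 {
		return nil
	}
	return err
}

func main() {
	// Stub simulating a kernel <v4.17 rejecting the MTU on an empty bridge.
	stub := func(int) error { return unix.EINVAL }
	fmt.Println(setBridgeMTU(stub, 9000)) // <nil>
}
```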
Albin Kerouanton
0ccf1c2a93 api/t/network: ValidateIPAM: ignore v6 subnet when IPv6 is disabled
Commit 4f47013feb introduced a new validation step to make sure no
IPv6 subnet is configured on a network which has EnableIPv6=false.

Commit 5d5eeac310 then removed that validation step and automatically
enabled IPv6 for networks with a v6 subnet. But this specific commit
was reverted in c59e93a67b and now the error introduced by 4f47013feb
is re-introduced.

But it turns out some users expect a network created with an IPv6
subnet and EnableIPv6=false to actually have no IPv6 connectivity.
This restores that behavior.

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit e37172c613)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-02-02 19:37:27 +01:00
Albin Kerouanton
28c1a8bc2b Revert "daemon: automatically set network EnableIPv6 if needed"
This reverts commit 5d5eeac310.

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit c59e93a67b)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-02-02 19:37:27 +01:00
Sebastiaan van Stijn
5b5a58b2cd
Merge pull request #47304 from akerouanton/25.0-duplicate_mac_addrs2
[25.0 backport] Only restore a configured MAC addr on restart.
2024-02-02 19:04:09 +01:00
Sebastiaan van Stijn
282891f70c
Merge pull request #47303 from akerouanton/25.0-backport-internal-bridge-firewalld
[25.0 backport] Add internal n/w bridge to firewalld docker zone
2024-02-02 19:02:57 +01:00
Rob Murray
bbe6f09afc No inspect 'Config.MacAddress' unless configured.
Do not set 'Config.MacAddress' in inspect output unless the MAC address
is configured.

Also, make sure it is filled in for a configured address on the default
network before the container is started (by translating the network name
from 'default' to 'config' so that the address lookup works).

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 8c64b85fb9)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-02-02 09:10:04 +01:00
Rob Murray
5b13a38144 Only restore a configured MAC addr on restart.
The API's EndpointConfig struct has a MacAddress field that's used for
both the configured address, and the current address (which may be generated).

A configured address must be restored when a container is restarted, but a
generated address must not.

The previous attempt to differentiate between the two, without adding a field
to the API's EndpointConfig that would show up in 'inspect' output, was a
field in the daemon's version of EndpointSettings, MACOperational. It did
not work: MACOperational was set to true when a configured address was
used. So, while it ensured addresses were regenerated, it failed to preserve
a configured address.

So, this change removes that code, and adds DesiredMacAddress to the wrapped
version of EndpointSettings, where it is persisted but does not appear in
'inspect' results. Its value is copied from MacAddress (the API field) when
a container is created.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit dae33031e0)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-02-02 09:10:04 +01:00
Rob Murray
990e95dcf0 Add internal n/w bridge to firewalld docker zone
Containers attached to an 'internal' bridge network are unable to
communicate when the host is running firewalld.

Non-internal bridges are added to a trusted 'docker' firewalld zone, but
internal bridges were not.

DOCKER-ISOLATION iptables rules are still configured for an internal
network, they block traffic to/from addresses outside the network's subnet.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 2cc627932a)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-02-02 08:38:44 +01:00
Sebastiaan van Stijn
a140d0d95f
Merge pull request #47201 from thaJeztah/25.0_backport_swarm_rotate_key_flake
[25.0 backport] De-flake TestSwarmClusterRotateUnlockKey... again... maybe?
2024-02-02 00:13:42 +01:00
Sebastiaan van Stijn
91a8312fb7
Merge pull request #47295 from vvoland/api-build-version-25
[25.0 backport] api: Document `version` in `/build`
2024-02-01 21:07:57 +01:00
Sebastiaan van Stijn
cf03e96354
Merge pull request #47298 from thaJeztah/25.0_backport_rm_dash_rf
[25.0 backport] Assert temp output directory is not an empty string
2024-02-01 19:13:50 +01:00
Paweł Gronowski
c48b67160d
api: Document version in /build
It was introduced in API v1.38 but wasn't documented.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 0c3b8ccda7)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-01 17:00:31 +01:00
Paweł Gronowski
225e043196
c8d/save: Handle digested reference same as ID
When saving an image, treat `image@sha256:abcdef...` the same as
`abcdef...`. This makes it:

- Not export the digested tag as the image name
- Not try to export all tags from the image repository

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 5e13f54f57)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-01 16:48:36 +01:00
Paweł Gronowski
78174d2e74
image/save: Fix untagged images not present in index.json
Saving an image via digested reference, ID or truncated ID doesn't store
the image reference in the archive. This also causes the save code to
not add the image's manifest to the index.json.
This commit explicitly adds the untagged manifests to the index.json if
no tagged manifests were added.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit d131f00fff)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-01 16:48:34 +01:00
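
For illustration, a sketch of the fallback described above using the OCI image-spec types (the function and values are illustrative, not the save code itself):

```go
package main

import (
	"fmt"

	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// addUntaggedManifest appends the manifest descriptor only when no tagged
// manifests were added, so the image is still reachable from index.json.
func addUntaggedManifest(idx *ocispec.Index, desc ocispec.Descriptor) {
	if len(idx.Manifests) == 0 {
		idx.Manifests = append(idx.Manifests, desc)
	}
}

func main() {
	var idx ocispec.Index
	addUntaggedManifest(&idx, ocispec.Descriptor{
		MediaType: ocispec.MediaTypeImageManifest,
		// Placeholder digest (sha256 of the empty string).
		Digest: "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
		Size:   1234,
	})
	fmt.Println(len(idx.Manifests)) // 1
}
```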
Sebastiaan van Stijn
622e66684a
Merge pull request #47296 from dvdksn/25.0_backport_api_docs_broken_links
[25.0 backport] docs: remove dead links from api version history
2024-02-01 16:09:01 +01:00
voloder
85f4e6151a
Assert temp output directory is not an empty string
Signed-off-by: voloder <110066198+voloder@users.noreply.github.com>
(cherry picked from commit c655b7dc78)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 16:04:29 +01:00
Sebastiaan van Stijn
3e358447f5
Merge pull request #47291 from thaJeztah/25.0_backport_update_actions
[25.0 backport] gha: update actions to account for node 16 deprecation
2024-02-01 15:49:06 +01:00
David Karlsson
dd4de8f388 docs: remove dead links from api version history
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
(cherry picked from commit 7f94acb6ab)
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-02-01 15:02:46 +01:00
CrazyMax
f5ef4e76b3
ci: update to docker/bake-action@v4
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit a2026ee442)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:42 +01:00
CrazyMax
6c5e5271c1
ci: update to codecov/codecov-action@v4
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 5a3c463a37)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:42 +01:00
CrazyMax
693fca6199
ci: update to actions/download-artifact@v4 and actions/upload-artifact@v4
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 9babc02283)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:42 +01:00
CrazyMax
49487e996a
ci: update to actions/cache@v3
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit a83557d747)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:41 +01:00
Sebastiaan van Stijn
0358f31dc2
gha: update to crazy-max/ghaction-github-runtime@v3
- Node 20 as default runtime (requires Actions Runner v2.308.0 or later)
- full diff: https://github.com/crazy-max/ghaction-github-runtime/compare/v2.2.0...v3.0.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3a8191225a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:13 +01:00
Sebastiaan van Stijn
081cffb3fa
gha: update to docker/login-action@v3
- Node 20 as default runtime (requires Actions Runner v2.308.0 or later)
- full diff https://github.com/docker/login-action/compare/v2.2.0...v3.0.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 08251978a8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:13 +01:00
Sebastiaan van Stijn
9de19554c7
gha: update to docker/setup-qemu-action@v3
- Node 20 as default runtime (requires Actions Runner v2.308.0 or later)
- full diff https://github.com/docker/setup-qemu-action/compare/v2.2.0...v3.0.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5d396e0533)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:13 +01:00
Sebastiaan van Stijn
2a80b8a7b2
gha: update to docker/bake-action@v4
- Node 20 as default runtime (requires Actions Runner v2.308.0 or later)
- full diff https://github.com/docker/bake-action/compare/v2.3.0...v4.1.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4a1839ef1d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:13 +01:00
Sebastiaan van Stijn
61ffecfa3b
gha: update to docker/setup-buildx-action@v3
- Node 20 as default runtime (requires Actions Runner v2.308.0 or later)
- full diff: https://github.com/docker/setup-buildx-action/compare/v2.10.0...v3.0.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b7fd571b0a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:13 +01:00
Sebastiaan van Stijn
02cd8dec03
gha: update to docker/metadata-action@v5
- Node 20 as default runtime (requires Actions Runner v2.308.0 or later)
- full diff: https://github.com/docker/metadata-action/compare/v4.6.0...v5.5.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 00a2626b56)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:12 +01:00
Sebastiaan van Stijn
1d7df5ecc0
gha: update to actions/setup-go@v5
- full diff: https://github.com/actions/setup-go/compare/v3.5.0...v5.0.0

v5

This release changes the Node.js runtime from node16 to node20 and updates
some dependencies to the latest versions.

It also includes the following changes:

- Fix hosted tool cache usage on windows
- Improve documentation regarding dependencies caching

V4

The V4 edition of the action offers:

- Enabled caching by default
- The action will try to enable caching unless the cache input is explicitly
  set to false.

Please see "Caching dependency files and build outputs" for more information:
https://github.com/actions/setup-go#caching-dependency-files-and-build-outputs

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e27a785f43)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:12 +01:00
Sebastiaan van Stijn
4e68a265ed
gha: update to actions/github-script@v7
- full diff: https://github.com/actions/github-script/compare/v6.4.1...v7.0.1

breaking changes: https://github.com/actions/github-script?tab=readme-ov-file#v7

> Version 7 of this action updated the runtime to Node 20
> https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#runs-for-javascript-actions
>
> All scripts are now run with Node 20 instead of Node 16 and are affected
> by any breaking changes between Node 16 and 20
>
> The previews input now only applies to GraphQL API calls as REST API previews
> are no longer necessary
> https://github.blog/changelog/2021-10-14-rest-api-preview-promotions/.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit fb53ee6ba3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:12 +01:00
Sebastiaan van Stijn
e437f890ba
gha: update to actions/checkout@v4
Release notes:

- https://github.com/actions/checkout/compare/v3.6.0...v4.1.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0ffddc6bb8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-01 12:33:09 +01:00
Sebastiaan van Stijn
5a0015f72c
Merge pull request #47225 from s4ke/bump-swarmkit-generic-resources#25.0
[25.0 backport] Fix HasResource inverted boolean error - vendor swarmkit v2.0.0-20240125134710-dcda100a8261
2024-02-01 04:20:24 +01:00
Sebastiaan van Stijn
5babfee371
Merge pull request #47222 from vvoland/pkg-pools-close-noop-25
[25.0 backport] pkg/ioutils: Make subsequent Close attempts noop
2024-02-01 04:19:12 +01:00
Sebastiaan van Stijn
fce6e0ca9b
Merge pull request from GHSA-xw73-rw38-6vjc
[25.0 backport] image/cache: Restrict cache candidates to locally built images
2024-02-01 01:12:24 +01:00
Sebastiaan van Stijn
d838e68300
Merge pull request #47269 from thaJeztah/25.0_backport_bump_runc_binary_1.1.12
[25.0 backport] update runc binary to v1.1.12
2024-02-01 00:05:10 +01:00
Sebastiaan van Stijn
fa0d4159c7
Merge pull request #47280 from thaJeztah/25.0_backport_bump_containerd_binary_1.7.13
[25.0 backport] update containerd binary to v1.7.13
2024-01-31 23:53:46 +01:00
Sebastiaan van Stijn
06e22dce46
Merge pull request #47275 from vvoland/vendor-bk-0.12.5-25
[25.0 backport] vendor: github.com/moby/buildkit v0.12.5
2024-01-31 22:47:49 +01:00
Sebastiaan van Stijn
b73ee94289
Merge pull request #47274 from thaJeztah/25.0_backport_bump_runc_1.1.12
[25.0 backport] vendor: github.com/opencontainers/runc v1.1.12
2024-01-31 22:46:35 +01:00
Sebastiaan van Stijn
fd6a419ad5
update containerd binary to v1.7.13
Update the containerd binary that's used in CI

- full diff: https://github.com/containerd/containerd/compare/v1.7.12...v1.7.13
- release notes: https://github.com/containerd/containerd/releases/tag/v1.7.13

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 835cdcac95)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-31 22:03:56 +01:00
Paweł Gronowski
13ce91825f
vendor: github.com/moby/buildkit v0.12.5
full diff: https://github.com/moby/buildkit/compare/v0.12.4...v0.12.5

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-31 21:24:39 +01:00
Sebastiaan van Stijn
4b63c47c1e
vendor: github.com/opencontainers/runc v1.1.12
- release notes: https://github.com/opencontainers/runc/releases/tag/v1.1.12
- full diff: https://github.com/opencontainers/runc/compare/v1.1.11...v1.1.12

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b20dccba5e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-31 21:19:02 +01:00
Sebastiaan van Stijn
4edb71bb83
update runc binary to v1.1.12
Update the runc binary that's used in CI and for the static packages, which
includes a fix for [CVE-2024-21626].

- release notes: https://github.com/opencontainers/runc/releases/tag/v1.1.12
- full diff: https://github.com/opencontainers/runc/compare/v1.1.11...v1.1.12

[CVE-2024-21626]: https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 44bf407d4d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-31 21:06:10 +01:00
Sebastiaan van Stijn
667bc3f803
Merge pull request #47265 from vvoland/ci-fix-makeps1-templatefail-25
[25.0 backport] hack/make.ps1: Fix go list pattern
2024-01-31 21:01:57 +01:00
Paweł Gronowski
1b47bfac02 hack/make.ps1: Fix go list pattern
The double quotes inside a single-quoted string don't need to be
escaped.
It looks like different PowerShell versions treat this differently, and it
started failing unexpectedly without any changes on our side.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit ecb217cf69)
2024-01-31 19:55:37 +01:00
Cory Snider
f2d0d87c46 logger/journald: drop errDrainDone sentinel
errDrainDone is a sentinel error which is never supposed to escape the
package. Consequently, it needs to be filtered out of returns all over
the place, adding boilerplate. Forgetting to filter out these errors
would be a logic bug which the compiler would not help us catch. Replace
it with boolean multi-valued returns as they can't be accidentally
ignored or propagated.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 905477c8ae)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-01-29 11:28:34 -05:00
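
A minimal sketch of the pattern described above (names are illustrative, not the actual journald reader code): a boolean multi-valued return cannot be mistaken for a real error the way a sentinel can:

```go
package main

import (
	"errors"
	"fmt"
)

// Before (sketch): a package-private sentinel that every caller must remember
// to filter out before returning.
var errDrainDone = errors.New("drain done")

func drainWithSentinel() error {
	// ... read remaining journal entries ...
	return errDrainDone // callers must do: if err == errDrainDone { err = nil }
}

// After (sketch): a boolean result can't be accidentally propagated as an error.
func drain() (done bool, err error) {
	// ... read remaining journal entries ...
	return true, nil
}

func main() {
	if done, err := drain(); err != nil {
		fmt.Println("real failure:", err)
	} else if done {
		fmt.Println("drain complete")
	}
}
```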
Cory Snider
6ac38cdbeb logger/journald: wait no longer than the deadline
While it doesn't really matter if the reader waits for an extra arbitrary
period beyond the (already arbitrary) hardcoded timeout, capping the wait is
trivial and cheap to implement, and nice to have.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit d70fe8803c)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-01-29 11:28:34 -05:00
Cory Snider
d7bf237e29 logger/journald: use deadline for drain timeout
The journald reader uses a timer to set an upper bound on how long to
wait for the final log message of a stopped container. However, the
timer channel is only received from in non-blocking select statements!
There isn't enough benefit to using a timer to offset the cost of having
to manage the timer resource. Setting a deadline and comparing the
current time is just as effective, without having to manage the
lifecycle of any runtime resources.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit e94ec8068d)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-01-29 11:28:33 -05:00
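
Roughly, the deadline approach described above looks like the following sketch (a hypothetical helper, not the actual reader code):

```go
package main

import (
	"fmt"
	"time"
)

// waitForFinalMessage polls a condition until the deadline passes, instead of
// managing a time.Timer whose channel is never actually selected on.
func waitForFinalMessage(check func() bool, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return true
		}
		time.Sleep(10 * time.Millisecond) // arbitrary poll interval for the sketch
	}
	return false
}

func main() {
	start := time.Now()
	ok := waitForFinalMessage(func() bool { return time.Since(start) > 50*time.Millisecond }, 200*time.Millisecond)
	fmt.Println("message seen before deadline:", ok)
}
```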
Cory Snider
f41b342cbe l/journald: make tests compatible with systemd 255
Synthesize a boot ID for journal entries fed into
systemd-journal-remote, as required by systemd 255.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 71bfffdad1)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-01-29 11:28:33 -05:00
Cory Snider
f413ba6fdb daemon/logger/loggertest: expand log-follow tests
Following logs with a non-negative tail when the container log is empty
is broken on the journald driver when used with systemd 255. Add tests
which cover this edge case to our loggertest suite.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 931568032a)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-01-29 11:28:33 -05:00
Martin Braun
c2ef38f790 vendor swarmkit v2.0.0-20240125134710-dcda100a8261
Signed-off-by: Martin Braun <braun@neuroforge.de>
2024-01-25 16:28:59 +01:00
Paweł Gronowski
d5eebf9e19
builder/windows: Don't set ArgsEscaped for RUN cache probe
Previously this was only handled indirectly: the `compare` function didn't
check `ArgsEscaped`.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 96d461d27e)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 16:07:14 +01:00
Paweł Gronowski
f3f5327b48
image/cache: Check image platform
Make sure the cache candidate's platform matches the requested one.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 877ebbe038)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 16:07:13 +01:00
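
A sketch of the kind of platform check described above, assuming the containerd `platforms` matcher is used (the real cache comparison may differ in detail):

```go
package main

import (
	"fmt"

	"github.com/containerd/containerd/platforms"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// platformMatches reports whether a cache candidate's platform satisfies the
// requested platform. Illustrative only.
func platformMatches(requested, candidate ocispec.Platform) bool {
	return platforms.Only(requested).Match(platforms.Normalize(candidate))
}

func main() {
	req := ocispec.Platform{OS: "linux", Architecture: "arm64"}
	cand := ocispec.Platform{OS: "linux", Architecture: "amd64"}
	fmt.Println("candidate usable:", platformMatches(req, cand)) // false
}
```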
Paweł Gronowski
05a370f52f
image/cache: Restrict cache candidates to locally built images
Restrict cache candidates only to images that were built locally.
This doesn't affect builds using `--cache-from`.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 96ac22768a)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 16:07:11 +01:00
Paweł Gronowski
be7b60ef05
daemon/imageStore: Mark images built locally
Store an additional image property which makes it possible to distinguish
whether an image was built locally.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit c6156dc51b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 16:07:10 +01:00
Paweł Gronowski
6d05b9b65b
image/cache: Compare all config fields
Add checks for some image config fields that were missing.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 537348763f)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 16:07:08 +01:00
Paweł Gronowski
c01bbbddeb
pkg/ioutils: Make subsequent Close attempts noop
Turn subsequent `Close` calls into a no-op and produce a warning with an
optional stack trace (if debug mode is enabled).

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 585d74bad1)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 15:15:01 +01:00
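
A minimal sketch of the idempotent-close behaviour described above (names are illustrative, not the actual pkg/ioutils types):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// onceCloser wraps a close function and turns every call after the first
// into a warning no-op.
type onceCloser struct {
	closed atomic.Bool
	close  func() error
}

func (c *onceCloser) Close() error {
	if c.closed.Swap(true) {
		// Real code would log the warning, optionally with a stack trace in debug mode.
		fmt.Println("warning: Close called more than once")
		return nil
	}
	return c.close()
}

func main() {
	c := &onceCloser{close: func() error { fmt.Println("closing"); return nil }}
	_ = c.Close()
	_ = c.Close() // no-op with a warning
}
```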
Sebastiaan van Stijn
32635850ed
add more //go:build directives to prevent downgrading to go1.16 language
This is a follow-up to 2cf230951f, adding
more directives to adjust for some new code added since:

Before this patch:

    make -C ./internal/gocompat/
    GO111MODULE=off go generate .
    GO111MODULE=on go mod tidy
    GO111MODULE=on go test -v

    # github.com/docker/docker/internal/sliceutil
    internal/sliceutil/sliceutil.go:3:12: type parameter requires go1.18 or later (-lang was set to go1.16; check go.mod)
    internal/sliceutil/sliceutil.go:3:14: predeclared comparable requires go1.18 or later (-lang was set to go1.16; check go.mod)
    internal/sliceutil/sliceutil.go:4:19: invalid map key type T (missing comparable constraint)

    # github.com/docker/docker/libnetwork
    libnetwork/endpoint.go:252:17: implicit function instantiation requires go1.18 or later (-lang was set to go1.16; check go.mod)

    # github.com/docker/docker/daemon
    daemon/container_operations.go:682:9: implicit function instantiation requires go1.18 or later (-lang was set to go1.16; check go.mod)
    daemon/inspect.go:42:18: implicit function instantiation requires go1.18 or later (-lang was set to go1.16; check go.mod)

With this patch:

    make -C ./internal/gocompat/
    GO111MODULE=off go generate .
    GO111MODULE=on go mod tidy
    GO111MODULE=on go test -v
    === RUN   TestModuleCompatibllity
        main_test.go:321: all packages have the correct go version specified through //go:build
    --- PASS: TestModuleCompatibllity (0.00s)
    PASS
    ok  	gocompat	0.031s
    make: Leaving directory '/go/src/github.com/docker/docker/internal/gocompat'

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit bd4ff31775)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-25 14:57:05 +01:00
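
For illustration, this is the kind of per-file directive being added; a standalone sketch (not a file from the repo) showing how the constraint lets generic code compile even when the module's go directive or `-lang` flag is older:

```go
//go:build go1.18

// With Go 1.21+, the build constraint above also raises the language version
// for this one file, so the generics below compile regardless of the module's
// -lang setting.
package main

import "fmt"

func keys[K comparable, V any](m map[K]V) []K {
	out := make([]K, 0, len(m))
	for k := range m {
		out = append(out, k)
	}
	return out
}

func main() {
	fmt.Println(keys(map[string]int{"answer": 42}))
}
```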
Brian Goff
2cf1c762f8
De-flake TestSwarmClusterRotateUnlockKey... again... maybe?
This hopefully makes the test less flaky (or removes any flakiness that
would be caused by the test itself).

1. Adds tail of cluster daemon logs when there is a test failure so we
   can more easily see what may be happening
2. Scans the daemon logs to check if the key is rotated before
   restarting the daemon. This is a little hacky but a little better
   than assuming it is done after a hard-coded 3 seconds.
3. Cleans up the `node ls` check such that it uses a poll function

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit fbdc02534a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-24 10:06:10 +01:00
Sebastiaan van Stijn
71fa3ab079
Merge pull request #47196 from akerouanton/25.0-fix-multiple-rename-error
[25.0] daemon: rename: don't reload endpoint from datastore
2024-01-23 23:56:39 +01:00
Albin Kerouanton
5295e88ceb daemon: rename: don't reload endpoint from datastore
Commit 8b7af1d0f added some code to update the DNSNames of all
endpoints attached to a sandbox by loading a new instance of each
affected endpoints from the datastore through a call to
`Network.EndpointByID()`.

This method then calls `Network.getEndpointFromStore()`, that in
turn calls `store.GetObject()`, which then calls `cache.get()`,
which calls `o.CopyTo(kvObject)`. This effectively creates a fresh
new instance of an Endpoint. However, endpoints are already kept in
memory by Sandbox, meaning we now have two in-memory instances of
the same Endpoint.

As it turns out, libnetwork is built around the idea that no two objects
representing the same thing should live in memory at the same time, as that
breaks mutex locking and optimistic locking (both instances end up with a
drifting version-tracking ID -- dbIndex in libnetwork parlance).

In this specific case, the bug manifests as container rename failing when
applied a second time to a given container. An integration test is added to
make sure this won't happen again.

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit 80c44b4b2e)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-01-23 22:53:43 +01:00
Paweł Gronowski
6eef840b8a
Merge pull request #47191 from vvoland/volume-cifs-resolve-optout-25
[25.0 backport] volume/local: Make host resolution backwards compatible
2024-01-23 19:13:39 +01:00
Sebastiaan van Stijn
e2ab4718c8
Merge pull request #47182 from akerouanton/25.0-fix-aliases-on-default-bridge
[25.0] daemon: only add short cid to aliases for custom networks
2024-01-23 18:28:58 +01:00
Paweł Gronowski
3de920a0b1
volume/local: Fix cifs url containing spaces
Unescapes the URL to avoid passing a URL-encoded address to the kernel.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 250886741b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-23 17:46:23 +01:00
Paweł Gronowski
a445aa95e5
volume/local: Add tests for parsing nfs/cifs mounts
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit f4beb130b0)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-23 17:46:18 +01:00
Paweł Gronowski
cb77e48229
volume/local: Break early if addr was specified
I made a mistake in the last commit: after resolving the IP from the
passed `addr` for CIFS, it would still resolve the `device` part.

Apply only one name resolution.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit df43311f3d)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-23 17:45:09 +01:00
Albin Kerouanton
e8801fbe26 daemon: only add short cid to aliases for custom networks
Prior to 7a9b680a, the container short ID was added to the network
aliases only for custom networks. However, this logic wasn't preserved
in 6a2542d and now the cid is always added to the list of network
aliases.

This commit reintroduces the old logic.

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit 9f37672ca8)
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
2024-01-23 17:09:11 +01:00
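
A minimal sketch of the restored behaviour (names are illustrative, not daemon internals): the short container ID is appended to the endpoint aliases only for user-defined networks, never for the default bridge:

```go
package main

import "fmt"

// buildAliases returns the aliases for an endpoint; the 12-character short ID
// is only added on user-defined (custom) networks.
func buildAliases(userAliases []string, containerID string, isUserDefined bool) []string {
	aliases := append([]string{}, userAliases...)
	if isUserDefined {
		aliases = append(aliases, containerID[:12]) // short ID
	}
	return aliases
}

func main() {
	id := "9f37672ca8a1b2c3d4e5f60718293a4b5c6d7e8f"
	fmt.Println(buildAliases([]string{"web"}, id, true))  // custom network: short ID added
	fmt.Println(buildAliases([]string{"web"}, id, false)) // default bridge: aliases unchanged
}
```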
Sebastiaan van Stijn
613b6a12c1
Merge pull request #47192 from thaJeztah/25.0_backport_fix_gateway_ip
[25.0 backport] fix "host-gateway-ip" label not set for builder workers
2024-01-23 17:05:02 +01:00
Sebastiaan van Stijn
1b6738369f
Merge pull request #47189 from vvoland/c8d-prefer-default-platform-snapshot-25
[25.0 release] c8d/snapshot: Create any platform if not specified
2024-01-23 16:17:16 +01:00
Sebastiaan van Stijn
b8cc2e8c66
fix "host-gateway-ip" label not set for builder workers
Commit 21e50b89c9 added a label on the buildkit
worker to advertise the host-gateway-ip. This option can be either set by the
user in the daemon config, or otherwise defaults to the gateway-ip.

If no value is set by the user, discovery of the gateway-ip happens when
initializing the network-controller (`NewDaemon`, `daemon.restore()`).

However, d222bf097c changed how we handle the
daemon config. As a result, the `cli.Config` used when initializing the
builder only holds configuration information from the daemon config
(user-specified or defaults), but is not updated with information set
by `NewDaemon`.

This patch adds an accessor on the daemon to get the current daemon config.
An alternative could be to return the config by `NewDaemon` (which should
likely be a _copy_ of the config).

Before this patch:

    docker buildx inspect default
    Name:   default
    Driver: docker

    Nodes:
    Name:      default
    Endpoint:  default
    Status:    running
    Buildkit:  v0.12.4+3b6880d2a00f
    Platforms: linux/arm64, linux/amd64, linux/amd64/v2, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
    Labels:
     org.mobyproject.buildkit.worker.moby.host-gateway-ip: <nil>

After this patch:

    docker buildx inspect default
    Name:   default
    Driver: docker

    Nodes:
    Name:      default
    Endpoint:  default
    Status:    running
    Buildkit:  v0.12.4+3b6880d2a00f
    Platforms: linux/arm64, linux/amd64, linux/amd64/v2, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
    Labels:
     org.mobyproject.buildkit.worker.moby.host-gateway-ip: 172.18.0.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 00c9785e2e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-23 15:59:09 +01:00
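
A sketch of the accessor idea described above: the daemon exposes its current, possibly updated, configuration instead of callers holding on to the initial `cli.Config`. Types and field names are illustrative only:

```go
package main

import (
	"fmt"
	"sync"
)

type Config struct {
	HostGatewayIP string // discovered at network-controller init if not user-set
}

type Daemon struct {
	mu     sync.RWMutex
	config *Config
}

// CurrentConfig returns the configuration as it stands after initialization,
// including values discovered at runtime (e.g. the gateway IP).
func (d *Daemon) CurrentConfig() *Config {
	d.mu.RLock()
	defer d.mu.RUnlock()
	return d.config
}

func main() {
	d := &Daemon{config: &Config{HostGatewayIP: "172.18.0.1"}}
	fmt.Println("worker label host-gateway-ip:", d.CurrentConfig().HostGatewayIP)
}
```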
Paweł Gronowski
fcccfeb811
c8d/snapshot: Create any platform if not specified
With containerd snapshotters enabled, `docker run` currently fails when
creating a container from an image that doesn't match the default host
platform, unless an explicit `--platform` is given:

```
$ docker run image:amd64
Unable to find image 'asdf:amd64' locally
docker: Error response from daemon: pull access denied for asdf, repository does not exist or may require 'docker login'.
See 'docker run --help'.
```

This is confusing and the graphdriver behavior is much better here,
because it runs whatever platform the image has, but prints a warning:

```
$ docker run image:amd64
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
```

This commit changes the containerd snapshotter behavior to be the same
as the graphdriver's. This doesn't affect container creation when a platform
is specified explicitly.

```
$ docker run --rm --platform linux/arm64 asdf:amd64
Unable to find image 'asdf:amd64' locally
docker: Error response from daemon: pull access denied for asdf, repository does not exist or may require 'docker login'.
See 'docker run --help'.
```

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit e438db19d56bef55f9676af9db46cc04caa6330b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-23 15:05:54 +01:00
Sebastiaan van Stijn
f8eaa14a18
pkg/platforms: internalize in daemon/containerd
This matcher was only used internally in the containerd implementation of
the image store. Un-export it, and make it a local utility in that package
to prevent external use.

This package was introduced in 1616a09b61
(v24.0), and there are no known external consumers of this package, so there
should be no need to deprecate / alias the old location.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 94b4765363)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-23 15:05:50 +01:00
Paweł Gronowski
ac76925ff2
volume/local: Make host resolution backwards compatible
Commit 8ae94cafa5 added a DNS resolution
of the `device` part of the volume option.

The previous way to resolve the passed hostname was to use `addr`
option, which was handled by the same code path as the `nfs` mount type.

The issue is that `addr` is also an SMB module option handled by the kernel,
and passing a hostname as `addr` produces an invalid argument error.

To fix that, restore the old behavior to handle `addr` the same way as
before, and only perform the new DNS resolution of `device` if there is
no `addr` passed.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 0d51cf9db8)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-23 15:01:44 +01:00
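
A sketch of the restored precedence described above (illustrative, not the actual volume driver code): when an `addr` option is present it is the only name that gets resolved, and the `device` host is resolved via DNS only when `addr` is absent:

```go
package main

import (
	"fmt"
	"net"
)

// resolveMountHost picks which name to resolve for the mount: addr wins when
// given (as the nfs-style code path did before); otherwise the device host is
// resolved.
func resolveMountHost(device, addr string) (string, error) {
	host := device
	if addr != "" {
		host = addr // addr wins; device is left untouched
	}
	ips, err := net.LookupIP(host)
	if err != nil || len(ips) == 0 {
		return "", fmt.Errorf("could not resolve %q: %w", host, err)
	}
	return ips[0].String(), nil
}

func main() {
	ip, err := resolveMountHost("localhost", "")
	fmt.Println(ip, err)
}
```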
Akihiro Suda
c7a1d928c0
Merge pull request #47180 from thaJeztah/25.0_backport_update_compose
[25.0 backport] Dockerfile: update docker compose to v2.24.2
2024-01-23 22:08:15 +09:00
Sebastiaan van Stijn
2672baefd7
Merge pull request #47178 from thaJeztah/25.0_backport_richer_xattr_errors
[25.0 backport] pkg/system: return even richer xattr errors
2024-01-23 10:56:46 +01:00
Sebastiaan van Stijn
ff15b49b47
Dockerfile: update docker compose to v2.24.2
Update the version of compose used in CI to the latest version.

- full diff: docker/compose@v2.24.1...v2.24.2
- release notes: https://github.com/docker/compose/releases/tag/v2.24.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 05d952b246)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-23 10:25:49 +01:00
Cory Snider
c0573b133f
pkg/system: return even richer xattr errors
The names of extended attributes are not completely freeform. Attributes
are namespaced, and the kernel enforces (among other things) that only
attributes whose names are prefixed with a valid namespace are
permitted. The name of the attribute therefore needs to be known in
order to diagnose issues with lsetxattr. Include the name of the
extended attribute in the errors returned from Lsetxattr and Lgetxattr so
that both users and maintainers can more easily troubleshoot xattr-related
issues. Include the name in a separate rich-error field to provide code
handling the error enough information to determine whether or not the
failure can be ignored.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 43bf65c174)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-23 09:27:49 +01:00
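
A Linux-only sketch of the "richer" error shape described above: the attribute name is kept in a dedicated field so callers can decide whether the failure is ignorable, while the message stays human-readable (the real type lives in pkg/system; this is only an illustration):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// XattrError carries the operation and attribute name alongside the wrapped
// syscall error.
type XattrError struct {
	Op   string
	Attr string
	Err  error
}

func (e *XattrError) Error() string { return e.Op + " " + e.Attr + ": " + e.Err.Error() }
func (e *XattrError) Unwrap() error { return e.Err }

func lsetxattr(path, attr string, data []byte, flags int) error {
	if err := unix.Lsetxattr(path, attr, data, flags); err != nil {
		return &XattrError{Op: "lsetxattr", Attr: attr, Err: err}
	}
	return nil
}

func main() {
	err := lsetxattr("/tmp/nonexistent", "user.test", []byte("v"), 0)
	fmt.Println(err)
}
```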
Sebastiaan van Stijn
c7466c0b52
Merge pull request #47170 from thaJeztah/25.0_backport_remove_deprecated_api_docs
[25.0 backport] docs: remove documentation for deprecated API versions (v1.23 and before)
2024-01-22 21:45:09 +01:00
Sebastiaan van Stijn
dde33d0dfe
Merge pull request #47172 from thaJeztah/25.0_backport_fix-bad-http-code
[25.0 backport] daemon: return an InvalidParameter error when ep settings are wrong
2024-01-22 21:44:38 +01:00
Sebastiaan van Stijn
39fedb254b
Merge pull request #47169 from thaJeztah/25.0_backport_test_fixes
[25.0 backport] backport test-fixes
2024-01-22 21:20:37 +01:00
Sebastiaan van Stijn
f0f5fc974a
Merge pull request #47171 from thaJeztah/25.0_backport_47146-duplicate_mac_addrs
[25.0 backport] Remove generated MAC addresses on restart.
2024-01-22 20:53:52 +01:00
Albin Kerouanton
7c185a1e40
daemon: return an InvalidParameter error when ep settings are wrong
Since v25.0 (commit ff50388), we validate endpoint settings when
containers are created, instead of doing so when containers are started.
However, a container created prior to that release would still trigger a
validation error at start time. In such a case, the API returns a 500
status code because the Go error isn't wrapped into an InvalidParameter
error. This is now fixed.

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit fcc651972e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 20:50:10 +01:00
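
A sketch of the fix described above, assuming the `github.com/docker/docker/errdefs` helpers: wrapping the validation error so the API layer maps it to HTTP 400 instead of 500 (`validateEndpointSettings` stands in for the real validation):

```go
package main

import (
	"errors"
	"fmt"

	"github.com/docker/docker/errdefs"
)

func validateEndpointSettings() error {
	return errors.New("invalid endpoint settings: bad IPv4 address")
}

func startContainer() error {
	if err := validateEndpointSettings(); err != nil {
		// Before the fix the raw error was returned, producing a 500.
		return errdefs.InvalidParameter(err)
	}
	return nil
}

func main() {
	err := startContainer()
	fmt.Println(err, "-> invalid parameter:", errdefs.IsInvalidParameter(err))
}
```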
Rob Murray
2b036fb1da
Remove generated MAC addresses on restart.
The MAC address of a running container was stored in the same place as
the configured address for a container.

When starting a stopped container, a generated address was treated as a
configured address. If that generated address (based on an IPAM-assigned
IP address) had been reused, the containers ended up with duplicate MAC
addresses.

So, remember whether the MAC address was explicitly configured, and
clear it if not.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit cd53b7380c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:51:54 +01:00
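
A minimal sketch of the idea described above (field names are illustrative, not the actual container config): record whether the MAC address came from the user, and drop generated addresses on stop so a stale IPAM-derived MAC can never be replayed on the next start:

```go
package main

import "fmt"

type endpointConfig struct {
	MacAddress       string
	MacAddressIsUser bool // true only if explicitly configured by the user
}

// onContainerStop clears generated MAC addresses so the next start derives a
// fresh one from the newly assigned IP address.
func onContainerStop(cfg *endpointConfig) {
	if !cfg.MacAddressIsUser {
		cfg.MacAddress = ""
	}
}

func main() {
	cfg := &endpointConfig{MacAddress: "02:42:ac:11:00:02", MacAddressIsUser: false}
	onContainerStop(cfg)
	fmt.Printf("stored MAC after stop: %q\n", cfg.MacAddress)
}
```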
Sebastiaan van Stijn
1f24da70d8
docs/api: remove version matrices from swagger files
These tables linked to deprecated API versions, and an up-to-date version of
the matrix is already included at https://docs.docker.com/engine/api/#api-version-matrix

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 521123944a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:44:48 +01:00
Sebastiaan van Stijn
358fecb566
docs: remove documentation for deprecated API versions < v1.23
These versions are deprecated in v25.0.0 and disabled by default;
see 08e4e88482.

Users that need to refer to documentation for older API versions,
can use archived versions of the documentation on GitHub:

- API v1.23 and before: https://github.com/moby/moby/tree/v25.0.0/docs/api
- API v1.17 and before: https://github.com/moby/moby/tree/v1.9.1/docs/reference/api

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d54be2ee6d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:44:48 +01:00
Sebastiaan van Stijn
f030b25770
Dockerfile: update docker compose to v2.24.1
Update the version of compose used in CI to the latest version.

- full diff: https://github.com/docker/compose/compare/v2.24.0...v2.24.1
- release notes: https://github.com/docker/compose/releases/tag/v2.24.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 307fe9c716)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:41:15 +01:00
Sebastiaan van Stijn
e07aed0f77
Dockerfile: update dev-shell version of the cli to v25.0.0
Update the docker CLI that's available for debugging in the dev-shell
to the v25 release.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit dfced4b557)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:41:15 +01:00
Sebastiaan van Stijn
cdf3611cff
integration-cli: TestInspectAPIMultipleNetworks: use current version
This test was added in f301c5765a to test
inspect output for API > v1.21; however, it was pinned to API v1.21,
which is now deprecated.

Remove the fixed version, as the intent was to test "current" API versions
(API v1.21 and up).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a0466ca8e1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:40:57 +01:00
Sebastiaan van Stijn
05267e9e8c
integration-cli: TestInspectAPIBridgeNetworkSettings121: use current version
This test was added in f301c5765a to test
inspect output for API > v1.21; however, it was pinned to API v1.21,
which is now deprecated.

Remove the fixed version, as the intent was to test "current" API versions
(API v1.21 and up).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 13a384a6fa)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:40:56 +01:00
Sebastiaan van Stijn
e5edf62bca
integration-cli: TestPutContainerArchiveErrSymlinkInVolumeToReadOnlyRootfs: use current API
This test was added in 75f6929b44, but pinned
to the API version that was current at the time (v1.20), which is now
deprecated.

Update the test to use the current API version.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 52e3fff828)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-22 19:40:42 +01:00
Sebastiaan van Stijn
e14d121d49
Merge pull request #47163 from vvoland/25-fix-swarm-startinterval-25
[25.0 backport] daemon/cluster/executer: Add missing `StartInterval`
2024-01-22 18:39:43 +01:00
Paweł Gronowski
e0acf1cd70
daemon/cluster/executer: Add missing StartInterval
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 6100190e5c)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-22 15:23:27 +01:00
Sebastiaan van Stijn
c2847b2eb2
Merge pull request #47136 from thaJeztah/25.0_backport_git-url-regex
[25.0 backport] Fix isGitURL regular expression
2024-01-22 15:14:04 +01:00
Sebastiaan van Stijn
0894f7fe69
Merge pull request #47161 from vvoland/save-fix-oci-diffids-25
[25.0 backport] image/save: Fix layers order in OCI manifest
2024-01-22 15:05:20 +01:00
Sebastiaan van Stijn
d25aa32c21
Merge pull request #47135 from thaJeztah/25.0_backport_allow-container-ip-outside-subpool
[25.0 backport] libnetwork: loosen container IPAM validation
2024-01-22 14:05:17 +01:00
Paweł Gronowski
1e335cfa74
image/save: Fix layers order in OCI manifest
Order the layers in the OCI manifest by their actual apply order. This is
required by the OCI image spec.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 17fd6562bf)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-22 13:53:02 +01:00
Paweł Gronowski
4d287e9267
image/save: Change layers type to DiffID
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 4979605212)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-22 13:52:57 +01:00
David Dooling
0240f5675b
Fix isGitURL regular expression
Escape the period (.) so the regular expression does not match any character before "git".

Signed-off-by: David Dooling <david.dooling@docker.com>
(cherry picked from commit 768146b1b0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-20 12:19:07 +01:00
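
A sketch of why the escape matters (the patterns below are illustrative, not the exact expression in the builder): an unescaped `.` matches any character, so a hostname like `githubXcom` would also pass:

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	gitURLUnescaped = regexp.MustCompile(`^https?://(www\.)?github.com/`)  // "." matches any character
	gitURLEscaped   = regexp.MustCompile(`^https?://(www\.)?github\.com/`) // only a literal dot
)

func main() {
	u := "https://githubXcom/moby/moby.git"
	fmt.Println(gitURLUnescaped.MatchString(u)) // true: false positive
	fmt.Println(gitURLEscaped.MatchString(u))   // false
}
```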
Cory Snider
13964248f1
libnetwork: loosen container IPAM validation
Permit container network attachments to set any static IP address within
the network's IPAM master pool, including when a subpool is configured.
Users have come to depend on being able to statically assign container
IP addresses which are guaranteed not to collide with automatically-
assigned container addresses.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 058b30023f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-20 11:33:04 +01:00
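
A sketch of the loosened validation described above (illustrative only; libnetwork's real check differs): a requested static IP is accepted if it falls anywhere inside the IPAM master pool, not only inside the configured sub-pool:

```go
package main

import (
	"fmt"
	"net/netip"
)

// validateStaticIP reports whether the requested static IP lies within the
// network's IPAM master pool.
func validateStaticIP(masterPool, ip string) (bool, error) {
	pool, err := netip.ParsePrefix(masterPool)
	if err != nil {
		return false, err
	}
	addr, err := netip.ParseAddr(ip)
	if err != nil {
		return false, err
	}
	return pool.Contains(addr), nil
}

func main() {
	// Outside a hypothetical 172.20.0.0/24 sub-pool, but inside the master pool.
	ok, _ := validateStaticIP("172.20.0.0/16", "172.20.200.5")
	fmt.Println("static IP accepted:", ok)
}
```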
1743 changed files with 35660 additions and 131941 deletions


@ -3,19 +3,11 @@
"build": {
"context": "..",
"dockerfile": "../Dockerfile",
"target": "devcontainer"
"target": "dev"
},
"workspaceFolder": "/go/src/github.com/docker/docker",
"workspaceMount": "source=${localWorkspaceFolder},target=/go/src/github.com/docker/docker,type=bind,consistency=cached",
"remoteUser": "root",
"runArgs": ["--privileged"],
"customizations": {
"vscode": {
"extensions": [
"golang.go"
]
}
}
"runArgs": ["--privileged"]
}


@ -70,7 +70,6 @@ jobs:
with:
name: test-reports-unit-${{ inputs.storage }}
path: /tmp/reports/*
retention-days: 1
unit-report:
runs-on: ubuntu-20.04
@ -151,7 +150,6 @@ jobs:
with:
name: test-reports-docker-py-${{ inputs.storage }}
path: /tmp/reports/*
retention-days: 1
integration-flaky:
runs-on: ubuntu-20.04
@ -273,7 +271,6 @@ jobs:
with:
name: test-reports-integration-${{ inputs.storage }}-${{ env.TESTREPORTS_NAME }}
path: /tmp/reports/*
retention-days: 1
integration-report:
runs-on: ubuntu-20.04
@ -413,7 +410,6 @@ jobs:
with:
name: test-reports-integration-cli-${{ inputs.storage }}-${{ env.TESTREPORTS_NAME }}
path: /tmp/reports/*
retention-days: 1
integration-cli-report:
runs-on: ubuntu-20.04


@ -190,7 +190,6 @@ jobs:
with:
name: ${{ inputs.os }}-${{ inputs.storage }}-unit-reports
path: ${{ env.GOPATH }}\src\github.com\docker\docker\bundles\*
retention-days: 1
unit-test-report:
runs-on: ubuntu-latest
@ -509,7 +508,6 @@ jobs:
with:
name: ${{ inputs.os }}-${{ inputs.storage }}-integration-reports-${{ matrix.runtime }}-${{ env.TESTREPORTS_NAME }}
path: ${{ env.GOPATH }}\src\github.com\docker\docker\bundles\*
retention-days: 1
integration-test-report:
runs-on: ubuntu-latest


@ -50,9 +50,6 @@ jobs:
timeout-minutes: 120
needs:
- build
env:
TEST_IMAGE_BUILD: "0"
TEST_IMAGE_ID: "buildkit-tests"
strategy:
fail-fast: false
matrix:
@ -118,14 +115,6 @@ jobs:
sudo service docker restart
docker version
docker info
-
name: Build test image
uses: docker/bake-action@v4
with:
workdir: ./buildkit
targets: integration-tests
set: |
*.output=type=docker,name=${{ env.TEST_IMAGE_ID }}
-
name: Test
run: |


@ -51,6 +51,14 @@ jobs:
name: Check artifacts
run: |
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
-
name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.target }}
path: ${{ env.DESTDIR }}
if-no-files-found: error
retention-days: 7
prepare-cross:
runs-on: ubuntu-latest
@ -111,3 +119,11 @@ jobs:
name: Check artifacts
run: |
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
-
name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: cross-${{ env.PLATFORM_PAIR }}
path: ${{ env.DESTDIR }}
if-no-files-found: error
retention-days: 7


@ -14,8 +14,6 @@ on:
env:
GO_VERSION: "1.21.9"
GIT_PAGER: "cat"
PAGER: "cat"
jobs:
validate-dco:


@ -11,7 +11,7 @@ jobs:
- name: Missing `area/` label
if: contains(join(github.event.pull_request.labels.*.name, ','), 'impact/') && !contains(join(github.event.pull_request.labels.*.name, ','), 'area/')
run: |
echo "::error::Every PR with an 'impact/*' label should also have an 'area/*' label"
echo "Every PR with an \`impact/*\` label should also have an \`area/*\` label"
exit 1
- name: OK
run: exit 0
@ -32,31 +32,15 @@ jobs:
desc=$(echo "$block" | awk NF)
if [ -z "$desc" ]; then
echo "::error::Changelog section is empty. Please provide a description for the changelog."
echo "Changelog section is empty. Please provide a description for the changelog."
exit 1
fi
len=$(echo -n "$desc" | wc -c)
if [[ $len -le 6 ]]; then
echo "::error::Description looks too short: $desc"
echo "Description looks too short: $desc"
exit 1
fi
echo "This PR will be included in the release notes with the following note:"
echo "$desc"
check-pr-branch:
runs-on: ubuntu-20.04
env:
PR_TITLE: ${{ github.event.pull_request.title }}
steps:
# Backports or PR that target a release branch directly should mention the target branch in the title, for example:
# [X.Y backport] Some change that needs backporting to X.Y
# [X.Y] Change directly targeting the X.Y branch
- name: Get branch from PR title
id: title_branch
run: echo "$PR_TITLE" | sed -n 's/^\[\([0-9]*\.[0-9]*\)[^]]*\].*/branch=\1/p' >> $GITHUB_OUTPUT
- name: Check release branch
if: github.event.pull_request.base.ref != steps.title_branch.outputs.branch && !(github.event.pull_request.base.ref == 'master' && steps.title_branch.outputs.branch == '')
run: echo "::error::PR title suggests targetting the ${{ steps.title_branch.outputs.branch }} branch, but is opened against ${{ github.event.pull_request.base.ref }}" && exit 1


@ -1,7 +1,6 @@
linters:
enable:
- depguard
- dupword # Checks for duplicate words in the source code.
- goimports
- gosec
- gosimple
@ -26,11 +25,6 @@ linters:
- docs
linters-settings:
dupword:
ignore:
- "true" # some tests use this as expected output
- "false" # some tests use this as expected output
- "root" # for tests using "ls" output with files owned by "root:root"
importas:
# Do not allow unaliased imports of aliased packages.
no-unaliased: true
@ -51,12 +45,6 @@ linters-settings:
deny:
- pkg: io/ioutil
desc: The io/ioutil package has been deprecated, see https://go.dev/doc/go1.16#ioutil
- pkg: "github.com/stretchr/testify/assert"
desc: Use "gotest.tools/v3/assert" instead
- pkg: "github.com/stretchr/testify/require"
desc: Use "gotest.tools/v3/assert" instead
- pkg: "github.com/stretchr/testify/suite"
desc: Do not use
revive:
rules:
# FIXME make sure all packages have a description. Currently, there's many packages without.


@ -173,8 +173,6 @@ Dattatraya Kumbhar <dattatraya.kumbhar@gslab.com>
Dave Goodchild <buddhamagnet@gmail.com>
Dave Henderson <dhenderson@gmail.com> <Dave.Henderson@ca.ibm.com>
Dave Tucker <dt@docker.com> <dave@dtucker.co.uk>
David Dooling <dooling@gmail.com>
David Dooling <dooling@gmail.com> <david.dooling@docker.com>
David M. Karr <davidmichaelkarr@gmail.com>
David Sheets <dsheets@docker.com> <sheets@alum.mit.edu>
David Sissitka <me@dsissitka.com>
@ -221,8 +219,6 @@ Felix Hupfeld <felix@quobyte.com> <quofelix@users.noreply.github.com>
Felix Ruess <felix.ruess@gmail.com> <felix.ruess@roboception.de>
Feng Yan <fy2462@gmail.com>
Fengtu Wang <wangfengtu@huawei.com> <wangfengtu@huawei.com>
Filipe Pina <hzlu1ot0@duck.com>
Filipe Pina <hzlu1ot0@duck.com> <636320+fopina@users.noreply.github.com>
Francisco Carriedo <fcarriedo@gmail.com>
Frank Rosquin <frank.rosquin+github@gmail.com> <frank.rosquin@gmail.com>
Frank Yang <yyb196@gmail.com>
@ -274,7 +270,6 @@ Hollie Teal <hollie@docker.com> <hollie.teal@docker.com>
Hollie Teal <hollie@docker.com> <hollietealok@users.noreply.github.com>
hsinko <21551195@zju.edu.cn> <hsinko@users.noreply.github.com>
Hu Keping <hukeping@huawei.com>
Huajin Tong <fliterdashen@gmail.com>
Hui Kang <hkang.sunysb@gmail.com>
Hui Kang <hkang.sunysb@gmail.com> <kangh@us.ibm.com>
Huu Nguyen <huu@prismskylabs.com> <whoshuu@gmail.com>
@ -568,7 +563,6 @@ Sebastiaan van Stijn <github@gone.nl> <sebastiaan@ws-key-sebas3.dpi1.dpi>
Sebastiaan van Stijn <github@gone.nl> <thaJeztah@users.noreply.github.com>
Sebastian Thomschke <sebthom@users.noreply.github.com>
Seongyeol Lim <seongyeol37@gmail.com>
Serhii Nakon <serhii.n@thescimus.com>
Shaun Kaasten <shaunk@gmail.com>
Shawn Landden <shawn@churchofgit.com> <shawnlandden@gmail.com>
Shengbo Song <thomassong@tencent.com>


@ -669,7 +669,6 @@ Erik Hollensbe <github@hollensbe.org>
Erik Inge Bolsø <knan@redpill-linpro.com>
Erik Kristensen <erik@erikkristensen.com>
Erik Sipsma <erik@sipsma.dev>
Erik Sjölund <erik.sjolund@gmail.com>
Erik St. Martin <alakriti@gmail.com>
Erik Weathers <erikdw@gmail.com>
Erno Hopearuoho <erno.hopearuoho@gmail.com>
@ -732,7 +731,6 @@ Feroz Salam <feroz.salam@sourcegraph.com>
Ferran Rodenas <frodenas@gmail.com>
Filipe Brandenburger <filbranden@google.com>
Filipe Oliveira <contato@fmoliveira.com.br>
Filipe Pina <hzlu1ot0@duck.com>
Flavio Castelli <fcastelli@suse.com>
Flavio Crisciani <flavio.crisciani@docker.com>
Florian <FWirtz@users.noreply.github.com>
@ -877,8 +875,6 @@ Hsing-Yu (David) Chen <davidhsingyuchen@gmail.com>
hsinko <21551195@zju.edu.cn>
Hu Keping <hukeping@huawei.com>
Hu Tao <hutao@cn.fujitsu.com>
Huajin Tong <fliterdashen@gmail.com>
huang-jl <1046678590@qq.com>
HuanHuan Ye <logindaveye@gmail.com>
Huanzhong Zhang <zhanghuanzhong90@gmail.com>
Huayi Zhang <irachex@gmail.com>
@ -973,7 +969,6 @@ Jannick Fahlbusch <git@jf-projects.de>
Januar Wayong <januar@gmail.com>
Jared Biel <jared.biel@bolderthinking.com>
Jared Hocutt <jaredh@netapp.com>
Jaroslav Jindrak <dzejrou@gmail.com>
Jaroslaw Zabiello <hipertracker@gmail.com>
Jasmine Hegman <jasmine@jhegman.com>
Jason A. Donenfeld <Jason@zx2c4.com>
@ -1017,7 +1012,6 @@ Jeffrey Bolle <jeffreybolle@gmail.com>
Jeffrey Morgan <jmorganca@gmail.com>
Jeffrey van Gogh <jvg@google.com>
Jenny Gebske <jennifer@gebske.de>
Jeongseok Kang <piono623@naver.com>
Jeremy Chambers <jeremy@thehipbot.com>
Jeremy Grosser <jeremy@synack.me>
Jeremy Huntwork <jhuntwork@lightcubesolutions.com>
@ -1035,7 +1029,6 @@ Jezeniel Zapanta <jpzapanta22@gmail.com>
Jhon Honce <jhonce@redhat.com>
Ji.Zhilong <zhilongji@gmail.com>
Jian Liao <jliao@alauda.io>
Jian Zeng <anonymousknight96@gmail.com>
Jian Zhang <zhangjian.fnst@cn.fujitsu.com>
Jiang Jinyang <jjyruby@gmail.com>
Jianyong Wu <jianyong.wu@arm.com>
@ -1974,7 +1967,6 @@ Sergey Evstifeev <sergey.evstifeev@gmail.com>
Sergii Kabashniuk <skabashnyuk@codenvy.com>
Sergio Lopez <slp@redhat.com>
Serhat Gülçiçek <serhat25@gmail.com>
Serhii Nakon <serhii.n@thescimus.com>
SeungUkLee <lsy931106@gmail.com>
Sevki Hasirci <s@sevki.org>
Shane Canon <scanon@lbl.gov>
@ -2261,7 +2253,6 @@ VladimirAus <v_roudakov@yahoo.com>
Vladislav Kolesnikov <vkolesnikov@beget.ru>
Vlastimil Zeman <vlastimil.zeman@diffblue.com>
Vojtech Vitek (V-Teq) <vvitek@redhat.com>
voloder <110066198+voloder@users.noreply.github.com>
Walter Leibbrandt <github@wrl.co.za>
Walter Stanish <walter@pratyeka.org>
Wang Chao <chao.wang@ucloud.cn>


@ -101,7 +101,7 @@ the contributors guide.
<td>
<p>
Register for the Docker Community Slack at
<a href="https://dockr.ly/comm-slack" target="_blank">https://dockr.ly/comm-slack</a>.
<a href="https://dockr.ly/slack" target="_blank">https://dockr.ly/slack</a>.
We use the #moby-project channel for general discussion, and there are separate channels for other Moby projects such as #containerd.
</p>
</td>


@ -1,19 +1,19 @@
# syntax=docker/dockerfile:1.7
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.21.9
ARG BASE_DEBIAN_DISTRO="bookworm"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
ARG XX_VERSION=1.4.0
ARG XX_VERSION=1.2.1
ARG VPNKIT_VERSION=0.5.0
ARG DOCKERCLI_REPOSITORY="https://github.com/docker/cli.git"
ARG DOCKERCLI_VERSION=v26.0.0
ARG DOCKERCLI_VERSION=v25.0.2
# cli version used for integration-cli tests
ARG DOCKERCLI_INTEGRATION_REPOSITORY="https://github.com/docker/cli.git"
ARG DOCKERCLI_INTEGRATION_VERSION=v17.06.2-ce
ARG BUILDX_VERSION=0.13.1
ARG COMPOSE_VERSION=v2.25.0
ARG BUILDX_VERSION=0.12.1
ARG COMPOSE_VERSION=v2.24.5
ARG SYSTEMD="false"
ARG DOCKER_STATIC=1
@ -24,12 +24,6 @@ ARG DOCKER_STATIC=1
# specified here should match a current release.
ARG REGISTRY_VERSION=2.8.3
# delve is currently only supported on linux/amd64 and linux/arm64;
# https://github.com/go-delve/delve/blob/v1.8.1/pkg/proc/native/support_sentinel.go#L1-L6
ARG DELVE_SUPPORTED=${TARGETPLATFORM#linux/amd64} DELVE_SUPPORTED=${DELVE_SUPPORTED#linux/arm64}
ARG DELVE_SUPPORTED=${DELVE_SUPPORTED:+"unsupported"}
ARG DELVE_SUPPORTED=${DELVE_SUPPORTED:-"supported"}
# cross compilation helper
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
@ -150,7 +144,7 @@ RUN git init . && git remote add origin "https://github.com/go-delve/delve.git"
ARG DELVE_VERSION=v1.21.1
RUN git fetch -q --depth 1 origin "${DELVE_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS delve-supported
FROM base AS delve-build
WORKDIR /usr/src/delve
ARG TARGETPLATFORM
RUN --mount=from=delve-src,src=/usr/src/delve,rw \
@ -161,8 +155,16 @@ RUN --mount=from=delve-src,src=/usr/src/delve,rw \
xx-verify /build/dlv
EOT
FROM binary-dummy AS delve-unsupported
FROM delve-${DELVE_SUPPORTED} AS delve
# delve is currently only supported on linux/amd64 and linux/arm64;
# https://github.com/go-delve/delve/blob/v1.8.1/pkg/proc/native/support_sentinel.go#L1-L6
FROM binary-dummy AS delve-windows
FROM binary-dummy AS delve-linux-arm
FROM binary-dummy AS delve-linux-ppc64le
FROM binary-dummy AS delve-linux-s390x
FROM delve-build AS delve-linux-amd64
FROM delve-build AS delve-linux-arm64
FROM delve-linux-${TARGETARCH} AS delve-linux
FROM delve-${TARGETOS} AS delve
FROM base AS tomll
# GOTOML_VERSION specifies the version of the tomll binary to build and install
@ -196,7 +198,7 @@ RUN git init . && git remote add origin "https://github.com/containerd/container
# When updating the binary version you may also need to update the vendor
# version to pick up bug fixes or new APIs, however, usually the Go packages
# are built from a commit from the master branch.
ARG CONTAINERD_VERSION=v1.7.15
ARG CONTAINERD_VERSION=v1.7.13
RUN git fetch -q --depth 1 origin "${CONTAINERD_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS containerd-build
@ -243,18 +245,12 @@ RUN --mount=type=cache,target=/root/.cache/go-build \
&& /build/gotestsum --version
FROM base AS shfmt
ARG SHFMT_VERSION=v3.8.0
ARG SHFMT_VERSION=v3.6.0
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "mvdan.cc/sh/v3/cmd/shfmt@${SHFMT_VERSION}" \
&& /build/shfmt --version
FROM base AS gopls
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "golang.org/x/tools/gopls@latest" \
&& /build/gopls version
FROM base AS dockercli
WORKDIR /go/src/github.com/docker/cli
ARG DOCKERCLI_REPOSITORY
@ -659,11 +655,6 @@ RUN <<EOT
docker-proxy --version
EOT
# devcontainer is a stage used by .devcontainer/devcontainer.json
FROM dev-base AS devcontainer
COPY --link . .
COPY --link --from=gopls /build/ /usr/local/bin/
# usage:
# > make shell
# > SYSTEMD=true make shell


@ -164,7 +164,7 @@ SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPref
ARG GO_VERSION=1.21.9
ARG GOTESTSUM_VERSION=v1.8.2
ARG GOWINRES_VERSION=v0.3.1
ARG CONTAINERD_VERSION=v1.7.15
ARG CONTAINERD_VERSION=v1.7.13
# Environment variable notes:
# - GO_VERSION must be consistent with 'Dockerfile' used by Linux.


@ -16,9 +16,6 @@ export VALIDATE_REPO
export VALIDATE_BRANCH
export VALIDATE_ORIGIN_BRANCH
export PAGER
export GIT_PAGER
# env vars passed through directly to Docker's build scripts
# to allow things like `make KEEPBUNDLE=1 binary` easily
# `project/PACKAGERS.md` have some limited documentation of some of these
@ -80,8 +77,6 @@ DOCKER_ENVS := \
-e DEFAULT_PRODUCT_LICENSE \
-e PRODUCT \
-e PACKAGER_NAME \
-e PAGER \
-e GIT_PAGER \
-e OTEL_EXPORTER_OTLP_ENDPOINT \
-e OTEL_EXPORTER_OTLP_PROTOCOL \
-e OTEL_SERVICE_NAME


@ -2,17 +2,8 @@ package api // import "github.com/docker/docker/api"
// Common constants for daemon and client.
const (
// DefaultVersion of the current REST API.
DefaultVersion = "1.45"
// MinSupportedAPIVersion is the minimum API version that can be supported
// by the API server, specified as "major.minor". Note that the daemon
// may be configured with a different minimum API version, as returned
// in [github.com/docker/docker/api/types.Version.MinAPIVersion].
//
// API requests for API versions lower than the configured version produce
// an error.
MinSupportedAPIVersion = "1.24"
// DefaultVersion of Current REST API
DefaultVersion = "1.44"
// NoBaseImageSpecifier is the symbol used by the FROM
// command to specify that no base image is to be used.


@ -0,0 +1,34 @@
package server
import (
"net/http"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/gorilla/mux"
"google.golang.org/grpc/status"
)
// makeErrorHandler makes an HTTP handler that decodes a Docker error and
// returns it in the response.
func makeErrorHandler(err error) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
statusCode := httpstatus.FromError(err)
vars := mux.Vars(r)
if apiVersionSupportsJSONErrors(vars["version"]) {
response := &types.ErrorResponse{
Message: err.Error(),
}
_ = httputils.WriteJSON(w, statusCode, response)
} else {
http.Error(w, status.Convert(err).Message(), statusCode)
}
}
}
func apiVersionSupportsJSONErrors(version string) bool {
const firstAPIVersionWithJSONErrors = "1.23"
return version == "" || versions.GreaterThan(version, firstAPIVersionWithJSONErrors)
}


@ -12,4 +12,5 @@ import (
// container configuration.
type ContainerDecoder interface {
DecodeConfig(src io.Reader) (*container.Config, *container.HostConfig, *network.NetworkingConfig, error)
DecodeHostConfig(src io.Reader) (*container.HostConfig, error)
}


@ -6,7 +6,6 @@ import (
"net/http"
"runtime"
"github.com/docker/docker/api"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/versions"
)
@ -15,39 +14,18 @@ import (
// validates the client and server versions.
type VersionMiddleware struct {
serverVersion string
// defaultAPIVersion is the default API version provided by the API server,
// specified as "major.minor". It is usually configured to the latest API
// version [github.com/docker/docker/api.DefaultVersion].
//
// API requests for API versions greater than this version are rejected by
// the server and produce a [versionUnsupportedError].
defaultAPIVersion string
// minAPIVersion is the minimum API version provided by the API server,
// specified as "major.minor".
//
// API requests for API versions lower than this version are rejected by
// the server and produce a [versionUnsupportedError].
minAPIVersion string
defaultVersion string
minVersion string
}
// NewVersionMiddleware creates a VersionMiddleware with the given versions.
func NewVersionMiddleware(serverVersion, defaultAPIVersion, minAPIVersion string) (*VersionMiddleware, error) {
if versions.LessThan(defaultAPIVersion, api.MinSupportedAPIVersion) || versions.GreaterThan(defaultAPIVersion, api.DefaultVersion) {
return nil, fmt.Errorf("invalid default API version (%s): must be between %s and %s", defaultAPIVersion, api.MinSupportedAPIVersion, api.DefaultVersion)
// NewVersionMiddleware creates a new VersionMiddleware
// with the default versions.
func NewVersionMiddleware(s, d, m string) VersionMiddleware {
return VersionMiddleware{
serverVersion: s,
defaultVersion: d,
minVersion: m,
}
if versions.LessThan(minAPIVersion, api.MinSupportedAPIVersion) || versions.GreaterThan(minAPIVersion, api.DefaultVersion) {
return nil, fmt.Errorf("invalid minimum API version (%s): must be between %s and %s", minAPIVersion, api.MinSupportedAPIVersion, api.DefaultVersion)
}
if versions.GreaterThan(minAPIVersion, defaultAPIVersion) {
return nil, fmt.Errorf("invalid API version: the minimum API version (%s) is higher than the default version (%s)", minAPIVersion, defaultAPIVersion)
}
return &VersionMiddleware{
serverVersion: serverVersion,
defaultAPIVersion: defaultAPIVersion,
minAPIVersion: minAPIVersion,
}, nil
}
type versionUnsupportedError struct {
@ -67,18 +45,18 @@ func (e versionUnsupportedError) InvalidParameter() {}
func (v VersionMiddleware) WrapHandler(handler func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
w.Header().Set("Server", fmt.Sprintf("Docker/%s (%s)", v.serverVersion, runtime.GOOS))
w.Header().Set("API-Version", v.defaultAPIVersion)
w.Header().Set("API-Version", v.defaultVersion)
w.Header().Set("OSType", runtime.GOOS)
apiVersion := vars["version"]
if apiVersion == "" {
apiVersion = v.defaultAPIVersion
apiVersion = v.defaultVersion
}
if versions.LessThan(apiVersion, v.minAPIVersion) {
return versionUnsupportedError{version: apiVersion, minVersion: v.minAPIVersion}
if versions.LessThan(apiVersion, v.minVersion) {
return versionUnsupportedError{version: apiVersion, minVersion: v.minVersion}
}
if versions.GreaterThan(apiVersion, v.defaultAPIVersion) {
return versionUnsupportedError{version: apiVersion, maxVersion: v.defaultAPIVersion}
if versions.GreaterThan(apiVersion, v.defaultVersion) {
return versionUnsupportedError{version: apiVersion, maxVersion: v.defaultVersion}
}
ctx = context.WithValue(ctx, httputils.APIVersionKey{}, apiVersion)
return handler(ctx, w, r, vars)


@ -2,82 +2,27 @@ package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
"runtime"
"testing"
"github.com/docker/docker/api"
"github.com/docker/docker/api/server/httputils"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
func TestNewVersionMiddlewareValidation(t *testing.T) {
tests := []struct {
doc, defaultVersion, minVersion, expectedErr string
}{
{
doc: "defaults",
defaultVersion: api.DefaultVersion,
minVersion: api.MinSupportedAPIVersion,
},
{
doc: "invalid default lower than min",
defaultVersion: api.MinSupportedAPIVersion,
minVersion: api.DefaultVersion,
expectedErr: fmt.Sprintf("invalid API version: the minimum API version (%s) is higher than the default version (%s)", api.DefaultVersion, api.MinSupportedAPIVersion),
},
{
doc: "invalid default too low",
defaultVersion: "0.1",
minVersion: api.MinSupportedAPIVersion,
expectedErr: fmt.Sprintf("invalid default API version (0.1): must be between %s and %s", api.MinSupportedAPIVersion, api.DefaultVersion),
},
{
doc: "invalid default too high",
defaultVersion: "9999.9999",
minVersion: api.DefaultVersion,
expectedErr: fmt.Sprintf("invalid default API version (9999.9999): must be between %s and %s", api.MinSupportedAPIVersion, api.DefaultVersion),
},
{
doc: "invalid minimum too low",
defaultVersion: api.MinSupportedAPIVersion,
minVersion: "0.1",
expectedErr: fmt.Sprintf("invalid minimum API version (0.1): must be between %s and %s", api.MinSupportedAPIVersion, api.DefaultVersion),
},
{
doc: "invalid minimum too high",
defaultVersion: api.DefaultVersion,
minVersion: "9999.9999",
expectedErr: fmt.Sprintf("invalid minimum API version (9999.9999): must be between %s and %s", api.MinSupportedAPIVersion, api.DefaultVersion),
},
}
for _, tc := range tests {
tc := tc
t.Run(tc.doc, func(t *testing.T) {
_, err := NewVersionMiddleware("1.2.3", tc.defaultVersion, tc.minVersion)
if tc.expectedErr == "" {
assert.Check(t, err)
} else {
assert.Check(t, is.Error(err, tc.expectedErr))
}
})
}
}
func TestVersionMiddlewareVersion(t *testing.T) {
expectedVersion := "<not set>"
defaultVersion := "1.10.0"
minVersion := "1.2.0"
expectedVersion := defaultVersion
handler := func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v := httputils.VersionFromContext(ctx)
assert.Check(t, is.Equal(expectedVersion, v))
return nil
}
m, err := NewVersionMiddleware("1.2.3", api.DefaultVersion, api.MinSupportedAPIVersion)
assert.NilError(t, err)
m := NewVersionMiddleware(defaultVersion, defaultVersion, minVersion)
h := m.WrapHandler(handler)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
@ -90,19 +35,19 @@ func TestVersionMiddlewareVersion(t *testing.T) {
errString string
}{
{
expectedVersion: api.DefaultVersion,
expectedVersion: "1.10.0",
},
{
reqVersion: api.MinSupportedAPIVersion,
expectedVersion: api.MinSupportedAPIVersion,
reqVersion: "1.9.0",
expectedVersion: "1.9.0",
},
{
reqVersion: "0.1",
errString: fmt.Sprintf("client version 0.1 is too old. Minimum supported API version is %s, please upgrade your client to a newer version", api.MinSupportedAPIVersion),
errString: "client version 0.1 is too old. Minimum supported API version is 1.2.0, please upgrade your client to a newer version",
},
{
reqVersion: "9999.9999",
errString: fmt.Sprintf("client version 9999.9999 is too new. Maximum supported API version is %s", api.DefaultVersion),
errString: "client version 9999.9999 is too new. Maximum supported API version is 1.10.0",
},
}
@ -126,8 +71,9 @@ func TestVersionMiddlewareWithErrorsReturnsHeaders(t *testing.T) {
return nil
}
m, err := NewVersionMiddleware("1.2.3", api.DefaultVersion, api.MinSupportedAPIVersion)
assert.NilError(t, err)
defaultVersion := "1.10.0"
minVersion := "1.2.0"
m := NewVersionMiddleware(defaultVersion, defaultVersion, minVersion)
h := m.WrapHandler(handler)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
@ -135,12 +81,12 @@ func TestVersionMiddlewareWithErrorsReturnsHeaders(t *testing.T) {
ctx := context.Background()
vars := map[string]string{"version": "0.1"}
err = h(ctx, resp, req, vars)
err := h(ctx, resp, req, vars)
assert.Check(t, is.ErrorContains(err, ""))
hdr := resp.Result().Header
assert.Check(t, is.Contains(hdr.Get("Server"), "Docker/1.2.3"))
assert.Check(t, is.Contains(hdr.Get("Server"), "Docker/"+defaultVersion))
assert.Check(t, is.Contains(hdr.Get("Server"), runtime.GOOS))
assert.Check(t, is.Equal(hdr.Get("API-Version"), api.DefaultVersion))
assert.Check(t, is.Equal(hdr.Get("API-Version"), defaultVersion))
assert.Check(t, is.Equal(hdr.Get("OSType"), runtime.GOOS))
}

View file

@ -42,7 +42,6 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
SuppressOutput: httputils.BoolValue(r, "q"),
NoCache: httputils.BoolValue(r, "nocache"),
ForceRemove: httputils.BoolValue(r, "forcerm"),
PullParent: httputils.BoolValue(r, "pull"),
MemorySwap: httputils.Int64ValueOrZero(r, "memswap"),
Memory: httputils.Int64ValueOrZero(r, "memory"),
CPUShares: httputils.Int64ValueOrZero(r, "cpushares"),
@ -67,14 +66,17 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
return nil, invalidParam{errors.New("security options are not supported on " + runtime.GOOS)}
}
if httputils.BoolValue(r, "forcerm") {
version := httputils.VersionFromContext(ctx)
if httputils.BoolValue(r, "forcerm") && versions.GreaterThanOrEqualTo(version, "1.12") {
options.Remove = true
} else if r.FormValue("rm") == "" {
} else if r.FormValue("rm") == "" && versions.GreaterThanOrEqualTo(version, "1.12") {
options.Remove = true
} else {
options.Remove = httputils.BoolValue(r, "rm")
}
version := httputils.VersionFromContext(ctx)
if httputils.BoolValue(r, "pull") && versions.GreaterThanOrEqualTo(version, "1.16") {
options.PullParent = true
}
if versions.GreaterThanOrEqualTo(version, "1.32") {
options.Platform = r.FormValue("platform")
}

View file

@ -24,6 +24,7 @@ type execBackend interface {
// copyBackend includes functions to implement to provide container copy functionality.
type copyBackend interface {
ContainerArchivePath(name string, path string) (content io.ReadCloser, stat *types.ContainerPathStat, err error)
ContainerCopy(name string, res string) (io.ReadCloser, error)
ContainerExport(ctx context.Context, name string, out io.Writer) error
ContainerExtractToDir(name, path string, copyUIDGID, noOverwriteDirNonDir bool, content io.Reader) error
ContainerStatPath(name string, path string) (stat *types.ContainerPathStat, err error)
@ -38,7 +39,7 @@ type stateBackend interface {
ContainerResize(name string, height, width int) error
ContainerRestart(ctx context.Context, name string, options container.StopOptions) error
ContainerRm(name string, config *backend.ContainerRmConfig) error
ContainerStart(ctx context.Context, name string, checkpoint string, checkpointDir string) error
ContainerStart(ctx context.Context, name string, hostConfig *container.HostConfig, checkpoint string, checkpointDir string) error
ContainerStop(ctx context.Context, name string, options container.StopOptions) error
ContainerUnpause(name string) error
ContainerUpdate(name string, hostConfig *container.HostConfig) (container.ContainerUpdateOKBody, error)

View file

@ -56,6 +56,7 @@ func (r *containerRouter) initRoutes() {
router.NewPostRoute("/containers/{name:.*}/wait", r.postContainersWait),
router.NewPostRoute("/containers/{name:.*}/resize", r.postContainersResize),
router.NewPostRoute("/containers/{name:.*}/attach", r.postContainersAttach),
router.NewPostRoute("/containers/{name:.*}/copy", r.postContainersCopy), // Deprecated since 1.8 (API v1.20), errors out since 1.12 (API v1.24)
router.NewPostRoute("/containers/{name:.*}/exec", r.postContainerExecCreate),
router.NewPostRoute("/exec/{name:.*}/start", r.postContainerExecStart),
router.NewPostRoute("/exec/{name:.*}/resize", r.postContainerExecResize),

View file

@ -39,6 +39,13 @@ func (s *containerRouter) postCommit(ctx context.Context, w http.ResponseWriter,
return err
}
// TODO: remove pause arg, and always pause in backend
pause := httputils.BoolValue(r, "pause")
version := httputils.VersionFromContext(ctx)
if r.FormValue("pause") == "" && versions.GreaterThanOrEqualTo(version, "1.13") {
pause = true
}
config, _, _, err := s.decoder.DecodeConfig(r.Body)
if err != nil && !errors.Is(err, io.EOF) { // Do not fail if body is empty.
return err
@ -50,7 +57,7 @@ func (s *containerRouter) postCommit(ctx context.Context, w http.ResponseWriter,
}
imgID, err := s.backend.CreateImageFromContainer(ctx, r.Form.Get("container"), &backend.CreateImageConfig{
Pause: httputils.BoolValueOrDefault(r, "pause", true), // TODO(dnephin): remove pause arg, and always pause in backend
Pause: pause,
Tag: ref,
Author: r.Form.Get("author"),
Comment: r.Form.Get("comment"),
@ -111,11 +118,14 @@ func (s *containerRouter) getContainersStats(ctx context.Context, w http.Respons
oneShot = httputils.BoolValueOrDefault(r, "one-shot", false)
}
return s.backend.ContainerStats(ctx, vars["name"], &backend.ContainerStatsConfig{
config := &backend.ContainerStatsConfig{
Stream: stream,
OneShot: oneShot,
OutStream: w,
})
Version: httputils.VersionFromContext(ctx),
}
return s.backend.ContainerStats(ctx, vars["name"], config)
}
func (s *containerRouter) getContainersLogs(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@ -168,6 +178,14 @@ func (s *containerRouter) getContainersExport(ctx context.Context, w http.Respon
return s.backend.ContainerExport(ctx, vars["name"], w)
}
type bodyOnStartError struct{}
func (bodyOnStartError) Error() string {
return "starting container with non-empty request body was deprecated since API v1.22 and removed in v1.24"
}
func (bodyOnStartError) InvalidParameter() {}
func (s *containerRouter) postContainersStart(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// If contentLength is -1, we can assume chunked encoding
// or more technically that the length is unknown
@ -175,17 +193,33 @@ func (s *containerRouter) postContainersStart(ctx context.Context, w http.Respon
// net/http otherwise seems to swallow any headers related to chunked encoding
// including r.TransferEncoding
// allow a nil body for backwards compatibility
//
version := httputils.VersionFromContext(ctx)
var hostConfig *container.HostConfig
// A non-nil json object is at least 7 characters.
if r.ContentLength > 7 || r.ContentLength == -1 {
return errdefs.InvalidParameter(errors.New("starting container with non-empty request body was deprecated since API v1.22 and removed in v1.24"))
if versions.GreaterThanOrEqualTo(version, "1.24") {
return bodyOnStartError{}
}
if err := httputils.CheckForJSON(r); err != nil {
return err
}
c, err := s.decoder.DecodeHostConfig(r.Body)
if err != nil {
return err
}
hostConfig = c
}
if err := httputils.ParseForm(r); err != nil {
return err
}
if err := s.backend.ContainerStart(ctx, vars["name"], r.Form.Get("checkpoint"), r.Form.Get("checkpoint-dir")); err != nil {
checkpoint := r.Form.Get("checkpoint")
checkpointDir := r.Form.Get("checkpoint-dir")
if err := s.backend.ContainerStart(ctx, vars["name"], hostConfig, checkpoint, checkpointDir); err != nil {
return err
}
@ -221,14 +255,25 @@ func (s *containerRouter) postContainersStop(ctx context.Context, w http.Respons
return nil
}
func (s *containerRouter) postContainersKill(_ context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainersKill(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
name := vars["name"]
if err := s.backend.ContainerKill(name, r.Form.Get("signal")); err != nil {
return errors.Wrapf(err, "cannot kill container: %s", name)
var isStopped bool
if errdefs.IsConflict(err) {
isStopped = true
}
// Return error that's not caused because the container is stopped.
// Return error if the container is not running and the api is >= 1.20
// to keep backwards compatibility.
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, "1.20") || !isStopped {
return errors.Wrapf(err, "Cannot kill container: %s", name)
}
}
w.WriteHeader(http.StatusNoContent)
@ -456,29 +501,18 @@ func (s *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
if hostConfig == nil {
hostConfig = &container.HostConfig{}
}
if hostConfig.NetworkMode == "" {
hostConfig.NetworkMode = "default"
}
if networkingConfig == nil {
networkingConfig = &network.NetworkingConfig{}
}
if networkingConfig.EndpointsConfig == nil {
networkingConfig.EndpointsConfig = make(map[string]*network.EndpointSettings)
}
// The NetworkMode "default" is used as a way to express a container should
// be attached to the OS-dependent default network, in an OS-independent
// way. Doing this conversion as soon as possible ensures we have less
// NetworkMode to handle down the path (including in the
// backward-compatibility layer we have just below).
//
// Note that this is not the only place where this conversion has to be
// done (as there are various other places where containers get created).
if hostConfig.NetworkMode == "" || hostConfig.NetworkMode.IsDefault() {
hostConfig.NetworkMode = runconfig.DefaultDaemonNetworkMode()
if nw, ok := networkingConfig.EndpointsConfig[network.NetworkDefault]; ok {
networkingConfig.EndpointsConfig[hostConfig.NetworkMode.NetworkName()] = nw
delete(networkingConfig.EndpointsConfig, network.NetworkDefault)
}
}
version := httputils.VersionFromContext(ctx)
adjustCPUShares := versions.LessThan(version, "1.19")
// When using API 1.24 and under, the client is responsible for removing the container
if versions.LessThan(version, "1.25") {
@ -604,14 +638,6 @@ func (s *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
}
}
if versions.LessThan(version, "1.45") {
for _, m := range hostConfig.Mounts {
if m.VolumeOptions != nil && m.VolumeOptions.Subpath != "" {
return errdefs.InvalidParameter(errors.New("VolumeOptions.Subpath needs API v1.45 or newer"))
}
}
}
var warnings []string
if warn, err := handleMACAddressBC(config, hostConfig, networkingConfig, version); err != nil {
return err
@ -632,6 +658,7 @@ func (s *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
Config: config,
HostConfig: hostConfig,
NetworkingConfig: networkingConfig,
AdjustCPUShares: adjustCPUShares,
Platform: platform,
DefaultReadOnlyNonRecursive: defaultReadOnlyNonRecursive,
})
@ -658,7 +685,7 @@ func handleMACAddressBC(config *container.Config, hostConfig *container.HostConf
}
return "", nil
}
if !hostConfig.NetworkMode.IsBridge() && !hostConfig.NetworkMode.IsUserDefined() {
if !hostConfig.NetworkMode.IsDefault() && !hostConfig.NetworkMode.IsBridge() && !hostConfig.NetworkMode.IsUserDefined() {
return "", runconfig.ErrConflictContainerNetworkAndMac
}
@ -687,7 +714,7 @@ func handleMACAddressBC(config *container.Config, hostConfig *container.HostConf
return "", nil
}
var warning string
if hostConfig.NetworkMode.IsBridge() || hostConfig.NetworkMode.IsUserDefined() {
if hostConfig.NetworkMode.IsDefault() || hostConfig.NetworkMode.IsBridge() || hostConfig.NetworkMode.IsUserDefined() {
nwName := hostConfig.NetworkMode.NetworkName()
// If there's no endpoint config, create a place to store the configured address.
if len(networkingConfig.EndpointsConfig) == 0 {

View file
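Several hunks above follow the same pattern: read the negotiated API version from the request context and pick a backward-compatible default when the client omitted a parameter. Here is a compilable sketch of the `pause` defaulting from the `postCommit` hunk, assuming the real `httputils` and `versions` helpers; `pauseDefaultSketch` and the sample request in `main` are illustrative.

```go
package main

import (
	"context"
	"fmt"
	"net/http"

	"github.com/docker/docker/api/server/httputils"
	"github.com/docker/docker/api/types/versions"
)

// pauseDefaultSketch mirrors the "pause" defaulting shown in the postCommit hunk:
// the negotiated API version is read from the request context and used to pick a
// backward-compatible default when the client omitted the parameter.
func pauseDefaultSketch(ctx context.Context, r *http.Request) (bool, error) {
	if err := httputils.ParseForm(r); err != nil {
		return false, err
	}
	pause := httputils.BoolValue(r, "pause")
	// Clients speaking API 1.13 or newer that omit "pause" get pausing by default.
	if r.FormValue("pause") == "" && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.13") {
		pause = true
	}
	return pause, nil
}

func main() {
	req, _ := http.NewRequest(http.MethodPost, "/commit?container=abc", nil)
	ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, "1.44")
	pause, _ := pauseDefaultSketch(ctx, req)
	fmt.Println("pause:", pause) // true: parameter omitted and API >= 1.13
}
```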

@ -11,10 +11,49 @@ import (
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
gddohttputil "github.com/golang/gddo/httputil"
)
// setContainerPathStatHeader encodes the stat to JSON, base64 encode, and place in a header.
type pathError struct{}
func (pathError) Error() string {
return "Path cannot be empty"
}
func (pathError) InvalidParameter() {}
// postContainersCopy is deprecated in favor of getContainersArchive.
//
// Deprecated since 1.8 (API v1.20), errors out since 1.12 (API v1.24)
func (s *containerRouter) postContainersCopy(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, "1.24") {
w.WriteHeader(http.StatusNotFound)
return nil
}
cfg := types.CopyConfig{}
if err := httputils.ReadJSON(r, &cfg); err != nil {
return err
}
if cfg.Resource == "" {
return pathError{}
}
data, err := s.backend.ContainerCopy(vars["name"], cfg.Resource)
if err != nil {
return err
}
defer data.Close()
w.Header().Set("Content-Type", "application/x-tar")
_, err = io.Copy(w, data)
return err
}
// // Encode the stat to JSON, base64 encode, and place in a header.
func setContainerPathStatHeader(stat *types.ContainerPathStat, header http.Header) error {
statJSON, err := json.Marshal(stat)
if err != nil {

View file
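The restored `postContainersCopy` handler above serves the endpoint that was deprecated in API v1.20 and errors out from v1.24. Below is a hedged sketch of the archive-based replacement clients are expected to use instead, via the Go SDK; the container name and path are placeholders, and a reachable daemon is assumed.

```go
package main

import (
	"context"
	"io"
	"log"
	"os"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// CopyFromContainer returns a tar stream, just like the removed POST /copy endpoint did.
	rc, _, err := cli.CopyFromContainer(context.Background(), "my-container", "/etc/hostname")
	if err != nil {
		log.Fatal(err)
	}
	defer rc.Close()
	_, _ = io.Copy(os.Stdout, rc) // raw tar archive on stdout
}
```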

@ -71,6 +71,15 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
return err
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.22") {
// API versions before 1.22 did not enforce application/json content-type.
// Allow older clients to work by patching the content-type.
if r.Header.Get("Content-Type") != "application/json" {
r.Header.Set("Content-Type", "application/json")
}
}
var (
execName = vars["name"]
stdin, inStream io.ReadCloser
@ -87,8 +96,6 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
}
if execStartCheck.ConsoleSize != nil {
version := httputils.VersionFromContext(ctx)
// Not supported before 1.42
if versions.LessThan(version, "1.42") {
execStartCheck.ConsoleSize = nil

View file

@ -4,7 +4,6 @@ import (
"context"
"encoding/json"
"net/http"
"os"
"github.com/distribution/reference"
"github.com/docker/distribution"
@ -13,7 +12,6 @@ import (
"github.com/docker/distribution/manifest/schema2"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/registry"
distributionpkg "github.com/docker/docker/distribution"
"github.com/docker/docker/errdefs"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
@ -26,10 +24,10 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
w.Header().Set("Content-Type", "application/json")
imgName := vars["name"]
image := vars["name"]
// TODO why is reference.ParseAnyReference() / reference.ParseNormalizedNamed() not using the reference.ErrTagInvalidFormat (and so on) errors?
ref, err := reference.ParseAnyReference(imgName)
ref, err := reference.ParseAnyReference(image)
if err != nil {
return errdefs.InvalidParameter(err)
}
@ -39,7 +37,7 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
// full image ID
return errors.Errorf("no manifest found for full image ID")
}
return errdefs.InvalidParameter(errors.Errorf("unknown image reference format: %s", imgName))
return errdefs.InvalidParameter(errors.Errorf("unknown image reference format: %s", image))
}
// For a search it is not an error if no auth was given. Ignore invalid
@ -155,9 +153,6 @@ func (s *distributionRouter) fetchManifest(ctx context.Context, distrepo distrib
}
}
case *schema1.SignedManifest:
if os.Getenv("DOCKER_ENABLE_DEPRECATED_PULL_SCHEMA_1_IMAGE") == "" {
return registry.DistributionInspect{}, distributionpkg.DeprecatedSchema1ImageError(namedRef)
}
platform := ocispec.Platform{
Architecture: mnfstObj.Architecture,
OS: "linux",

View file

@ -21,7 +21,7 @@ type grpcRouter struct {
// NewRouter initializes a new grpc http router
func NewRouter(backends ...Backend) router.Router {
unary := grpc.UnaryInterceptor(grpc_middleware.ChainUnaryServer(unaryInterceptor(), grpcerrors.UnaryServerInterceptor))
stream := grpc.StreamInterceptor(grpc_middleware.ChainStreamServer(otelgrpc.StreamServerInterceptor(), grpcerrors.StreamServerInterceptor)) //nolint:staticcheck // TODO(thaJeztah): ignore SA1019 for deprecated options: see https://github.com/moby/moby/issues/47437
stream := grpc.StreamInterceptor(grpc_middleware.ChainStreamServer(otelgrpc.StreamServerInterceptor(), grpcerrors.StreamServerInterceptor))
r := &grpcRouter{
h2Server: &http2.Server{},
@ -46,7 +46,7 @@ func (gr *grpcRouter) initRoutes() {
}
func unaryInterceptor() grpc.UnaryServerInterceptor {
withTrace := otelgrpc.UnaryServerInterceptor() //nolint:staticcheck // TODO(thaJeztah): ignore SA1019 for deprecated options: see https://github.com/moby/moby/issues/47437
withTrace := otelgrpc.UnaryServerInterceptor()
return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp interface{}, err error) {
// This method is used by the clients to send their traces to buildkit so they can be included

View file
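The `//nolint:staticcheck` TODOs dropped on the 25.0 side refer to the deprecated `otelgrpc` interceptor constructors. The following is a sketch of the non-deprecated stats-handler wiring those TODOs point toward, assuming a recent `otelgrpc` release that provides `NewServerHandler`; it is not what either branch currently does.

```go
package main

import (
	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
	"google.golang.org/grpc"
)

func main() {
	// Non-deprecated alternative to otelgrpc.{Unary,Stream}ServerInterceptor:
	// a single stats handler traces both unary and streaming RPCs.
	srv := grpc.NewServer(grpc.StatsHandler(otelgrpc.NewServerHandler()))
	_ = srv // register services and serve as usual
}
```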

@ -6,7 +6,6 @@ import (
"github.com/distribution/reference"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/registry"
@ -25,8 +24,8 @@ type Backend interface {
type imageBackend interface {
ImageDelete(ctx context.Context, imageRef string, force, prune bool) ([]image.DeleteResponse, error)
ImageHistory(ctx context.Context, imageName string) ([]*image.HistoryResponseItem, error)
Images(ctx context.Context, opts image.ListOptions) ([]*image.Summary, error)
GetImage(ctx context.Context, refOrID string, options backend.GetImageOpts) (*dockerimage.Image, error)
Images(ctx context.Context, opts types.ImageListOptions) ([]*image.Summary, error)
GetImage(ctx context.Context, refOrID string, options image.GetImageOpts) (*dockerimage.Image, error)
TagImage(ctx context.Context, id dockerimage.ID, newRef reference.Named) error
ImagesPrune(ctx context.Context, pruneFilters filters.Args) (*types.ImagesPruneReport, error)
}

View file

@ -15,9 +15,8 @@ import (
"github.com/docker/docker/api"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/filters"
imagetypes "github.com/docker/docker/api/types/image"
opts "github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/builder/remotecontext"
@ -73,9 +72,9 @@ func (ir *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrit
// Special case: "pull -a" may send an image name with a
// trailing :. This is ugly, but let's not break API
// compatibility.
imgName := strings.TrimSuffix(img, ":")
image := strings.TrimSuffix(img, ":")
ref, err := reference.ParseNormalizedNamed(imgName)
ref, err := reference.ParseNormalizedNamed(image)
if err != nil {
return errdefs.InvalidParameter(err)
}
@ -190,7 +189,7 @@ func (ir *imageRouter) postImagesPush(ctx context.Context, w http.ResponseWriter
var ref reference.Named
// Tag is empty only in case PushOptions.All is true.
// Tag is empty only in case ImagePushOptions.All is true.
if tag != "" {
r, err := httputils.RepoTagReference(img, tag)
if err != nil {
@ -286,7 +285,7 @@ func (ir *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter,
}
func (ir *imageRouter) getImagesByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
img, err := ir.backend.GetImage(ctx, vars["name"], backend.GetImageOpts{Details: true})
img, err := ir.backend.GetImage(ctx, vars["name"], opts.GetImageOpts{Details: true})
if err != nil {
return err
}
@ -306,10 +305,6 @@ func (ir *imageRouter) getImagesByName(ctx context.Context, w http.ResponseWrite
imageInspect.Created = time.Time{}.Format(time.RFC3339Nano)
}
}
if versions.GreaterThanOrEqualTo(version, "1.45") {
imageInspect.Container = "" //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.45.
imageInspect.ContainerConfig = nil //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.45.
}
return httputils.WriteJSON(w, http.StatusOK, imageInspect)
}
@ -364,7 +359,7 @@ func (ir *imageRouter) toImageInspect(img *image.Image) (*types.ImageInspect, er
Data: img.Details.Metadata,
},
RootFS: rootFSToAPIType(img.RootFS),
Metadata: imagetypes.Metadata{
Metadata: opts.Metadata{
LastTagTime: img.Details.LastUpdated,
},
}, nil
@ -406,7 +401,7 @@ func (ir *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter,
sharedSize = httputils.BoolValue(r, "shared-size")
}
images, err := ir.backend.Images(ctx, imagetypes.ListOptions{
images, err := ir.backend.Images(ctx, types.ImageListOptions{
All: httputils.BoolValue(r, "all"),
Filters: imageFilters,
SharedSize: sharedSize,
@ -463,7 +458,7 @@ func (ir *imageRouter) postImagesTag(ctx context.Context, w http.ResponseWriter,
return errdefs.InvalidParameter(errors.New("refusing to create an ambiguous tag using digest algorithm as name"))
}
img, err := ir.backend.GetImage(ctx, vars["name"], backend.GetImageOpts{})
img, err := ir.backend.GetImage(ctx, vars["name"], opts.GetImageOpts{})
if err != nil {
return errdefs.NotFound(err)
}

View file

@ -213,10 +213,6 @@ func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWr
return libnetwork.NetworkNameError(create.Name)
}
// For a Swarm-scoped network, this call to backend.CreateNetwork is used to
// validate the configuration. The network will not be created but, if the
// configuration is valid, ManagerRedirectError will be returned and handled
// below.
nw, err := n.backend.CreateNetwork(create)
if err != nil {
if _, ok := err.(libnetwork.ManagerRedirectError); !ok {

View file

@ -10,7 +10,6 @@ import (
"github.com/docker/docker/api/server/middleware"
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/api/server/router/debug"
"github.com/docker/docker/api/types"
"github.com/docker/docker/dockerversion"
"github.com/gorilla/mux"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
@ -58,13 +57,19 @@ func (s *Server) makeHTTPHandler(handler httputils.APIFunc, operation string) ht
if statusCode >= 500 {
log.G(ctx).Errorf("Handler for %s %s returned error: %v", r.Method, r.URL.Path, err)
}
_ = httputils.WriteJSON(w, statusCode, &types.ErrorResponse{
Message: err.Error(),
})
makeErrorHandler(err)(w, r)
}
}), operation).ServeHTTP
}
type pageNotFoundError struct{}
func (pageNotFoundError) Error() string {
return "page not found"
}
func (pageNotFoundError) NotFound() {}
// CreateMux returns a new mux with all the routers registered.
func (s *Server) CreateMux(routers ...router.Router) *mux.Router {
m := mux.NewRouter()
@ -86,12 +91,7 @@ func (s *Server) CreateMux(routers ...router.Router) *mux.Router {
m.Path("/debug" + r.Path()).Handler(f)
}
notFoundHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_ = httputils.WriteJSON(w, http.StatusNotFound, &types.ErrorResponse{
Message: "page not found",
})
})
notFoundHandler := makeErrorHandler(pageNotFoundError{})
m.HandleFunc(versionMatcher+"/{path:.*}", notFoundHandler)
m.NotFoundHandler = notFoundHandler
m.MethodNotAllowedHandler = notFoundHandler

View file
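Both `pageNotFoundError` here and the `bodyOnStartError`/`pathError` types in earlier hunks follow moby's marker-interface convention: the error type implements a no-op method such as `NotFound()` or `InvalidParameter()`, and the `errdefs` helpers map it onto an HTTP status. A small sketch, with `notFoundErr` as an illustrative type name:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/errdefs"
)

// notFoundErr follows the same marker-interface convention as pageNotFoundError above.
type notFoundErr struct{ what string }

func (e notFoundErr) Error() string { return e.what + ": not found" }
func (notFoundErr) NotFound()       {} // marker method picked up by errdefs

func main() {
	var err error = notFoundErr{what: "page"}
	fmt.Println(errdefs.IsNotFound(err)) // true
}
```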

@ -15,11 +15,8 @@ import (
func TestMiddlewares(t *testing.T) {
srv := &Server{}
m, err := middleware.NewVersionMiddleware("0.1omega2", api.DefaultVersion, api.MinSupportedAPIVersion)
if err != nil {
t.Fatal(err)
}
srv.UseMiddleware(*m)
const apiMinVersion = "1.12"
srv.UseMiddleware(middleware.NewVersionMiddleware("0.1omega2", api.DefaultVersion, apiMinVersion))
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
resp := httptest.NewRecorder()

View file

@ -19,10 +19,10 @@ produces:
consumes:
- "application/json"
- "text/plain"
basePath: "/v1.45"
basePath: "/v1.44"
info:
title: "Docker Engine API"
version: "1.45"
version: "1.44"
x-logo:
url: "https://docs.docker.com/assets/images/logo-docker-main.png"
description: |
@ -55,8 +55,8 @@ info:
the URL is not supported by the daemon, a HTTP `400 Bad Request` error message
is returned.
If you omit the version-prefix, the current version of the API (v1.45) is used.
For example, calling `/info` is the same as calling `/v1.45/info`. Using the
If you omit the version-prefix, the current version of the API (v1.44) is used.
For example, calling `/info` is the same as calling `/v1.44/info`. Using the
API without a version-prefix is deprecated and will be removed in a future release.
Engine releases in the near future should support this version of the API,
@ -427,10 +427,6 @@ definitions:
type: "object"
additionalProperties:
type: "string"
Subpath:
description: "Source path inside the volume. Must be relative without any back traversals."
type: "string"
example: "dir-inside-volume/subdirectory"
TmpfsOptions:
description: "Optional configuration for the `tmpfs` type."
type: "object"
@ -8774,7 +8770,8 @@ paths:
<p><br /></p>
> **Deprecated**: This field is deprecated and will always be "false".
> **Deprecated**: This field is deprecated and will always
> be "false" in future.
type: "boolean"
example: false
name:
@ -8817,8 +8814,13 @@ paths:
description: |
A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters:
- `is-automated=(true|false)` (deprecated, see below)
- `is-official=(true|false)`
- `stars=<number>` Matches images that has at least 'number' stars.
The `is-automated` filter is deprecated. The `is_automated` field has
been deprecated by Docker Hub's search API. Consequently, searching
for `is-automated=true` will yield no results.
type: "string"
tags: ["Image"]
/images/prune:

View file
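The swagger hunk above documents the `filters` query parameter for `/images/search` and the deprecated `is-automated` key. Here is a sketch of how a Go caller would build and encode that parameter with the real `filters` package; the filter values are illustrative.

```go
package main

import (
	"fmt"
	"log"

	"github.com/docker/docker/api/types/filters"
)

func main() {
	// Build the `filters` query parameter documented in the swagger hunk above.
	// is-automated is deprecated and matches nothing; is-official and stars still work.
	args := filters.NewArgs(
		filters.Arg("is-official", "true"),
		filters.Arg("stars", "100"),
	)
	encoded, err := filters.ToJSON(args)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(encoded) // JSON-encoded filter set, e.g. {"is-official":{"true":true},...}
}
```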

@ -18,6 +18,7 @@ type ContainerCreateConfig struct {
HostConfig *container.HostConfig
NetworkingConfig *network.NetworkingConfig
Platform *ocispec.Platform
AdjustCPUShares bool
DefaultReadOnlyNonRecursive bool
}
@ -90,6 +91,7 @@ type ContainerStatsConfig struct {
Stream bool
OneShot bool
OutStream io.Writer
Version string
}
// ExecInspect holds information about a running process started
@ -129,13 +131,6 @@ type CreateImageConfig struct {
Changes []string
}
// GetImageOpts holds parameters to retrieve image information
// from the backend.
type GetImageOpts struct {
Platform *ocispec.Platform
Details bool
}
// CommitConfig is the configuration for creating an image as part of a build.
type CommitConfig struct {
Author string

View file

@ -157,12 +157,42 @@ type ImageBuildResponse struct {
OSType string
}
// ImageCreateOptions holds information to create images.
type ImageCreateOptions struct {
RegistryAuth string // RegistryAuth is the base64 encoded credentials for the registry.
Platform string // Platform is the target platform of the image if it needs to be pulled from the registry.
}
// ImageImportSource holds source information for ImageImport
type ImageImportSource struct {
Source io.Reader // Source is the data to send to the server to create this image from. You must set SourceName to "-" to leverage this.
SourceName string // SourceName is the name of the image to pull. Set to "-" to leverage the Source attribute.
}
// ImageImportOptions holds information to import images from the client host.
type ImageImportOptions struct {
Tag string // Tag is the name to tag this image with. This attribute is deprecated.
Message string // Message is the message to tag the image with
Changes []string // Changes are the raw changes to apply to this image
Platform string // Platform is the target platform of the image
}
// ImageListOptions holds parameters to list images with.
type ImageListOptions struct {
// All controls whether all images in the graph are filtered, or just
// the heads.
All bool
// Filters is a JSON-encoded set of filter arguments.
Filters filters.Args
// SharedSize indicates whether the shared size of images should be computed.
SharedSize bool
// ContainerCount indicates whether container count should be computed.
ContainerCount bool
}
// ImageLoadResponse returns information to the client about a load process.
type ImageLoadResponse struct {
// Body must be closed to avoid a resource leak
@ -170,6 +200,14 @@ type ImageLoadResponse struct {
JSON bool
}
// ImagePullOptions holds information to pull images.
type ImagePullOptions struct {
All bool
RegistryAuth string // RegistryAuth is the base64 encoded credentials for the registry
PrivilegeFunc RequestPrivilegeFunc
Platform string
}
// RequestPrivilegeFunc is a function interface that
// clients can supply to retry operations after
// getting an authorization error.
@ -178,6 +216,15 @@ type ImageLoadResponse struct {
// if the privilege request fails.
type RequestPrivilegeFunc func() (string, error)
// ImagePushOptions holds information to push images.
type ImagePushOptions ImagePullOptions
// ImageRemoveOptions holds parameters to remove images.
type ImageRemoveOptions struct {
Force bool
PruneChildren bool
}
// ImageSearchOptions holds parameters to search images with.
type ImageSearchOptions struct {
RegistryAuth string

View file

@ -5,8 +5,8 @@ import (
"time"
"github.com/docker/docker/api/types/strslice"
dockerspec "github.com/docker/docker/image/spec/specs-go/v1"
"github.com/docker/go-connections/nat"
dockerspec "github.com/moby/docker-image-spec/specs-go/v1"
)
// MinimumDuration puts a minimum on user configured duration.

View file

@ -1,57 +1,9 @@
package image
import "github.com/docker/docker/api/types/filters"
import ocispec "github.com/opencontainers/image-spec/specs-go/v1"
// ImportOptions holds information to import images from the client host.
type ImportOptions struct {
Tag string // Tag is the name to tag this image with. This attribute is deprecated.
Message string // Message is the message to tag the image with
Changes []string // Changes are the raw changes to apply to this image
Platform string // Platform is the target platform of the image
}
// CreateOptions holds information to create images.
type CreateOptions struct {
RegistryAuth string // RegistryAuth is the base64 encoded credentials for the registry.
Platform string // Platform is the target platform of the image if it needs to be pulled from the registry.
}
// PullOptions holds information to pull images.
type PullOptions struct {
All bool
RegistryAuth string // RegistryAuth is the base64 encoded credentials for the registry
// PrivilegeFunc is a function that clients can supply to retry operations
// after getting an authorization error. This function returns the registry
// authentication header value in base64 encoded format, or an error if the
// privilege request fails.
//
// Also see [github.com/docker/docker/api/types.RequestPrivilegeFunc].
PrivilegeFunc func() (string, error)
Platform string
}
// PushOptions holds information to push images.
type PushOptions PullOptions
// ListOptions holds parameters to list images with.
type ListOptions struct {
// All controls whether all images in the graph are filtered, or just
// the heads.
All bool
// Filters is a JSON-encoded set of filter arguments.
Filters filters.Args
// SharedSize indicates whether the shared size of images should be computed.
SharedSize bool
// ContainerCount indicates whether container count should be computed.
ContainerCount bool
}
// RemoveOptions holds parameters to remove images.
type RemoveOptions struct {
Force bool
PruneChildren bool
// GetImageOpts holds parameters to inspect an image.
type GetImageOpts struct {
Platform *ocispec.Platform
Details bool
}

View file

@ -96,7 +96,6 @@ type BindOptions struct {
type VolumeOptions struct {
NoCopy bool `json:",omitempty"`
Labels map[string]string `json:",omitempty"`
Subpath string `json:",omitempty"`
DriverConfig *Driver `json:",omitempty"`
}

View file
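The `Subpath` field removed here exists only on the master side (API v1.45 and later). A sketch of a volume mount that populates it, assuming a master-side daemon; the volume name and target are placeholders.

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/mount"
)

func main() {
	// Only valid against the master-side API (v1.45+); v1.44 daemons reject Subpath.
	m := mount.Mount{
		Type:   mount.TypeVolume,
		Source: "myvolume",
		Target: "/data",
		VolumeOptions: &mount.VolumeOptions{
			Subpath: "dir-inside-volume/subdirectory",
		},
	}
	fmt.Printf("%+v\n", m)
}
```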

@ -94,7 +94,7 @@ type SearchResult struct {
Name string `json:"name"`
// IsAutomated indicates whether the result is automated.
//
// Deprecated: the "is_automated" field is deprecated and will always be "false".
// Deprecated: the "is_automated" field is deprecated and will always be "false" in the future.
IsAutomated bool `json:"is_automated"`
// Description is a textual description of the repository
Description string `json:"description"`

View file

@ -82,7 +82,7 @@ type ImageInspect struct {
// Depending on how the image was created, this field may be empty.
//
// Deprecated: this field is omitted in API v1.45, but kept for backward compatibility.
Container string `json:",omitempty"`
Container string
// ContainerConfig is an optional field containing the configuration of the
// container that was last committed when creating the image.
@ -91,7 +91,7 @@ type ImageInspect struct {
// and it is not in active use anymore.
//
// Deprecated: this field is omitted in API v1.45, but kept for backward compatibility.
ContainerConfig *container.Config `json:",omitempty"`
ContainerConfig *container.Config
// DockerVersion is the version of Docker that was used to build the image.
//

View file

@ -1,35 +1,138 @@
package types
import (
"github.com/docker/docker/api/types/checkpoint"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/api/types/system"
)
// ImageImportOptions holds information to import images from the client host.
// CheckpointCreateOptions holds parameters to create a checkpoint from a container.
//
// Deprecated: use [image.ImportOptions].
type ImageImportOptions = image.ImportOptions
// Deprecated: use [checkpoint.CreateOptions].
type CheckpointCreateOptions = checkpoint.CreateOptions
// ImageCreateOptions holds information to create images.
// CheckpointListOptions holds parameters to list checkpoints for a container
//
// Deprecated: use [image.CreateOptions].
type ImageCreateOptions = image.CreateOptions
// Deprecated: use [checkpoint.ListOptions].
type CheckpointListOptions = checkpoint.ListOptions
// ImagePullOptions holds information to pull images.
// CheckpointDeleteOptions holds parameters to delete a checkpoint from a container
//
// Deprecated: use [image.PullOptions].
type ImagePullOptions = image.PullOptions
// Deprecated: use [checkpoint.DeleteOptions].
type CheckpointDeleteOptions = checkpoint.DeleteOptions
// ImagePushOptions holds information to push images.
// Checkpoint represents the details of a checkpoint when listing endpoints.
//
// Deprecated: use [image.PushOptions].
type ImagePushOptions = image.PushOptions
// Deprecated: use [checkpoint.Summary].
type Checkpoint = checkpoint.Summary
// ImageListOptions holds parameters to list images with.
// Info contains response of Engine API:
// GET "/info"
//
// Deprecated: use [image.ListOptions].
type ImageListOptions = image.ListOptions
// Deprecated: use [system.Info].
type Info = system.Info
// ImageRemoveOptions holds parameters to remove images.
// Commit holds the Git-commit (SHA1) that a binary was built from, as reported
// in the version-string of external tools, such as containerd, or runC.
//
// Deprecated: use [image.RemoveOptions].
type ImageRemoveOptions = image.RemoveOptions
// Deprecated: use [system.Commit].
type Commit = system.Commit
// PluginsInfo is a temp struct holding Plugins name
// registered with docker daemon. It is used by [system.Info] struct
//
// Deprecated: use [system.PluginsInfo].
type PluginsInfo = system.PluginsInfo
// NetworkAddressPool is a temp struct used by [system.Info] struct.
//
// Deprecated: use [system.NetworkAddressPool].
type NetworkAddressPool = system.NetworkAddressPool
// Runtime describes an OCI runtime.
//
// Deprecated: use [system.Runtime].
type Runtime = system.Runtime
// SecurityOpt contains the name and options of a security option.
//
// Deprecated: use [system.SecurityOpt].
type SecurityOpt = system.SecurityOpt
// KeyValue holds a key/value pair.
//
// Deprecated: use [system.KeyValue].
type KeyValue = system.KeyValue
// ImageDeleteResponseItem image delete response item.
//
// Deprecated: use [image.DeleteResponse].
type ImageDeleteResponseItem = image.DeleteResponse
// ImageSummary image summary.
//
// Deprecated: use [image.Summary].
type ImageSummary = image.Summary
// ImageMetadata contains engine-local data about the image.
//
// Deprecated: use [image.Metadata].
type ImageMetadata = image.Metadata
// ServiceCreateResponse contains the information returned to a client
// on the creation of a new service.
//
// Deprecated: use [swarm.ServiceCreateResponse].
type ServiceCreateResponse = swarm.ServiceCreateResponse
// ServiceUpdateResponse service update response.
//
// Deprecated: use [swarm.ServiceUpdateResponse].
type ServiceUpdateResponse = swarm.ServiceUpdateResponse
// ContainerStartOptions holds parameters to start containers.
//
// Deprecated: use [container.StartOptions].
type ContainerStartOptions = container.StartOptions
// ResizeOptions holds parameters to resize a TTY.
// It can be used to resize container TTYs and
// exec process TTYs too.
//
// Deprecated: use [container.ResizeOptions].
type ResizeOptions = container.ResizeOptions
// ContainerAttachOptions holds parameters to attach to a container.
//
// Deprecated: use [container.AttachOptions].
type ContainerAttachOptions = container.AttachOptions
// ContainerCommitOptions holds parameters to commit changes into a container.
//
// Deprecated: use [container.CommitOptions].
type ContainerCommitOptions = container.CommitOptions
// ContainerListOptions holds parameters to list containers with.
//
// Deprecated: use [container.ListOptions].
type ContainerListOptions = container.ListOptions
// ContainerLogsOptions holds parameters to filter logs with.
//
// Deprecated: use [container.LogsOptions].
type ContainerLogsOptions = container.LogsOptions
// ContainerRemoveOptions holds parameters to remove containers.
//
// Deprecated: use [container.RemoveOptions].
type ContainerRemoveOptions = container.RemoveOptions
// DecodeSecurityOptions decodes a security options string slice to a type safe
// [system.SecurityOpt].
//
// Deprecated: use [system.DecodeSecurityOptions].
func DecodeSecurityOptions(opts []string) ([]system.SecurityOpt, error) {
return system.DecodeSecurityOptions(opts)
}

View file
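The aliases in this hunk keep the old `types.*` names compiling while callers migrate to the per-area packages. A sketch showing that, on the side where the image aliases exist, the old and new names are interchangeable because they are type aliases rather than distinct types:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/image"
)

func main() {
	// types.ImagePullOptions is a type alias for image.PullOptions, so values
	// flow between the two names without any conversion.
	var legacy types.ImagePullOptions //nolint:staticcheck // deprecated alias, kept for the example
	legacy.All = true

	var current image.PullOptions = legacy // compiles: same underlying type
	fmt.Println(current.All)               // true
}
```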

@ -0,0 +1,14 @@
# Legacy API type versions
This package includes types for legacy API versions. The stable versions of the API types live in `api/types/*.go`.
Consider moving a type here when you need to keep backwards compatibility in the API. These legacy types are organized by the latest API version they appear in. For instance, types in the `v1p19` package are valid for API versions lower than or equal to `1.19`. Types in the `v1p20` package are valid for API version `1.20`, since versions below that will use the legacy types in `v1p19`.
## Package name conventions
The package name convention is to use `v` as a prefix for the version number and `p` (patch) as a separator. We use this nomenclature due to a few restrictions in the Go package name convention:
1. We cannot use `.` because it's interpreted by the language; think of `v1.20.CallFunction`.
2. We cannot use `_` because golint complains about it. The code is actually valid, but it arguably looks more awkward: `v1_20.CallFunction`.
For instance, if you want to modify a type that was available in API version `1.21` but will have different fields in version `1.22`, create a new package under `api/types/versions/v1p21`.
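As a concrete illustration of the convention described above, here is a hedged sketch that decodes a pre-1.21 stats payload into the `v1p20.StatsJSON` type added below; it assumes the 25.0-side tree where the `v1p20` package still exists, and the JSON payload is a trimmed, made-up sample.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/docker/docker/api/types/versions/v1p20"
)

func main() {
	// A trimmed stats payload in the pre-1.21 shape: network stats live under a
	// single "network" key rather than the per-interface "networks" map.
	payload := []byte(`{"read":"2024-01-01T00:00:00Z","network":{"rx_bytes":1024}}`)

	var stats v1p20.StatsJSON
	if err := json.Unmarshal(payload, &stats); err != nil {
		panic(err)
	}
	fmt.Println(stats.Network.RxBytes) // 1024
}
```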

View file

@ -0,0 +1,35 @@
// Package v1p19 provides specific API types for the API version 1, patch 19.
package v1p19 // import "github.com/docker/docker/api/types/versions/v1p19"
import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/versions/v1p20"
"github.com/docker/go-connections/nat"
)
// ContainerJSON is a backcompatibility struct for APIs prior to 1.20.
// Note this is not used by the Windows daemon.
type ContainerJSON struct {
*types.ContainerJSONBase
Volumes map[string]string
VolumesRW map[string]bool
Config *ContainerConfig
NetworkSettings *v1p20.NetworkSettings
}
// ContainerConfig is a backcompatibility struct for APIs prior to 1.20.
type ContainerConfig struct {
*container.Config
MacAddress string
NetworkDisabled bool
ExposedPorts map[nat.Port]struct{}
// backward compatibility, they now live in HostConfig
VolumeDriver string
Memory int64
MemorySwap int64
CPUShares int64 `json:"CpuShares"`
CPUSet string `json:"Cpuset"`
}

View file

@ -0,0 +1,40 @@
// Package v1p20 provides specific API types for the API version 1, patch 20.
package v1p20 // import "github.com/docker/docker/api/types/versions/v1p20"
import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/go-connections/nat"
)
// ContainerJSON is a backcompatibility struct for the API 1.20
type ContainerJSON struct {
*types.ContainerJSONBase
Mounts []types.MountPoint
Config *ContainerConfig
NetworkSettings *NetworkSettings
}
// ContainerConfig is a backcompatibility struct used in ContainerJSON for the API 1.20
type ContainerConfig struct {
*container.Config
MacAddress string
NetworkDisabled bool
ExposedPorts map[nat.Port]struct{}
// backward compatibility, they now live in HostConfig
VolumeDriver string
}
// StatsJSON is a backcompatibility struct used in Stats for APIs prior to 1.21
type StatsJSON struct {
types.Stats
Network types.NetworkStats `json:"network,omitempty"`
}
// NetworkSettings is a backward compatible struct for APIs prior to 1.21
type NetworkSettings struct {
types.NetworkSettingsBase
types.DefaultNetworkSettings
}

View file

@ -238,13 +238,13 @@ type TopologyRequirement struct {
// If requisite is specified, all topologies in preferred list MUST
// also be present in the list of requisite topologies.
//
// If the SP is unable to make the provisioned volume available
// If the SP is unable to to make the provisioned volume available
// from any of the preferred topologies, the SP MAY choose a topology
// from the list of requisite topologies.
// If the list of requisite topologies is not specified, then the SP
// MAY choose from the list of all possible topologies.
// If the list of requisite topologies is specified and the SP is
// unable to make the provisioned volume available from any of the
// unable to to make the provisioned volume available from any of the
// requisite topologies it MUST fail the CreateVolume call.
//
// Example 1:
@ -254,7 +254,7 @@ type TopologyRequirement struct {
// {"region": "R1", "zone": "Z3"}
// preferred =
// {"region": "R1", "zone": "Z3"}
// then the SP SHOULD first attempt to make the provisioned volume
// then the the SP SHOULD first attempt to make the provisioned volume
// available from "zone" "Z3" in the "region" "R1" and fall back to
// "zone" "Z2" in the "region" "R1" if that is not possible.
//
@ -268,7 +268,7 @@ type TopologyRequirement struct {
// preferred =
// {"region": "R1", "zone": "Z4"},
// {"region": "R1", "zone": "Z2"}
// then the SP SHOULD first attempt to make the provisioned volume
// then the the SP SHOULD first attempt to make the provisioned volume
// accessible from "zone" "Z4" in the "region" "R1" and fall back to
// "zone" "Z2" in the "region" "R1" if that is not possible. If that
// is not possible, the SP may choose between either the "zone"
@ -287,7 +287,7 @@ type TopologyRequirement struct {
// preferred =
// {"region": "R1", "zone": "Z5"},
// {"region": "R1", "zone": "Z3"}
// then the SP SHOULD first attempt to make the provisioned volume
// then the the SP SHOULD first attempt to make the provisioned volume
// accessible from the combination of the two "zones" "Z5" and "Z3" in
// the "region" "R1". If that's not possible, it should fall back to
// a combination of "Z5" and other possibilities from the list of

View file

@ -9,7 +9,6 @@ import (
"fmt"
"io"
"path"
"strconv"
"strings"
"sync"
"time"
@ -35,15 +34,14 @@ import (
pkgprogress "github.com/docker/docker/pkg/progress"
"github.com/docker/docker/reference"
"github.com/moby/buildkit/cache"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb/sourceresolver"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/solver"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/source"
"github.com/moby/buildkit/source/containerimage"
srctypes "github.com/moby/buildkit/source/types"
"github.com/moby/buildkit/sourcepolicy"
policy "github.com/moby/buildkit/sourcepolicy/pb"
spb "github.com/moby/buildkit/sourcepolicy/pb"
"github.com/moby/buildkit/util/flightcontrol"
"github.com/moby/buildkit/util/imageutil"
@ -82,77 +80,9 @@ func NewSource(opt SourceOpt) (*Source, error) {
return &Source{SourceOpt: opt}, nil
}
// Schemes returns a list of SourceOp identifier schemes that this source
// should match.
func (is *Source) Schemes() []string {
return []string{srctypes.DockerImageScheme}
}
// Identifier constructs an Identifier from the given scheme, ref, and attrs,
// all of which come from a SourceOp.
func (is *Source) Identifier(scheme, ref string, attrs map[string]string, platform *pb.Platform) (source.Identifier, error) {
return is.registryIdentifier(ref, attrs, platform)
}
// Copied from github.com/moby/buildkit/source/containerimage/source.go
func (is *Source) registryIdentifier(ref string, attrs map[string]string, platform *pb.Platform) (source.Identifier, error) {
id, err := containerimage.NewImageIdentifier(ref)
if err != nil {
return nil, err
}
if platform != nil {
id.Platform = &ocispec.Platform{
OS: platform.OS,
Architecture: platform.Architecture,
Variant: platform.Variant,
OSVersion: platform.OSVersion,
}
if platform.OSFeatures != nil {
id.Platform.OSFeatures = append([]string{}, platform.OSFeatures...)
}
}
for k, v := range attrs {
switch k {
case pb.AttrImageResolveMode:
rm, err := resolver.ParseImageResolveMode(v)
if err != nil {
return nil, err
}
id.ResolveMode = rm
case pb.AttrImageRecordType:
rt, err := parseImageRecordType(v)
if err != nil {
return nil, err
}
id.RecordType = rt
case pb.AttrImageLayerLimit:
l, err := strconv.Atoi(v)
if err != nil {
return nil, errors.Wrapf(err, "invalid layer limit %s", v)
}
if l <= 0 {
return nil, errors.Errorf("invalid layer limit %s", v)
}
id.LayerLimit = &l
}
}
return id, nil
}
func parseImageRecordType(v string) (client.UsageRecordType, error) {
switch client.UsageRecordType(v) {
case "", client.UsageRecordTypeRegular:
return client.UsageRecordTypeRegular, nil
case client.UsageRecordTypeInternal:
return client.UsageRecordTypeInternal, nil
case client.UsageRecordTypeFrontend:
return client.UsageRecordTypeFrontend, nil
default:
return "", errors.Errorf("invalid record type %s", v)
}
// ID returns image scheme identifier
func (is *Source) ID() string {
return srctypes.DockerImageScheme
}
func (is *Source) resolveLocal(refStr string) (*image.Image, error) {
@ -177,7 +107,7 @@ type resolveRemoteResult struct {
dt []byte
}
func (is *Source) resolveRemote(ctx context.Context, ref string, platform *ocispec.Platform, sm *session.Manager, g session.Group) (digest.Digest, []byte, error) {
func (is *Source) resolveRemote(ctx context.Context, ref string, platform *ocispec.Platform, sm *session.Manager, g session.Group) (string, digest.Digest, []byte, error) {
p := platforms.DefaultSpec()
if platform != nil {
p = *platform
@ -186,36 +116,34 @@ func (is *Source) resolveRemote(ctx context.Context, ref string, platform *ocisp
key := "getconfig::" + ref + "::" + platforms.Format(p)
res, err := is.g.Do(ctx, key, func(ctx context.Context) (*resolveRemoteResult, error) {
res := resolver.DefaultPool.GetResolver(is.RegistryHosts, ref, "pull", sm, g)
dgst, dt, err := imageutil.Config(ctx, ref, res, is.ContentStore, is.LeaseManager, platform)
ref, dgst, dt, err := imageutil.Config(ctx, ref, res, is.ContentStore, is.LeaseManager, platform, []*policy.Policy{})
if err != nil {
return nil, err
}
return &resolveRemoteResult{ref: ref, dgst: dgst, dt: dt}, nil
})
if err != nil {
return "", nil, err
return ref, "", nil, err
}
return res.dgst, res.dt, nil
return res.ref, res.dgst, res.dt, nil
}
// ResolveImageConfig returns image config for an image
func (is *Source) ResolveImageConfig(ctx context.Context, ref string, opt sourceresolver.Opt, sm *session.Manager, g session.Group) (digest.Digest, []byte, error) {
if opt.ImageOpt == nil {
return "", nil, fmt.Errorf("can only resolve an image: %v, opt: %v", ref, opt)
}
func (is *Source) ResolveImageConfig(ctx context.Context, ref string, opt llb.ResolveImageConfigOpt, sm *session.Manager, g session.Group) (string, digest.Digest, []byte, error) {
ref, err := applySourcePolicies(ctx, ref, opt.SourcePolicies)
if err != nil {
return "", nil, err
return "", "", nil, err
}
resolveMode, err := resolver.ParseImageResolveMode(opt.ImageOpt.ResolveMode)
resolveMode, err := source.ParseImageResolveMode(opt.ResolveMode)
if err != nil {
return "", nil, err
return ref, "", nil, err
}
switch resolveMode {
case resolver.ResolveModeForcePull:
return is.resolveRemote(ctx, ref, opt.Platform, sm, g)
case source.ResolveModeForcePull:
ref, dgst, dt, err := is.resolveRemote(ctx, ref, opt.Platform, sm, g)
// TODO: pull should fallback to local in case of failure to allow offline behavior
// the fallback doesn't work currently
return ref, dgst, dt, err
/*
if err == nil {
return dgst, dt, err
@ -225,10 +153,10 @@ func (is *Source) ResolveImageConfig(ctx context.Context, ref string, opt source
return "", dt, err
*/
case resolver.ResolveModeDefault:
case source.ResolveModeDefault:
// default == prefer local, but in the future could be smarter
fallthrough
case resolver.ResolveModePreferLocal:
case source.ResolveModePreferLocal:
img, err := is.resolveLocal(ref)
if err == nil {
if opt.Platform != nil && !platformMatches(img, opt.Platform) {
@ -237,19 +165,19 @@ func (is *Source) ResolveImageConfig(ctx context.Context, ref string, opt source
path.Join(img.OS, img.Architecture, img.Variant),
)
} else {
return "", img.RawJSON(), err
return ref, "", img.RawJSON(), err
}
}
// fallback to remote
return is.resolveRemote(ctx, ref, opt.Platform, sm, g)
}
// should never happen
return "", nil, fmt.Errorf("builder cannot resolve image %s: invalid mode %q", ref, opt.ImageOpt.ResolveMode)
return ref, "", nil, fmt.Errorf("builder cannot resolve image %s: invalid mode %q", ref, opt.ResolveMode)
}
// Resolve returns access to pulling for an identifier
func (is *Source) Resolve(ctx context.Context, id source.Identifier, sm *session.Manager, vtx solver.Vertex) (source.SourceInstance, error) {
imageIdentifier, ok := id.(*containerimage.ImageIdentifier)
imageIdentifier, ok := id.(*source.ImageIdentifier)
if !ok {
return nil, errors.Errorf("invalid image identifier %v", id)
}
@ -273,7 +201,7 @@ type puller struct {
is *Source
resolveLocalOnce sync.Once
g flightcontrol.Group[struct{}]
src *containerimage.ImageIdentifier
src *source.ImageIdentifier
desc ocispec.Descriptor
ref string
config []byte
@ -325,7 +253,7 @@ func (p *puller) resolveLocal() {
}
}
if p.src.ResolveMode == resolver.ResolveModeDefault || p.src.ResolveMode == resolver.ResolveModePreferLocal {
if p.src.ResolveMode == source.ResolveModeDefault || p.src.ResolveMode == source.ResolveModePreferLocal {
ref := p.src.Reference.String()
img, err := p.is.resolveLocal(ref)
if err == nil {
@ -374,17 +302,12 @@ func (p *puller) resolve(ctx context.Context, g session.Group) error {
if err != nil {
return struct{}{}, err
}
_, dt, err := p.is.ResolveImageConfig(ctx, ref.String(), sourceresolver.Opt{
Platform: &p.platform,
ImageOpt: &sourceresolver.ResolveImageOpt{
ResolveMode: p.src.ResolveMode.String(),
},
}, p.sm, g)
newRef, _, dt, err := p.is.ResolveImageConfig(ctx, ref.String(), llb.ResolveImageConfigOpt{Platform: &p.platform, ResolveMode: p.src.ResolveMode.String()}, p.sm, g)
if err != nil {
return struct{}{}, err
}
p.ref = ref.String()
p.ref = newRef
p.config = dt
}
return struct{}{}, nil
@ -943,8 +866,12 @@ func applySourcePolicies(ctx context.Context, str string, spls []*spb.Policy) (s
if err != nil {
return "", errors.WithStack(err)
}
op := &pb.SourceOp{
op := &pb.Op{
Op: &pb.Op_Source{
Source: &pb.SourceOp{
Identifier: srctypes.DockerImageScheme + "://" + ref.String(),
},
},
}
mut, err := sourcepolicy.NewEngine(spls).Evaluate(ctx, op)
@ -957,9 +884,9 @@ func applySourcePolicies(ctx context.Context, str string, spls []*spb.Policy) (s
t string
ok bool
)
t, newRef, ok := strings.Cut(op.GetIdentifier(), "://")
t, newRef, ok := strings.Cut(op.GetSource().GetIdentifier(), "://")
if !ok {
return "", errors.Errorf("could not parse ref: %s", op.GetIdentifier())
return "", errors.Errorf("could not parse ref: %s", op.GetSource().GetIdentifier())
}
if ok && t != srctypes.DockerImageScheme {
return "", &imageutil.ResolveToNonImageError{Ref: str, Updated: newRef}

View file

@ -390,9 +390,8 @@ func (b *Builder) Build(ctx context.Context, opt backend.BuildConfig) (*builder.
req := &controlapi.SolveRequest{
Ref: id,
Exporters: []*controlapi.Exporter{
&controlapi.Exporter{Type: exporterName, Attrs: exporterAttrs},
},
Exporter: exporterName,
ExporterAttrs: exporterAttrs,
Frontend: "dockerfile.v0",
FrontendAttrs: frontendAttrs,
Session: opt.Options.SessionID,

View file

@ -67,11 +67,11 @@ func newController(ctx context.Context, rt http.RoundTripper, opt Opt) (*control
}
func getTraceExporter(ctx context.Context) trace.SpanExporter {
span, _, err := detect.Exporter()
exp, err := detect.Exporter()
if err != nil {
log.G(ctx).WithError(err).Error("Failed to detect trace exporter for buildkit controller")
}
return span
return exp
}
func newSnapshotterController(ctx context.Context, rt http.RoundTripper, opt Opt) (*control.Controller, error) {
@ -105,8 +105,7 @@ func newSnapshotterController(ctx context.Context, rt http.RoundTripper, opt Opt
wo, err := containerd.NewWorkerOpt(opt.Root, opt.ContainerdAddress, opt.Snapshotter, opt.ContainerdNamespace,
opt.Rootless, map[string]string{
label.Snapshotter: opt.Snapshotter,
}, dns, nc, opt.ApparmorProfile, false, nil, "", nil, ctd.WithTimeout(60*time.Second),
)
}, dns, nc, opt.ApparmorProfile, false, nil, "", ctd.WithTimeout(60*time.Second))
if err != nil {
return nil, err
}
@ -304,10 +303,8 @@ func newGraphDriverController(ctx context.Context, rt http.RoundTripper, opt Opt
exp, err := mobyexporter.New(mobyexporter.Opt{
ImageStore: dist.ImageStore,
ContentStore: store,
Differ: differ,
ImageTagger: opt.ImageTagger,
LeaseManager: lm,
})
if err != nil {
return nil, err


@ -16,7 +16,6 @@ import (
"github.com/moby/buildkit/executor"
"github.com/moby/buildkit/executor/oci"
"github.com/moby/buildkit/executor/resources"
resourcestypes "github.com/moby/buildkit/executor/resources/types"
"github.com/moby/buildkit/executor/runcexecutor"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/solver/pb"
@ -57,16 +56,9 @@ func newExecutor(root, cgroupParent string, net *libnetwork.Controller, dnsConfi
return nil, err
}
runcCmds := []string{"runc"}
// TODO: FIXME: testing env var, replace with something better or remove in a major version or two
if runcOverride := os.Getenv("DOCKER_BUILDKIT_RUNC_COMMAND"); runcOverride != "" {
runcCmds = []string{runcOverride}
}
return runcexecutor.New(runcexecutor.Opt{
Root: filepath.Join(root, "executor"),
CommandCandidates: runcCmds,
CommandCandidates: []string{"runc"},
DefaultCgroupParent: cgroupParent,
Rootless: rootless,
NoPivot: os.Getenv("DOCKER_RAMDISK") != "",
@ -136,8 +128,8 @@ func (iface *lnInterface) init(c *libnetwork.Controller, n *libnetwork.Network)
}
// TODO(neersighted): Unstub Sample(), and collect data from the libnetwork Endpoint.
func (iface *lnInterface) Sample() (*resourcestypes.NetworkSample, error) {
return &resourcestypes.NetworkSample{}, nil
func (iface *lnInterface) Sample() (*network.Sample, error) {
return &network.Sample{}, nil
}
func (iface *lnInterface) Set(s *specs.Spec) error {


@ -1,27 +1,24 @@
package mobyexporter
import (
"bytes"
"context"
"encoding/json"
"fmt"
"strings"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/leases"
"github.com/containerd/log"
distref "github.com/distribution/reference"
"github.com/docker/docker/image"
"github.com/docker/docker/internal/compatcontext"
"github.com/docker/docker/layer"
"github.com/moby/buildkit/exporter"
"github.com/moby/buildkit/exporter/containerimage"
"github.com/moby/buildkit/exporter/containerimage/exptypes"
"github.com/moby/buildkit/util/leaseutil"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
const (
keyImageName = "name"
)
// Differ can make a moby layer from a snapshot
type Differ interface {
EnsureLayer(ctx context.Context, key string) ([]layer.DiffID, error)
@ -36,8 +33,6 @@ type Opt struct {
ImageStore image.Store
Differ Differ
ImageTagger ImageTagger
ContentStore content.Store
LeaseManager leases.Manager
}
type imageExporter struct {
@ -50,14 +45,13 @@ func New(opt Opt) (exporter.Exporter, error) {
return im, nil
}
func (e *imageExporter) Resolve(ctx context.Context, id int, opt map[string]string) (exporter.ExporterInstance, error) {
func (e *imageExporter) Resolve(ctx context.Context, opt map[string]string) (exporter.ExporterInstance, error) {
i := &imageExporterInstance{
imageExporter: e,
id: id,
}
for k, v := range opt {
switch exptypes.ImageExporterOptKey(k) {
case exptypes.OptKeyName:
switch k {
case keyImageName:
for _, v := range strings.Split(v, ",") {
ref, err := distref.ParseNormalizedNamed(v)
if err != nil {
@ -77,15 +71,10 @@ func (e *imageExporter) Resolve(ctx context.Context, id int, opt map[string]stri
type imageExporterInstance struct {
*imageExporter
id int
targetNames []distref.Named
meta map[string][]byte
}
func (e *imageExporterInstance) ID() int {
return e.id
}
func (e *imageExporterInstance) Name() string {
return "exporting to image"
}
@ -94,7 +83,7 @@ func (e *imageExporterInstance) Config() *exporter.Config {
return exporter.NewConfig()
}
func (e *imageExporterInstance) Export(ctx context.Context, inp *exporter.Source, inlineCache exptypes.InlineCache, sessionID string) (map[string]string, exporter.DescriptorReference, error) {
func (e *imageExporterInstance) Export(ctx context.Context, inp *exporter.Source, sessionID string) (map[string]string, exporter.DescriptorReference, error) {
if len(inp.Refs) > 1 {
return nil, nil, fmt.Errorf("exporting multiple references to image store is currently unsupported")
}
@ -114,14 +103,18 @@ func (e *imageExporterInstance) Export(ctx context.Context, inp *exporter.Source
case 0:
config = inp.Metadata[exptypes.ExporterImageConfigKey]
case 1:
ps, err := exptypes.ParsePlatforms(inp.Metadata)
if err != nil {
return nil, nil, fmt.Errorf("cannot export image, failed to parse platforms: %w", err)
platformsBytes, ok := inp.Metadata[exptypes.ExporterPlatformsKey]
if !ok {
return nil, nil, fmt.Errorf("cannot export image, missing platforms mapping")
}
if len(ps.Platforms) != len(inp.Refs) {
return nil, nil, errors.Errorf("number of platforms does not match references %d %d", len(ps.Platforms), len(inp.Refs))
var p exptypes.Platforms
if err := json.Unmarshal(platformsBytes, &p); err != nil {
return nil, nil, errors.Wrapf(err, "failed to parse platforms passed to exporter")
}
config = inp.Metadata[fmt.Sprintf("%s/%s", exptypes.ExporterImageConfigKey, ps.Platforms[0].ID)]
if len(p.Platforms) != len(inp.Refs) {
return nil, nil, errors.Errorf("number of platforms does not match references %d %d", len(p.Platforms), len(inp.Refs))
}
config = inp.Metadata[fmt.Sprintf("%s/%s", exptypes.ExporterImageConfigKey, p.Platforms[0].ID)]
}
var diffs []digest.Digest
@ -164,21 +157,7 @@ func (e *imageExporterInstance) Export(ctx context.Context, inp *exporter.Source
diffs, history = normalizeLayersAndHistory(diffs, history, ref)
var inlineCacheEntry *exptypes.InlineCacheEntry
if inlineCache != nil {
inlineCacheResult, err := inlineCache(ctx)
if err != nil {
return nil, nil, err
}
if inlineCacheResult != nil {
if ref != nil {
inlineCacheEntry, _ = inlineCacheResult.FindRef(ref.ID())
} else {
inlineCacheEntry = inlineCacheResult.Ref
}
}
}
config, err = patchImageConfig(config, diffs, history, inlineCacheEntry)
config, err = patchImageConfig(config, diffs, history, inp.Metadata[exptypes.ExporterInlineCache])
if err != nil {
return nil, nil, err
}
@ -192,10 +171,8 @@ func (e *imageExporterInstance) Export(ctx context.Context, inp *exporter.Source
}
_ = configDone(nil)
var names []string
for _, targetName := range e.targetNames {
names = append(names, targetName.String())
if e.opt.ImageTagger != nil {
for _, targetName := range e.targetNames {
tagDone := oneOffProgress(ctx, "naming to "+targetName.String())
if err := e.opt.ImageTagger.TagImage(ctx, image.ID(digest.Digest(id)), targetName); err != nil {
return nil, nil, tagDone(err)
@ -204,49 +181,8 @@ func (e *imageExporterInstance) Export(ctx context.Context, inp *exporter.Source
}
}
resp := map[string]string{
return map[string]string{
exptypes.ExporterImageConfigDigestKey: configDigest.String(),
exptypes.ExporterImageDigestKey: id.String(),
}
if len(names) > 0 {
resp["image.name"] = strings.Join(names, ",")
}
descRef, err := e.newTempReference(ctx, config)
if err != nil {
return nil, nil, fmt.Errorf("failed to create a temporary descriptor reference: %w", err)
}
return resp, descRef, nil
}
func (e *imageExporterInstance) newTempReference(ctx context.Context, config []byte) (exporter.DescriptorReference, error) {
lm := e.opt.LeaseManager
dgst := digest.FromBytes(config)
leaseCtx, done, err := leaseutil.WithLease(ctx, lm, leaseutil.MakeTemporary)
if err != nil {
return nil, err
}
unlease := func(ctx context.Context) error {
err := done(compatcontext.WithoutCancel(ctx))
if err != nil {
log.G(ctx).WithError(err).Error("failed to delete descriptor reference lease")
}
return err
}
desc := ocispec.Descriptor{
Digest: dgst,
MediaType: "application/vnd.docker.container.image.v1+json",
Size: int64(len(config)),
}
if err := content.WriteBlob(leaseCtx, e.opt.ContentStore, desc.Digest.String(), bytes.NewReader(config), desc); err != nil {
unlease(leaseCtx)
return nil, fmt.Errorf("failed to save temporary image config: %w", err)
}
return containerimage.NewDescriptorReference(desc, unlease), nil
}, nil, nil
}


@ -8,7 +8,6 @@ import (
"github.com/containerd/containerd/platforms"
"github.com/containerd/log"
"github.com/moby/buildkit/cache"
"github.com/moby/buildkit/exporter/containerimage/exptypes"
"github.com/moby/buildkit/util/progress"
"github.com/moby/buildkit/util/system"
"github.com/opencontainers/go-digest"
@ -39,7 +38,7 @@ func parseHistoryFromConfig(dt []byte) ([]ocispec.History, error) {
return config.History, nil
}
func patchImageConfig(dt []byte, dps []digest.Digest, history []ocispec.History, cache *exptypes.InlineCacheEntry) ([]byte, error) {
func patchImageConfig(dt []byte, dps []digest.Digest, history []ocispec.History, cache []byte) ([]byte, error) {
m := map[string]json.RawMessage{}
if err := json.Unmarshal(dt, &m); err != nil {
return nil, errors.Wrap(err, "failed to parse image config for patch")
@ -76,7 +75,7 @@ func patchImageConfig(dt []byte, dps []digest.Digest, history []ocispec.History,
}
if cache != nil {
dt, err := json.Marshal(cache.Data)
dt, err := json.Marshal(cache)
if err != nil {
return nil, err
}


@ -19,7 +19,7 @@ func NewExporterWrapper(exp exporter.Exporter) (exporter.Exporter, error) {
}
// Resolve applies moby specific attributes to the request.
func (e *imageExporterMobyWrapper) Resolve(ctx context.Context, id int, exporterAttrs map[string]string) (exporter.ExporterInstance, error) {
func (e *imageExporterMobyWrapper) Resolve(ctx context.Context, exporterAttrs map[string]string) (exporter.ExporterInstance, error) {
if exporterAttrs == nil {
exporterAttrs = make(map[string]string)
}
@ -33,5 +33,5 @@ func (e *imageExporterMobyWrapper) Resolve(ctx context.Context, id int, exporter
exporterAttrs[string(exptypes.OptKeyDanglingPrefix)] = "moby-dangling"
}
return e.exp.Resolve(ctx, id, exporterAttrs)
return e.exp.Resolve(ctx, exporterAttrs)
}


@ -12,7 +12,7 @@ import (
"github.com/containerd/containerd/platforms"
"github.com/containerd/containerd/rootfs"
"github.com/containerd/log"
imageadapter "github.com/docker/docker/builder/builder-next/adapters/containerimage"
"github.com/docker/docker/builder/builder-next/adapters/containerimage"
mobyexporter "github.com/docker/docker/builder/builder-next/exporter"
distmetadata "github.com/docker/docker/distribution/metadata"
"github.com/docker/docker/distribution/xfer"
@ -23,7 +23,7 @@ import (
"github.com/moby/buildkit/cache"
cacheconfig "github.com/moby/buildkit/cache/config"
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/client/llb/sourceresolver"
"github.com/moby/buildkit/client/llb"
"github.com/moby/buildkit/executor"
"github.com/moby/buildkit/exporter"
localexporter "github.com/moby/buildkit/exporter/local"
@ -37,7 +37,6 @@ import (
"github.com/moby/buildkit/solver/llbsolver/ops"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/source"
"github.com/moby/buildkit/source/containerimage"
"github.com/moby/buildkit/source/git"
"github.com/moby/buildkit/source/http"
"github.com/moby/buildkit/source/local"
@ -76,7 +75,7 @@ type Opt struct {
ContentStore *containerdsnapshot.Store
CacheManager cache.Manager
LeaseManager *leaseutil.Manager
ImageSource *imageadapter.Source
ImageSource *containerimage.Source
DownloadManager *xfer.LayerDownloadManager
V2MetadataService distmetadata.V2MetadataService
Transport nethttp.RoundTripper
@ -213,49 +212,6 @@ func (w *Worker) LoadRef(ctx context.Context, id string, hidden bool) (cache.Imm
return w.CacheManager().Get(ctx, id, nil, opts...)
}
func (w *Worker) ResolveSourceMetadata(ctx context.Context, op *pb.SourceOp, opt sourceresolver.Opt, sm *session.Manager, g session.Group) (*sourceresolver.MetaResponse, error) {
if opt.SourcePolicies != nil {
return nil, errors.New("source policies can not be set for worker")
}
var platform *pb.Platform
if p := opt.Platform; p != nil {
platform = &pb.Platform{
Architecture: p.Architecture,
OS: p.OS,
Variant: p.Variant,
OSVersion: p.OSVersion,
}
}
id, err := w.SourceManager.Identifier(&pb.Op_Source{Source: op}, platform)
if err != nil {
return nil, err
}
switch idt := id.(type) {
case *containerimage.ImageIdentifier:
if opt.ImageOpt == nil {
opt.ImageOpt = &sourceresolver.ResolveImageOpt{}
}
dgst, config, err := w.ImageSource.ResolveImageConfig(ctx, idt.Reference.String(), opt, sm, g)
if err != nil {
return nil, err
}
return &sourceresolver.MetaResponse{
Op: op,
Image: &sourceresolver.ResolveImageResponse{
Digest: dgst,
Config: config,
},
}, nil
}
return &sourceresolver.MetaResponse{
Op: op,
}, nil
}
// ResolveOp converts a LLB vertex into a LLB operation
func (w *Worker) ResolveOp(v solver.Vertex, s frontend.FrontendLLBBridge, sm *session.Manager) (solver.Op, error) {
if baseOp, ok := v.Sys().(*pb.Op); ok {
@ -280,7 +236,7 @@ func (w *Worker) ResolveOp(v solver.Vertex, s frontend.FrontendLLBBridge, sm *se
}
// ResolveImageConfig returns image config for an image
func (w *Worker) ResolveImageConfig(ctx context.Context, ref string, opt sourceresolver.Opt, sm *session.Manager, g session.Group) (digest.Digest, []byte, error) {
func (w *Worker) ResolveImageConfig(ctx context.Context, ref string, opt llb.ResolveImageConfigOpt, sm *session.Manager, g session.Group) (string, digest.Digest, []byte, error) {
return w.ImageSource.ResolveImageConfig(ctx, ref, opt, sm, g)
}
@ -570,7 +526,3 @@ type emptyProvider struct{}
func (p *emptyProvider) ReaderAt(ctx context.Context, dec ocispec.Descriptor) (content.ReaderAt, error) {
return nil, errors.Errorf("ReaderAt not implemented for empty provider")
}
func (p *emptyProvider) Info(ctx context.Context, d digest.Digest) (content.Info, error) {
return content.Info{}, errors.Errorf("Info not implemented for empty provider")
}


@ -64,7 +64,7 @@ type ExecBackend interface {
// ContainerRm removes a container specified by `id`.
ContainerRm(name string, config *backend.ContainerRmConfig) error
// ContainerStart starts a new container
ContainerStart(ctx context.Context, containerID string, checkpoint string, checkpointDir string) error
ContainerStart(ctx context.Context, containerID string, hostConfig *container.HostConfig, checkpoint string, checkpointDir string) error
// ContainerWait stops processing until the given container is stopped.
ContainerWait(ctx context.Context, name string, condition containerpkg.WaitCondition) (<-chan containerpkg.StateStatus, error)
}


@ -72,7 +72,7 @@ func (c *containerManager) Run(ctx context.Context, cID string, stdout, stderr i
}
}()
if err := c.backend.ContainerStart(ctx, cID, "", ""); err != nil {
if err := c.backend.ContainerStart(ctx, cID, nil, "", ""); err != nil {
close(finished)
logCancellationError(cancelErrCh, "error from ContainerStart: "+err.Error())
return err


@ -22,7 +22,7 @@ func normalizeWorkdir(_ string, current string, requested string) (string, error
if !filepath.IsAbs(requested) {
return filepath.Join(string(os.PathSeparator), current, requested), nil
}
return filepath.Clean(requested), nil
return requested, nil
}
// resolveCmdLine takes a command line arg set and optionally prepends a platform-specific


@ -15,13 +15,11 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/builder"
"github.com/docker/docker/image"
"github.com/docker/docker/pkg/archive"
"github.com/docker/docker/pkg/chrootarchive"
"github.com/docker/docker/pkg/stringid"
"github.com/docker/docker/runconfig"
"github.com/docker/go-connections/nat"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
@ -379,21 +377,12 @@ func hostConfigFromOptions(options *types.ImageBuildOptions) *container.HostConf
Ulimits: options.Ulimits,
}
// We need to make sure no empty string or "default" NetworkMode is
// provided to the daemon as it doesn't support them.
//
// This is in line with what the ContainerCreate API endpoint does.
networkMode := options.NetworkMode
if networkMode == "" || networkMode == network.NetworkDefault {
networkMode = runconfig.DefaultDaemonNetworkMode().NetworkName()
}
hc := &container.HostConfig{
SecurityOpt: options.SecurityOpt,
Isolation: options.Isolation,
ShmSize: options.ShmSize,
Resources: resources,
NetworkMode: container.NetworkMode(networkMode),
NetworkMode: container.NetworkMode(options.NetworkMode),
// Set a log config to override any default value set on the daemon
LogConfig: defaultLogConfig,
ExtraHosts: options.ExtraHosts,


@ -46,7 +46,7 @@ func (m *MockBackend) CommitBuildStep(ctx context.Context, c backend.CommitConfi
return "", nil
}
func (m *MockBackend) ContainerStart(ctx context.Context, containerID string, checkpoint string, checkpointDir string) error {
func (m *MockBackend) ContainerStart(ctx context.Context, containerID string, hostConfig *container.HostConfig, checkpoint string, checkpointDir string) error {
return nil
}


@ -10,11 +10,11 @@ import (
)
// DistributionInspect returns the image digest with the full manifest.
func (cli *Client) DistributionInspect(ctx context.Context, imageRef, encodedRegistryAuth string) (registry.DistributionInspect, error) {
func (cli *Client) DistributionInspect(ctx context.Context, image, encodedRegistryAuth string) (registry.DistributionInspect, error) {
// Contact the registry to retrieve digest and platform information
var distributionInspect registry.DistributionInspect
if imageRef == "" {
return distributionInspect, objectNotFoundError{object: "distribution", id: imageRef}
if image == "" {
return distributionInspect, objectNotFoundError{object: "distribution", id: image}
}
if err := cli.NewVersionError(ctx, "1.30", "distribution inspect"); err != nil {
@ -28,7 +28,7 @@ func (cli *Client) DistributionInspect(ctx context.Context, imageRef, encodedReg
}
}
resp, err := cli.get(ctx, "/distribution/"+imageRef+"/json", url.Values{}, headers)
resp, err := cli.get(ctx, "/distribution/"+image+"/json", url.Values{}, headers)
defer ensureReaderClosed(resp)
if err != nil {
return distributionInspect, err


@ -8,13 +8,13 @@ import (
"strings"
"github.com/distribution/reference"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/registry"
)
// ImageCreate creates a new image based on the parent options.
// It returns the JSON content in the response body.
func (cli *Client) ImageCreate(ctx context.Context, parentReference string, options image.CreateOptions) (io.ReadCloser, error) {
func (cli *Client) ImageCreate(ctx context.Context, parentReference string, options types.ImageCreateOptions) (io.ReadCloser, error) {
ref, err := reference.ParseNormalizedNamed(parentReference)
if err != nil {
return nil, err


@ -9,7 +9,7 @@ import (
"strings"
"testing"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/errdefs"
"gotest.tools/v3/assert"
@ -20,7 +20,7 @@ func TestImageCreateError(t *testing.T) {
client := &Client{
client: newMockClient(errorMock(http.StatusInternalServerError, "Server error")),
}
_, err := client.ImageCreate(context.Background(), "reference", image.CreateOptions{})
_, err := client.ImageCreate(context.Background(), "reference", types.ImageCreateOptions{})
assert.Check(t, is.ErrorType(err, errdefs.IsSystem))
}
@ -58,7 +58,7 @@ func TestImageCreate(t *testing.T) {
}),
}
createResponse, err := client.ImageCreate(context.Background(), expectedReference, image.CreateOptions{
createResponse, err := client.ImageCreate(context.Background(), expectedReference, types.ImageCreateOptions{
RegistryAuth: expectedRegistryAuth,
})
if err != nil {


@ -8,12 +8,11 @@ import (
"github.com/distribution/reference"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/image"
)
// ImageImport creates a new image based on the source options.
// It returns the JSON content in the response body.
func (cli *Client) ImageImport(ctx context.Context, source types.ImageImportSource, ref string, options image.ImportOptions) (io.ReadCloser, error) {
func (cli *Client) ImageImport(ctx context.Context, source types.ImageImportSource, ref string, options types.ImageImportOptions) (io.ReadCloser, error) {
if ref != "" {
// Check if the given image name can be resolved
if _, err := reference.ParseNormalizedNamed(ref); err != nil {


@ -11,7 +11,6 @@ import (
"testing"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/errdefs"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
@ -21,7 +20,7 @@ func TestImageImportError(t *testing.T) {
client := &Client{
client: newMockClient(errorMock(http.StatusInternalServerError, "Server error")),
}
_, err := client.ImageImport(context.Background(), types.ImageImportSource{}, "image:tag", image.ImportOptions{})
_, err := client.ImageImport(context.Background(), types.ImageImportSource{}, "image:tag", types.ImageImportOptions{})
assert.Check(t, is.ErrorType(err, errdefs.IsSystem))
}
@ -64,7 +63,7 @@ func TestImageImport(t *testing.T) {
importResponse, err := client.ImageImport(context.Background(), types.ImageImportSource{
Source: strings.NewReader("source"),
SourceName: "image_source",
}, "repository_name:imported", image.ImportOptions{
}, "repository_name:imported", types.ImageImportOptions{
Tag: "imported",
Message: "A message",
Changes: []string{"change1", "change2"},


@ -5,13 +5,14 @@ import (
"encoding/json"
"net/url"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/versions"
)
// ImageList returns a list of images in the docker host.
func (cli *Client) ImageList(ctx context.Context, options image.ListOptions) ([]image.Summary, error) {
func (cli *Client) ImageList(ctx context.Context, options types.ImageListOptions) ([]image.Summary, error) {
var images []image.Summary
// Make sure we negotiated (if the client is configured to do so),


@ -11,6 +11,7 @@ import (
"strings"
"testing"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/errdefs"
@ -23,7 +24,7 @@ func TestImageListError(t *testing.T) {
client: newMockClient(errorMock(http.StatusInternalServerError, "Server error")),
}
_, err := client.ImageList(context.Background(), image.ListOptions{})
_, err := client.ImageList(context.Background(), types.ImageListOptions{})
assert.Check(t, is.ErrorType(err, errdefs.IsSystem))
}
@ -35,7 +36,7 @@ func TestImageListConnectionError(t *testing.T) {
client, err := NewClientWithOpts(WithAPIVersionNegotiation(), WithHost("tcp://no-such-host.invalid"))
assert.NilError(t, err)
_, err = client.ImageList(context.Background(), image.ListOptions{})
_, err = client.ImageList(context.Background(), types.ImageListOptions{})
assert.Check(t, is.ErrorType(err, IsErrConnectionFailed))
}
@ -43,11 +44,11 @@ func TestImageList(t *testing.T) {
const expectedURL = "/images/json"
listCases := []struct {
options image.ListOptions
options types.ImageListOptions
expectedQueryParams map[string]string
}{
{
options: image.ListOptions{},
options: types.ImageListOptions{},
expectedQueryParams: map[string]string{
"all": "",
"filter": "",
@ -55,7 +56,7 @@ func TestImageList(t *testing.T) {
},
},
{
options: image.ListOptions{
options: types.ImageListOptions{
Filters: filters.NewArgs(
filters.Arg("label", "label1"),
filters.Arg("label", "label2"),
@ -69,7 +70,7 @@ func TestImageList(t *testing.T) {
},
},
{
options: image.ListOptions{
options: types.ImageListOptions{
Filters: filters.NewArgs(filters.Arg("dangling", "false")),
},
expectedQueryParams: map[string]string{
@ -152,7 +153,7 @@ func TestImageListApiBefore125(t *testing.T) {
version: "1.24",
}
options := image.ListOptions{
options := types.ImageListOptions{
Filters: filters.NewArgs(filters.Arg("reference", "image:tag")),
}
@ -173,12 +174,12 @@ func TestImageListWithSharedSize(t *testing.T) {
for _, tc := range []struct {
name string
version string
options image.ListOptions
options types.ImageListOptions
sharedSize string // expected value for the shared-size query param, or empty if it should not be set.
}{
{name: "unset after 1.42, no options set", version: "1.42"},
{name: "set after 1.42, if requested", version: "1.42", options: image.ListOptions{SharedSize: true}, sharedSize: "1"},
{name: "unset before 1.42, even if requested", version: "1.41", options: image.ListOptions{SharedSize: true}},
{name: "set after 1.42, if requested", version: "1.42", options: types.ImageListOptions{SharedSize: true}, sharedSize: "1"},
{name: "unset before 1.42, even if requested", version: "1.41", options: types.ImageListOptions{SharedSize: true}},
} {
tc := tc
t.Run(tc.name, func(t *testing.T) {


@ -7,7 +7,7 @@ import (
"strings"
"github.com/distribution/reference"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types"
"github.com/docker/docker/errdefs"
)
@ -19,7 +19,7 @@ import (
// FIXME(vdemeester): there is currently used in a few way in docker/docker
// - if not in trusted content, ref is used to pass the whole reference, and tag is empty
// - if in trusted content, ref is used to pass the reference name, and tag for the digest
func (cli *Client) ImagePull(ctx context.Context, refStr string, options image.PullOptions) (io.ReadCloser, error) {
func (cli *Client) ImagePull(ctx context.Context, refStr string, options types.ImagePullOptions) (io.ReadCloser, error) {
ref, err := reference.ParseNormalizedNamed(refStr)
if err != nil {
return nil, err


@ -9,7 +9,7 @@ import (
"strings"
"testing"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/errdefs"
"gotest.tools/v3/assert"
@ -23,7 +23,7 @@ func TestImagePullReferenceParseError(t *testing.T) {
}),
}
// An empty reference is an invalid reference
_, err := client.ImagePull(context.Background(), "", image.PullOptions{})
_, err := client.ImagePull(context.Background(), "", types.ImagePullOptions{})
if err == nil || !strings.Contains(err.Error(), "invalid reference format") {
t.Fatalf("expected an error, got %v", err)
}
@ -33,7 +33,7 @@ func TestImagePullAnyError(t *testing.T) {
client := &Client{
client: newMockClient(errorMock(http.StatusInternalServerError, "Server error")),
}
_, err := client.ImagePull(context.Background(), "myimage", image.PullOptions{})
_, err := client.ImagePull(context.Background(), "myimage", types.ImagePullOptions{})
assert.Check(t, is.ErrorType(err, errdefs.IsSystem))
}
@ -41,7 +41,7 @@ func TestImagePullStatusUnauthorizedError(t *testing.T) {
client := &Client{
client: newMockClient(errorMock(http.StatusUnauthorized, "Unauthorized error")),
}
_, err := client.ImagePull(context.Background(), "myimage", image.PullOptions{})
_, err := client.ImagePull(context.Background(), "myimage", types.ImagePullOptions{})
assert.Check(t, is.ErrorType(err, errdefs.IsUnauthorized))
}
@ -52,7 +52,7 @@ func TestImagePullWithUnauthorizedErrorAndPrivilegeFuncError(t *testing.T) {
privilegeFunc := func() (string, error) {
return "", fmt.Errorf("Error requesting privilege")
}
_, err := client.ImagePull(context.Background(), "myimage", image.PullOptions{
_, err := client.ImagePull(context.Background(), "myimage", types.ImagePullOptions{
PrivilegeFunc: privilegeFunc,
})
if err == nil || err.Error() != "Error requesting privilege" {
@ -67,7 +67,7 @@ func TestImagePullWithUnauthorizedErrorAndAnotherUnauthorizedError(t *testing.T)
privilegeFunc := func() (string, error) {
return "a-auth-header", nil
}
_, err := client.ImagePull(context.Background(), "myimage", image.PullOptions{
_, err := client.ImagePull(context.Background(), "myimage", types.ImagePullOptions{
PrivilegeFunc: privilegeFunc,
})
assert.Check(t, is.ErrorType(err, errdefs.IsUnauthorized))
@ -108,7 +108,7 @@ func TestImagePullWithPrivilegedFuncNoError(t *testing.T) {
privilegeFunc := func() (string, error) {
return "IAmValid", nil
}
resp, err := client.ImagePull(context.Background(), "myimage", image.PullOptions{
resp, err := client.ImagePull(context.Background(), "myimage", types.ImagePullOptions{
RegistryAuth: "NotValid",
PrivilegeFunc: privilegeFunc,
})
@ -179,7 +179,7 @@ func TestImagePullWithoutErrors(t *testing.T) {
}, nil
}),
}
resp, err := client.ImagePull(context.Background(), pullCase.reference, image.PullOptions{
resp, err := client.ImagePull(context.Background(), pullCase.reference, types.ImagePullOptions{
All: pullCase.all,
})
if err != nil {


@ -8,7 +8,7 @@ import (
"net/url"
"github.com/distribution/reference"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/errdefs"
)
@ -17,7 +17,7 @@ import (
// It executes the privileged function if the operation is unauthorized
// and it tries one more time.
// It's up to the caller to handle the io.ReadCloser and close it properly.
func (cli *Client) ImagePush(ctx context.Context, image string, options image.PushOptions) (io.ReadCloser, error) {
func (cli *Client) ImagePush(ctx context.Context, image string, options types.ImagePushOptions) (io.ReadCloser, error) {
ref, err := reference.ParseNormalizedNamed(image)
if err != nil {
return nil, err


@ -9,7 +9,7 @@ import (
"strings"
"testing"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/errdefs"
"gotest.tools/v3/assert"
@ -23,12 +23,12 @@ func TestImagePushReferenceError(t *testing.T) {
}),
}
// An empty reference is an invalid reference
_, err := client.ImagePush(context.Background(), "", image.PushOptions{})
_, err := client.ImagePush(context.Background(), "", types.ImagePushOptions{})
if err == nil || !strings.Contains(err.Error(), "invalid reference format") {
t.Fatalf("expected an error, got %v", err)
}
// An canonical reference cannot be pushed
_, err = client.ImagePush(context.Background(), "repo@sha256:ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", image.PushOptions{})
_, err = client.ImagePush(context.Background(), "repo@sha256:ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", types.ImagePushOptions{})
if err == nil || err.Error() != "cannot push a digest reference" {
t.Fatalf("expected an error, got %v", err)
}
@ -38,7 +38,7 @@ func TestImagePushAnyError(t *testing.T) {
client := &Client{
client: newMockClient(errorMock(http.StatusInternalServerError, "Server error")),
}
_, err := client.ImagePush(context.Background(), "myimage", image.PushOptions{})
_, err := client.ImagePush(context.Background(), "myimage", types.ImagePushOptions{})
assert.Check(t, is.ErrorType(err, errdefs.IsSystem))
}
@ -46,7 +46,7 @@ func TestImagePushStatusUnauthorizedError(t *testing.T) {
client := &Client{
client: newMockClient(errorMock(http.StatusUnauthorized, "Unauthorized error")),
}
_, err := client.ImagePush(context.Background(), "myimage", image.PushOptions{})
_, err := client.ImagePush(context.Background(), "myimage", types.ImagePushOptions{})
assert.Check(t, is.ErrorType(err, errdefs.IsUnauthorized))
}
@ -57,7 +57,7 @@ func TestImagePushWithUnauthorizedErrorAndPrivilegeFuncError(t *testing.T) {
privilegeFunc := func() (string, error) {
return "", fmt.Errorf("Error requesting privilege")
}
_, err := client.ImagePush(context.Background(), "myimage", image.PushOptions{
_, err := client.ImagePush(context.Background(), "myimage", types.ImagePushOptions{
PrivilegeFunc: privilegeFunc,
})
if err == nil || err.Error() != "Error requesting privilege" {
@ -72,7 +72,7 @@ func TestImagePushWithUnauthorizedErrorAndAnotherUnauthorizedError(t *testing.T)
privilegeFunc := func() (string, error) {
return "a-auth-header", nil
}
_, err := client.ImagePush(context.Background(), "myimage", image.PushOptions{
_, err := client.ImagePush(context.Background(), "myimage", types.ImagePushOptions{
PrivilegeFunc: privilegeFunc,
})
assert.Check(t, is.ErrorType(err, errdefs.IsUnauthorized))
@ -109,7 +109,7 @@ func TestImagePushWithPrivilegedFuncNoError(t *testing.T) {
privilegeFunc := func() (string, error) {
return "IAmValid", nil
}
resp, err := client.ImagePush(context.Background(), "myimage:tag", image.PushOptions{
resp, err := client.ImagePush(context.Background(), "myimage:tag", types.ImagePushOptions{
RegistryAuth: "NotValid",
PrivilegeFunc: privilegeFunc,
})
@ -179,7 +179,7 @@ func TestImagePushWithoutErrors(t *testing.T) {
}, nil
}),
}
resp, err := client.ImagePush(context.Background(), tc.reference, image.PushOptions{
resp, err := client.ImagePush(context.Background(), tc.reference, types.ImagePushOptions{
All: tc.all,
})
if err != nil {


@ -5,11 +5,12 @@ import (
"encoding/json"
"net/url"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/image"
)
// ImageRemove removes an image from the docker host.
func (cli *Client) ImageRemove(ctx context.Context, imageID string, options image.RemoveOptions) ([]image.DeleteResponse, error) {
func (cli *Client) ImageRemove(ctx context.Context, imageID string, options types.ImageRemoveOptions) ([]image.DeleteResponse, error) {
query := url.Values{}
if options.Force {


@ -10,6 +10,7 @@ import (
"strings"
"testing"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/errdefs"
"gotest.tools/v3/assert"
@ -21,7 +22,7 @@ func TestImageRemoveError(t *testing.T) {
client: newMockClient(errorMock(http.StatusInternalServerError, "Server error")),
}
_, err := client.ImageRemove(context.Background(), "image_id", image.RemoveOptions{})
_, err := client.ImageRemove(context.Background(), "image_id", types.ImageRemoveOptions{})
assert.Check(t, is.ErrorType(err, errdefs.IsSystem))
}
@ -30,7 +31,7 @@ func TestImageRemoveImageNotFound(t *testing.T) {
client: newMockClient(errorMock(http.StatusNotFound, "no such image: unknown")),
}
_, err := client.ImageRemove(context.Background(), "unknown", image.RemoveOptions{})
_, err := client.ImageRemove(context.Background(), "unknown", types.ImageRemoveOptions{})
assert.Check(t, is.ErrorContains(err, "no such image: unknown"))
assert.Check(t, is.ErrorType(err, errdefs.IsNotFound))
}
@ -92,7 +93,7 @@ func TestImageRemove(t *testing.T) {
}, nil
}),
}
imageDeletes, err := client.ImageRemove(context.Background(), "image_id", image.RemoveOptions{
imageDeletes, err := client.ImageRemove(context.Background(), "image_id", types.ImageRemoveOptions{
Force: removeCase.force,
PruneChildren: removeCase.pruneChildren,
})


@ -90,15 +90,15 @@ type ImageAPIClient interface {
ImageBuild(ctx context.Context, context io.Reader, options types.ImageBuildOptions) (types.ImageBuildResponse, error)
BuildCachePrune(ctx context.Context, opts types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error)
BuildCancel(ctx context.Context, id string) error
ImageCreate(ctx context.Context, parentReference string, options image.CreateOptions) (io.ReadCloser, error)
ImageCreate(ctx context.Context, parentReference string, options types.ImageCreateOptions) (io.ReadCloser, error)
ImageHistory(ctx context.Context, image string) ([]image.HistoryResponseItem, error)
ImageImport(ctx context.Context, source types.ImageImportSource, ref string, options image.ImportOptions) (io.ReadCloser, error)
ImageImport(ctx context.Context, source types.ImageImportSource, ref string, options types.ImageImportOptions) (io.ReadCloser, error)
ImageInspectWithRaw(ctx context.Context, image string) (types.ImageInspect, []byte, error)
ImageList(ctx context.Context, options image.ListOptions) ([]image.Summary, error)
ImageList(ctx context.Context, options types.ImageListOptions) ([]image.Summary, error)
ImageLoad(ctx context.Context, input io.Reader, quiet bool) (types.ImageLoadResponse, error)
ImagePull(ctx context.Context, ref string, options image.PullOptions) (io.ReadCloser, error)
ImagePush(ctx context.Context, ref string, options image.PushOptions) (io.ReadCloser, error)
ImageRemove(ctx context.Context, image string, options image.RemoveOptions) ([]image.DeleteResponse, error)
ImagePull(ctx context.Context, ref string, options types.ImagePullOptions) (io.ReadCloser, error)
ImagePush(ctx context.Context, ref string, options types.ImagePushOptions) (io.ReadCloser, error)
ImageRemove(ctx context.Context, image string, options types.ImageRemoveOptions) ([]image.DeleteResponse, error)
ImageSearch(ctx context.Context, term string, options types.ImageSearchOptions) ([]registry.SearchResult, error)
ImageSave(ctx context.Context, images []string) (io.ReadCloser, error)
ImageTag(ctx context.Context, image, ref string) error


@ -15,7 +15,7 @@ import (
)
var (
testBuf = []byte("Buffalo1 buffalo2 Buffalo3 buffalo4 buffalo5 buffalo6 Buffalo7 buffalo8")
testBuf = []byte("Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo")
testBufSize = len(testBuf)
)


@ -45,6 +45,9 @@ func installConfigFlags(conf *config.Config, flags *pflag.FlagSet) error {
flags.StringVar(&conf.CgroupParent, "cgroup-parent", "", "Set parent cgroup for all containers")
flags.StringVar(&conf.RemappedRoot, "userns-remap", "", "User/Group setting for user namespaces")
flags.BoolVar(&conf.LiveRestoreEnabled, "live-restore", false, "Enable live restore of docker when containers are still running")
// TODO(thaJeztah): Used to produce a deprecation error; remove the flag and the OOMScoreAdjust field for the next release after v25.0.0.
flags.IntVar(&conf.OOMScoreAdjust, "oom-score-adjust", 0, "Set the oom_score_adj for the daemon (deprecated)")
_ = flags.MarkDeprecated("oom-score-adjust", "and will be removed in the next release.")
flags.BoolVar(&conf.Init, "init", false, "Run an init in the container to forward signals and reap processes")
flags.StringVar(&conf.InitPath, "init-path", "", "Path to the docker-init binary")
flags.Int64Var(&conf.CPURealtimePeriod, "cpu-rt-period", 0, "Limit the CPU real-time period in microseconds for the parent cgroup for all containers (not supported with cgroups v2)")


@ -242,7 +242,7 @@ func (cli *DaemonCli) start(opts *daemonOptions) (err error) {
// Override BuildKit's default Resource so that it matches the semconv
// version that is used in our code.
detect.OverrideResource(resource.Default())
detect.Resource = resource.Default()
detect.Recorder = detect.NewTraceRecorder()
tp, err := detect.TracerProvider()
@ -256,10 +256,7 @@ func (cli *DaemonCli) start(opts *daemonOptions) (err error) {
pluginStore := plugin.NewStore()
var apiServer apiserver.Server
cli.authzMiddleware, err = initMiddlewares(&apiServer, cli.Config, pluginStore)
if err != nil {
return errors.Wrap(err, "failed to start API server")
}
cli.authzMiddleware = initMiddlewares(&apiServer, cli.Config, pluginStore)
d, err := daemon.NewDaemon(ctx, cli.Config, pluginStore, cli.authzMiddleware)
if err != nil {
@ -310,12 +307,14 @@ func (cli *DaemonCli) start(opts *daemonOptions) (err error) {
//
// FIXME(thaJeztah): better separate runtime and config data?
daemonCfg := d.Config()
routerOpts, err := newRouterOptions(routerCtx, &daemonCfg, d, c)
routerOptions, err := newRouterOptions(routerCtx, &daemonCfg, d)
if err != nil {
return err
}
httpServer.Handler = apiServer.CreateMux(routerOpts.Build()...)
routerOptions.cluster = c
httpServer.Handler = apiServer.CreateMux(routerOptions.Build()...)
go d.ProcessClusterNotifications(ctx, c.GetWatchStream())
@ -357,7 +356,7 @@ func (cli *DaemonCli) start(opts *daemonOptions) (err error) {
notifyStopping()
shutdownDaemon(ctx, d)
if err := routerOpts.buildkit.Close(); err != nil {
if err := routerOptions.buildkit.Close(); err != nil {
log.G(ctx).WithError(err).Error("Failed to close buildkit")
}
@ -381,19 +380,11 @@ func (cli *DaemonCli) start(opts *daemonOptions) (err error) {
func setOTLPProtoDefault() {
const (
tracesEnv = "OTEL_EXPORTER_OTLP_TRACES_PROTOCOL"
metricsEnv = "OTEL_EXPORTER_OTLP_METRICS_PROTOCOL"
protoEnv = "OTEL_EXPORTER_OTLP_PROTOCOL"
defaultProto = "http/protobuf"
)
if os.Getenv(protoEnv) == "" {
if os.Getenv(tracesEnv) == "" {
os.Setenv(tracesEnv, defaultProto)
}
if os.Getenv(metricsEnv) == "" {
os.Setenv(metricsEnv, defaultProto)
}
if os.Getenv(tracesEnv) == "" && os.Getenv(protoEnv) == "" {
os.Setenv(tracesEnv, "http/protobuf")
}
}
@ -406,17 +397,23 @@ type routerOptions struct {
cluster *cluster.Cluster
}
func newRouterOptions(ctx context.Context, config *config.Config, d *daemon.Daemon, c *cluster.Cluster) (routerOptions, error) {
func newRouterOptions(ctx context.Context, config *config.Config, d *daemon.Daemon) (routerOptions, error) {
opts := routerOptions{}
sm, err := session.NewManager()
if err != nil {
return routerOptions{}, errors.Wrap(err, "failed to create sessionmanager")
return opts, errors.Wrap(err, "failed to create sessionmanager")
}
manager, err := dockerfile.NewBuildManager(d.BuilderBackend(), d.IdentityMapping())
if err != nil {
return routerOptions{}, err
return opts, err
}
cgroupParent := newCgroupParent(config)
ro := routerOptions{
sessionManager: sm,
features: d.Features,
daemon: d,
}
bk, err := buildkit.New(ctx, buildkit.Opt{
SessionManager: sm,
@ -438,22 +435,18 @@ func newRouterOptions(ctx context.Context, config *config.Config, d *daemon.Daem
ContainerdNamespace: config.ContainerdNamespace,
})
if err != nil {
return routerOptions{}, err
return opts, err
}
bb, err := buildbackend.NewBackend(d.ImageService(), manager, bk, d.EventsService)
if err != nil {
return routerOptions{}, errors.Wrap(err, "failed to create buildmanager")
return opts, errors.Wrap(err, "failed to create buildmanager")
}
return routerOptions{
sessionManager: sm,
buildBackend: bb,
features: d.Features,
buildkit: bk,
daemon: d,
cluster: c,
}, nil
ro.buildBackend = bb
ro.buildkit = bk
return ro, nil
}
func (cli *DaemonCli) reloadConfig() {
@ -629,10 +622,6 @@ func loadDaemonCliConfig(opts *daemonOptions) (*config.Config, error) {
conf.CDISpecDirs = nil
}
if err := loadCLIPlatformConfig(conf); err != nil {
return nil, err
}
return conf, nil
}
@ -719,15 +708,14 @@ func (opts routerOptions) Build() []router.Router {
return routers
}
func initMiddlewares(s *apiserver.Server, cfg *config.Config, pluginStore plugingetter.PluginGetter) (*authorization.Middleware, error) {
func initMiddlewares(s *apiserver.Server, cfg *config.Config, pluginStore plugingetter.PluginGetter) *authorization.Middleware {
v := dockerversion.Version
exp := middleware.NewExperimentalMiddleware(cfg.Experimental)
s.UseMiddleware(exp)
vm, err := middleware.NewVersionMiddleware(dockerversion.Version, api.DefaultVersion, cfg.MinAPIVersion)
if err != nil {
return nil, err
}
s.UseMiddleware(*vm)
vm := middleware.NewVersionMiddleware(v, api.DefaultVersion, cfg.MinAPIVersion)
s.UseMiddleware(vm)
if cfg.CorsHeaders != "" {
c := middleware.NewCORSMiddleware(cfg.CorsHeaders)
@ -736,7 +724,7 @@ func initMiddlewares(s *apiserver.Server, cfg *config.Config, pluginStore plugin
authzMiddleware := authorization.NewMiddleware(cfg.AuthorizationPlugins, pluginStore)
s.UseMiddleware(authzMiddleware)
return authzMiddleware, nil
return authzMiddleware
}
func (cli *DaemonCli) getContainerdDaemonOpts() ([]supervisor.DaemonOpt, error) {
@ -844,7 +832,6 @@ func loadListeners(cfg *config.Config, tlsConfig *tls.Config) ([]net.Listener, [
if proto == "tcp" && !authEnabled {
log.G(ctx).WithField("host", protoAddr).Warn("Binding to IP address without --tlsverify is insecure and gives root access on this machine to everyone who has access to your network.")
log.G(ctx).WithField("host", protoAddr).Warn("Binding to an IP address, even on localhost, can also give access to scripts run in a browser. Be safe out there!")
log.G(ctx).WithField("host", protoAddr).Warn("[DEPRECATION NOTICE] In future versions this will be a hard failure preventing the daemon from starting! Learn more at: https://docs.docker.com/go/api-security/")
time.Sleep(time.Second)
// If TLSVerify is explicitly set to false we'll take that as "Please let me shoot myself in the foot"


@ -3,28 +3,11 @@ package main
import (
cdcgroups "github.com/containerd/cgroups/v3"
systemdDaemon "github.com/coreos/go-systemd/v22/daemon"
"github.com/docker/docker/daemon"
"github.com/docker/docker/daemon/config"
"github.com/docker/docker/pkg/sysinfo"
"github.com/pkg/errors"
)
// loadCLIPlatformConfig loads the platform specific CLI configuration
func loadCLIPlatformConfig(conf *config.Config) error {
if conf.RemappedRoot == "" {
return nil
}
containerdNamespace, containerdPluginNamespace, err := daemon.RemapContainerdNamespaces(conf)
if err != nil {
return err
}
conf.ContainerdNamespace = containerdNamespace
conf.ContainerdPluginNamespace = containerdPluginNamespace
return nil
}
// preNotifyReady sends a message to the host when the API is active, but before the daemon is
func preNotifyReady() {
}


@ -16,12 +16,6 @@ func getDefaultDaemonConfigFile() (string, error) {
return "", nil
}
// loadCLIPlatformConfig loads the platform specific CLI configuration
// there is none on windows, so this is a no-op
func loadCLIPlatformConfig(conf *config.Config) error {
return nil
}
// setDefaultUmask doesn't do anything on windows
func setDefaultUmask() error {
return nil


@ -5,7 +5,7 @@ import (
"sync"
)
// attachContext is the context used for attach calls.
// attachContext is the context used for for attach calls.
type attachContext struct {
mu sync.Mutex
ctx context.Context


@ -299,8 +299,8 @@ func (container *Container) SetupWorkingDirectory(rootIdentity idtools.Identity)
return nil
}
workdir := filepath.Clean(container.Config.WorkingDir)
pth, err := container.GetResourcePath(workdir)
container.Config.WorkingDir = filepath.Clean(container.Config.WorkingDir)
pth, err := container.GetResourcePath(container.Config.WorkingDir)
if err != nil {
return err
}
@ -514,14 +514,14 @@ func (container *Container) AddMountPointWithVolume(destination string, vol volu
}
// UnmountVolumes unmounts all volumes
func (container *Container) UnmountVolumes(ctx context.Context, volumeEventLog func(name string, action events.Action, attributes map[string]string)) error {
func (container *Container) UnmountVolumes(volumeEventLog func(name string, action events.Action, attributes map[string]string)) error {
var errs []string
for _, volumeMount := range container.MountPoints {
if volumeMount.Volume == nil {
continue
}
if err := volumeMount.Cleanup(ctx); err != nil {
if err := volumeMount.Cleanup(); err != nil {
errs = append(errs, err.Error())
continue
}


@ -15,6 +15,8 @@ import (
"github.com/docker/docker/api/types/events"
mounttypes "github.com/docker/docker/api/types/mount"
swarmtypes "github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/pkg/stringid"
"github.com/docker/docker/volume"
volumemounts "github.com/docker/docker/volume/mounts"
"github.com/moby/sys/mount"
"github.com/opencontainers/selinux/go-selinux/label"
@ -127,11 +129,34 @@ func (container *Container) NetworkMounts() []Mount {
}
// CopyImagePathContent copies files in destination to the volume.
func (container *Container) CopyImagePathContent(volumePath, destination string) error {
if err := label.Relabel(volumePath, container.MountLabel, true); err != nil && !errors.Is(err, syscall.ENOTSUP) {
func (container *Container) CopyImagePathContent(v volume.Volume, destination string) error {
rootfs, err := container.GetResourcePath(destination)
if err != nil {
return err
}
return copyExistingContents(destination, volumePath)
if _, err := os.Stat(rootfs); err != nil {
if os.IsNotExist(err) {
return nil
}
return err
}
id := stringid.GenerateRandomID()
path, err := v.Mount(id)
if err != nil {
return err
}
defer func() {
if err := v.Unmount(id); err != nil {
log.G(context.TODO()).Warnf("error while unmounting volume %s: %v", v.Name(), err)
}
}()
if err := label.Relabel(path, container.MountLabel, true); err != nil && !errors.Is(err, syscall.ENOTSUP) {
return err
}
return copyExistingContents(rootfs, path)
}
// ShmResourcePath returns path to shm
@ -371,7 +396,7 @@ func (container *Container) DetachAndUnmount(volumeEventLog func(name string, ac
Warn("Unable to unmount")
}
}
return container.UnmountVolumes(ctx, volumeEventLog)
return container.UnmountVolumes(volumeEventLog)
}
// ignoreUnsupportedXAttrs ignores errors when extended attributes
@ -394,13 +419,9 @@ func copyExistingContents(source, destination string) error {
return err
}
if len(dstList) != 0 {
log.G(context.TODO()).WithFields(log.Fields{
"source": source,
"destination": destination,
}).Debug("destination is not empty, do not copy")
// destination is not empty, do not copy
return nil
}
return fs.CopyDir(destination, source, ignoreUnsupportedXAttrs())
}


@ -1,7 +1,6 @@
package container // import "github.com/docker/docker/container"
import (
"context"
"fmt"
"os"
"path/filepath"
@ -129,7 +128,7 @@ func (container *Container) ConfigMounts() []Mount {
// On Windows it only delegates to `UnmountVolumes` since there is nothing to
// force unmount.
func (container *Container) DetachAndUnmount(volumeEventLog func(name string, action events.Action, attributes map[string]string)) error {
return container.UnmountVolumes(context.TODO(), volumeEventLog)
return container.UnmountVolumes(volumeEventLog)
}
// TmpfsMounts returns the list of tmpfs mounts


@ -15,7 +15,6 @@
# * DOCKERD_ROOTLESS_ROOTLESSKIT_PORT_DRIVER=(builtin|slirp4netns|implicit): the rootlesskit port driver. Defaults to "builtin".
# * DOCKERD_ROOTLESS_ROOTLESSKIT_SLIRP4NETNS_SANDBOX=(auto|true|false): whether to protect slirp4netns with a dedicated mount namespace. Defaults to "auto".
# * DOCKERD_ROOTLESS_ROOTLESSKIT_SLIRP4NETNS_SECCOMP=(auto|true|false): whether to protect slirp4netns with seccomp. Defaults to "auto".
# * DOCKERD_ROOTLESS_ROOTLESSKIT_DISABLE_HOST_LOOPBACK=(true|false): prohibit connections to 127.0.0.1 on the host (including via 10.0.2.2, in the case of slirp4netns). Defaults to "true".
# To apply an environment variable via systemd, create ~/.config/systemd/user/docker.service.d/override.conf as follows,
# and run `systemctl --user daemon-reload && systemctl --user restart docker`:
@ -72,7 +71,6 @@ fi
: "${DOCKERD_ROOTLESS_ROOTLESSKIT_PORT_DRIVER:=builtin}"
: "${DOCKERD_ROOTLESS_ROOTLESSKIT_SLIRP4NETNS_SANDBOX:=auto}"
: "${DOCKERD_ROOTLESS_ROOTLESSKIT_SLIRP4NETNS_SECCOMP:=auto}"
: "${DOCKERD_ROOTLESS_ROOTLESSKIT_DISABLE_HOST_LOOPBACK:=}"
net=$DOCKERD_ROOTLESS_ROOTLESSKIT_NET
mtu=$DOCKERD_ROOTLESS_ROOTLESSKIT_MTU
if [ -z "$net" ]; then
@ -100,11 +98,6 @@ if [ -z "$mtu" ]; then
mtu=1500
fi
host_loopback="--disable-host-loopback"
if [ "$DOCKERD_ROOTLESS_ROOTLESSKIT_DISABLE_HOST_LOOPBACK" = "false" ]; then
host_loopback=""
fi
dockerd="${DOCKERD:-dockerd}"
if [ -z "$_DOCKERD_ROOTLESS_CHILD" ]; then
@ -132,7 +125,7 @@ if [ -z "$_DOCKERD_ROOTLESS_CHILD" ]; then
--net=$net --mtu=$mtu \
--slirp4netns-sandbox=$DOCKERD_ROOTLESS_ROOTLESSKIT_SLIRP4NETNS_SANDBOX \
--slirp4netns-seccomp=$DOCKERD_ROOTLESS_ROOTLESSKIT_SLIRP4NETNS_SECCOMP \
$host_loopback --port-driver=$DOCKERD_ROOTLESS_ROOTLESSKIT_PORT_DRIVER \
--disable-host-loopback --port-driver=$DOCKERD_ROOTLESS_ROOTLESSKIT_PORT_DRIVER \
--copy-up=/etc --copy-up=/run \
--propagation=rslave \
$DOCKERD_ROOTLESS_ROOTLESSKIT_FLAGS \


@ -8,6 +8,25 @@ import (
"github.com/docker/docker/errdefs"
)
// ContainerCopy performs a deprecated operation of archiving the resource at
// the specified path in the container identified by the given name.
func (daemon *Daemon) ContainerCopy(name string, res string) (io.ReadCloser, error) {
ctr, err := daemon.GetContainer(name)
if err != nil {
return nil, err
}
data, err := daemon.containerCopy(ctr, res)
if err == nil {
return data, nil
}
if os.IsNotExist(err) {
return nil, containerFileNotFound{res, name}
}
return nil, errdefs.System(err)
}
// ContainerStatPath stats the filesystem resource at the specified path in the
// container identified by the given name.
func (daemon *Daemon) ContainerStatPath(name string, path string) (stat *types.ContainerPathStat, err error) {


@ -161,6 +161,55 @@ func (daemon *Daemon) containerExtractToDir(container *container.Container, path
return nil
}
func (daemon *Daemon) containerCopy(container *container.Container, resource string) (rc io.ReadCloser, err error) {
container.Lock()
defer func() {
if err != nil {
// Wait to unlock the container until the archive is fully read
// (see the ReadCloseWrapper func below) or if there is an error
// before that occurs.
container.Unlock()
}
}()
cfs, err := daemon.openContainerFS(container)
if err != nil {
return nil, err
}
defer func() {
if err != nil {
cfs.Close()
}
}()
err = cfs.RunInFS(context.TODO(), func() error {
_, err := os.Stat(resource)
return err
})
if err != nil {
return nil, err
}
tb, err := archive.NewTarballer(resource, &archive.TarOptions{
Compression: archive.Uncompressed,
})
if err != nil {
return nil, err
}
cfs.GoInFS(context.TODO(), tb.Do)
archv := tb.Reader()
reader := ioutils.NewReadCloserWrapper(archv, func() error {
err := archv.Close()
_ = cfs.Close()
container.Unlock()
return err
})
daemon.LogContainerEvent(container, events.ActionCopy)
return reader, nil
}
// checkIfPathIsInAVolume checks if the path is in a volume. If it is, it
// cannot be in a read-only volume. If it is not in a volume, the container
// cannot be configured with a read-only rootfs.


@ -122,7 +122,6 @@ func containerSpecFromGRPC(c *swarmapi.ContainerSpec) *types.ContainerSpec {
mount.VolumeOptions = &mounttypes.VolumeOptions{
NoCopy: m.VolumeOptions.NoCopy,
Labels: m.VolumeOptions.Labels,
Subpath: m.VolumeOptions.Subpath,
}
if m.VolumeOptions.DriverConfig != nil {
mount.VolumeOptions.DriverConfig = &mounttypes.Driver{
@ -409,7 +408,6 @@ func containerToGRPC(c *types.ContainerSpec) (*swarmapi.ContainerSpec, error) {
mount.VolumeOptions = &swarmapi.Mount_VolumeOptions{
NoCopy: m.VolumeOptions.NoCopy,
Labels: m.VolumeOptions.Labels,
Subpath: m.VolumeOptions.Subpath,
}
if m.VolumeOptions.DriverConfig != nil {
mount.VolumeOptions.DriverConfig = &swarmapi.Driver{


@ -1,42 +0,0 @@
package convert
import (
"github.com/docker/docker/pkg/plugingetter"
"github.com/moby/swarmkit/v2/node/plugin"
)
// SwarmPluginGetter adapts a plugingetter.PluginGetter to a Swarmkit plugin.Getter.
func SwarmPluginGetter(pg plugingetter.PluginGetter) plugin.Getter {
return pluginGetter{pg}
}
type pluginGetter struct {
pg plugingetter.PluginGetter
}
var _ plugin.Getter = (*pluginGetter)(nil)
type swarmPlugin struct {
plugingetter.CompatPlugin
}
func (p swarmPlugin) Client() plugin.Client {
return p.CompatPlugin.Client()
}
func (g pluginGetter) Get(name string, capability string) (plugin.Plugin, error) {
p, err := g.pg.Get(name, capability, plugingetter.Lookup)
if err != nil {
return nil, err
}
return swarmPlugin{p}, nil
}
func (g pluginGetter) GetAllManagedPluginsByCap(capability string) []plugin.Plugin {
pp := g.pg.GetAllManagedPluginsByCap(capability)
ret := make([]plugin.Plugin, len(pp))
for i, p := range pp {
ret[i] = swarmPlugin{p}
}
return ret
}


@ -4,13 +4,11 @@ import (
"testing"
containertypes "github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/mount"
swarmtypes "github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/api/types/swarm/runtime"
google_protobuf3 "github.com/gogo/protobuf/types"
swarmapi "github.com/moby/swarmkit/v2/api"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
func TestServiceConvertFromGRPCRuntimeContainer(t *testing.T) {
@ -111,11 +109,11 @@ func TestServiceConvertToGRPCGenericRuntimePlugin(t *testing.T) {
}
func TestServiceConvertToGRPCContainerRuntime(t *testing.T) {
const imgName = "alpine:latest"
image := "alpine:latest"
s := swarmtypes.ServiceSpec{
TaskTemplate: swarmtypes.TaskSpec{
ContainerSpec: &swarmtypes.ContainerSpec{
Image: imgName,
Image: image,
},
},
Mode: swarmtypes.ServiceMode{
@ -133,8 +131,8 @@ func TestServiceConvertToGRPCContainerRuntime(t *testing.T) {
t.Fatal("expected type swarmapi.TaskSpec_Container")
}
if v.Container.Image != imgName {
t.Fatalf("expected image %s; received %s", imgName, v.Container.Image)
if v.Container.Image != image {
t.Fatalf("expected image %s; received %s", image, v.Container.Image)
}
}
@ -613,32 +611,3 @@ func TestServiceConvertToGRPCConfigs(t *testing.T) {
})
}
}
func TestServiceConvertToGRPCVolumeSubpath(t *testing.T) {
s := swarmtypes.ServiceSpec{
TaskTemplate: swarmtypes.TaskSpec{
ContainerSpec: &swarmtypes.ContainerSpec{
Mounts: []mount.Mount{
{
Source: "/foo/bar",
Target: "/baz",
Type: mount.TypeVolume,
ReadOnly: false,
VolumeOptions: &mount.VolumeOptions{
Subpath: "sub",
},
},
},
},
},
}
g, err := ServiceSpecToGRPC(s)
assert.NilError(t, err)
v, ok := g.Task.Runtime.(*swarmapi.TaskSpec_Container)
assert.Assert(t, ok)
assert.Check(t, is.Len(v.Container.Mounts, 1))
assert.Check(t, is.Equal(v.Container.Mounts[0].VolumeOptions.Subpath, "sub"))
}

@@ -12,6 +12,7 @@ import (
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/events"
"github.com/docker/docker/api/types/filters"
opts "github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/swarm"
@@ -38,7 +39,7 @@ type Backend interface {
SetupIngress(clustertypes.NetworkCreateRequest, string) (<-chan struct{}, error)
ReleaseIngress() (<-chan struct{}, error)
CreateManagedContainer(ctx context.Context, config backend.ContainerCreateConfig) (container.CreateResponse, error)
ContainerStart(ctx context.Context, name string, checkpoint string, checkpointDir string) error
ContainerStart(ctx context.Context, name string, hostConfig *container.HostConfig, checkpoint string, checkpointDir string) error
ContainerStop(ctx context.Context, name string, config container.StopOptions) error
ContainerLogs(ctx context.Context, name string, config *container.LogsOptions) (msgs <-chan *backend.LogMessage, tty bool, err error)
ConnectContainerToNetwork(containerName, networkName string, endpointConfig *network.EndpointSettings) error
@@ -77,5 +78,5 @@ type VolumeBackend interface {
type ImageBackend interface {
PullImage(ctx context.Context, ref reference.Named, platform *ocispec.Platform, metaHeaders map[string][]string, authConfig *registry.AuthConfig, outStream io.Writer) error
GetRepositories(context.Context, reference.Named, *registry.AuthConfig) ([]distribution.Repository, error)
GetImage(ctx context.Context, refOrID string, options backend.GetImageOpts) (*image.Image, error)
GetImage(ctx context.Context, refOrID string, options opts.GetImageOpts) (*image.Image, error)
}
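
For reference, a caller-side sketch of the two interface changes visible in this hunk: on the 25.0 side GetImage takes the GetImageOpts type from api/types/image (imported as opts above) rather than the backend package's type, and ContainerStart still accepts an explicit *container.HostConfig. The interface and function names below are invented for illustration:

package sketch

import (
	"context"

	containertypes "github.com/docker/docker/api/types/container"
	imagetypes "github.com/docker/docker/api/types/image"
	"github.com/docker/docker/image"
)

// imageBackend repeats the GetImage signature from the hunk above.
type imageBackend interface {
	GetImage(ctx context.Context, refOrID string, options imagetypes.GetImageOpts) (*image.Image, error)
}

// containerBackend repeats the ContainerStart signature kept on this branch.
type containerBackend interface {
	ContainerStart(ctx context.Context, name string, hostConfig *containertypes.HostConfig, checkpoint string, checkpointDir string) error
}

// startTask resolves the image (an empty options struct leaves platform
// selection to the daemon) and then starts an already-created container; the
// nil HostConfig mirrors the swarm executor, which supplies it at create time.
func startTask(ctx context.Context, ib imageBackend, cb containerBackend, name string) error {
	if _, err := ib.GetImage(ctx, "alpine:latest", imagetypes.GetImageOpts{}); err != nil {
		return err
	}
	return cb.ContainerStart(ctx, name, nil, "", "")
}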

@@ -17,14 +17,13 @@ import (
"github.com/docker/docker/api/types/backend"
containertypes "github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/events"
"github.com/docker/docker/api/types/network"
imagetypes "github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/registry"
containerpkg "github.com/docker/docker/container"
"github.com/docker/docker/daemon"
"github.com/docker/docker/daemon/cluster/convert"
executorpkg "github.com/docker/docker/daemon/cluster/executor"
"github.com/docker/docker/libnetwork"
"github.com/docker/docker/runconfig"
volumeopts "github.com/docker/docker/volume/service/opts"
gogotypes "github.com/gogo/protobuf/types"
"github.com/moby/swarmkit/v2/agent/exec"
@@ -77,7 +76,7 @@ func (c *containerAdapter) pullImage(ctx context.Context) error {
named, err := reference.ParseNormalizedNamed(spec.Image)
if err == nil {
if _, ok := named.(reference.Canonical); ok {
_, err := c.imageBackend.GetImage(ctx, spec.Image, backend.GetImageOpts{})
_, err := c.imageBackend.GetImage(ctx, spec.Image, imagetypes.GetImageOpts{})
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return err
}
@@ -292,34 +291,14 @@ func (c *containerAdapter) waitForDetach(ctx context.Context) error {
}
func (c *containerAdapter) create(ctx context.Context) error {
hostConfig := c.container.hostConfig(c.dependencies.Volumes())
netConfig := c.container.createNetworkingConfig(c.backend)
// We need to make sure no empty string or "default" NetworkMode is
// provided to the daemon as it doesn't support them.
//
// This is in line with what the ContainerCreate API endpoint does, but
// unlike that endpoint we can't do that in the ServiceCreate endpoint as
// the cluster leader and the current node might not be running on the same
// OS. Since the normalized value isn't the same on Windows and Linux, we
// need to make this normalization happen once we're sure we won't make a
// cross-OS API call.
if hostConfig.NetworkMode == "" || hostConfig.NetworkMode.IsDefault() {
hostConfig.NetworkMode = runconfig.DefaultDaemonNetworkMode()
if v, ok := netConfig.EndpointsConfig[network.NetworkDefault]; ok {
delete(netConfig.EndpointsConfig, network.NetworkDefault)
netConfig.EndpointsConfig[runconfig.DefaultDaemonNetworkMode().NetworkName()] = v
}
}
var cr containertypes.CreateResponse
var err error
if cr, err = c.backend.CreateManagedContainer(ctx, backend.ContainerCreateConfig{
Name: c.container.name(),
Config: c.container.config(),
HostConfig: hostConfig,
HostConfig: c.container.hostConfig(c.dependencies.Volumes()),
// Use the first network in container create
NetworkingConfig: netConfig,
NetworkingConfig: c.container.createNetworkingConfig(c.backend),
}); err != nil {
return err
}
@@ -369,7 +348,7 @@ func (c *containerAdapter) start(ctx context.Context) error {
return err
}
return c.backend.ContainerStart(ctx, c.container.name(), "", "")
return c.backend.ContainerStart(ctx, c.container.name(), nil, "", "")
}
func (c *containerAdapter) inspect(ctx context.Context) (types.ContainerJSON, error) {
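
The large block in this hunk exists on master but not on 25.0; it normalizes an empty or "default" NetworkMode on the worker node because, as its comment explains, the normalized value is OS-specific and cannot be decided by a cluster leader that may run a different OS. A small sketch of what that normalization amounts to, assuming runconfig.DefaultDaemonNetworkMode() resolves to the platform default network ("bridge" on a Linux daemon, "nat" on Windows):

package main

import (
	"fmt"

	containertypes "github.com/docker/docker/api/types/container"
	"github.com/docker/docker/runconfig"
)

// normalize mirrors the master-side block: "" and "default" are rewritten to
// the daemon's platform default before the create call, since the daemon does
// not accept either placeholder value.
func normalize(mode containertypes.NetworkMode) containertypes.NetworkMode {
	if mode == "" || mode.IsDefault() {
		return runconfig.DefaultDaemonNetworkMode()
	}
	return mode
}

func main() {
	fmt.Println(normalize("default")) // prints "bridge" when built for Linux
}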

@@ -339,7 +339,6 @@ func convertMount(m api.Mount) enginemount.Mount {
if m.VolumeOptions != nil {
mount.VolumeOptions = &enginemount.VolumeOptions{
NoCopy: m.VolumeOptions.NoCopy,
Subpath: m.VolumeOptions.Subpath,
}
if m.VolumeOptions.Labels != nil {
mount.VolumeOptions.Labels = make(map[string]string, len(m.VolumeOptions.Labels))

@@ -52,7 +52,7 @@ func NewExecutor(b executorpkg.Backend, p plugin.Backend, i executorpkg.ImageBac
pluginBackend: p,
imageBackend: i,
volumeBackend: v,
dependencies: agent.NewDependencyManager(convert.SwarmPluginGetter(b.PluginGetter())),
dependencies: agent.NewDependencyManager(b.PluginGetter()),
}
}

@@ -10,12 +10,10 @@ import (
"github.com/containerd/log"
types "github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/daemon/cluster/convert"
"github.com/docker/docker/daemon/cluster/executor/container"
lncluster "github.com/docker/docker/libnetwork/cluster"
"github.com/docker/docker/libnetwork/cnmallocator"
swarmapi "github.com/moby/swarmkit/v2/api"
"github.com/moby/swarmkit/v2/manager/allocator/networkallocator"
swarmallocator "github.com/moby/swarmkit/v2/manager/allocator/cnmallocator"
swarmnode "github.com/moby/swarmkit/v2/node"
"github.com/pkg/errors"
"google.golang.org/grpc"
@@ -125,7 +123,7 @@ func (n *nodeRunner) start(conf nodeStartConfig) error {
ListenControlAPI: control,
ListenRemoteAPI: conf.ListenAddr,
AdvertiseRemoteAPI: conf.AdvertiseAddr,
NetworkConfig: &networkallocator.Config{
NetworkConfig: &swarmallocator.NetworkConfig{
DefaultAddrPool: conf.DefaultAddressPool,
SubnetSize: conf.SubnetSize,
VXLANUDPPort: conf.DataPathPort,
@@ -146,8 +144,7 @@ func (n *nodeRunner) start(conf nodeStartConfig) error {
ElectionTick: n.cluster.config.RaftElectionTick,
UnlockKey: conf.lockKey,
AutoLockManagers: conf.autolock,
PluginGetter: convert.SwarmPluginGetter(n.cluster.config.Backend.PluginGetter()),
NetworkProvider: cnmallocator.NewProvider(n.cluster.config.Backend.PluginGetter()),
PluginGetter: n.cluster.config.Backend.PluginGetter(),
}
if conf.availability != "" {
avail, ok := swarmapi.NodeSpec_Availability_value[strings.ToUpper(string(conf.availability))]

@@ -56,9 +56,9 @@ const (
DefaultPluginNamespace = "plugins.moby"
// defaultMinAPIVersion is the minimum API version supported by the API.
// This version can be overridden through the "DOCKER_MIN_API_VERSION"
// environment variable. It currently defaults to the minimum API version
// supported by the API server.
defaultMinAPIVersion = api.MinSupportedAPIVersion
// environment variable. The minimum allowed version is determined
// by [minAPIVersion].
defaultMinAPIVersion = "1.24"
// SeccompProfileDefault is the built-in default seccomp profile.
SeccompProfileDefault = "builtin"
// SeccompProfileUnconfined is a special profile name for seccomp to use an
@@ -610,8 +608,8 @@ func ValidateMinAPIVersion(ver string) error {
if strings.EqualFold(ver[0:1], "v") {
return errors.New(`API version must be provided without "v" prefix`)
}
if versions.LessThan(ver, defaultMinAPIVersion) {
return errors.Errorf(`minimum supported API version is %s: %s`, defaultMinAPIVersion, ver)
if versions.LessThan(ver, minAPIVersion) {
return errors.Errorf(`minimum supported API version is %s: %s`, minAPIVersion, ver)
}
if versions.GreaterThan(ver, api.DefaultVersion) {
return errors.Errorf(`maximum supported API version is %s: %s`, api.DefaultVersion, ver)
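
On the 25.0 side the lower bound checked by ValidateMinAPIVersion is the hard-coded, platform-specific minAPIVersion constant ("1.12" on Linux and "1.24" on Windows, per the following hunks) rather than master's api.MinSupportedAPIVersion. A standalone sketch of the same checks using the versions helpers; the 1.44 ceiling is an illustrative stand-in for api.DefaultVersion, and the empty-string guard is added only so the sketch is safe to run on its own:

package main

import (
	"fmt"
	"strings"

	"github.com/docker/docker/api/types/versions"
)

const (
	minAPIVersion  = "1.12" // Linux value from the next hunk; Windows uses "1.24"
	defaultVersion = "1.44" // stand-in for api.DefaultVersion
)

func validate(ver string) error {
	if ver == "" {
		return fmt.Errorf("value is empty")
	}
	if strings.EqualFold(ver[0:1], "v") {
		return fmt.Errorf(`API version must be provided without "v" prefix`)
	}
	if versions.LessThan(ver, minAPIVersion) {
		return fmt.Errorf("minimum supported API version is %s: %s", minAPIVersion, ver)
	}
	if versions.GreaterThan(ver, defaultVersion) {
		return fmt.Errorf("maximum supported API version is %s: %s", defaultVersion, ver)
	}
	return nil
}

func main() {
	fmt.Println(validate("1.10")) // rejected: below the platform floor
	fmt.Println(validate("1.24")) // accepted: <nil>
}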

@@ -33,6 +33,9 @@ const (
// OCI runtime being shipped with the docker daemon package.
StockRuntimeName = "runc"
// minAPIVersion represents Minimum REST API version supported
minAPIVersion = "1.12"
// userlandProxyBinary is the name of the userland-proxy binary.
// In rootless-mode, [rootless.RootlessKitDockerProxyBinary] is used instead.
userlandProxyBinary = "docker-proxy"
@@ -80,6 +83,7 @@ type Config struct {
Ulimits map[string]*units.Ulimit `json:"default-ulimits,omitempty"`
CPURealtimePeriod int64 `json:"cpu-rt-period,omitempty"`
CPURealtimeRuntime int64 `json:"cpu-rt-runtime,omitempty"`
OOMScoreAdjust int `json:"oom-score-adjust,omitempty"` // Deprecated: configure the daemon's oom-score-adjust using a process manager instead.
Init bool `json:"init,omitempty"`
InitPath string `json:"init-path,omitempty"`
SeccompProfile string `json:"seccomp-profile,omitempty"`
@@ -174,6 +178,10 @@ func verifyDefaultCgroupNsMode(mode string) error {
// ValidatePlatformConfig checks if any platform-specific configuration settings are invalid.
func (conf *Config) ValidatePlatformConfig() error {
if conf.OOMScoreAdjust != 0 {
return errors.New(`DEPRECATED: The "oom-score-adjust" config parameter and the dockerd "--oom-score-adjust" options have been removed.`)
}
if conf.EnableUserlandProxy {
if conf.UserlandProxyPath == "" {
return errors.New("invalid userland-proxy-path: userland-proxy is enabled, but userland-proxy-path is not set")
@@ -196,10 +204,6 @@ func (conf *Config) ValidatePlatformConfig() error {
return errors.Wrap(err, "invalid fixed-cidr-v6")
}
if _, ok := conf.Features["windows-dns-proxy"]; ok {
return errors.New("feature option 'windows-dns-proxy' is only available on Windows")
}
return verifyDefaultCgroupNsMode(conf.CgroupNamespaceMode)
}
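
On the 25.0 side the Config struct still carries the deprecated OOMScoreAdjust field, and the guard in ValidatePlatformConfig shown above turns any non-zero oom-score-adjust into a hard error; adjusting the daemon's own OOM score is left to the process manager (for example systemd's OOMScoreAdjust= setting). A tiny sketch of that guard, with the error text taken from the hunk above:

package main

import (
	"errors"
	"fmt"
)

// check mirrors the guard shown in ValidatePlatformConfig: any non-zero
// oom-score-adjust in the daemon configuration is rejected outright.
func check(oomScoreAdjust int) error {
	if oomScoreAdjust != 0 {
		return errors.New(`DEPRECATED: The "oom-score-adjust" config parameter and the dockerd "--oom-score-adjust" options have been removed.`)
	}
	return nil
}

func main() {
	fmt.Println(check(-500)) // rejected with the deprecation error
	fmt.Println(check(0))    // accepted: <nil>
}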

@@ -13,6 +13,13 @@ const (
// default value. On Windows keep this empty so the value is auto-detected
// based on other options.
StockRuntimeName = ""
// minAPIVersion represents Minimum REST API version supported
// Technically the first daemon API version released on Windows is v1.25 in
// engine version 1.13. However, some clients are explicitly using downlevel
// APIs (e.g. docker-compose v2.1 file format) and that is just too restrictive.
// Hence also allowing 1.24 on Windows.
minAPIVersion string = "1.24"
)
// BridgeConfig is meant to store all the parameters for both the bridge driver and the default bridge network. On

@@ -364,6 +364,6 @@ func translateWorkingDir(config *containertypes.Config) error {
if !system.IsAbs(wd) {
return fmt.Errorf("the working directory '%s' is invalid, it needs to be an absolute path", config.WorkingDir)
}
config.WorkingDir = filepath.Clean(wd)
config.WorkingDir = wd
return nil
}
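
The only difference in this hunk is whether the validated working directory is passed through filepath.Clean before being stored back into the container config. A standard-library-only illustration of what that cleaning changes; the sample paths are arbitrary:

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// filepath.Clean drops trailing separators and resolves "." and ".."
	// elements, so the stored WorkingDir is canonical; without it the raw
	// user-supplied string is kept as-is. Outputs shown for a Linux build.
	fmt.Println(filepath.Clean("/workspace/app/"))         // /workspace/app
	fmt.Println(filepath.Clean("/workspace/./app/../src")) // /workspace/src
}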

@@ -54,8 +54,7 @@ func (daemon *Daemon) buildSandboxOptions(cfg *config.Config, container *contain
sboxOptions = append(sboxOptions, libnetwork.OptionUseExternalKey())
}
// Add platform-specific Sandbox options.
if err := buildSandboxPlatformOptions(container, cfg, &sboxOptions); err != nil {
if err := setupPathsAndSandboxOptions(container, cfg, &sboxOptions); err != nil {
return nil, err
}
@@ -421,6 +420,9 @@ func (daemon *Daemon) updateContainerNetworkSettings(container *container.Contai
}
networkName := mode.NetworkName()
if mode.IsDefault() {
networkName = daemon.netController.Config().DefaultNetwork
}
if mode.IsUserDefined() {
var err error
@@ -459,6 +461,15 @@ func (daemon *Daemon) updateContainerNetworkSettings(container *container.Contai
}
}
// Convert any settings added by client in default name to
// engine's default network name key
if mode.IsDefault() {
if nConf, ok := container.NetworkSettings.Networks[mode.NetworkName()]; ok {
container.NetworkSettings.Networks[networkName] = nConf
delete(container.NetworkSettings.Networks, mode.NetworkName())
}
}
if !mode.IsUserDefined() {
return
}

@@ -99,45 +99,6 @@ func (daemon *Daemon) getPIDContainer(id string) (*container.Container, error) {
return ctr, nil
}
// setupContainerDirs sets up base container directories (root, ipc, tmpfs and secrets).
func (daemon *Daemon) setupContainerDirs(c *container.Container) (_ []container.Mount, err error) {
if err := daemon.setupContainerMountsRoot(c); err != nil {
return nil, err
}
if err := daemon.setupIPCDirs(c); err != nil {
return nil, err
}
if err := daemon.setupSecretDir(c); err != nil {
return nil, err
}
defer func() {
if err != nil {
daemon.cleanupSecretDir(c)
}
}()
var ms []container.Mount
if !c.HostConfig.IpcMode.IsPrivate() && !c.HostConfig.IpcMode.IsEmpty() {
ms = append(ms, c.IpcMounts()...)
}
tmpfsMounts, err := c.TmpfsMounts()
if err != nil {
return nil, err
}
ms = append(ms, tmpfsMounts...)
secretMounts, err := c.SecretMounts()
if err != nil {
return nil, err
}
ms = append(ms, secretMounts...)
return ms, nil
}
func (daemon *Daemon) setupIPCDirs(c *container.Container) error {
ipcMode := c.HostConfig.IpcMode
@@ -417,7 +378,7 @@ func serviceDiscoveryOnDefaultNetwork() bool {
return false
}
func buildSandboxPlatformOptions(container *container.Container, cfg *config.Config, sboxOptions *[]libnetwork.SandboxOption) error {
func setupPathsAndSandboxOptions(container *container.Container, cfg *config.Config, sboxOptions *[]libnetwork.SandboxOption) error {
var err error
var originResolvConfPath string
@@ -481,7 +442,6 @@ func buildSandboxPlatformOptions(container *container.Container, cfg *config.Con
return err
}
*sboxOptions = append(*sboxOptions, libnetwork.OptionResolvConfPath(container.ResolvConfPath))
return nil
}

@@ -163,13 +163,7 @@ func serviceDiscoveryOnDefaultNetwork() bool {
return true
}
func buildSandboxPlatformOptions(container *container.Container, cfg *config.Config, sboxOptions *[]libnetwork.SandboxOption) error {
// By default, the Windows internal resolver does not forward requests to
// external resolvers - but forwarding can be enabled using feature flag
// "windows-dns-proxy":true.
if doproxy, exists := cfg.Features["windows-dns-proxy"]; !exists || !doproxy {
*sboxOptions = append(*sboxOptions, libnetwork.OptionDNSNoProxy())
}
func setupPathsAndSandboxOptions(container *container.Container, cfg *config.Config, sboxOptions *[]libnetwork.SandboxOption) error {
return nil
}

@@ -2,226 +2,211 @@ package containerd
import (
"context"
"encoding/json"
"errors"
"fmt"
"reflect"
"strings"
"github.com/containerd/containerd/content"
cerrdefs "github.com/containerd/containerd/errdefs"
"github.com/containerd/log"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
imagetype "github.com/docker/docker/api/types/image"
"github.com/docker/docker/builder"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/image"
"github.com/docker/docker/image/cache"
"github.com/docker/docker/internal/multierror"
"github.com/docker/docker/layer"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
// MakeImageCache creates a stateful image cache.
func (i *ImageService) MakeImageCache(ctx context.Context, sourceRefs []string) (builder.ImageCache, error) {
return cache.New(ctx, cacheAdaptor{i}, sourceRefs)
func (i *ImageService) MakeImageCache(ctx context.Context, cacheFrom []string) (builder.ImageCache, error) {
images := []*image.Image{}
if len(cacheFrom) == 0 {
return &localCache{
imageService: i,
}, nil
}
type cacheAdaptor struct {
is *ImageService
}
func (c cacheAdaptor) Get(id image.ID) (*image.Image, error) {
ctx := context.TODO()
ref := id.String()
outImg, err := c.is.GetImage(ctx, id.String(), backend.GetImageOpts{})
for _, c := range cacheFrom {
h, err := i.ImageHistory(ctx, c)
if err != nil {
return nil, fmt.Errorf("GetImage: %w", err)
continue
}
c8dImg, err := c.is.resolveImage(ctx, ref)
for _, hi := range h {
if hi.ID != "<missing>" {
im, err := i.GetImage(ctx, hi.ID, imagetype.GetImageOpts{})
if err != nil {
return nil, fmt.Errorf("resolveImage: %w", err)
}
var errFound = errors.New("success")
err = c.is.walkImageManifests(ctx, c8dImg, func(img *ImageManifest) error {
desc, err := img.Config(ctx)
if err != nil {
log.G(ctx).WithFields(log.Fields{
"image": img,
"error": err,
}).Warn("failed to get config descriptor for image")
return nil
}
info, err := c.is.content.Info(ctx, desc.Digest)
if err != nil {
if !cerrdefs.IsNotFound(err) {
log.G(ctx).WithFields(log.Fields{
"image": img,
"desc": desc,
"error": err,
}).Warn("failed to get info of image config")
}
return nil
}
if dgstStr, ok := info.Labels[contentLabelGcRefContainerConfig]; ok {
dgst, err := digest.Parse(dgstStr)
if err != nil {
log.G(ctx).WithFields(log.Fields{
"label": contentLabelClassicBuilderImage,
"value": dgstStr,
"content": desc.Digest,
"error": err,
}).Warn("invalid digest in label")
return nil
}
configDesc := ocispec.Descriptor{
Digest: dgst,
}
var config container.Config
if err := readConfig(ctx, c.is.content, configDesc, &config); err != nil {
if !errdefs.IsNotFound(err) {
log.G(ctx).WithFields(log.Fields{
"configDigest": dgst,
"error": err,
}).Warn("failed to read container config")
}
return nil
}
outImg.ContainerConfig = config
// We already have the config we looked for, so return an error to
// stop walking the image further. This error will be ignored.
return errFound
}
return nil
})
if err != nil && err != errFound {
return nil, err
}
return outImg, nil
images = append(images, im)
}
func (c cacheAdaptor) GetByRef(ctx context.Context, refOrId string) (*image.Image, error) {
return c.is.GetImage(ctx, refOrId, backend.GetImageOpts{})
}
func (c cacheAdaptor) SetParent(target, parent image.ID) error {
ctx := context.TODO()
_, imgs, err := c.is.resolveAllReferences(ctx, target.String())
if err != nil {
return fmt.Errorf("failed to list images with digest %q", target)
}
var errs []error
is := c.is.images
for _, img := range imgs {
if img.Labels == nil {
img.Labels = make(map[string]string)
}
img.Labels[imageLabelClassicBuilderParent] = parent.String()
if _, err := is.Update(ctx, img, "labels."+imageLabelClassicBuilderParent); err != nil {
errs = append(errs, fmt.Errorf("failed to update parent label on image %v: %w", img, err))
}
}
return multierror.Join(errs...)
return &imageCache{
lc: &localCache{
imageService: i,
},
images: images,
imageService: i,
}, nil
}
func (c cacheAdaptor) GetParent(target image.ID) (image.ID, error) {
ctx := context.TODO()
value, err := c.is.getImageLabelByDigest(ctx, target.Digest(), imageLabelClassicBuilderParent)
if err != nil {
return "", fmt.Errorf("failed to read parent image: %w", err)
type localCache struct {
imageService *ImageService
}
dgst, err := digest.Parse(value)
if err != nil {
return "", fmt.Errorf("invalid parent value: %q", value)
}
return image.ID(dgst), nil
}
func (c cacheAdaptor) Create(parent *image.Image, target image.Image, extraLayer layer.DiffID) (image.ID, error) {
ctx := context.TODO()
data, err := json.Marshal(target)
if err != nil {
return "", fmt.Errorf("failed to marshal image config: %w", err)
}
var layerDigest digest.Digest
if extraLayer != "" {
info, err := findContentByUncompressedDigest(ctx, c.is.client.ContentStore(), digest.Digest(extraLayer))
if err != nil {
return "", fmt.Errorf("failed to find content for diff ID %q: %w", extraLayer, err)
}
layerDigest = info.Digest
}
var parentRef string
if parent != nil {
parentRef = parent.ID().String()
}
img, err := c.is.CreateImage(ctx, data, parentRef, layerDigest)
if err != nil {
return "", fmt.Errorf("failed to created cached image: %w", err)
}
return image.ID(img.ImageID()), nil
}
func (c cacheAdaptor) IsBuiltLocally(target image.ID) (bool, error) {
ctx := context.TODO()
value, err := c.is.getImageLabelByDigest(ctx, target.Digest(), imageLabelClassicBuilderContainerConfig)
if err != nil {
return false, fmt.Errorf("failed to read container config label: %w", err)
}
return value != "", nil
}
func (c cacheAdaptor) Children(id image.ID) []image.ID {
func (ic *localCache) GetCache(parentID string, cfg *container.Config, platform ocispec.Platform) (imageID string, err error) {
ctx := context.TODO()
if id.String() == "" {
imgs, err := c.is.getImagesWithLabel(ctx, imageLabelClassicBuilderFromScratch, "1")
var children []image.ID
// FROM scratch
if parentID == "" {
c, err := ic.imageService.getImagesWithLabel(ctx, imageLabelClassicBuilderFromScratch, "1")
if err != nil {
log.G(ctx).WithError(err).Error("failed to get from scratch images")
return nil
return "", err
}
return imgs
}
imgs, err := c.is.Children(ctx, id)
children = c
} else {
c, err := ic.imageService.Children(ctx, image.ID(parentID))
if err != nil {
log.G(ctx).WithError(err).Error("failed to get image children")
return nil
return "", err
}
children = c
}
return imgs
var match *image.Image
for _, child := range children {
ccDigestStr, err := ic.imageService.getImageLabelByDigest(ctx, child.Digest(), imageLabelClassicBuilderContainerConfig)
if err != nil {
return "", err
}
if ccDigestStr == "" {
continue
}
func findContentByUncompressedDigest(ctx context.Context, cs content.Manager, uncompressed digest.Digest) (content.Info, error) {
var out content.Info
dgst, err := digest.Parse(ccDigestStr)
if err != nil {
log.G(ctx).WithError(err).Warnf("invalid container config digest: %q", ccDigestStr)
continue
}
errStopWalk := errors.New("success")
err := cs.Walk(ctx, func(i content.Info) error {
out = i
return errStopWalk
}, `labels."containerd.io/uncompressed"==`+uncompressed.String())
var cc container.Config
if err := readConfig(ctx, ic.imageService.content, ocispec.Descriptor{Digest: dgst}, &cc); err != nil {
if errdefs.IsNotFound(err) {
log.G(ctx).WithError(err).WithField("image", child).Warnf("missing container config: %q", ccDigestStr)
continue
}
return "", err
}
if err != nil && err != errStopWalk {
return out, err
if cache.CompareConfig(&cc, cfg) {
childImage, err := ic.imageService.GetImage(ctx, child.String(), imagetype.GetImageOpts{Platform: &platform})
if err != nil {
if errdefs.IsNotFound(err) {
continue
}
if out.Digest == "" {
return out, errdefs.NotFound(errors.New("no content matches this uncompressed digest"))
return "", err
}
return out, nil
if childImage.Created != nil && (match == nil || match.Created.Before(*childImage.Created)) {
match = childImage
}
}
}
if match == nil {
return "", nil
}
return match.ID().String(), nil
}
type imageCache struct {
images []*image.Image
imageService *ImageService
lc *localCache
}
func (ic *imageCache) GetCache(parentID string, cfg *container.Config, platform ocispec.Platform) (imageID string, err error) {
ctx := context.TODO()
imgID, err := ic.lc.GetCache(parentID, cfg, platform)
if err != nil {
return "", err
}
if imgID != "" {
for _, s := range ic.images {
if ic.isParent(ctx, s, image.ID(imgID)) {
return imgID, nil
}
}
}
var parent *image.Image
lenHistory := 0
if parentID != "" {
parent, err = ic.imageService.GetImage(ctx, parentID, imagetype.GetImageOpts{Platform: &platform})
if err != nil {
return "", err
}
lenHistory = len(parent.History)
}
for _, target := range ic.images {
if !isValidParent(target, parent) || !isValidConfig(cfg, target.History[lenHistory]) {
continue
}
return target.ID().String(), nil
}
return "", nil
}
func isValidConfig(cfg *container.Config, h image.History) bool {
// todo: make this format better than join that loses data
return strings.Join(cfg.Cmd, " ") == h.CreatedBy
}
func isValidParent(img, parent *image.Image) bool {
if len(img.History) == 0 {
return false
}
if parent == nil || len(parent.History) == 0 && len(parent.RootFS.DiffIDs) == 0 {
return true
}
if len(parent.History) >= len(img.History) {
return false
}
if len(parent.RootFS.DiffIDs) > len(img.RootFS.DiffIDs) {
return false
}
for i, h := range parent.History {
if !reflect.DeepEqual(h, img.History[i]) {
return false
}
}
for i, d := range parent.RootFS.DiffIDs {
if d != img.RootFS.DiffIDs[i] {
return false
}
}
return true
}
func (ic *imageCache) isParent(ctx context.Context, img *image.Image, parentID image.ID) bool {
ii, err := ic.imageService.resolveImage(ctx, img.ImageID())
if err != nil {
return false
}
parent, ok := ii.Labels[imageLabelClassicBuilderParent]
if ok {
return parent == parentID.String()
}
p, err := ic.imageService.GetImage(ctx, parentID.String(), imagetype.GetImageOpts{})
if err != nil {
return false
}
return ic.isParent(ctx, p, parentID)
}
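
On the 25.0 side of this hunk, MakeImageCache is implemented by two purpose-built types instead of master's cacheAdaptor plus the shared cache helper: localCache walks the children of the parent image (or the images labelled as built FROM scratch) and compares their recorded container configs, while imageCache additionally considers the --cache-from images passed to MakeImageCache by comparing history prefixes and RootFS diff IDs. A rough caller-side sketch of how one build step might consult the resulting cache, assuming only the GetCache shape visible in this hunk; the helper below is invented for illustration:

package sketch

import (
	containertypes "github.com/docker/docker/api/types/container"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// buildCache repeats the GetCache method implemented by localCache and
// imageCache above: it returns the ID of a cached image for the given parent
// and run-config, or "" when no cached image matches.
type buildCache interface {
	GetCache(parentID string, cfg *containertypes.Config, platform ocispec.Platform) (string, error)
}

// resolveStep returns the cached image ID for one build step, or ok=false on a
// cache miss, in which case the builder runs the step and commits a new image.
func resolveStep(c buildCache, parentID string, cfg *containertypes.Config) (id string, ok bool, err error) {
	id, err = c.GetCache(parentID, cfg, ocispec.Platform{OS: "linux", Architecture: "amd64"})
	if err != nil || id == "" {
		return "", false, err
	}
	return id, true, nil
}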

Some files were not shown because too many files have changed in this diff.