Commit 8b7af1d0f added some code to update the DNSNames of all
endpoints attached to a sandbox by loading a new instance of each
affected endpoint from the datastore through a call to
`Network.EndpointByID()`.
This method then calls `Network.getEndpointFromStore()`, which in
turn calls `store.GetObject()`, which then calls `cache.get()`,
which calls `o.CopyTo(kvObject)`. This effectively creates a fresh
instance of the Endpoint. However, endpoints are already kept in
memory by Sandbox, meaning we now have two in-memory instances of
the same Endpoint.
As it turns out, libnetwork is built around the idea that no two objects
representing the same thing should live in memory at once; otherwise, mutex
locking and optimistic locking break (as both instances will have a drifting
version-tracking ID -- dbIndex in libnetwork parlance).
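To illustrate the failure mode, here is a minimal, self-contained Go sketch (hypothetical types, not libnetwork's actual implementation): once two in-memory copies of the same stored object each carry their own dbIndex, whichever copy writes later fails the optimistic-locking check.

```go
package main

import "fmt"

// kvObject is a stand-in for a stored object with optimistic version tracking.
type kvObject struct {
	name    string
	dbIndex uint64 // version recorded when this copy was loaded or last stored
}

// store only accepts writes from a copy that is at the latest known version.
type store struct{ current map[string]uint64 }

func (s *store) update(o *kvObject) error {
	if s.current[o.name] != o.dbIndex {
		return fmt.Errorf("optimistic lock failure: stale dbIndex %d", o.dbIndex)
	}
	o.dbIndex++
	s.current[o.name] = o.dbIndex
	return nil
}

func main() {
	s := &store{current: map[string]uint64{"ep1": 1}}
	sandboxCopy := &kvObject{name: "ep1", dbIndex: 1}   // instance held by the Sandbox
	datastoreCopy := &kvObject{name: "ep1", dbIndex: 1} // fresh instance loaded via EndpointByID

	fmt.Println(s.update(datastoreCopy)) // <nil>: datastoreCopy is now at dbIndex 2
	fmt.Println(s.update(sandboxCopy))   // error: sandboxCopy still carries dbIndex 1
}
```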
In this specific case, the bug manifests as container rename failing
when applied a second time to a given container. An integration test is
added to make sure this won't happen again.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
Prior to 7a9b680a, the container short ID was added to the network
aliases only for custom networks. However, this logic wasn't preserved
in 6a2542d and now the cid is always added to the list of network
aliases.
This commit reintroduces the old logic.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
Commit 21e50b89c9 added a label on the buildkit
worker to advertise the host-gateway-ip. This option can be set by the
user in the daemon config, or otherwise defaults to the gateway-ip.
If no value is set by the user, discovery of the gateway-ip happens when
initializing the network-controller (`NewDaemon`, `daemon.restore()`).
However d222bf097c changed how we handle the
daemon config. As a result, the `cli.Config` used when initializing the
builder only holds configuration information from the daemon config
(user-specified or defaults), but is not updated with information set
by `NewDaemon`.
This patch adds an accessor on the daemon to get the current daemon config.
An alternative could be to have `NewDaemon` return the config (which should
likely be a _copy_ of the config).
Before this patch:
docker buildx inspect default
Name: default
Driver: docker
Nodes:
Name: default
Endpoint: default
Status: running
Buildkit: v0.12.4+3b6880d2a00f
Platforms: linux/arm64, linux/amd64, linux/amd64/v2, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
Labels:
org.mobyproject.buildkit.worker.moby.host-gateway-ip: <nil>
After this patch:
docker buildx inspect default
Name: default
Driver: docker
Nodes:
Name: default
Endpoint: default
Status: running
Buildkit: v0.12.4+3b6880d2a00f
Platforms: linux/arm64, linux/amd64, linux/amd64/v2, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
Labels:
org.mobyproject.buildkit.worker.moby.host-gateway-ip: 172.18.0.1
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The `GetImageOpts` struct is used for options to be passed to the backend,
and is not used in client code. This struct is currently intended for internal
use only.
This patch moves the `GetImageOpts` struct to the backend package to prevent
it being imported in the client, and to make it more clear that this is part
of internal APIs, and not public-facing.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The MAC address of a running container was stored in the same place as
the configured address for a container.
When starting a stopped container, a generated address was treated as a
configured address. If that generated address (based on an IPAM-assigned
IP address) had been reused, the containers ended up with duplicate MAC
addresses.
So, remember whether the MAC address was explicitly configured, and
clear it if it was not.
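A hypothetical sketch of the idea (field and function names are illustrative, not the daemon's actual code): record whether the MAC address came from the user, and drop generated addresses when the container stops so they are re-derived on the next start.

```go
package example

// endpointSettings is an illustrative stand-in for the per-endpoint settings
// stored in the container's configuration.
type endpointSettings struct {
	MacAddress           string // address currently in use (may be generated)
	ConfiguredMacAddress string // set only when the user explicitly requested a MAC
}

// onContainerStop clears MAC addresses that were generated rather than
// configured, so a restart cannot re-use a stale, IP-derived address.
func onContainerStop(ep *endpointSettings) {
	if ep.ConfiguredMacAddress == "" {
		ep.MacAddress = ""
	}
}
```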
Signed-off-by: Rob Murray <rob.murray@docker.com>
With containerd snapshotters enabled `docker run` currently fails when
creating a container from an image that doesn't have the default host
platform without an explicit `--platform` selection:
```
$ docker run asdf:amd64
Unable to find image 'asdf:amd64' locally
docker: Error response from daemon: pull access denied for asdf, repository does not exist or may require 'docker login'.
See 'docker run --help'.
```
This is confusing and the graphdriver behavior is much better here,
because it runs whatever platform the image has, but prints a warning:
```
$ docker run asdf:amd64
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
```
This commit changes the containerd snapshotter behavior to be the same
as the graphdriver. This doesn't affect container creation when the platform
is specified explicitly.
```
$ docker run --rm --platform linux/arm64 asdf:amd64
Unable to find image 'asdf:amd64' locally
docker: Error response from daemon: pull access denied for asdf, repository does not exist or may require 'docker login'.
See 'docker run --help'.
```
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Since v25.0 (commit ff50388), we validate endpoint settings when
containers are created, instead of doing so when containers are started.
However, a container created prior to that release would still trigger a
validation error at start time. In such a case, the API returns a 500
status code because the Go error isn't wrapped into an InvalidParameter
error. This is now fixed.
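The essence of the fix is wrapping the validation error so the API maps it to a 400 rather than a 500. A minimal sketch using the `errdefs` helper (the surrounding function is hypothetical):

```go
package example

import (
	"fmt"

	"github.com/docker/docker/errdefs"
)

// validateAtStart is a hypothetical stand-in for the start-time validation
// path; the important part is how the error is wrapped.
func validateAtStart(validate func() error) error {
	if err := validate(); err != nil {
		// errdefs.InvalidParameter marks the error as a client error, so the
		// API returns 400 (invalid parameter) instead of a generic 500.
		return errdefs.InvalidParameter(fmt.Errorf("invalid endpoint settings: %w", err))
	}
	return nil
}
```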
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
This matcher was only used internally in the containerd implementation of
the image store. Un-export it, and make it a local utility in that package
to prevent external use.
This package was introduced in 1616a09b61
(v24.0), and there are no known external consumers of this package, so there
should be no need to deprecate / alias the old location.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This flag was marked deprecated in commit 5a922dc16 (released in v24.0)
and to be removed in the next release.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
`VolumeOptions` now has a `Subpath` field which allows specifying a path
relative to the volume that should be mounted as the destination.
Symlinks are supported, but they cannot escape the base volume
directory.
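Assuming the field lands on `mount.VolumeOptions` in the API types as described, requesting a subpath mount from a client would look roughly like this sketch:

```go
package example

import "github.com/docker/docker/api/types/mount"

// Sketch of requesting a volume subpath mount (assumes the Subpath field
// added by this change on mount.VolumeOptions).
var dataMount = mount.Mount{
	Type:   mount.TypeVolume,
	Source: "app-data",
	Target: "/data",
	VolumeOptions: &mount.VolumeOptions{
		Subpath: "configs/prod", // path relative to the volume root; must not escape it
	},
}
```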
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
We constructed a "function level" logger, which was used once "as-is", but
to which additional fields were also added in a loop (for each resource),
effectively overwriting the previous logger on each iteration. Adding
additional fields can incur some overhead, so let's construct that logger
only inside the loop.
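A small sketch of the resulting pattern (hypothetical function and field names, using logrus for illustration): keep the shared fields on a base logger, and derive the per-resource logger inside the loop.

```go
package example

import "github.com/sirupsen/logrus"

type resource struct{ ID string }

func releaseResources(ctx string, resources []resource) {
	// Base logger carrying the fields shared by every message in this function.
	baseLogger := logrus.WithField("context", ctx)
	baseLogger.Info("releasing resources")

	for _, res := range resources {
		// Derive the per-resource logger inside the loop instead of
		// repeatedly overwriting a single "function level" logger.
		logger := baseLogger.WithField("resource", res.ID)
		logger.Debug("releasing resource")
	}
}
```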
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
We have many "image" packages, so these vars easily conflict/shadow
imports. Let's rename them (and in some cases use a const) to
prevent that.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
All commonly used filesystems should use the ref-counted mounter, so make it
the default instead of having to whitelist them.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Prior to this commit, a container running with `--net=host` had
`{"type":"network","path":"/var/run/docker/netns/default"}` in
the `.linux.namespaces` field of the OCI Runtime Config,
but this wasn't needed.
Closes issue 47100
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
The actual divergence is due to differences in the snapshotter and
graphdriver mount behaviour on Windows, but the snapshotter behaviour is
better, so we deal with it here rather than changing the snapshotter
behaviour.
We're relying on the internals of containerd's Windows mount
implementation here. Unless this code flow is replaced, future work is
to move getBackingDeviceForContainerdMount into containerd's mount
implementation.
Signed-off-by: Paul "TBBle" Hampson <Paul.Hampson@Pobox.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
The existing API ImageService.GetLayerFolders didn't have access to the
ID of the container, and once we have that, the snapshotter Mounts API
provides all the information we need here.
Signed-off-by: Paul "TBBle" Hampson <Paul.Hampson@Pobox.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Needed for Diff on Windows. Don't remount it afterwards as the layer is
going to be released anyway.
Signed-off-by: Paul "TBBle" Hampson <Paul.Hampson@Pobox.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
This is consistent with layerStore's CreateRWLayer behaviour.
Potentially this can be refactored to avoid creating the -init layer,
but as noted in layerStore's initMount, this name may be special, and
should be cleared-out all-at-once.
Signed-off-by: Paul "TBBle" Hampson <Paul.Hampson@Pobox.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Consider only images that were built `FROM scratch` as valid candidates
for the `FROM scratch` + INSTRUCTION build step.
The images are marked as `FROM scratch`-based by the classic builder
with a special label. It must be a new label instead of an empty parent
label, because empty label values are not persisted.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
In order for the cache in the classic builder to work we need to:
- use the same comparison function as the graph driver implementation
- save the container config when committing the image
- use all images to search for a `FROM scratch` image
- load all images if `cacheFrom` is empty
Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
The health status and probe log of containers are not mission-critical
data which must survive a crash. It is not worth prematurely wearing out
consumer-grade flash storage by overwriting and fsync()ing the container
config after every probe. Update only the live Container object and
the ViewDB replica on every container health probe instead. It will
eventually get checkpointed along with some other state (or config)
change. Running containers will not be checkpointed on daemon shutdown
when live-restore is enabled, but it does not matter: the health status
and probe log will be zeroed out when the daemon starts back up.
Signed-off-by: Cory Snider <csnider@mirantis.com>
Save the unmodified manifest list to keep the image ID of the
multi-platform images when not all platforms are present.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Switch github.com/imdario/mergo to dario.cat/mergo v1.0.0, because
the module was renamed, and reached v1.0.0
full diff: https://github.com/imdario/mergo/compare/v0.3.13...v1.0.0
vendor: github.com/containerd/containerd v1.7.12
- full diff: https://github.com/containerd/containerd/compare/v1.7.11...v1.7.12
- release notes: https://github.com/containerd/containerd/releases/tag/v1.7.12
Welcome to the v1.7.12 release of containerd!
The twelfth patch release for containerd 1.7 contains various fixes and updates.
Notable Updates
- Fix on dialer function for Windows
- Improve `/etc/group` handling when appending groups
- Update shim pidfile permissions to 0644
- Update runc binary to v1.1.11
- Allow import and export to reference missing content
- Remove runc import
- Update Go version to 1.20.13
Deprecation Warnings
- Emit deprecation warning for `containerd.io/restart.logpath` label usage
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Split task creation and start into two separate method calls in the
libcontainerd API. Clients now have the opportunity to inspect the
freshly-created task and customize its runtime environment before
starting execution of the user-specified binary.
Signed-off-by: Cory Snider <csnider@mirantis.com>
The container may have been running without health probes for an
indeterminate amount of time. The container may have become unhealthy in
the interim. We should probe it sooner than in steady-state, while also
giving it some leeway to recover from e.g. timed-out connections. This
is easy to achieve by probing the container like a freshly-started one.
The original author of health-checks came to the same conclusion; the
health monitor was reinitialized on live-restored containers before
v17.11.0, when health monitoring of live-restored containers was
accidentally broken. Revert to the original behavior.
Signed-off-by: Cory Snider <csnider@mirantis.com>
Commit 4f9db655ed moved looking up the
userland-proxy binary to early in the startup process, and introduced
a log message if the binary was missing.
However, a side-effect of this was that this message would also be printed
when running "--version";
dockerd --version
time="2024-01-09T09:18:53.705271292Z" level=warning msg="failed to lookup default userland-proxy binary" error="exec: \"docker-proxy\": executable file not found in $PATH"
Docker version v25.0.0-rc.1, build 9cebefa717
We should see if we can avoid this, but let's change the message to be
a debug message as a short-term workaround.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The output var was used in a `defer`, but named `err` and shadowed in various
places. Rename the var to a more explicit name to make clear where it's used
and to prevent it being accidentally shadowed.
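For illustration, a hypothetical before/after of the pattern: a named return called `err` is easy to shadow, which silently drops failures the deferred handler was meant to see, whereas a distinct name such as `retErr` makes assignments unambiguous.

```go
package example

import "errors"

func loadConfig() (string, error) { return "", errors.New("missing config") }
func apply(cfg string)            {}

// Before (hypothetical): the named return and the inner variable are both
// called err. The := in the if statement declares a new err that shadows the
// named return, so the failure from loadConfig is silently dropped and the
// deferred handler always sees err == nil.
func runBefore() (err error) {
	defer func() {
		if err != nil {
			// cleanup / logging on failure -- never runs here
		}
	}()
	if cfg, err := loadConfig(); err == nil {
		apply(cfg)
	}
	return nil
}

// After: the named return gets an explicit name, so it can no longer be
// confused with (or accidentally shadowed by) local errors.
func runAfter() (retErr error) {
	defer func() {
		if retErr != nil {
			// cleanup / logging on failure
		}
	}()
	cfg, err := loadConfig()
	if err != nil {
		retErr = err
		return retErr
	}
	apply(cfg)
	return nil
}
```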
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
If the daemon is configured to use a mirror for the default (Docker Hub)
registry, the endpoint did not fall back to querying the upstream if the mirror
did not contain the given reference.
For pull-through registry-mirrors, this was not a problem, as in that case the
registry would forward the request, but for other mirrors, no fallback would
happen. This was inconsistent with how "pulling" images handled this situation;
when pulling images, both the mirror and upstream would be tried.
This patch brings the daemon-side lookup of image-manifests on-par with the
client-side lookup (the GET /distribution endpoint) as used in API 1.30 and
higher.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
If the daemon is configured to use a mirror for the default (Docker Hub)
registry, the endpoint did not fall back to querying the upstream if the mirror
did not contain the given reference.
For pull-through registry-mirrors, this was not a problem, as in that case the
registry would forward the request, but for other mirrors, no fallback would
happen. This was inconsistent with how "pulling" images handled this situation;
when pulling images, both the mirror and upstream would be tried.
This problem was caused by the logic used in GetRepository, which had an
optimization to only return the first registry it was successfully able to
configure (and connect to), with the assumption that the mirror either contained
all images used, or was configured as a pull-through mirror.
This patch:
- Introduces a GetRepositories method, which returns all candidates (both
mirror(s) and upstream).
- Updates the endpoint to try all candidates (as sketched below).
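A rough sketch of that control flow (hypothetical types, not the actual distribution code): iterate over every candidate returned by a `GetRepositories`-style call and only fail if all of them fail.

```go
package example

import (
	"context"
	"errors"
)

// repository is a hypothetical stand-in for the distribution repository type;
// the point is the control flow: try every candidate (mirrors first, then the
// upstream registry) and only report an error if all of them fail.
type repository interface {
	GetManifest(ctx context.Context, ref string) ([]byte, error)
}

func fetchManifest(ctx context.Context, candidates []repository, ref string) ([]byte, error) {
	var lastErr error
	for _, repo := range candidates {
		manifest, err := repo.GetManifest(ctx, ref)
		if err == nil {
			return manifest, nil
		}
		// Remember the error, but keep trying the remaining candidates
		// (e.g. fall back from the mirror to the upstream registry).
		lastErr = err
	}
	if lastErr == nil {
		lastErr = errors.New("no registry candidates configured")
	}
	return nil, lastErr
}
```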
Before this patch:
# the daemon is configured to use a mirror for Docker Hub
cat /etc/docker/daemon.json
{ "registry-mirrors": ["http://localhost:5000"]}
# start the mirror (empty registry, not configured as pull-through mirror)
docker run -d --name registry -p 127.0.0.1:5000:5000 registry:2
# querying the endpoint fails, because the image-manifest is not found in the mirror:
curl -s --unix-socket /var/run/docker.sock http://localhost/v1.43/distribution/docker.io/library/hello-world:latest/json
{
"message": "manifest unknown: manifest unknown"
}
With this patch applied:
# the daemon is configured to use a mirror for Docker Hub
cat /etc/docker/daemon.json
{ "registry-mirrors": ["http://localhost:5000"]}
# start the mirror (empty registry, not configured as pull-through mirror)
docker run -d --name registry -p 127.0.0.1:5000:5000 registry:2
# querying the endpoint succeeds (manifest is fetched from the upstream Docker Hub registry):
curl -s --unix-socket /var/run/docker.sock http://localhost/v1.43/distribution/docker.io/library/hello-world:latest/json | jq .
{
"Descriptor": {
"mediaType": "application/vnd.oci.image.index.v1+json",
"digest": "sha256:1b9844d846ce3a6a6af7013e999a373112c3c0450aca49e155ae444526a2c45e",
"size": 3849
},
"Platforms": [
{
"architecture": "amd64",
"os": "linux"
}
]
}
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The Go 1.21.5 compiler has a bug: per-file language version override
directives do not take effect when instantiating generic functions which
have certain nontrivial type constraints. Consequently, a module-mode
project with Moby as a dependency may fail to compile when the compiler
incorrectly applies go1.16 semantics to the generic function call.
As the offending function is trivial and is only used in one place, work
around the issue by converting it to a concretely-typed function.
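As an illustration of the kind of change (the actual function in question is not shown here), a trivial generic helper and a concretely-typed replacement:

```go
package example

// Before (hypothetical): a generic helper like this requires go1.18 language
// semantics, which the buggy per-file override handling may fail to apply.
//
//	func coalesce[T comparable](v, fallback T) T {
//		var zero T
//		if v == zero {
//			return fallback
//		}
//		return v
//	}

// After: a concretely-typed version of the same helper avoids generics
// entirely, so no language-version override is needed at the call site.
func coalesceString(v, fallback string) string {
	if v == "" {
		return fallback
	}
	return v
}
```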
Signed-off-by: Cory Snider <csnider@mirantis.com>
When mapping a port with the userland-proxy enabled, the daemon would
perform an "exec.LookPath" for every mapped port (which, in case of
a range of ports, would be for every port in the range).
This was inefficient (looking up the binary for each port), inconsistent
(when running in rootless mode, the binary was looked up only once), and
inconvenient, because a missing binary or a mis-configured userland-proxy-path
would not be detected at daemon startup, and would not produce an error until
starting the container;
docker run -d -P nginx:alpine
4f7b6589a1680f883d98d03db12203973387f9061e7a963331776170e4414194
docker: Error response from daemon: driver failed programming external connectivity on endpoint romantic_wiles (7cfdc361821f75cbc665564cf49856cf216a5b09046d3c22d5b9988836ee088d): fork/exec docker-proxy: no such file or directory.
However, the container would still be created (but invalid);
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
869f41d7e94f nginx:alpine "/docker-entrypoint.…" 10 seconds ago Created romantic_wiles
This patch changes how the userland-proxy is configured;
- The path of the userland-proxy is now looked up / configured at daemon
startup; this is similar to how the proxy is configured in rootless-mode.
- A warning is logged when failing to lookup the binary.
- If the binary is not found while the daemon is configured with "userland-proxy"
enabled, an error is produced, and the daemon will refuse to start.
- The "proxyPath" argument for newProxyCommand() (in libnetwork/portmapper)
is now required to be set. It no longer looks up the executable, and
produces an error if no path was provided. While this change was not
required, it makes the daemon config the canonical source of truth, instead
of logic spread across multiple locations.
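A simplified sketch of the startup-time resolution described above (the real daemon config type and wiring differ):

```go
package example

import (
	"fmt"
	"os/exec"

	"github.com/sirupsen/logrus"
)

// config is an illustrative stand-in for the daemon configuration.
type config struct {
	EnableUserlandProxy bool
	UserlandProxyPath   string // empty means "look up the default binary"
}

// resolveUserlandProxyPath resolves the proxy binary once, at daemon startup,
// instead of on every port mapping.
func resolveUserlandProxyPath(cfg *config) error {
	if cfg.UserlandProxyPath != "" {
		return nil // explicitly configured; validated separately
	}
	path, err := exec.LookPath("docker-proxy")
	if err != nil {
		logrus.WithError(err).Warn("failed to lookup default userland-proxy binary")
		if cfg.EnableUserlandProxy {
			// The proxy is required but cannot be found: refuse to start.
			return fmt.Errorf("userland-proxy is enabled, but userland-proxy-path is not set")
		}
		return nil
	}
	cfg.UserlandProxyPath = path
	return nil
}
```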
Some of this logic is a change of behavior, but these changes were made with
the assumption that we don't want to support;
- installing the userland proxy _after_ the daemon was started
- moving the userland proxy (or installing a proxy with a higher
preference in PATH)
With this patch:
Validating the config produces an error if the binary is not found:
dockerd --validate
WARN[2023-12-29T11:36:39.748699591Z] failed to lookup default userland-proxy binary error="exec: \"docker-proxy\": executable file not found in $PATH"
userland-proxy is enabled, but userland-proxy-path is not set
Disabling userland-proxy prints a warning, but validates as "OK":
dockerd --userland-proxy=false --validate
WARN[2023-12-29T11:38:30.752523879Z] failed to lookup default userland-proxy binary error="exec: \"docker-proxy\": executable file not found in $PATH"
configuration OK
Specifying a non-absolute path produces an error:
dockerd --userland-proxy-path=docker-proxy --validate
invalid userland-proxy-path: must be an absolute path: docker-proxy
Before this patch, we would not validate this path, which would allow the daemon
to start, but fail to map a port;
docker run -d -P nginx:alpine
4f7b6589a1680f883d98d03db12203973387f9061e7a963331776170e4414194
docker: Error response from daemon: driver failed programming external connectivity on endpoint romantic_wiles (7cfdc361821f75cbc665564cf49856cf216a5b09046d3c22d5b9988836ee088d): fork/exec docker-proxy: no such file or directory.
Specifying an invalid userland-proxy-path produces an error as well:
dockerd --userland-proxy-path=/usr/local/bin/no-such-binary --validate
userland-proxy-path is invalid: stat /usr/local/bin/no-such-binary: no such file or directory
mkdir -p /usr/local/bin/not-a-file
dockerd --userland-proxy-path=/usr/local/bin/not-a-file --validate
userland-proxy-path is invalid: exec: "/usr/local/bin/not-a-file": is a directory
touch /usr/local/bin/not-an-executable
dockerd --userland-proxy-path=/usr/local/bin/not-an-executable --validate
userland-proxy-path is invalid: exec: "/usr/local/bin/not-an-executable": permission denied
Same when using the daemon.json config-file;
echo '{"userland-proxy-path":"no-such-binary"}' > /etc/docker/daemon.json
dockerd --validate
unable to configure the Docker daemon with file /etc/docker/daemon.json: merged configuration validation from file and command line flags failed: invalid userland-proxy-path: must be an absolute path: no-such-binary
dockerd --userland-proxy-path=hello --validate
unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: userland-proxy-path: (from flag: hello, from file: /usr/local/bin/docker-proxy)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
No more concept of "anonymous endpoints". The equivalent is now an
endpoint with no DNSNames set.
Some of the code removed by this commit was mutating the user-supplied
endpoint's Aliases to add the container's short ID to that list. In order to
preserve backward compatibility for the ContainerInspect endpoint, this
commit also takes care of adding that short ID (and the container
hostname) to `EndpointSettings.Aliases` before returning the response.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
The remapping in the commit code was in the wrong place: we would create
a diff and then remap the snapshot, but the descriptor created in
"CreateDiff" was still pointing to the old snapshot. We now remap the
snapshot before creating a diff. Also make sure we don't lose any
capabilities; they used to be lost after the chown.
Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
The semantics of an "anonymous" endpoint have always been weird: it was
set on endpoints whose name shouldn't be taken into account when
inserting DNS records into libnetwork's `Controller.svcRecords` (and
into the NetworkDB). However, in that case the endpoint's aliases would
still be used to create DNS records, thus making those "anonymous
endpoints" not so anonymous.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
The `(*Endpoint).rename()` method is changed to only mutate `ep.name`
and let a new method `(*Endpoint).UpdateDNSNames()` handle DNS updates.
As a consequence, the rollback code that was part of
`(*Endpoint).rename()` is now removed, and DNS updates are now
rolled back by `ContainerRename`.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
Instead of special-casing anonymous endpoints in libnetwork, let the
daemon specify what (non-fully-qualified) DNS names should be associated
with a container's endpoints.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
Ensure that when removing an image, it is checked consistently
against the images with the same target digest. Add unit testing around
delete.
Signed-off-by: Derek McGowan <derek@mcg.dev>
They just happen to exist on a network that doesn't support DNS-based
service discovery (i.e., no embedded DNS servers are started for them).
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
Calculate the IPv6 addresses needed on a bridge, then reconcile them
with the addresses on an existing bridge by deleting then adding as
required.
(Previously, required addresses were added one-by-one, then unwanted
addresses were removed. This meant the daemon failed to start if, for
example, an existing bridge had address '2000:db8::/64' and the config
was changed to '2000:db8::/80'.)
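A simplified sketch of that reconcile step (types and netlink plumbing omitted; addresses represented as CIDR strings): compute the wanted set first, delete what is present but unwanted, then add what is wanted but missing.

```go
package example

// reconcileAddrs brings the set of addresses on a bridge in line with the
// wanted set: stale addresses are removed before missing ones are added, so a
// prefix-length change (e.g. 2000:db8::/64 -> 2000:db8::/80) no longer fails.
func reconcileAddrs(existing, wanted []string, del, add func(string) error) error {
	want := make(map[string]bool, len(wanted))
	for _, a := range wanted {
		want[a] = true
	}
	have := make(map[string]bool, len(existing))
	for _, a := range existing {
		have[a] = true
		if !want[a] {
			if err := del(a); err != nil { // drop addresses no longer in the config
				return err
			}
		}
	}
	for _, a := range wanted {
		if !have[a] {
			if err := add(a); err != nil { // add addresses that are still missing
				return err
			}
		}
	}
	return nil
}
```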
IPv6 addresses are now calculated and applied in one go, so there's no
need for setupVerifyAndReconcile() to check the set of IPv6 addresses on
the bridge. And, it was guarded by !config.InhibitIPv4, which can't have
been right. So, removed its IPv6 parts, and added IPv4 to its name.
Link local addresses, the example given in the original ticket, are now
released when containers are stopped. Not releasing them meant that
when using an LL subnet on the default bridge, no container could be
started after a container was stopped (because the calculated address
could not be re-allocated). In non-default bridge networks using an
LL subnet, addresses leaked.
Linux always uses the standard 'fe80::/64' LL network. So, if a bridge
is configured with an LL subnet prefix that overlaps with it, a config
error is reported. Non-overlapping LL subnet prefixes are allowed.
Signed-off-by: Rob Murray <rob.murray@docker.com>
Before this change, `ParentId` was filled for images when calling the
`/images/json` (image list) endpoint, but not when calling the
`/images/<image>/json` (image inspect) endpoint.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
This repository is not yet a module (i.e., does not have a `go.mod`). This
is not problematic when building the code in GOPATH or "vendor" mode, but
when using the code as a module-dependency (in module-mode), different semantics
are applied since Go1.21, which switches Go _language versions_ on a per-module,
per-package, or even per-file basis.
A condensed summary of that logic [is as follows][1]:
- For modules that have a go.mod containing a go version directive; that
version is considered a minimum _required_ version (starting with the
go1.19.13 and go1.20.8 patch releases: before those, it was only a
recommendation).
- For dependencies that don't have a go.mod (not a module), go language
version go1.16 is assumed.
- Likewise, for modules that have a go.mod, but the file does not have a
go version directive, go language version go1.16 is assumed.
- If a go.work file is present, but does not have a go version directive,
language version go1.17 is assumed.
When switching language versions, Go _downgrades_ the language version,
which means that language features (such as generics, and `any`) are not
available, and compilation fails. For example:
# github.com/docker/cli/cli/context/store
/go/pkg/mod/github.com/docker/cli@v25.0.0-beta.2+incompatible/cli/context/store/storeconfig.go:6:24: predeclared any requires go1.18 or later (-lang was set to go1.16; check go.mod)
/go/pkg/mod/github.com/docker/cli@v25.0.0-beta.2+incompatible/cli/context/store/store.go:74:12: predeclared any requires go1.18 or later (-lang was set to go1.16; check go.mod)
Note that these fallbacks are per-module, per-package, and can even be
per-file, so _(indirect) dependencies_ can still use modern language
features, as long as their respective go.mod has a version specified.
Unfortunately, these failures do not occur when building locally (using
vendor / GOPATH mode), but will affect consumers of the module.
Obviously, this situation is not ideal, and the ultimate solution is to
move to go modules (add a go.mod), but this comes with a non-insignificant
risk in other areas (due to our complex dependency tree).
We can revert to using go1.16 language features only, but this may be
limiting, and may still be problematic when (e.g.) matching signatures
of dependencies.
There is an escape hatch: adding a `//go:build` directive to files that
make use of go language features. From the [go toolchain docs][2]:
> The go line for each module sets the language version the compiler enforces
> when compiling packages in that module. The language version can be changed
> on a per-file basis by using a build constraint.
>
> For example, a module containing code that uses the Go 1.21 language version
> should have a `go.mod` file with a go line such as `go 1.21` or `go 1.21.3`.
> If a specific source file should be compiled only when using a newer Go
> toolchain, adding `//go:build go1.22` to that source file both ensures that
> only Go 1.22 and newer toolchains will compile the file and also changes
> the language version in that file to Go 1.22.
This patch adds `//go:build` directives to those files using recent additions
to the language. It's currently using go1.19 as version to match the version
in our "vendor.mod", but we can consider being more permissive ("any" requires
go1.18 or up), or more "optimistic" (force go1.21, which is the version we
currently use to build).
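For illustration, a file using such an override would look like this (hypothetical package and function, following the toolchain behaviour quoted above):

```go
//go:build go1.19

// Package example demonstrates the per-file language-version override: this
// file is only compiled by Go 1.19 and newer toolchains, and within it the
// go1.19 language version applies even though the repository has no go.mod
// raising the version.
package example

// "any" is a go1.18+ language feature; without the build constraint above, a
// consumer building this repository in module mode would see go1.16 semantics
// and fail to compile this line.
func keys(m map[string]any) []string {
	out := make([]string, 0, len(m))
	for k := range m {
		out = append(out, k)
	}
	return out
}
```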
For completeness' sake, note that any file _without_ a `//go:build` directive
will continue to use go1.16 language version when used as a module.
[1]: 58c28ba286/src/cmd/go/internal/gover/version.go (L9-L56)
[2]: https://go.dev/doc/toolchain
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
No need to copy the parent label from the source dangling image, because
it will already be copied from the source image.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
A validation step was added to prevent the daemon from considering "logentries"
a dynamically loaded plugin; without this step, the daemon would keep trying to
load the plugin;
WARN[2023-12-12T21:53:16.866857127Z] Unable to locate plugin: logentries, retrying in 1s
WARN[2023-12-12T21:53:17.868296836Z] Unable to locate plugin: logentries, retrying in 2s
WARN[2023-12-12T21:53:19.874259254Z] Unable to locate plugin: logentries, retrying in 4s
WARN[2023-12-12T21:53:23.879869881Z] Unable to locate plugin: logentries, retrying in 8s
But would ultimately be returned as an error to the user:
docker container create --name foo --log-driver=logentries nginx:alpine
Error response from daemon: error looking up logging plugin logentries: plugin "logentries" not found
With the additional validation step, an error is returned immediately:
docker container create --log-driver=logentries busybox
Error response from daemon: the logentries logging driver has been deprecated and removed
A migration step was added on container restore. Containers using the
"logentries" logging driver are migrated to use the "local" logging driver:
WARN[2023-12-12T22:38:53.108349297Z] migrated deprecated logentries logging driver container=4c9309fedce75d807340ea1820cc78dc5c774d7bfcae09f3744a91b84ce6e4f7 error="<nil>"
As an alternative to the validation step, I also considered using a "stub"
deprecation driver; however, this would not result in an error when creating
the container, and would only produce an error when starting it:
docker container create --name foo --log-driver=logentries nginx:alpine
4c9309fedce75d807340ea1820cc78dc5c774d7bfcae09f3744a91b84ce6e4f7
docker start foo
Error response from daemon: failed to create task for container: failed to initialize logging driver: the logentries logging driver has been deprecated and removed
Error: failed to start containers: foo
For containers, this validation is added in the backend (daemon). For services,
this was not sufficient, as SwarmKit would try to schedule the task, which
caused a tight loop;
docker service create --log-driver=logentries --name foo nginx:alpine
zo0lputagpzaua7cwga4lfmhp
overall progress: 0 out of 1 tasks
1/1: no suitable node (missing plugin on 1 node)
Operation continuing in background.
DEBU[2023-12-12T22:50:28.132732757Z] Calling GET /v1.43/tasks?filters=%7B%22_up-to-date%22%3A%7B%22true%22%3Atrue%7D%2C%22service%22%3A%7B%22zo0lputagpzaua7cwga4lfmhp%22%3Atrue%7D%7D
DEBU[2023-12-12T22:50:28.137961549Z] Calling GET /v1.43/nodes
DEBU[2023-12-12T22:50:28.340665007Z] Calling GET /v1.43/services/zo0lputagpzaua7cwga4lfmhp?insertDefaults=false
DEBU[2023-12-12T22:50:28.343437632Z] Calling GET /v1.43/tasks?filters=%7B%22_up-to-date%22%3A%7B%22true%22%3Atrue%7D%2C%22service%22%3A%7B%22zo0lputagpzaua7cwga4lfmhp%22%3Atrue%7D%7D
DEBU[2023-12-12T22:50:28.345201257Z] Calling GET /v1.43/nodes
So a validation was added in the service create and update endpoints;
docker service create --log-driver=logentries --name foo nginx:alpine
Error response from daemon: the logentries logging driver has been deprecated and removed
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The Logentries service will be discontinued next week:
> Dear Logentries user,
>
> We have identified you as the owner of, or collaborator of, a Logentries account.
>
> The Logentries service will be discontinued on November 15th, 2022. This means that your Logentries account access will be removed and all your log data will be permanently deleted on this date.
>
> Next Steps
> If you are interested in an alternative Rapid7 log management solution, InsightOps will be available for purchase through December 16th, 2022. Please note, there is no support to migrate your existing Logentries account to InsightOps.
>
> Thank you for being a valued user of Logentries.
>
> Thank you,
> Rapid7 Customer Success
There is no reason to preserve this code in Moby as a result.
Signed-off-by: Bjorn Neergaard <bneergaard@mirantis.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When reading logs, timestamps should always be presented in UTC. Unlike
the "json-file" and other logging drivers, the "local" logging driver
was using local time.
Thanks to Roman Valov for reporting this issue, and locating the bug.
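The core of the change can be sketched as follows (illustrative only; the actual driver encodes entries as protobuf): record the entry timestamp in UTC instead of the daemon's local time.

```go
package example

import "time"

// logEntry is a minimal stand-in for a "local" driver log entry.
type logEntry struct {
	Line      []byte
	Timestamp time.Time
}

// newLogEntry records the timestamp in UTC, matching the behaviour of the
// "json-file" and other logging drivers.
func newLogEntry(line []byte) logEntry {
	return logEntry{
		Line:      line,
		Timestamp: time.Now().UTC(), // previously effectively local time
	}
}
```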
Before this change:
echo $TZ
Europe/Amsterdam
docker run -d --log-driver=local nginx:alpine
fc166c6b2c35c871a13247dddd95de94f5796459e2130553eee91cac82766af3
docker logs --timestamps fc166c6b2c35c871a13247dddd95de94f5796459e2130553eee91cac82766af3
2023-12-08T18:16:56.291023422+01:00 /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
2023-12-08T18:16:56.291056463+01:00 /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
2023-12-08T18:16:56.291890130+01:00 /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
...
With this patch:
echo $TZ
Europe/Amsterdam
docker run -d --log-driver=local nginx:alpine
14e780cce4c827ce7861d7bc3ccf28b21f6e460b9bfde5cd39effaa73a42b4d5
docker logs --timestamps 14e780cce4c827ce7861d7bc3ccf28b21f6e460b9bfde5cd39effaa73a42b4d5
2023-12-08T17:18:46.635967625Z /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
2023-12-08T17:18:46.635989792Z /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
2023-12-08T17:18:46.636897417Z /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
...
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
If no `dangling` filter is specified, prune should only delete dangling
images.
This wasn't visible when running `docker image prune` because the CLI
explicitly sets this filter to true.
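For reference, this is roughly what the CLI does when calling the prune endpoint (using the real client API); after this change, omitting the filter entirely gives the same dangling-only behaviour on the daemon side:

```go
package example

import (
	"context"

	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

// pruneDangling mirrors what the CLI does for `docker image prune`: it
// explicitly requests dangling-only pruning.
func pruneDangling(ctx context.Context, cli *client.Client) error {
	_, err := cli.ImagesPrune(ctx, filters.NewArgs(filters.Arg("dangling", "true")))
	return err
}
```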
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
This struct is intended for internal use only for the backend, and is
not intended to be used externally.
This moves the plugin-related `NetworkListConfig` types to the backend
package to prevent it being imported in the client, and to make it more
clear that this is part of internal APIs, and not public-facing.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
These structs are intended for internal use only for the backend, and are
not intended to be used externally.
This moves the plugin-related `PluginRmConfig`, `PluginEnableConfig`, and
`PluginDisableConfig` types to the backend package to prevent them being
imported in the client, and to make it more clear that this is part of
internal APIs, and not public-facing.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The daemon currently provides support for API versions all the way back
to v1.12, which is the version of the API that shipped with docker 1.0. On
Windows, the minimum supported version is v1.24.
Such old versions of the client are rare, and supporting older API versions
has accumulated significant amounts of code to remain backward-compatible
(which is largely untested, and a "best-effort" at most).
This patch updates the minimum API version to v1.24, which is the fallback
API version used when API-version negotiation fails. The intent is to start
deprecating older API versions, but no code is removed yet as part of this
patch, and a DOCKER_MIN_API_VERSION environment variable is added, which
allows overriding the minimum version (to allow restoring the behavior from
before this patch).
With this patch the daemon defaults to API v1.24 as minimum:
docker version
Client:
Version: 24.0.2
API version: 1.43
Go version: go1.20.4
Git commit: cb74dfc
Built: Thu May 25 21:50:49 2023
OS/Arch: linux/arm64
Context: default
Server:
Engine:
Version: dev
API version: 1.44 (minimum version 1.24)
Go version: go1.21.3
Git commit: 0322a29b9ef8806aaa4b45dc9d9a2ebcf0244bf4
Built: Mon Dec 4 15:22:17 2023
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: v1.7.9
GitCommit: 4f03e100cb967922bec7459a78d16ccbac9bb81d
runc:
Version: 1.1.10
GitCommit: v1.1.10-0-g18a0cb0
docker-init:
Version: 0.19.0
GitCommit: de40ad0
Trying to use an older version of the API produces an error:
DOCKER_API_VERSION=1.23 docker version
Client:
Version: 24.0.2
API version: 1.23 (downgraded from 1.43)
Go version: go1.20.4
Git commit: cb74dfc
Built: Thu May 25 21:50:49 2023
OS/Arch: linux/arm64
Context: default
Error response from daemon: client version 1.23 is too old. Minimum supported API version is 1.24, please upgrade your client to a newer version
To restore the previous minimum, users can start the daemon with the
DOCKER_MIN_API_VERSION environment variable set:
DOCKER_MIN_API_VERSION=1.12 dockerd
API 1.12 is the oldest supported API version on Linux;
docker version
Client:
Version: 24.0.2
API version: 1.43
Go version: go1.20.4
Git commit: cb74dfc
Built: Thu May 25 21:50:49 2023
OS/Arch: linux/arm64
Context: default
Server:
Engine:
Version: dev
API version: 1.44 (minimum version 1.12)
Go version: go1.21.3
Git commit: 0322a29b9ef8806aaa4b45dc9d9a2ebcf0244bf4
Built: Mon Dec 4 15:22:17 2023
OS/Arch: linux/arm64
Experimental: false
containerd:
Version: v1.7.9
GitCommit: 4f03e100cb967922bec7459a78d16ccbac9bb81d
runc:
Version: 1.1.10
GitCommit: v1.1.10-0-g18a0cb0
docker-init:
Version: 0.19.0
GitCommit: de40ad0
When using the `DOCKER_MIN_API_VERSION` with a version of the API that
is not supported, an error is produced when starting the daemon;
DOCKER_MIN_API_VERSION=1.11 dockerd --validate
invalid DOCKER_MIN_API_VERSION: minimum supported API version is 1.12: 1.11
DOCKER_MIN_API_VERSION=1.45 dockerd --validate
invalid DOCKER_MIN_API_VERSION: maximum supported API version is 1.44: 1.45
Specifying a malformed API version also produces the same error;
DOCKER_MIN_API_VERSION=hello dockerd --validate
invalid DOCKER_MIN_API_VERSION: minimum supported API version is 1.12: hello
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The `ContainerCreateConfig` and `ContainerRmConfig` structs are used for
options to be passed to the backend, and are not used in client code.
These structs are currently intended for internal use only (for example,
`AdjustCPUShares` is an internal implementation detail used to adjust the
container's config when older API versions are used).
Somewhat ironically, the signature of the Backend has a nicer UX than that
of the client's `ContainerCreate` signature (which expects all options to
be passed as separate arguments), so we may want to update that signature
to be closer to what the backend is using, but that can be left as a future
exercise.
This patch moves the `ContainerCreateConfig` and `ContainerRmConfig` structs
to the backend package to prevent them being imported in the client, and to make
it more clear that this is part of internal APIs, and not public-facing.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Move the initialization logic to the attachContext itself, so that
the container doesn't have to be aware of mutexes and other internal logic.
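A hypothetical sketch of the resulting shape (not the actual attachContext code): the type owns its mutex and lazily initializes its context, so callers only invoke methods.

```go
package example

import (
	"context"
	"sync"
)

// attachContext is an illustrative version of the pattern: lazy,
// mutex-protected initialization lives inside the type itself.
type attachContext struct {
	mu     sync.Mutex
	ctx    context.Context
	cancel context.CancelFunc
}

// init lazily creates the cancellable context on first use.
func (ac *attachContext) init() context.Context {
	ac.mu.Lock()
	defer ac.mu.Unlock()
	if ac.ctx == nil {
		ac.ctx, ac.cancel = context.WithCancel(context.Background())
	}
	return ac.ctx
}

// cancelFunc tears the context down so the next init starts fresh.
func (ac *attachContext) cancelFunc() {
	ac.mu.Lock()
	defer ac.mu.Unlock()
	if ac.cancel != nil {
		ac.cancel()
		ac.ctx, ac.cancel = nil, nil
	}
}
```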
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>