When creating a new daemon in `TestDaemonProxy`, reset
`OTEL_EXPORTER_OTLP_ENDPOINT` to an empty value to disable OTEL
collection, so that it doesn't hit the proxy.
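A minimal sketch of the idea, using hypothetical helper names rather than the
actual testutil API: the daemon process is started with an explicitly empty
endpoint in its environment, which disables the OTLP exporter.

package daemontest

import (
	"os"
	"os/exec"
)

// startDaemonWithoutOTEL spawns dockerd with OTEL_EXPORTER_OTLP_ENDPOINT
// reset to an empty value, so no traces are exported to the test proxy.
func startDaemonWithoutOTEL(args ...string) (*exec.Cmd, error) {
	cmd := exec.Command("dockerd", args...)
	cmd.Env = append(os.Environ(), "OTEL_EXPORTER_OTLP_ENDPOINT=")
	return cmd, cmd.Start()
}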
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Don't use OTEL tracing in this test because we're testing the HTTP proxy
behavior here and we don't want OTEL to interfere.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
To prevent a circular import between api/types and api/types/image,
the RequestPrivilegeFunc reference was not moved, but instead defined
inline as part of PullOptions / PushOptions.
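A rough sketch of the pattern; the field names and signature below are
illustrative rather than the exact definitions. Declaring the callback
directly on the options struct means the image package never needs to import
the package that previously held the named type.

package image

import "context"

// PullOptions holds options for pulling an image (illustrative subset).
type PullOptions struct {
	All          bool
	RegistryAuth string

	// PrivilegeFunc is called when the pull is unauthorized; it returns
	// updated credentials to retry the request with. Defining the func
	// type inline avoids importing a shared types package.
	PrivilegeFunc func(ctx context.Context) (string, error)
}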
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Replace `time.Sleep` with a poll that checks that the process no longer
exists, to avoid a possible race condition.
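A minimal sketch of the polling approach, assuming gotest.tools/v3/poll and a
signal-0 probe to detect whether the PID is still alive (the helper name is
illustrative):

package daemontest

import (
	"testing"

	"golang.org/x/sys/unix"
	"gotest.tools/v3/poll"
)

// waitProcessGone polls until the process with the given PID no longer exists.
func waitProcessGone(t *testing.T, pid int) {
	poll.WaitOn(t, func(poll.LogT) poll.Result {
		// Signal 0 performs error checking only; ESRCH means the
		// process is gone.
		if err := unix.Kill(pid, 0); err == unix.ESRCH {
			return poll.Success()
		}
		return poll.Continue("process %d still exists", pid)
	})
}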
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
When live-restore is enabled, containers with autoremove enabled
shouldn't be forcibly killed when the engine restarts.
They should still be removed if they exited while the engine was down,
though.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Integration tests will now configure clients to propagate traces, as
well as create spans for all tests.
Some extra changes were needed (or at least desirable for trace
propagation) in the test helpers to pass tracing spans through via
context.
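A rough sketch of how a test helper's HTTP client could propagate the span
carried in the test's context; this uses the standard otelhttp instrumentation
and is illustrative rather than the exact helper code:

package daemontest

import (
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

// tracingHTTPClient wraps the default transport so the span in each request's
// context is injected into the outgoing headers, letting the daemon join the
// test's trace.
func tracingHTTPClient() *http.Client {
	return &http.Client{
		Transport: otelhttp.NewTransport(http.DefaultTransport),
	}
}

Requests then need to be created with http.NewRequestWithContext(ctx, ...) so
the test's span actually travels with them.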
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Make sure that the content in the live-restored volume mounted in a new
container is the same as the content in the old container.
This checks that the volume's _data directory doesn't get unmounted on
startup.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Also move the validation function to live with the type definition,
which allows it to be used outside of the daemon as well.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
I noticed this test was always being skipped because of race conditions
when checking the logs.
This change adds a log scanner that looks through the logs line by line
rather than allocating a big buffer.
Additionally it adds a `poll.Check` which we can use to actually wait
for the desired log entry.
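A minimal sketch of the approach, assuming gotest.tools/v3/poll and that the
daemon's log is available as a file; the helper name is illustrative:

package daemontest

import (
	"bufio"
	"os"
	"strings"

	"gotest.tools/v3/poll"
)

// scanLogsMatch returns a poll.Check that re-reads the log file line by line
// and succeeds once a line containing want shows up.
func scanLogsMatch(logPath, want string) poll.Check {
	return func(poll.LogT) poll.Result {
		f, err := os.Open(logPath)
		if err != nil {
			return poll.Error(err)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			if strings.Contains(scanner.Text(), want) {
				return poll.Success()
			}
		}
		return poll.Continue("%q not found in logs yet", want)
	}
}

The test can then use poll.WaitOn(t, scanLogsMatch(logPath, "some entry"))
instead of sleeping and reading the whole log into one buffer.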
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The daemon.lazyInitializeVolume() function only handles restoring Volumes
if a Driver is specified. The Container's MountPoints field may also
contain other kinds of mounts (e.g., bind-mounts). Those were ignored, and
no error was returned; 1d9c8619cd/daemon/volumes.go (L243-L252C2)
However, prepareMountPoints() assumed each MountPoint was a volume,
and logged an informational message about the volume being restored;
1d9c8619cd/daemon/mounts.go (L18-L25)
This would panic if the MountPoint was not a volume;
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x564e9be4c7c0]
github.com/docker/docker/daemon.(*Daemon).prepareMountPoints(0xc00054b7b8?, 0xc0007c2500)
/root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/mounts.go:24 +0x1c0
github.com/docker/docker/daemon.(*Daemon).restore.func5(0xc0007c2500, 0x0?)
/root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/daemon.go:552 +0x271
created by github.com/docker/docker/daemon.(*Daemon).restore
/root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/daemon.go:530 +0x8d8
This issue was introduced in 647c2a6cdd
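The fix amounts to not assuming every MountPoint has a Volume before logging
or restoring it; a simplified, hypothetical sketch of the guard (not the exact
patch):

for _, config := range container.MountPoints {
	if config.Volume == nil {
		// Bind mounts and other non-volume mounts have no volume to
		// restore, so there is nothing to initialize or log here.
		continue
	}
	// Only actual volumes are lazily initialized and reported as restored.
	if err := daemon.lazyInitializeVolume(container.ID, config); err != nil {
		return err
	}
}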
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When live-restoring a container, the volume driver needs to be notified
that there is an active mount for the volume.
Before this change the count stayed at zero; when the container stopped,
releasing the reference made the uint64 overflow, pretty much making it
so the volume could never be removed until another daemon restart.
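A tiny illustration of the underflow: decrementing a reference count that was
never incremented for the live-restored mount wraps the unsigned counter
around to its maximum value.

package main

import "fmt"

func main() {
	var refCount uint64   // stayed at zero: the live-restored mount was never counted
	refCount--            // the container stops and its reference is released
	fmt.Println(refCount) // 18446744073709551615 — the volume now looks permanently in use
}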
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Since cc19eba (backported to v23.0.4), the PreferredPool for docker0 is
set only when the user provides the bip config parameter or when the
default bridge already exists. That means that if a user provides the
fixed-cidr parameter on a fresh install, or reboots their computer/server
without bip set, dockerd throws the following error when it starts:
> failed to start daemon: Error initializing network controller: Error
> creating default "bridge" network: failed to parse pool request for
> address space "LocalDefault" pool "" subpool "100.64.0.0/26": Invalid
> Address SubPool
See #45356.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
The migration code is in the 22.06 branch, and if we don't migrate
the only side-effect is the daemon's ID being regenerated (as a
UUID).
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Before this change, restarting the daemon in live-restore mode with running
containers + a restart policy meant that volume refs were not restored.
This specifically happens when the container is still running *and*
there is a restart policy that would make sure the container was running
again on restart.
The bug allows volumes to be removed even though containers are
referencing them. 😱
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This change is in preparation of deprecating support for old manifests.
Currently the daemon's ID is based on the trust-key ID, which will be
removed once we fully deprecate support for old manifests (the trust
key is currently only used in tests).
This patch:
- checks whether a trust-key is present; if so, it migrates the trust-key
ID to the new "engine-id" file within the daemon's root.
- if no trust-key is present (i.e., a "fresh" install), we generate a
UUID instead and use that as the ID.
The migration is to prevent engines from getting a new ID on upgrades;
while we don't provide any guarantees on the engine's ID, users may
expect the ID to be "stable" (not change) between upgrades.
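A rough sketch of the migration logic described above; file names, the
permission mode, and the trust-key handling are illustrative, not the exact
implementation:

package daemon

import (
	"os"
	"path/filepath"

	"github.com/google/uuid"
)

// loadOrCreateID returns the engine ID, creating the "engine-id" file on
// first use. trustKeyID is the ID derived from an existing trust key, or ""
// on a fresh install.
func loadOrCreateID(root, trustKeyID string) (string, error) {
	idPath := filepath.Join(root, "engine-id")
	if b, err := os.ReadFile(idPath); err == nil {
		return string(b), nil // already migrated or generated earlier
	}
	id := trustKeyID // migrate the old ID so it stays stable across the upgrade
	if id == "" {
		id = uuid.New().String() // fresh install: no trust key, generate a UUID
	}
	if err := os.WriteFile(idPath, []byte(id), 0o600); err != nil {
		return "", err
	}
	return id, nil
}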
A test has been added, which can be run with:
make DOCKER_GRAPHDRIVER=vfs TEST_FILTER='TestConfigDaemonID' test-integration
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This is a follow-up to 427c7cc5f8, which added
proxy-configuration options ("http-proxy", "https-proxy", "no-proxy") to the
dockerd CLI and to `daemon.json`.
While working on documentation changes for this feature, I realised that those
options won't be "next" to each other when formatting the daemon.json JSON, for
example using `jq` (which can sort the fields alphabetically). As it's possible
that additional proxy configuration options are added in the future, I considered
that grouping these options in a struct within the JSON may make it easier to
set these options, as well as to discover related options.
This patch introduces a "proxies" field in the JSON, which includes the
"http-proxy", "https-proxy", "no-proxy" options.
Conflict detection continues to work as before; with this patch applied:
mkdir -p /etc/docker/
echo '{"proxies":{"http-proxy":"http-config", "https-proxy":"https-config", "no-proxy": "no-proxy-config"}}' > /etc/docker/daemon.json
dockerd --http-proxy=http-flag --https-proxy=https-flag --no-proxy=no-proxy-flag --validate
unable to configure the Docker daemon with file /etc/docker/daemon.json:
the following directives are specified both as a flag and in the configuration file:
http-proxy: (from flag: http-flag, from file: http-config),
https-proxy: (from flag: https-flag, from file: https-config),
no-proxy: (from flag: no-proxy-flag, from file: no-proxy-config)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The daemon can print the proxy configuration as part of error messages,
and when reloading the daemon configuration (SIGHUP). Make sure that
the configuration is sanitized before printing.
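A minimal sketch of the kind of sanitizing meant here: strip credentials from
proxy URLs before they are printed (the helper name is illustrative):

package config

import "net/url"

// maskProxyURL replaces any userinfo in a proxy URL with a fixed placeholder
// so credentials never end up in logs or error messages.
func maskProxyURL(rawURL string) string {
	u, err := url.Parse(rawURL)
	if err != nil || u.User == nil {
		return rawURL
	}
	u.User = url.UserPassword("xxxxx", "xxxxx")
	return u.String()
}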
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The proxy configuration works, but it looks like we're unable to connect to the
test proxy server as part of our test;
level=debug msg="Trying to pull example.org:5000/some/image from https://example.org:5000 v2"
level=warning msg="Error getting v2 registry: Get \"https://example.org:5000/v2/\": proxyconnect tcp: dial tcp 127.0.0.1:45999: connect: connection refused"
level=info msg="Attempting next endpoint for pull after error: Get \"https://example.org:5000/v2/\": proxyconnect tcp: dial tcp 127.0.0.1:45999: connect: connection refused"
level=error msg="Handler for POST /v1.42/images/create returned error: Get \"https://example.org:5000/v2/\": proxyconnect tcp: dial tcp 127.0.0.1:45999: connect: connection refused"
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This allows configuring the daemon's proxy server through the daemon.json
configuration file or through command-line flags, in addition to the
existing option (through environment variables).
Configuring environment variables on Windows to configure a service is more
complicated than on Linux, and adding alternatives for this to the daemon
configuration makes the configuration more transparent and easier to use.
The configuration as set through command-line flags or through the daemon.json
configuration file takes precedence over env-vars in the daemon's environment,
which allows the daemon to use a different proxy. If both a command-line flag
and a daemon.json configuration option are set, an error is produced when
starting the daemon.
Note that this configuration is not "live reloadable" due to Go's use of
`sync.Once` for proxy configuration, which means that changing the proxy
configuration requires a restart of the daemon (reload / SIGHUP will not update
the configuration).
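An illustration of why a reload can't pick the new values up: net/http reads
the proxy environment only once (guarded by a sync.Once) and caches the result
for the lifetime of the process.

package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	os.Setenv("HTTP_PROXY", "http://first.example.com:80")
	req, _ := http.NewRequest("GET", "http://example.org/", nil)

	u, _ := http.ProxyFromEnvironment(req)
	fmt.Println(u) // http://first.example.com:80

	os.Setenv("HTTP_PROXY", "http://second.example.com:80")
	u, _ = http.ProxyFromEnvironment(req)
	fmt.Println(u) // still http://first.example.com:80 — cached on first use
}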
With this patch:
cat /etc/docker/daemon.json
{
"http-proxy": "http://proxytest.example.com:80",
"https-proxy": "https://proxytest.example.com:443"
}
docker pull busybox
Using default tag: latest
Error response from daemon: Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp: lookup proxytest.example.com on 127.0.0.11:53: no such host
docker build .
Sending build context to Docker daemon 89.28MB
Step 1/3 : FROM golang:1.16-alpine AS base
Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp: lookup proxytest.example.com on 127.0.0.11:53: no such host
Integration tests were added to test the behavior:
- verify that the configuration set through each mechanism (env-var,
command-line flags, daemon.json) is used, and in the expected order of
preference.
- verify that conflicting options produce an error.
Signed-off-by: Anca Iordache <anca.iordache@docker.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The io/ioutil package has been deprecated in Go 1.16. This commit
replaces the existing io/ioutil functions with their new definitions in
the io and os packages.
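For reference, the mechanical mapping this kind of conversion follows (a small
runnable example; the file name is just a placeholder):

package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	// ioutil.ReadFile  -> os.ReadFile      ioutil.TempDir   -> os.MkdirTemp
	// ioutil.WriteFile -> os.WriteFile     ioutil.TempFile  -> os.CreateTemp
	// ioutil.ReadAll   -> io.ReadAll       ioutil.NopCloser -> io.NopCloser
	// ioutil.Discard   -> io.Discard
	dir, err := os.MkdirTemp("", "ioutil-migration") // was ioutil.TempDir
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	path := dir + "/example.txt"
	if err := os.WriteFile(path, []byte("hello\n"), 0o644); err != nil { // was ioutil.WriteFile
		panic(err)
	}
	data, err := os.ReadFile(path) // was ioutil.ReadFile
	if err != nil {
		panic(err)
	}
	fmt.Print(string(data))
	fmt.Fprintln(io.Discard, "dropped") // was ioutil.Discard
}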
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
This allows containers to use the embedded default profile if a different
default is set (e.g. "unconfined") in the daemon configuration. Without this
option, users would have to copy the default profile to a file in order to
use the default.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Commit b237189e6c implemented an option to
set the default seccomp profile in the daemon configuration. When that PR
was reviewed, it was discussed that the option should accept the path to a custom
profile JSON file; https://github.com/moby/moby/pull/26276#issuecomment-253546966
However, in the implementation, the special "unconfined" value was not taken into
account. The "unconfined" value is meant to disable seccomp (more factually:
run with an empty profile).
While it's likely possible to achieve this by creating a file with an empty
(`{}`) profile, and passing the path to that file, it's inconsistent with the
`--security-opt seccomp=unconfined` option on `docker run` and `docker create`,
which is both confusing, and makes it harder to use (especially on Docker Desktop,
where there's no direct access to the VM's filesystem).
This patch adds the missing check for the special "unconfined" value.
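A simplified sketch of what the added check amounts to (constant and function
names are illustrative, not the actual daemon code): "unconfined" means "run
without a profile" rather than "load a file literally named unconfined".

package daemon

import (
	"fmt"
	"os"
)

const seccompProfileUnconfined = "unconfined"

// loadSeccompProfile resolves the daemon's default seccomp profile option.
func loadSeccompProfile(pathOrSpecial string) ([]byte, error) {
	if pathOrSpecial == "" || pathOrSpecial == seccompProfileUnconfined {
		// Empty or "unconfined": run with no profile, matching the
		// behavior of `--security-opt seccomp=unconfined`.
		return nil, nil
	}
	b, err := os.ReadFile(pathOrSpecial)
	if err != nil {
		return nil, fmt.Errorf("opening seccomp profile %q failed: %w", pathOrSpecial, err)
	}
	return b, nil
}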
Co-authored-by: Tianon Gravi <admwiggin@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Fixes #36911
If the config file is invalid we'll exit anyhow, so this option just stops the
daemon from starting (it exits after validation) even when the configuration
is fine.
Mainly useful for making config changes and restarting the daemon only if
the config is valid.
Signed-off-by: Rich Horwood <rjhorwood@apple.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Anca Iordache <anca.iordache@docker.com>