This type was added in 247f4796d2, and
at the time was an alias for string;
> api/types/events: add "Type" type for event-type enum
>
> Currently just an alias for string, but we can change it to be an
> actual type.
Now that all code uses the defined types, we should be able to make
this an actual type.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
`Daemon.getPidContainer()` was wrapping the error-message with a message
("cannot join PID of a non running container") that did not reflect the
actual reason for the error; `Daemon.GetContainer()` could either return
an invalid parameter (invalid / empty identifier), or a "not found" error
if the specified container-ID could not be found.
In the latter case, we don't want to return a "not found" error through
the API, as this would indicate that the container we're _starting_ was
not found (which is not the case), so we need to convert the error into
an `errdefs.ErrInvalidParameter` (the container-ID specified for the PID
namespace is invalid if the container doesn't exist).
This logic is similar to what we do for IPC namespaces, which received
a similar fix in c3d7a0c603.
This patch updates the error-types, and moves them into the getIpcContainer
and getPidContainer functions, both of which should return
an "invalid parameter" if the container was not found.
It's worth noting that, while `WithNamespaces()` may return an "invalid
parameter" error, the `start` endpoint itself may _not_ return one. As outlined
in commit bf1fb97575, starting a container
that has an invalid configuration should be considered an internal server
error, and is not an invalid _request_. However, for uses other than
container "start", `WithNamespaces()` should return the correct error
to allow code to handle it accordingly.
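Roughly, the conversion looks like this (a sketch with a hypothetical helper
name, not the exact daemon code):

    package daemon

    import (
        "fmt"

        "github.com/docker/docker/errdefs"
    )

    // toInvalidParameter (hypothetical) converts a "not found" error for the
    // container whose namespace is being joined into an "invalid parameter"
    // error, so that the API does not report the container being _started_
    // as missing.
    func toInvalidParameter(containerID string, err error) error {
        if errdefs.IsNotFound(err) {
            return errdefs.InvalidParameter(fmt.Errorf("container %s not found: %w", containerID, err))
        }
        return err
    }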
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This test is currently failing with containerd-integration, which should
be looked into, but let's start with preventing it from panicking, to make
the test-failures less noisy;
--- FAIL: TestDiskUsage/after_container.Run (0.26s)
panic: runtime error: index out of range [0] with length 0 [recovered]
panic: runtime error: index out of range [0] with length 0
goroutine 280 [running]:
testing.tRunner.func1.2({0xb07a00, 0x40002006a8})
/usr/local/go/src/testing/testing.go:1526 +0x1c8
testing.tRunner.func1()
/usr/local/go/src/testing/testing.go:1529 +0x364
panic({0xb07a00, 0x40002006a8})
/usr/local/go/src/runtime/panic.go:884 +0x1f4
github.com/docker/docker/integration/system.TestDiskUsage.func3(0x0?, {0x0, {0x14ea4a8, 0x0, 0x0}, {0x14ea4a8, 0x0, 0x0}, {0x14ea4a8, 0x0, ...}, ...})
/go/src/github.com/docker/docker/integration/system/disk_usage_test.go:82 +0x7e4
github.com/docker/docker/integration/system.TestDiskUsage.func4(0x4000235c80?)
/go/src/github.com/docker/docker/integration/system/disk_usage_test.go:118 +0x8c
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Also remove integration-cli: `DockerAPISuite.TestContainerAPIDeleteConflict`,
which was testing the same conditions as `TestRemoveContainerRunning` in
integration/container.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Also move the validation function to live with the type definition,
which allows it to be used outside of the daemon as well.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Add checks for operations that could potentially perform overlayfs mounts,
which could cause undefined behaviors.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
This utility was only used for a single test, and it was very limited
in functionality as it only allowed for a certain error-string to be
matched.
Let's change it into a more generic function; a helper that allows a
container to be created from a `TestContainerConfig` (which can be
constructed using `NewTestConfig`) and that returns the response from
client.ContainerCreate(), so that any result from that can be tested,
leaving it up to the test to check the results.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Introduce a NewTestConfig utility, to allow using the available utilities
for constructing a config, and use them with the regular API client.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The `client` variable was colliding with the `client` import. In some cases
the confusing `cli` name (it's not the "cli") was used. Given that such names
can easily start spreading (through copy/paste, or "code by example"), let's
make a one-time pass through all of them in this package to use the same name.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The `client` variable was colliding with the `client` import in various
files. While it didn't conflict in all files, there was inconsistency
in the naming, sometimes using the confusing `cli` name (it's not the
"cli"), and such names can easily start spreading (through copy/paste,
or "code by example").
Let's make a one-time pass through all of them in this package to use
the same name.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This test was testing the client-side validation, so we might as well
move it there, and verify that the client performs the validation before
trying to make an API call.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
The MediaType was changed twice in;
- b3b7eb2723 ("application/vnd.docker.plugins.v1+json" -> "application/vnd.docker.plugins.v1.1+json")
- 54587d861d ("application/vnd.docker.plugins.v1.1+json" -> "application/vnd.docker.plugins.v1.2+json")
But the (integration) tests were still using the old version, so let's
use the VersionMimeType const that's defined, and use the updated version.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The original code in container.Exec was potentially leaking the copy
goroutine when the context was cancelled or timed out. The new
`demultiplexStreams()` function won't return until the goroutine has
finished its work, and ensures that the hijacked connection is closed.
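The shape of the fix, roughly (a sketch of the pattern, not the exact helper):

    package container

    import (
        "context"
        "io"

        "github.com/docker/docker/api/types"
        "github.com/docker/docker/pkg/stdcopy"
    )

    func demultiplexStreams(ctx context.Context, resp types.HijackedResponse, stdout, stderr io.Writer) error {
        done := make(chan error, 1)
        go func() {
            // demultiplex the raw stream into stdout and stderr
            _, err := stdcopy.StdCopy(stdout, stderr, resp.Reader)
            done <- err
        }()
        select {
        case err := <-done:
            return err
        case <-ctx.Done():
            resp.Close() // closing the hijacked connection unblocks StdCopy
            <-done       // wait for the goroutine to finish before returning
            return ctx.Err()
        }
    }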
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
I noticed this was always being skipped because of race conditions
checking the logs.
This change adds a log scanner which will look through the logs line by
line rather than allocating a big buffer.
Additionally it adds a `poll.Check` which we can use to actually wait
for the desired log entry.
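A sketch of the approach, assuming the gotest.tools/v3/poll API and a
hypothetical logs() accessor:

    package system

    import (
        "bufio"
        "io"
        "strings"
        "testing"

        "gotest.tools/v3/poll"
    )

    // waitForLogEntry polls until a log line containing want appears,
    // scanning line by line instead of allocating one big buffer.
    func waitForLogEntry(t *testing.T, logs func() io.Reader, want string) {
        check := func(poll.LogT) poll.Result {
            scanner := bufio.NewScanner(logs())
            for scanner.Scan() {
                if strings.Contains(scanner.Text(), want) {
                    return poll.Success()
                }
            }
            return poll.Continue("%q not found in logs yet", want)
        }
        poll.WaitOn(t, check)
    }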
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
- use assert.Check to continue the test even if a check fails
- assert the total number of images returned, not only their RepoTags
- use subtests
Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
"ro-non-recursive", "ro-force-recursive", and "rro" are
now removed from the legacy mount API.
The CLI may still support them via the new mount API (if we want).
Follow-up to PR 45278
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
When resolving a reference that is both a Named and Digested, it could
be resolved to an image that has the same digest, but a completely
different repository name.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
- Combine TestAttachWithTTY and TestAttachWithoutTTy to a single test using sub-tests
- Set up and tear-down the test-environment once
- Remove redundant client.ContainerRemove, as it's taken care of by testEnv.Clean()
- Run both tests in parallel
make TEST_FILTER=TestAttach DOCKER_GRAPHDRIVER=overlay2 TESTDEBUG=1 test-integration
Loaded image: busybox:latest
Loaded image: busybox:glibc
Loaded image: debian:bullseye-slim
Loaded image: hello-world:latest
Loaded image: arm32v7/hello-world:latest
INFO: Testing against a local daemon
=== RUN TestAttach
=== RUN TestAttach/without_TTY
=== PAUSE TestAttach/without_TTY
=== RUN TestAttach/with_TTY
=== PAUSE TestAttach/with_TTY
=== CONT TestAttach/without_TTY
=== CONT TestAttach/with_TTY
--- PASS: TestAttach (0.00s)
--- PASS: TestAttach/without_TTY (0.03s)
--- PASS: TestAttach/with_TTY (0.03s)
PASS
DONE 3 tests in 1.347s
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Calling function returned from setupTest (which calls testEnv.Clean) in
a defer block inside a test that spawns parallel subtests caused the
cleanup function to be called before any of the subtests did anything.
Change the defer expressions to use `t.Cleanup` instead to call it only
after all subtests have also finished.
This only changes tests which have parallel subtests.
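A minimal sketch of the difference (setup is a stand-in for the real
setupTest helper):

    package example

    import "testing"

    func setup(t *testing.T) (cleanup func()) { return func() {} } // stand-in

    func TestExample(t *testing.T) {
        cleanup := setup(t)
        // defer cleanup() // wrong: runs as soon as TestExample returns,
        //                 // while the parallel subtests are still paused
        t.Cleanup(cleanup) // right: runs after all subtests have finished

        t.Run("a", func(t *testing.T) { t.Parallel() })
        t.Run("b", func(t *testing.T) { t.Parallel() })
    }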
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
The daemon.lazyInitializeVolume() function only handles restoring Volumes
if a Driver is specified. The Container's MountPoints field may also
contain other kinds of mounts (e.g., bind-mounts). Those were ignored, and
did not return an error; 1d9c8619cd/daemon/volumes.go (L243-L252C2)
However, the prepareMountPoints() assumed each MountPoint was a volume,
and logged an informational message about the volume being restored;
1d9c8619cd/daemon/mounts.go (L18-L25)
This would panic if the MountPoint was not a volume;
github.com/docker/docker/daemon.(*Daemon).prepareMountPoints(0xc00054b7b8?, 0xc0007c2500)
/root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/mounts.go:24 +0x1c0
github.com/docker/docker/daemon.(*Daemon).restore.func5(0xc0007c2500, 0x0?)
/root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/daemon.go:552 +0x271
created by github.com/docker/docker/daemon.(*Daemon).restore
/root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/daemon.go:530 +0x8d8
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x564e9be4c7c0]
This issue was introduced in 647c2a6cdd
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This adds an additional interval to be used by healthchecks during the
start period.
Typically when a container is just starting you want to check if it is
ready more quickly than a typical healthcheck might run. Without this,
users have to balance running healthchecks too frequently against
taking a very long time to mark a container as healthy for the first
time.
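For illustration, a hedged sketch (assuming the new interval is exposed as
StartInterval on container.HealthConfig):

    package main

    import (
        "time"

        "github.com/docker/docker/api/types/container"
    )

    func exampleHealthConfig() *container.Config {
        return &container.Config{
            Image: "nginx",
            Healthcheck: &container.HealthConfig{
                Test:          []string{"CMD-SHELL", "curl -f http://localhost/ || exit 1"},
                Interval:      30 * time.Second, // steady-state check interval
                StartPeriod:   60 * time.Second, // grace period after container start
                StartInterval: 2 * time.Second,  // faster checks during the start period
            },
        }
    }

During the start period the check runs every 2 seconds; once the container is
healthy (or the start period ends), the regular 30-second interval applies.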
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Multiple daemons starting/running concurrently can collide with each
other when editing iptables rules. Most integration tests which opt into
parallelism and start daemons work around this problem by starting the
daemon with the --iptables=false option. However, some of the tests
neglect to pass the option when starting or restarting the daemon,
resulting in those tests being flaky.
Audit the integration tests which call t.Parallel() and (*Daemon).Stop()
and add --iptables=false arguments where needed.
Signed-off-by: Cory Snider <csnider@mirantis.com>
When live-restoring a container, the volume driver needs to be notified that
there is an active mount for the volume.
Before this change, the count is zero until the container stops, at which
point the uint64 wraps around, pretty much making it so the volume can
never be removed until another daemon restart.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This field was added in f0e5b3d7d8 to
account for older versions of the engine (Docker EE LTS versions), which
did not yet provide the OSType field in Docker info, and had to be manually
set using the TEST_OSTYPE env-var.
This patch removes the field in favor of the equivalent in DaemonInfo. It's
more verbose, but also less ambiguous about what information we're using (i.e.,
the platform the daemon is running on, not the local platform).
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Before 4bafaa00aa, if the daemon was
killed while a container was running and the container shim is killed
before the daemon is restarted, such as if the host system is
hard-rebooted, the daemon would restore the container to the stopped
state and set the exit code to 255. The aforementioned commit introduced
a regression where the container's exit code would instead be set to 0.
Fix the regression so that the exit code is once again set to 255 on
restore.
Signed-off-by: Cory Snider <csnider@mirantis.com>
Commit 90de570cfa passed through the request
context to daemon.ContainerStop(). As a result, cancelling the context would
cancel the "graceful" stop of the container, and would proceed with forcefully
killing the container.
This patch partially reverts the changes from 90de570cfa
and detaches the context, so that cancelling the request's context no longer
cancels the graceful stop.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This makes the output of `docker save` fully OCI compliant.
When using the containerd image store, this code is not used. That
exporter will just use containerd's export method and should give us the
output we want for multi-arch images.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
`docker run -v /foo:/foo:ro` is now recursively read-only on kernel >= 5.12.
Automatically falls back to the legacy non-recursively read-only mount mode on kernel < 5.12.
Use `ro-non-recursive` to disable RRO.
Use `ro-force-recursive` or `rro` to explicitly enable RRO. (Fails on kernel < 5.12)
Fix issue 44978
Fix docker/for-linux issue 788
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
These changes add basic CDI integration to the docker daemon.
A CDI driver is added to handle CDI device requests. This is gated by an
experimental feature flag, and is only supported on Linux.
This change also adds a CDISpecDirs (cdi-spec-dirs) option to the config.
This allows the default values of `/etc/cdi` and `/var/run/cdi` to be
overridden, which is useful for testing.
Signed-off-by: Evan Lezar <elezar@nvidia.com>
- use is.ErrorType
- replace uses of client.IsErrNotFound with errdefs.IsNotFound, as
the client no longer returns the old error-type.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Now that most uses of reexec have been replaced with non-reexec
solutions, most of the reexec.Init() calls peppered throughout the test
suites are unnecessary. Furthermore, most of the reexec.Init() calls in
test code neglect to check the return value to determine whether to
exit, which would result in the reexec'ed subprocesses proceeding to run
the tests, which would reexec another subprocess which would proceed to
run the tests, recursively. (That would explain why every reexec
callback used to unconditionally call os.Exit() instead of returning...)
Remove unneeded reexec.Init() calls from test and example code which no
longer needs it, and fix the reexec.Init() calls which are not inert to
exit after a reexec callback is invoked.
Signed-off-by: Cory Snider <csnider@mirantis.com>
This field is deprecated since 1261fe69a3,
and will now be omitted on API v1.44 and up for the `GET /images/json`,
`GET /images/{id}/json`, and `GET /system/df` endpoints.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Since cc19eba (backported to v23.0.4), the PreferredPool for docker0 is
set only when the user provides the bip config parameter or when the
default bridge already exists. That means, if a user provides the
fixed-cidr parameter on a fresh install or reboots their computer/server
without bip set, dockerd throws the following error when it starts:
> failed to start daemon: Error initializing network controller: Error
> creating default "bridge" network: failed to parse pool request for
> address space "LocalDefault" pool "" subpool "100.64.0.0/26": Invalid
> Address SubPool
See #45356.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
In versions of Docker before v1.10, this field was calculated from
the image itself and all of its parent images. Images are now stored
self-contained, and no longer use a parent-chain, making this field
an equivalent of the Size field.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The signatures of functions in containerd's errdefs packages are very
similar to those in our own, and it's easy to accidentally use the wrong
package.
This patch uses a consistent alias for all occurrences of this import.
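For example (the alias name itself is a convention, not mandated):

    package example

    import (
        cerrdefs "github.com/containerd/containerd/errdefs" // containerd's errdefs
        "github.com/docker/docker/errdefs"                  // our own errdefs
    )

    func isNotFound(err error) bool {
        // the alias makes it obvious which package each check comes from
        return cerrdefs.IsNotFound(err) || errdefs.IsNotFound(err)
    }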
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
TestDaemonRestartKillContainers test was always executing the last case
(`container created should not be restarted`) because the iterated
variables were not copied correctly.
Capture iterated values by value correctly and rename c to tc.
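The classic pattern, for reference (pre-Go 1.22 loop-variable semantics):

    package container

    import "testing"

    func TestDemo(t *testing.T) {
        testCases := []struct{ name string }{{"a"}, {"b"}, {"c"}}
        for _, tc := range testCases {
            tc := tc // copy the iterated value; without this, every parallel
            // subtest closure observes only the final element
            t.Run(tc.name, func(t *testing.T) {
                t.Parallel()
                _ = tc.name // uses the captured copy, not the shared loop variable
            })
        }
    }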
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Volumes created from the image config were not being pruned because the
volume service did not think they were anonymous, since the code
creating them passed along a generated name instead of letting the volume
service generate it.
This changes the code path to have the volume service generate the name
instead of doing it ahead of time.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Commit 3246db3755 added handling for removing
cluster volumes, but in some conditions, this resulted in errors not being
returned if the volume was in use;
docker swarm init
docker volume create foo
docker create -v foo:/foo busybox top
docker volume rm foo
This patch changes the logic for ignoring "local" volume errors if swarm
is enabled (and cluster volumes supported).
While working on this fix, I also discovered that Cluster.RemoveVolume()
did not handle the "force" option correctly; while swarm correctly handled
these, the cluster backend performs a lookup of the volume first (to obtain
its ID), which would fail if the volume didn't exist.
Before this patch:
make TEST_FILTER=TestVolumesRemoveSwarmEnabled DOCKER_GRAPHDRIVER=vfs test-integration
...
Running /go/src/github.com/docker/docker/integration/volume (arm64.integration.volume) flags=-test.v -test.timeout=10m -test.run TestVolumesRemoveSwarmEnabled
...
=== RUN TestVolumesRemoveSwarmEnabled
=== PAUSE TestVolumesRemoveSwarmEnabled
=== CONT TestVolumesRemoveSwarmEnabled
=== RUN TestVolumesRemoveSwarmEnabled/volume_in_use
volume_test.go:122: assertion failed: error is nil, not errdefs.IsConflict
volume_test.go:123: assertion failed: expected an error, got nil
=== RUN TestVolumesRemoveSwarmEnabled/volume_not_in_use
=== RUN TestVolumesRemoveSwarmEnabled/non-existing_volume
=== RUN TestVolumesRemoveSwarmEnabled/non-existing_volume_force
volume_test.go:143: assertion failed: error is not nil: Error response from daemon: volume no_such_volume not found
--- FAIL: TestVolumesRemoveSwarmEnabled (1.57s)
--- FAIL: TestVolumesRemoveSwarmEnabled/volume_in_use (0.00s)
--- PASS: TestVolumesRemoveSwarmEnabled/volume_not_in_use (0.01s)
--- PASS: TestVolumesRemoveSwarmEnabled/non-existing_volume (0.00s)
--- FAIL: TestVolumesRemoveSwarmEnabled/non-existing_volume_force (0.00s)
FAIL
With this patch:
make TEST_FILTER=TestVolumesRemoveSwarmEnabled DOCKER_GRAPHDRIVER=vfs test-integration
...
Running /go/src/github.com/docker/docker/integration/volume (arm64.integration.volume) flags=-test.v -test.timeout=10m -test.run TestVolumesRemoveSwarmEnabled
...
=== RUN TestVolumesRemoveSwarmEnabled
=== PAUSE TestVolumesRemoveSwarmEnabled
=== CONT TestVolumesRemoveSwarmEnabled
=== RUN TestVolumesRemoveSwarmEnabled/volume_in_use
=== RUN TestVolumesRemoveSwarmEnabled/volume_not_in_use
=== RUN TestVolumesRemoveSwarmEnabled/non-existing_volume
=== RUN TestVolumesRemoveSwarmEnabled/non-existing_volume_force
--- PASS: TestVolumesRemoveSwarmEnabled (1.53s)
--- PASS: TestVolumesRemoveSwarmEnabled/volume_in_use (0.00s)
--- PASS: TestVolumesRemoveSwarmEnabled/volume_not_in_use (0.01s)
--- PASS: TestVolumesRemoveSwarmEnabled/non-existing_volume (0.00s)
--- PASS: TestVolumesRemoveSwarmEnabled/non-existing_volume_force (0.00s)
PASS
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Stopping container on Windows can sometimes take longer than 10s which
caused this test to be flaky.
Increase the timeout to 75s when running this test on Windows.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
The latest version of containerd-shim-runhcs-v1 (v0.10.0-rc.4) pulled in
with the bump to containerd v1.7.0-rc.3 had several changes to make it
more robust, which had the side effect of increasing the worst-case
amount of time it takes for a container to exit.
Notably, the total timeout for shutting down a task increased from 30
seconds to 60! Increase the timeouts hardcoded in the daemon and
integration tests so that they don't give up too soon.
Signed-off-by: Cory Snider <csnider@mirantis.com>
The CreatedAt date was determined from the volume's `_data`
directory (`/var/lib/docker/volumes/<volumename>/_data`).
However, when initializing a volume, this directory is updated,
causing the date to change.
Instead of using the `_data` directory, use its parent directory,
which is not updated afterwards, and should reflect the time that
the volume was created.
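A sketch of the idea (paths are illustrative):

    package local

    import (
        "os"
        "path/filepath"
        "time"
    )

    // createdAt derives the volume's creation time from the volume's own
    // directory (.../volumes/<volumename>) rather than its _data
    // subdirectory, which is touched when the volume is initialized.
    func createdAt(dataPath string) (time.Time, error) {
        fi, err := os.Stat(filepath.Dir(dataPath))
        if err != nil {
            return time.Time{}, err
        }
        return fi.ModTime(), nil
    }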
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
The List Images API endpoint has accepted multiple values for the
`since` and `before` filter predicates, but thanks to Go's randomizing
of map iteration order, it would pick an arbitrary image to compare
created timestamps against. In other words, the behaviour was undefined.
Change these filter predicates to have well-defined semantics: the
logical AND of all values for each of the respective predicates. As
timestamps are a totally-ordered relation, this is exactly equivalent to
applying the newest and oldest creation timestamps for the `since` and
`before` predicates, respectively.
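Equivalently, as a sketch:

    package images

    import "time"

    // effectiveBounds reduces multiple filter values to the single pair of
    // timestamps that the AND of all predicates is equivalent to: the
    // newest "since" image and the oldest "before" image.
    func effectiveBounds(sinceTimes, beforeTimes []time.Time) (since, before time.Time) {
        for _, ts := range sinceTimes {
            if ts.After(since) {
                since = ts // newest "since" wins
            }
        }
        for i, ts := range beforeTimes {
            if i == 0 || ts.Before(before) {
                before = ts // oldest "before" wins
            }
        }
        return since, before
    }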
Signed-off-by: Cory Snider <csnider@mirantis.com>
The migration code is in the 22.06 branch, and if we don't migrate
the only side-effect is the daemon's ID being regenerated (as a
UUID).
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The existing archive implementation is not easy to reason about by
reading the source. Prepare to rewrite it by covering more edge cases in
tests. The new test cases were determined by black-box characterizing
the existing behaviour.
Signed-off-by: Cory Snider <csnider@mirantis.com>
- On Windows, we don't build and run a local test registry (we're not running
docker-in-docker), so we need to skip this test.
- On rootless, networking doesn't support this (currently)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This is accomplished by storing the distribution source in the content
labels. If the distribution source is not found, then we check with the
registry to see if the digest exists in the repo; if it does exist, the
puller will use it.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This adds a new "all" filter argument to the volume prune endpoint.
When this is not set, or set to a falsy value, only anonymous volumes
are considered for pruning.
When `all` is set to a truthy value, you get the old behavior.
This is an API change, but I think one that is what most people would
want.
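Client-side, opting back into the old behavior would look along these lines
(sketch):

    package main

    import (
        "context"

        "github.com/docker/docker/api/types/filters"
        "github.com/docker/docker/client"
    )

    func pruneAllVolumes(ctx context.Context, apiClient client.APIClient) error {
        // all=true opts back into pruning named volumes as well; without
        // it, only anonymous volumes are considered for pruning.
        _, err := apiClient.VolumesPrune(ctx, filters.NewArgs(filters.Arg("all", "true")))
        return err
    }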
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Before this change, restarting the daemon in live-restore with running
containers + a restart policy meant that volume refs were not restored.
This specifically happens when the container is still running *and*
there is a restart policy that would make sure the container was running
again on restart.
The bug allows volumes to be removed even though containers are
referencing them. 😱
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
integration/config/config_test.go:106:31: empty-lines: extra empty line at the end of a block (revive)
integration/secret/secret_test.go:106:31: empty-lines: extra empty line at the end of a block (revive)
integration/network/service_test.go:58:50: empty-lines: extra empty line at the end of a block (revive)
integration/network/service_test.go:401:58: empty-lines: extra empty line at the end of a block (revive)
integration/system/event_test.go:30:38: empty-lines: extra empty line at the end of a block (revive)
integration/plugin/logging/read_test.go:19:41: empty-lines: extra empty line at the end of a block (revive)
integration/service/list_test.go:30:48: empty-lines: extra empty line at the end of a block (revive)
integration/service/create_test.go:400:46: empty-lines: extra empty line at the start of a block (revive)
integration/container/logs_test.go:156:42: empty-lines: extra empty line at the end of a block (revive)
integration/container/daemon_linux_test.go:135:44: empty-lines: extra empty line at the end of a block (revive)
integration/container/restart_test.go:160:62: empty-lines: extra empty line at the end of a block (revive)
integration/container/wait_test.go:181:47: empty-lines: extra empty line at the end of a block (revive)
integration/container/restart_test.go:116:30: empty-lines: extra empty line at the end of a block (revive)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The TODO comment was in regards to allowing graphdriver plugins to
provide their own ContainerFS implementations. The ContainerFS interface
has been removed from Moby, so there is no longer anything which needs
to be figured out.
Signed-off-by: Cory Snider <csnider@mirantis.com>
Updating test-code only; set ReadHeaderTimeout for some, or suppress the linter
error for others.
contrib/httpserver/server.go:11:12: G114: Use of net/http serve function that has no support for setting timeouts (gosec)
log.Panic(http.ListenAndServe(":80", nil))
^
integration/plugin/logging/cmd/close_on_start/main.go:42:12: G112: Potential Slowloris Attack because ReadHeaderTimeout is not configured in the http.Server (gosec)
server := http.Server{
Addr: l.Addr().String(),
Handler: mux,
}
integration/plugin/logging/cmd/discard/main.go:17:12: G112: Potential Slowloris Attack because ReadHeaderTimeout is not configured in the http.Server (gosec)
server := http.Server{
Addr: l.Addr().String(),
Handler: mux,
}
integration/plugin/logging/cmd/dummy/main.go:14:12: G112: Potential Slowloris Attack because ReadHeaderTimeout is not configured in the http.Server (gosec)
server := http.Server{
Addr: l.Addr().String(),
Handler: http.NewServeMux(),
}
integration/plugin/volumes/cmd/dummy/main.go:14:12: G112: Potential Slowloris Attack because ReadHeaderTimeout is not configured in the http.Server (gosec)
server := http.Server{
Addr: l.Addr().String(),
Handler: http.NewServeMux(),
}
testutil/fixtures/plugin/basic/basic.go:25:12: G112: Potential Slowloris Attack because ReadHeaderTimeout is not configured in the http.Server (gosec)
server := http.Server{
Addr: l.Addr().String(),
Handler: http.NewServeMux(),
}
volume/testutils/testutils.go:170:5: G114: Use of net/http serve function that has no support for setting timeouts (gosec)
go http.Serve(l, mux)
^
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Modifying the builtin Windows runtime to send the exited event
immediately upon the container's init process exiting, without first
waiting for the Compute System to shut down, perturbed the timings
enough to make TestWaitConditions flaky on that platform. Make
TestWaitConditions timing-independent by having the container wait
for input on STDIN before exiting.
Signed-off-by: Cory Snider <csnider@mirantis.com>
Add an integration test to verify that health checks are killed on
timeout and that the output is captured.
Co-authored-by: Nicolas De Loof <nicolas.deloof@gmail.com>
Signed-off-by: Cory Snider <csnider@mirantis.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Terminating the exec process when the context is canceled has been
broken since Docker v17.11 so nobody has been able to depend upon that
behaviour in five years of releases. We are thus free from backwards-
compatibility constraints.
Co-authored-by: Nicolas De Loof <nicolas.deloof@gmail.com>
Co-authored-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
Signed-off-by: Cory Snider <csnider@mirantis.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Since runtimes can now just be containerd shims, we need to check if the
reference is possibly a containerd shim.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Contrary to popular belief, the OCI Runtime specification does not
specify the command-line API for runtimes. Looking at containerd's
architecture from the lens of the OCI Runtime spec, the _shim_ is the
OCI Runtime and runC is "just" an implementation detail of the
io.containerd.runc.v2 runtime. When one configures a non-default runtime
in Docker, what they're really doing is instructing Docker to create
containers using the io.containerd.runc.v2 runtime with a configuration
option telling the runtime that the runC binary is at some non-default
path. Consequently, only OCI runtimes which are compatible with the
io.containerd.runc.v2 shim, such as crun, can be used in this manner.
Other OCI runtimes, including kata-containers v2, come with their own
containerd shim and are not compatible with io.containerd.runc.v2.
As Docker has not historically provided a way to select a non-default
runtime which requires its own shim, runtimes such as kata-containers v2
could not be used with Docker.
Allow other containerd shims to be used with Docker; no daemon
configuration required. If the daemon is instructed to create a
container with a runtime name which does not match any of the configured
or stock runtimes, it passes the name along to containerd verbatim. A
user can start a container with the kata-containers runtime, for
example, simply by calling
docker run --runtime io.containerd.kata.v2
Runtime names which containerd would interpret as a path to an arbitrary
binary are disallowed. While handy for development and testing, it is not
strictly necessary and would allow anyone with Engine API access to
trivially execute any binary on the host as root, so we have decided it
would be safest for our users if it was not allowed.
It is not yet possible to set an alternative containerd shim as the
default runtime; it can only be configured per-container.
Signed-off-by: Cory Snider <csnider@mirantis.com>
Older versions of Go don't format comments, so this is committed as
a separate commit, so that we can already make these changes before
we upgrade to Go 1.19.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
WARN [runner] The linter 'golint' is deprecated (since v1.41.0) due to: The repository of the linter has been archived by the owner. Replaced by revive.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Commit 737e8c6ab8 added validation for the wait
condition parameter, however, the default ("not-running") option was not part
of the list of valid options, resulting in a regression if the default value
was explicitly passed;
docker scan --accept-license --version
Error response from daemon: invalid condition: "not-running"
This patch adds the missing option, and adds a test to verify.
With this patch;
make BIND_DIR=. DOCKER_GRAPHDRIVER=vfs TEST_FILTER=TestWaitConditions test-integration
...
--- PASS: TestWaitConditions (0.04s)
--- PASS: TestWaitConditions/removed (1.79s)
--- PASS: TestWaitConditions/default (1.91s)
--- PASS: TestWaitConditions/next-exit (1.97s)
--- PASS: TestWaitConditions/not-running (1.99s)
PASS
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Clients now have the possibility to set the console size of the executed
process immediately at creation time. This makes a difference, for example,
when executing commands that output some kind of text user interface
which is bounded by the console dimensions.
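For example (sketch, assuming the ConsoleSize field on types.ExecConfig,
ordered [height, width]):

    package main

    import (
        "context"

        "github.com/docker/docker/api/types"
        "github.com/docker/docker/client"
    )

    func createExec(ctx context.Context, apiClient client.APIClient, containerID string) (string, error) {
        resp, err := apiClient.ContainerExecCreate(ctx, containerID, types.ExecConfig{
            Cmd:         []string{"top"},
            Tty:         true,
            ConsoleSize: &[2]uint{40, 120}, // height, width: set at creation, no resize race
        })
        if err != nil {
            return "", err
        }
        return resp.ID, nil
    }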
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
1. Add integration tests for the ContainerLogs API call
Each test handles a distinct case of ContainerLogs output.
- Muxed stream, when container is started without tty
- Single stream, when container is started with tty
2. Add unit test for LogReader suite that tests concurrent logging
It checks that there are no race conditions when logging concurrently
from multiple goroutines.
Co-authored-by: Cory Snider <csnider@mirantis.com>
Signed-off-by: Cory Snider <csnider@mirantis.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Starting with the 22.06 release, buildx is the default client for
docker build, which uses BuildKit as builder.
This patch changes the default builder version as advertised by
the daemon to "2" (BuildKit), so that pre-22.06 CLIs with BuildKit
support (but no buildx installed) also default to using BuildKit
when interacting with a 22.06 (or up) daemon.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
I noticed I made a mistake in the first ping ("before swarm init"), which
was not specifying the daemon's socket path, and because of that was testing
against the main integration daemon (not the locally spun-up daemon).
While fixing that, I wondered why the test didn't actually use the client
for the requests (to also verify the client converted the response), so
I rewrote the test to use `client.Ping()` and to verify the ping response
has the expected values set.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
These HostConfig properties were not validated until the OCI spec for the container
was created, which meant that `docker run` and `docker create` would accept
invalid values, and the invalid value would not be detected until `start` was
called, returning a 500 "internal server error", as well as errors from containerd
("cleanup: failed to delete container from containerd: no such container") in the
daemon logs.
As a result, a faulty container was created, and the container state remained
in the `created` state.
This patch:
- Updates `oci.WithNamespaces()` to return the correct `errdefs.InvalidParameter`
- Updates `verifyPlatformContainerSettings()` to validate these settings, so that
an error is returned when _creating_ the container.
Before this patch:
docker run -dit --ipc=shared --name foo busybox
2a00d74e9fbb7960c4718def8f6c74fa8ee754030eeb93ee26a516e27d4d029f
docker: Error response from daemon: Invalid IPC mode: shared.
docker ps -a --filter name=foo
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2a00d74e9fbb busybox "sh" About a minute ago Created foo
After this patch:
docker run -dit --ipc=shared --name foo busybox
docker: Error response from daemon: invalid IPC mode: shared.
docker ps -a --filter name=foo
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
An integration test was added to verify the new validation, which can be run with:
make BIND_DIR=. TEST_FILTER=TestCreateInvalidHostConfig DOCKER_GRAPHDRIVER=vfs test-integration
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
On Linux the daemon was not respecting the HostConfig.ConsoleSize
property and relied on the CLI initializing the tty size after the container
was created. This caused a delay between container creation and
the tty actually being resized.
This is also a small change to the API description, because
HostConfig.ConsoleSize is no longer Windows-only.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
This change is in preparation of deprecating support for old manifests.
Currently the daemon's ID is based on the trust-key ID, which will be
removed once we fully deprecate support for old manifests (the trust
key is currently only used in tests).
This patch:
- checks if a trust-key is present; if so, it migrates the trust-key
ID to the new "engine-id" file within the daemon's root.
- if no trust-key is present (so in case it's a "fresh" install), we
generate a UUID instead and use that as ID.
The migration is to prevent engines from getting a new ID on upgrades;
while we don't provide any guarantees on the engine's ID, users may
expect the ID to be "stable" (not change) between upgrades.
A test has been added, which can be run with;
make DOCKER_GRAPHDRIVER=vfs TEST_FILTER='TestConfigDaemonID' test-integration
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This was added in 93c3e6c91e, at which time only
some basic handling of non-successful status codes was present;
93c3e6c91e/api/client/utils.go (L112-L121)
Given that since 38e6d474af non-successful status-
codes are already handled, and a 204 ("no content") status should not be an error,
this special case should no longer be needed.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Implement similar logic as is used in httputils.ReadJSON(). Before
this patch, endpoints using the ContainerDecoder would incorrectly
return a 500 (internal server error) status.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Implement a ReadJSON() utility to help reduce some code-duplication,
and to make sure we handle JSON requests consistently (e.g. always
check for the content-type).
Differences compared to current handling:
- prevent possible panic if request.Body is nil ("should never happen")
- always require Content-Type to be "application/json"
- be stricter about additional content after JSON (previously ignored)
- but, allow the body to be empty (an empty body is not invalid);
update TestContainerInvalidJSON accordingly, which was testing the
wrong expectation.
- close body after reading (some code did this)
We should consider adding a "max body size" limit to this function, similar to
7b9275c0da/api/server/middleware/debug.go (L27-L40)
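A sketch of the behaviour described above (not the exact implementation; the
precise empty-body/Content-Type interplay may differ):

    package httputils

    import (
        "encoding/json"
        "errors"
        "fmt"
        "io"
        "net/http"
        "strings"

        "github.com/docker/docker/errdefs"
    )

    func ReadJSON(r *http.Request, out interface{}) error {
        if r.Body == nil {
            return nil // "should never happen", but don't panic if it does
        }
        defer r.Body.Close()
        if ct := r.Header.Get("Content-Type"); ct != "" && !strings.HasPrefix(ct, "application/json") {
            return errdefs.InvalidParameter(fmt.Errorf("unsupported Content-Type header (%s): must be 'application/json'", ct))
        }
        dec := json.NewDecoder(r.Body)
        if err := dec.Decode(out); err != nil {
            if errors.Is(err, io.EOF) {
                return nil // an empty body is not invalid
            }
            return errdefs.InvalidParameter(err)
        }
        if dec.More() {
            return errdefs.InvalidParameter(errors.New("unexpected content after JSON"))
        }
        return nil
    }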
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This is a follow-up to 427c7cc5f8, which added
proxy-configuration options ("http-proxy", "https-proxy", "no-proxy") to the
dockerd cli and in `daemon.json`.
While working on documentation changes for this feature, I realised that those
options won't be "next" to each other when formatting the daemon.json JSON, for
example using `jq` (which sorts the fields alphabetically). As it's possible that
additional proxy configuration options will be added in the future, I considered
that grouping these options in a struct within the JSON may help when setting
these options, as well as when discovering related options.
This patch introduces a "proxies" field in the JSON, which includes the
"http-proxy", "https-proxy", "no-proxy" options.
Conflict detection continues to work as before; with this patch applied:
mkdir -p /etc/docker/
echo '{"proxies":{"http-proxy":"http-config", "https-proxy":"https-config", "no-proxy": "no-proxy-config"}}' > /etc/docker/daemon.json
dockerd --http-proxy=http-flag --https-proxy=https-flag --no-proxy=no-proxy-flag --validate
unable to configure the Docker daemon with file /etc/docker/daemon.json:
the following directives are specified both as a flag and in the configuration file:
http-proxy: (from flag: http-flag, from file: http-config),
https-proxy: (from flag: https-flag, from file: https-config),
no-proxy: (from flag: no-proxy-flag, from file: no-proxy-config)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Arbitrary here does not include ''; best to catch that one early, as it's
almost certainly a mistake (possibly an attempt to pass a POSIX path
through this API).
Signed-off-by: Paul "TBBle" Hampson <Paul.Hampson@Pobox.com>
Since this function is about to get more complicated, and change
behaviour, this establishes tests for the existing implementation.
Signed-off-by: Paul "TBBle" Hampson <Paul.Hampson@Pobox.com>
This adds an additional "Swarm" header to the _ping endpoint response,
which allows a client to detect if Swarm is enabled on the daemon, without
having to call additional endpoints.
This change is not versioned in the API, and will be returned regardless
of the API version that is used. Clients should fall back to using other
endpoints to get this information if the header is not present.
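Client-side detection could look along these lines (sketch; header values are
along the lines of "active" and "inactive"):

    package main

    import (
        "fmt"
        "net/http"
    )

    func swarmStateFromPing(daemonAddr string) (string, error) {
        resp, err := http.Head(daemonAddr + "/_ping")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        if v := resp.Header.Get("Swarm"); v != "" {
            return v, nil // e.g. "active/manager", "inactive"
        }
        // older daemon: fall back to /info or /swarm to determine the state
        return "", fmt.Errorf("no Swarm header in ping response")
    }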
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The wrapResponseError() utility converted some specific errors, but in
doing so, could hide the actual error message returned by the daemon.
In addition, starting with 38e6d474af,
HTTP status codes were already mapped to their corresponding errdefs
types on the client-side, making this conversion redundant.
This patch removes the wrapResponseError() utility; it's worth noting
that some error-messages will change slightly (as they now return the
error as returned by the daemon), but may contain more details than
before, and in some cases this prevents hiding the actual error.
Before this change:
docker container rm nosuchcontainer
Error: No such container: nosuchcontainer
docker container cp mycontainer:/no/such/path .
Error: No such container:path: mycontainer:/no/such/path
docker container cp ./Dockerfile mycontainer:/no/such/path
Error: No such container:path: mycontainer:/no/such
docker image rm nosuchimage
Error: No such image: nosuchimage
docker network rm nosuchnetwork
Error: No such network: nosuchnetwork
docker volume rm nosuchvolume
Error: No such volume: nosuchvolume
docker plugin rm nosuchplugin
Error: No such plugin: nosuchplugin
docker checkpoint rm nosuchcontainer nosuchcheckpoint
Error response from daemon: No such container: nosuchcontainer
docker checkpoint rm mycontainer nosuchcheckpoint
Error response from daemon: checkpoint nosuchcheckpoint does not exist for container mycontainer
docker service rm nosuchservice
Error: No such service: nosuchservice
docker node rm nosuchnode
Error: No such node: nosuchnode
docker config rm nosuschconfig
Error: No such config: nosuschconfig
docker secret rm nosuchsecret
Error: No such secret: nosuchsecret
After this change:
docker container rm nosuchcontainer
Error response from daemon: No such container: nosuchcontainer
docker container cp mycontainer:/no/such/path .
Error response from daemon: Could not find the file /no/such/path in container mycontainer
docker container cp ./Dockerfile mycontainer:/no/such/path
Error response from daemon: Could not find the file /no/such in container mycontainer
docker image rm nosuchimage
Error response from daemon: No such image: nosuchimage:latest
docker network rm nosuchnetwork
Error response from daemon: network nosuchnetwork not found
docker volume rm nosuchvolume
Error response from daemon: get nosuchvolume: no such volume
docker plugin rm nosuchplugin
Error response from daemon: plugin "nosuchplugin" not found
docker checkpoint rm nosuchcontainer nosuchcheckpoint
Error response from daemon: No such container: nosuchcontainer
docker checkpoint rm mycontainer nosuchcheckpoint
Error response from daemon: checkpoint nosuchcheckpoint does not exist for container mycontainer
docker service rm nosuchservice
Error response from daemon: service nosuchservice not found
docker node rm nosuchnode
Error response from daemon: node nosuchnode not found
docker config rm nosuchconfig
Error response from daemon: config nosuchconfig not found
docker secret rm nosuchsecret
Error response from daemon: secret nosuchsecret not found
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Finish the refactor which was partially completed with commit
34536c498d, passing around IdentityMapping structs instead of pairs of
[]IDMap slices.
Existing code which uses []IDMap relies on zero-valued fields to be
valid, empty mappings. So in order to successfully finish the
refactoring without introducing bugs, their replacement also
needs to have a useful zero value which represents an empty mapping.
Change IdentityMapping to be a pass-by-value type so that there are no
nil pointers to worry about.
The functionality provided by the deprecated NewIDMappingsFromMaps
function is required by unit tests to construct arbitrary
IdentityMapping values. And the daemon will always need to access the
mappings to pass them to the Linux kernel. Accommodate these use cases
by exporting the struct fields instead. BuildKit currently depends on
the UIDs and GIDs methods so we cannot get rid of them yet.
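Usage then looks like this (sketch, assuming the exported UIDMaps/GIDMaps
fields):

    package main

    import "github.com/docker/docker/pkg/idtools"

    func exampleMappings() (empty, remapped idtools.IdentityMapping) {
        // the zero value is a valid, empty mapping (no remapping at all)
        empty = idtools.IdentityMapping{}

        // unit tests and the daemon can construct and read mappings directly
        remapped = idtools.IdentityMapping{
            UIDMaps: []idtools.IDMap{{ContainerID: 0, HostID: 100000, Size: 65536}},
            GIDMaps: []idtools.IDMap{{ContainerID: 0, HostID: 100000, Size: 65536}},
        }
        return empty, remapped
    }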
Signed-off-by: Cory Snider <csnider@mirantis.com>
This is more in line with other consts that are used for defaults, and makes it
slightly easier to consume than DefaultV2Registry, e.g. see:
https://github.com/oras-project/oras-go/blob/v1.1.0/pkg/auth/docker/resolver.go#L81-L84
Note that both the "index.docker.io" and "registry-1.docker.io" domains
are here for historic reasons and backward-compatibility. These domains
are still supported by Docker Hub (and will continue to be supported), but
there are new domains already in use, and plans to consolidate all legacy
domains to new "canonical" domains. Once those domains are decided on, we
should update these consts (but making sure to preserve compatibility with
existing installs, clients, and user configuration).
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This was added in commits fc21bf280b and
0380fbff37 in support of LCOW, but is
now always set to runtime.GOOS.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Commit 0380fbff37 added the ability to pass a
--platform flag on `docker import` when importing an archive. The intent
of that commit was to allow importing a Linux rootfs on a Windows daemon
(as part of the experimental LCOW feature).
A later commit (337ba71fc1) changed some
of this code to take both OS and Architecture into account (for `docker build`
and `docker pull`), but did not yet update the `docker image import`.
This patch updates the import endpoint to allow passing both OS and
Architecture. Note that currently only matching OSes are accepted,
and an error will be produced when (e.g.) specifying `linux` on Windows
and vice-versa.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This test is failing frequently once nodes have less disk space
available. Skipping the test for now, but we can continue looking
for a good solution.
Tracked through https://github.com/moby/moby/issues/42743
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This should help with Jenkins failing to clean up the Workspace:
- make sure "cleanup" is also called in the defer for all daemons. Keeping
the daemon's storage around prevented Jenkins from cleaning up.
- close client connections and some readers (just to be sure)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The daemon can print the proxy configuration as part of error-messages,
and when reloading the daemon configuration (SIGHUP). Make sure that
the configuration is sanitized before printing.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The proxy configuration works, but looks like we're unable to connect to the
test proxy server as part of our test;
level=debug msg="Trying to pull example.org:5000/some/image from https://example.org:5000 v2"
level=warning msg="Error getting v2 registry: Get \"https://example.org:5000/v2/\": proxyconnect tcp: dial tcp 127.0.0.1:45999: connect: connection refused"
level=info msg="Attempting next endpoint for pull after error: Get \"https://example.org:5000/v2/\": proxyconnect tcp: dial tcp 127.0.0.1:45999: connect: connection refused"
level=error msg="Handler for POST /v1.42/images/create returned error: Get \"https://example.org:5000/v2/\": proxyconnect tcp: dial tcp 127.0.0.1:45999: connect: connection refused"
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This allows configuring the daemon's proxy server through the daemon.json
configuration file or command-line flags, in addition to the existing
option (through environment variables).
Configuring environment variables on Windows to configure a service is more
complicated than on Linux, and adding alternatives for this to the daemon
configuration makes the configuration more transparent and easier to use.
The configuration as set through command-line flags or through the daemon.json
configuration file takes precedence over env-vars in the daemon's environment,
which allows the daemon to use a different proxy. If both command-line flags
and a daemon.json configuration option are set, an error is produced when starting
the daemon.
Note that this configuration is not "live reloadable" due to Golang's use of
`sync.Once()` for proxy configuration, which means that changing the proxy
configuration requires a restart of the daemon (reload / SIGHUP will not update
the configuration).
With this patch:
cat /etc/docker/daemon.json
{
"http-proxy": "http://proxytest.example.com:80",
"https-proxy": "https://proxytest.example.com:443"
}
docker pull busybox
Using default tag: latest
Error response from daemon: Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp: lookup proxytest.example.com on 127.0.0.11:53: no such host
docker build .
Sending build context to Docker daemon 89.28MB
Step 1/3 : FROM golang:1.16-alpine AS base
Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp: lookup proxytest.example.com on 127.0.0.11:53: no such host
Integration tests were added to test the behavior:
- verify that the configuration through all means are used (env-var,
command-line flags, daemon.json), and used in the expected order of
preference.
- verify that conflicting options produce an error.
Signed-off-by: Anca Iordache <anca.iordache@docker.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Looks like this test was broken from the start, and fully relied on a race
condition. (Test was added in 65ee7fff02)
The problem is in the service's command: `ls -l /etc/config || /bin/top`, which
will either:
- exit immediately if the secret is mounted correctly at `/etc/config` (which it should)
- keep running with `/bin/top` if the above failed
After the service is created, the test enters a race-condition, checking for 1
task to be running (which it occasionally is), after which it proceeds, and looks
up the list of tasks of the service, to get the log output of `ls -l /etc/config`.
This is another race: first of all, the original filter for that task lookup did
not filter by `running`, so it would pick "any" task of the service (either failed,
running, or "completed" (successfully exited) tasks).
In the meantime though, SwarmKit kept reconciling the service, and creating new
tasks, so even if the test was able to get the ID of the correct task, that task
may already have been exited, and removed (task-limit is 5 by default), so only
if the test was "lucky", it would be able to get the logs, but of course, chances
were likely that it would be "too late", and the task already gone.
The problem can be easily reproduced when running the steps manually:
echo 'CONFIG' | docker config create myconfig -
docker service create --config source=myconfig,target=/etc/config,mode=0777 --name myservice busybox sh -c 'ls -l /etc/config || /bin/top'
The above creates the service, but it keeps retrying, because each task exits
immediately (followed by SwarmKit reconciling and starting a new task);
mjntpfkkyuuc1dpay4h00c4oo
overall progress: 0 out of 1 tasks
1/1: ready [======================================> ]
verify: Detected task failure
^COperation continuing in background.
Use `docker service ps mjntpfkkyuuc1dpay4h00c4oo` to check progress.
And checking the tasks for the service reveals that tasks exit cleanly (no error),
but _do exit_, so swarm just keeps reconciling, and spinning up new tasks;
docker service ps myservice --no-trunc
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
2wmcuv4vffnet8nybg3he4v9n myservice.1 busybox:latest@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57 docker-desktop Ready Ready less than a second ago
5p8b006uec125iq2892lxay64 \_ myservice.1 busybox:latest@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57 docker-desktop Shutdown Complete less than a second ago
k8lpsvlak4b3nil0zfkexw61p \_ myservice.1 busybox:latest@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57 docker-desktop Shutdown Complete 6 seconds ago
vsunl5pi7e2n9ol3p89kvj6pn \_ myservice.1 busybox:latest@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57 docker-desktop Shutdown Complete 11 seconds ago
orxl8b6kt2l6dfznzzd4lij4s \_ myservice.1 busybox:latest@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57 docker-desktop Shutdown Complete 17 seconds ago
This patch changes the service's command to `sleep`, so that a successful task
(after successfully performing `ls -l /etc/config`) continues to be running until
the service is deleted. With that change, the service should (usually) reconcile
immediately, which removes the race condition, and should also make it faster :)
This patch changes the tests to use client.ServiceLogs() instead of using the
service's tasklist to directly access container logs. This should also fix some
failures that happened if some tasks failed to start before reconciling, in which
case client.TaskList() (with the current filters), could return more tasks than
anticipated (as it also contained the exited tasks);
=== RUN TestCreateServiceSecretFileMode
create_test.go:291: assertion failed: 2 (int) != 1 (int)
--- FAIL: TestCreateServiceSecretFileMode (7.88s)
=== RUN TestCreateServiceConfigFileMode
create_test.go:355: assertion failed: 2 (int) != 1 (int)
--- FAIL: TestCreateServiceConfigFileMode (7.87s)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Before this change, if you assume that things work the way the test
expects them to (they do not, but let's assume for now), we aren't really
testing anything because we are testing that a container is healthy
before and after we send a signal. This will give false positives even
if there is a bug in the underlying code. Sending a signal can take any
amount of time to cause a container to exit or to trigger healthchecks
to stop or whatever.
Now let's remove the assumption that things are working as expected,
because they are not.
In this case, `top` (which is what is running in the container) is
actually exiting when it receives `USR1`.
This totally invalidates the test.
We need more control and knowledge as to what is happening in the
container to properly test this.
This change introduces a custom script which traps `USR1` and flips the
health status each time the signal is received.
We then send the signal twice and check that the value has flipped each
time, so that we know the change has actually occurred.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Commit dae652e2e5 added support for non-privileged
containers to use ICMP_PROTO (used for `ping`). This option cannot be set for
containers that have user-namespaces enabled.
However, the detection looks to be incorrect; HostConfig.UsernsMode was added
in 6993e891d1 / ee2183881b,
and the property only has meaning if the daemon is running with user namespaces
enabled. In other situations, the property has no meaning.
As a result of the above, the sysctl would only be set for containers running
with UsernsMode=host on a daemon running with user-namespaces enabled.
This patch adds a check if the daemon has user-namespaces enabled (RemappedRoot
having a non-empty value), or if the daemon is running inside a user namespace
(e.g. rootless mode) to fix the detection.
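The corrected condition, roughly (sketch; RunningInUserNS is provided by a
userns helper package):

    package daemon

    import "github.com/moby/sys/userns"

    // pingSysctlAllowed reports (roughly) whether net.ipv4.ping_group_range
    // may be set for a container. The real check also considers the
    // container's own UsernsMode; this sketch only covers the daemon side:
    // users remapped (RemappedRoot set), or the daemon itself running
    // inside a user namespace (e.g. rootless mode).
    func pingSysctlAllowed(remappedRoot string) bool {
        return remappedRoot == "" && !userns.RunningInUserNS()
    }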
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>