This test was rewritten from an integration-cli test in commit
68d9beedbe, and originally implemented in
f4942ed864, which rewrote it from a unit-
test to an integration test.
Originally, it would check for the raw JSON response from the daemon, and
check for individual fields to be present in the output, but after commit
0fd5a65428, `client.Info()` was used, and
now the response is unmarshalled into a `system.Info`.
The remainder of the test remained the same in that rewrite, and as a
result we were now effectively testing whether a `system.Info` struct,
when marshalled as JSON, would show all the fields (surprise: it does).
TL;DR: the test would even pass with an empty `system.Info{}` struct,
which didn't provide much coverage, as it passed without a daemon:
func TestInfoAPI(t *testing.T) {
    // always shown fields
    stringsToCheck := []string{
        "ID",
        "Containers",
        "ContainersRunning",
        "ContainersPaused",
        "ContainersStopped",
        "Images",
        "LoggingDriver",
        "OperatingSystem",
        "NCPU",
        "OSType",
        "Architecture",
        "MemTotal",
        "KernelVersion",
        "Driver",
        "ServerVersion",
        "SecurityOptions",
    }

    out := fmt.Sprintf("%+v", system.Info{})
    for _, linePrefix := range stringsToCheck {
        assert.Check(t, is.Contains(out, linePrefix))
    }
}
This patch makes the test _slightly_ better by checking if the fields
are non-empty. More work is needed on this test though; currently it
uses the (already running) daemon, so it's hard to check for specific
fields to be correct (without knowing state of the daemon), but it's
not unlikely that other tests (partially) cover some of that. A TODO
comment was added to look into that (we should probably combine some
tests to prevent overlap, and make it easier to spot "gaps" as well).
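To illustrate, a rough sketch of the improved shape (using the real
client instead of marshalling an empty struct; the exact set of checks
is illustrative):

info, err := apiClient.Info(ctx)
assert.NilError(t, err)

// check that fields hold actual (non-empty) values, rather than only
// checking that the field names appear in the marshalled output.
assert.Check(t, info.ID != "")
assert.Check(t, info.Driver != "")
assert.Check(t, info.KernelVersion != "")
assert.Check(t, info.ServerVersion != "")
assert.Check(t, info.NCPU > 0)
assert.Check(t, info.MemTotal > 0)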
While working on this, also moving the `SystemTime` field into this test,
because that field is (no longer) dependent on "debug" state.
(It was actually this change that led me down this rabbit-hole.)
()_()
(-.-)
'(")(")'
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Following tests are implemented in this specific commit:
- Inter-container communications for internal and non-internal
bridge networks, over IPv4 and IPv6.
- Inter-container communications using IPv6 link-local addresses for
internal and non-internal bridge networks.
- Inter-network communications for internal and non-internal bridge
networks, over IPv4 and IPv6, are disallowed.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
This commit introduces a new integration test suite aimed at testing
networking features like inter-container communication, network
isolation, port mapping, etc., and how they interact with daemon-level
and network-level parameters.
So far, there are pretty much no tests making sure our networks are
well configured: 1. there are a few tests for port mapping, but they
don't cover all use cases; 2. there are a few tests that check if a
specific iptables rule exists, but that doesn't prevent that rule from
being wrong in the first place.
As we're planning to refactor how iptables rules are written, and change
some of them to fix known security issues, we need a way to test all
combinations of parameters. So far, this was done by hand, which is
particularly painful and time consuming. As such, this new test suite is
foundational to upcoming work.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
This delete was originally added in b37fdc5dd1
and migrated from `deleteImages(repoName)` in commit 1e55ace875,
however, deleting `foobar-save-multi-images-test` (`foobar-save-multi-images-test:latest`)
always resulted in an error;
Error response from daemon: No such image: foobar-save-multi-images-test:latest
This patch removes the redundant image delete.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Shutting down containers on Windows can take a long time (with hyper-v),
causing this test to be flaky; seen failing on windows 2022;
=== FAIL: github.com/docker/docker/integration/image TestSaveRepoWithMultipleImages (23.16s)
save_test.go:104: timeout waiting for container to exit
Looking at the test, we run a container only to commit it, and the test
does not make changes to the container's filesystem; it only runs a container
with a custom command (`true`).
Instead of running the container, we can _create_ a container and commit it;
this simplifies the tests, and prevents having to wait for the container to
exit (before committing).
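A minimal sketch of the create-and-commit flow (helper names follow the
integration-test utilities, but are meant as an illustration; `repoName`
is the tag used by the test):

cID := container.Create(ctx, t, apiClient, container.WithCmd("true"))
_, err := apiClient.ContainerCommit(ctx, cID, containertypes.CommitOptions{Reference: repoName})
assert.NilError(t, err)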
To verify:
make BIND_DIR=. DOCKER_GRAPHDRIVER=vfs TEST_FILTER=TestSaveRepoWithMultipleImages test-integration
INFO: Testing against a local daemon
=== RUN TestSaveRepoWithMultipleImages
--- PASS: TestSaveRepoWithMultipleImages (1.20s)
PASS
DONE 1 tests in 2.668s
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Adds a test ensuring that additional groups set with `--group-add`
are kept on exec when the container had `--user` set on run.
Regression test for https://github.com/moby/moby/issues/46712
Signed-off-by: Laura Brehm <laurabrehm@hey.com>
Having a sandbox/container-wide MacAddress field makes little sense
since a container can be connected to multiple networks at the same
time. This field is an artefact of old times where a container could be
connected to a single network only.
As we now have a way to specify per-endpoint mac address, this field is
now deprecated.
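With the per-endpoint field, a client can set the MAC address for each
network it connects to (a sketch using the API types):

networkingConfig := &network.NetworkingConfig{
    EndpointsConfig: map[string]*network.EndpointSettings{
        "mynet": {MacAddress: "02:42:ac:11:00:02"},
    },
}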
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Prior to this commit, only container.Config had a MacAddress field and
it's used only for the first network the container connects to. It's a
relic of old times where custom networks were not supported.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
Commit def549c8f6 passed through the context
to the daemon.ContainerStart function. As a result, restarting containers
no longer is an atomic operation, because a context cancellation could
interrupt the restart (between "stopping" and "(re)starting"), resulting
in the container being stopped, but not restarted.
Restarting a container, or more precisely, making a successful request on
the `/containers/{id}/restart` endpoint, should be an atomic operation.
This patch uses a context.WithoutCancel for restart requests.
It's worth noting that daemon.containerStop already uses context.WithoutCancel,
so in that function, we'll be wrapping the context twice, but this should
likely not cause issues (just redundant for this code-path).
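A minimal sketch of the change in the restart code-path (the surrounding
names are illustrative):

// detach the request context, so that a client disconnect can no
// longer interrupt the restart between "stopping" and "(re)starting".
ctx = context.WithoutCancel(ctx)
if err := daemon.ContainerRestart(ctx, name, options); err != nil {
    return err
}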
Before this patch, starting a container that bind-mounts the docker socket,
then restarting itself from within the container would cancel the restart
operation. The container would be stopped, but not started after that:
docker run -dit --name myself -v /var/run/docker.sock:/var/run/docker.sock docker:cli sh
docker exec myself sh -c 'docker restart myself'
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a2a741c65ff docker:cli "docker-entrypoint.s…" 26 seconds ago Exited (128) 7 seconds ago myself
With this patch: the stop still cancels the exec, but does not cancel the
restart operation, and the container is started again:
docker run -dit --name myself -v /var/run/docker.sock:/var/run/docker.sock docker:cli sh
docker exec myself sh -c 'docker restart myself'
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4393a01f7c75 docker:cli "docker-entrypoint.s…" About a minute ago Up 4 seconds myself
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Check for accurate values where the calculation may include content
sizes unknown to the usage test. Avoid asserting with deep equals when
only the expected value range is known to the test.
Signed-off-by: Derek McGowan <derek@mcg.dev>
We support importing images for other platforms when
using the containerd image store, so we shouldn't validate
the image OS on import.
This commit also splits the test into two, so that we can keep running
the "success" import-with-custom-platform test cases with c8d, while
skipping the "error/rejection" test cases.
Signed-off-by: Laura Brehm <laurabrehm@hey.com>
This is a follow-up to 2216d3ca8d, which
implemented the StartInterval for health-checks, but did not add validation
for the minimum accepted interval;
> The time to wait between checks in nanoseconds during the start period.
> It should be 0 or at least 1000000 (1 ms). 0 means inherit.
This patch adds validation for the minimum accepted interval (1ms).
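A minimal sketch of such a validation (assuming the usual healthcheck
config fields; the exact error message may differ):

if healthConfig.StartInterval != 0 && healthConfig.StartInterval < time.Millisecond {
    return errors.Errorf("StartInterval in Healthcheck cannot be less than %s", time.Millisecond)
}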
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Implement a behavior from the graphdriver's export where `docker save
something` (untagged reference) would export all images matching the
specified repository.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
`docker build --squash` is an experimental feature which is not
implemented for containerd image store.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
The following changes were required:
* integration/build: progressui's signature changed in 6b8fbed01e
* builder-next: flightcontrol.Group has become a generic type in 8ffc03b8f0
* builder-next/executor: add github.com/moby/buildkit/executor/resources types, necessitated by 6e87e4b455
* builder-next: stub util/network/Namespace.Sample(), necessitated by 963f16179f
Co-authored-by: CrazyMax <crazy-max@users.noreply.github.com>
Co-authored-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
Diffing a container yielded some extra changes that come from the
files/directories that we mount inside the container (/etc/resolv.conf
for example). To avoid that we create an intermediate snapshot that has
these files, with this we can now diff the container fs with its parent
and only get the differences that were made inside the container.
Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
This test checks how the layer store works, so we don't need it when we
use containerd as the image store.
Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
The API endpoint `/containers/create` accepts several EndpointsConfig
since v1.22 but the daemon would error out in such a case. This check is
moved from the daemon to the API and is now applied only for API < 1.44,
effectively allowing the daemon to create containers connected to
several networks.
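A rough sketch of such an API-version gate (assuming the usual
httputils and versions helpers; variable names are illustrative):

version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.44") && len(networkingConfig.EndpointsConfig) > 1 {
    return errdefs.InvalidParameter(errors.New("container cannot be connected to more than one network at creation time"))
}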
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
Fixes #18864, #20648, #33561, #40901.
[This GH comment][1] makes clear network name uniqueness has never been
enforced due to the eventually consistent nature of Classic Swarm
datastores:
> there is no guaranteed way to check for duplicates across a cluster of
> docker hosts.
And this is further confirmed by other comments made by @mrjana in that
same issue, eg. [this one][2]:
> we want to adopt a schema which can pave the way in the future for a
> completely decentralized cluster of docker hosts (if scalability is
> needed).
This decentralized model is what Classic Swarm was trying to be. It's
been superseded since then by Docker Swarm, which has a centralized
control plane.
To circumvent this drawback, the `NetworkCreate` endpoint accepts a
`CheckDuplicate` flag. However, it's not perfectly reliable as it won't
catch concurrent requests.
Due to this design decision, API clients like Compose have to implement
workarounds to make sure names are really unique (eg.
docker/compose#9585). And the daemon itself has seen a string of issues
due to that decision, including some that aren't fixed to this day (for
instance moby/moby#40901):
> The problem is, that if you specify a network for a container using
> the ID, it will add that network to the container but it will then
> change it to reference the network by using the name.
To summarize, this "feature" is broken, has no practical use and is a
source of pain for Docker users and API consumers. So let's just remove
it for _all_ API versions.
[1]: https://github.com/moby/moby/issues/18864#issuecomment-167201414
[2]: https://github.com/moby/moby/issues/18864#issuecomment-167202589
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
container.Run() should be a synchronous operation in normal circumstances;
the container is created and started, so polling after that for the
container to be in the "running" state should not be needed.
This should also prevent issues when a container (for whatever reason)
exited immediately after starting; in that case we would continue
polling for it to be running (which likely would never happen).
Let's skip the polling; if the container is not in the expected state
(i.e., it exited), tests should fail as well.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The daemon would pass an EndpointCreateOption to set the interface MAC
address if the network name and the provided network mode were matching.
Obviously, if the network mode is a network ID, it won't work. To make
things worse, the network mode is never normalized if it's a partial ID.
To fix that: 1. the condition under which the container's mac-address is
applied is updated to also match the full ID; 2. the network mode is
normalized to a full ID when it's only a partial one.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
Reduce some of the boiler-plating, and by combining the tests, we skip
the testenv.Clean() in between each of the tests. Performance gain isn't
really measurable, but every bit should help :)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
container.Run should be a synchronous operation; the container should
be running after the request was made (or produce an error). Simplify
these tests, and remove the redundant polling.
These were added as part of 8f800c9415,
but no such polls were in place before the refactor, and there's no
mention of these during review of the PR, so I assume these were just
added either as a "precaution", or a result of "copy/paste" from another
test.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This test was failing frequently on Windows, where the test was waiting
for the container to exit before continuing;
=== FAIL: github.com/docker/docker/integration/container TestResizeWhenContainerNotStarted (18.69s)
resize_test.go:58: timeout hit after 10s: waiting for container to be one of (exited), currently running
It looks like this test is merely validating that a container in any non-
running state should produce an error, so there's no need to run a container
(waiting for it to stop), and just "creating" a container (which would be
in `created` state) should work for this purpose.
Looking at 8f800c9415, I see `createSimpleContainer`
and `runSimpleContainer` utilities were added, so I'm even wondering if the
original intent was to use `createSimpleContainer` for this test.
While updating, also check if we get the expected error-type, instead of
only checking for the error-message.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Integration tests will now configure clients to propagate traces as well
as create spans for all tests.
Some extra changes were needed (or desired for trace propagation) in the
test helpers to pass through tracing spans via context.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This uses otel standard environment variables to configure tracing in
the daemon.
It also adds support for propagating trace contexts in the client and
reading those from the API server.
See
https://opentelemetry.io/docs/specs/otel/configuration/sdk-environment-variables/
for details on otel environment variables.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Make sure that the content in the live-restored volume mounted in a new
container is the same as the content in the old container.
This checks that the volume's `_data` directory doesn't get unmounted
on startup.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Define consts for the Actions we use for events, instead of "ad-hoc" strings.
Having these consts makes it easier to find where specific events are triggered,
makes the events less error-prone, and allows documenting each Action (if needed).
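For example (a sketch; the actual const names live in api/types/events):

const (
    ActionCreate  Action = "create"
    ActionStart   Action = "start"
    ActionRestart Action = "restart"
    ActionDie     Action = "die"
)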
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This type was added in 247f4796d2, and
at the time was defined as an alias for string;
> api/types/events: add "Type" type for event-type enum
>
> Currently just an alias for string, but we can change it to be an
> actual type.
Now that all code uses the defined types, we should be able to make
this an actual type.
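In Go terms, this is the move from a type alias to a defined type
(a sketch):

// before: an alias; interchangeable with plain strings
type Type = string

// after: a defined type; conversions are now explicit
type Type string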
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
`Daemon.getPidContainer()` was wrapping the error-message with a message
("cannot join PID of a non running container") that did not reflect the
actual reason for the error; `Daemon.GetContainer()` could either return
an invalid parameter (invalid / empty identifier), or a "not found" error
if the specified container-ID could not be found.
In the latter case, we don't want to return a "not found" error through
the API, as this would indicate that the container we're _starting_ was
not found (which is not the case), so we need to convert the error into
an `errdefs.ErrInvalidParameter` (the container-ID specified for the PID
namespace is invalid if the container doesn't exist).
This logic is similar to what we do for IPC namespaces, which received
a similar fix in c3d7a0c603.
This patch updates the error-types, and moves them into the getIpcContainer
and getPidContainer container functions, both of which should return
an "invalid parameter" if the container was not found.
It's worth noting that, while `WithNamespaces()` may return an "invalid
parameter" error, the `start` endpoint itself should _not_; as outlined
in commit bf1fb97575, starting a container
that has an invalid configuration should be considered an internal server
error, and is not an invalid _request_. However, for uses other than
container "start", `WithNamespaces()` should return the correct error
to allow code to handle it accordingly.
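A minimal sketch of the conversion described above (using the moby
errdefs helpers; the exact wrapping is illustrative):

ctr, err := daemon.GetContainer(containerID)
if err != nil {
    if errdefs.IsNotFound(err) {
        // don't return "not found" through the API; the container
        // being started exists, it's the container referenced for the
        // PID namespace that's invalid.
        err = errdefs.InvalidParameter(errors.Wrapf(err, "invalid PID mode: %v", containerID))
    }
    return nil, err
}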
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This test is currently failing with containerd-integration, which should
be looked into, but let's start with preventing it from panicking, to make
the test-failures less noisy;
--- FAIL: TestDiskUsage/after_container.Run (0.26s)
panic: runtime error: index out of range [0] with length 0 [recovered]
panic: runtime error: index out of range [0] with length 0
goroutine 280 [running]:
testing.tRunner.func1.2({0xb07a00, 0x40002006a8})
/usr/local/go/src/testing/testing.go:1526 +0x1c8
testing.tRunner.func1()
/usr/local/go/src/testing/testing.go:1529 +0x364
panic({0xb07a00, 0x40002006a8})
/usr/local/go/src/runtime/panic.go:884 +0x1f4
github.com/docker/docker/integration/system.TestDiskUsage.func3(0x0?, {0x0, {0x14ea4a8, 0x0, 0x0}, {0x14ea4a8, 0x0, 0x0}, {0x14ea4a8, 0x0, ...}, ...})
/go/src/github.com/docker/docker/integration/system/disk_usage_test.go:82 +0x7e4
github.com/docker/docker/integration/system.TestDiskUsage.func4(0x4000235c80?)
/go/src/github.com/docker/docker/integration/system/disk_usage_test.go:118 +0x8c
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Also remove integration-cli: `DockerAPISuite.TestContainerAPIDeleteConflict`,
which was testing the same conditions as `TestRemoveContainerRunning` in
integration/container.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Also move the validation function to live with the type definition,
which allows it to be used outside of the daemon as well.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Check that operations that could potentially perform overlayfs mounts
don't cause undefined behaviors.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
This utility was only used for a single test, and it was very limited
in functionality as it only allowed for a certain error-string to be
matched.
Let's change it into a more generic function; a helper that allows a
container to be created from a `TestContainerConfig` (which can be
constructed using `NewTestConfig`) and that returns the response from
client.ContainerCreate(), so that any result from that can be tested,
leaving it up to the test to check the results.
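A rough sketch of the resulting helper shape (names are illustrative;
the real helper lives in the integration-test utilities):

func create(ctx context.Context, t *testing.T, apiClient client.APIClient, ops ...func(*TestContainerConfig)) (container.CreateResponse, error) {
    t.Helper()
    cfg := NewTestConfig(ops...)
    return apiClient.ContainerCreate(ctx, cfg.Config, cfg.HostConfig, cfg.NetworkingConfig, nil, cfg.Name)
}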
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Introduce a NewTestConfig utility, to allow using the available utilities
for constructing a config, and use them with the regular API client.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The `client` variable was colliding with the `client` import. In some cases
the confusing `cli` name (it's not the "cli") was used. Given that such names
can easily start spreading (through copy/paste, or "code by example"), let's
make a one-time pass through all of them in this package to use the same name.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The `client` variable was colliding with the `client` import in various
files. While it didn't conflict in all files, there was inconsistency
in the naming, sometimes using the confusing `cli` name (it's not the
"cli"), and such names can easily start spreading (through copy/paste,
or "code by example").
Let's make a one-time pass through all of them in this package to use
the same name.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This test was testing the client-side validation, so might as well
move it there, and validate that the client rejects the invalid value
before trying to make an API call.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
The MediaType was changed twice in;
- b3b7eb2723 ("application/vnd.docker.plugins.v1+json" -> "application/vnd.docker.plugins.v1.1+json")
- 54587d861d ("application/vnd.docker.plugins.v1.1+json" -> "application/vnd.docker.plugins.v1.2+json")
But the (integration) tests were still using the old version, so let's
use the VersionMimeType const that's defined, and use the updated version.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The original code in container.Exec was potentially leaking the copy
goroutine when the context was cancelled or timed out. The new
`demultiplexStreams()` function won't return until the goroutine has
finished its work, and it ensures that the hijacked connection is
closed.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
I noticed this was always being skipped because of race conditions
checking the logs.
This change adds a log scanner which will look through the logs line by
line rather than allocating a big buffer.
Additionally it adds a `poll.Check` which we can use to actually wait
for the desired log entry.
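A rough sketch of such a check (fetching a fresh log stream on each
poll; names and options are illustrative, and the raw stream is scanned
without demultiplexing for brevity):

func logsContain(ctx context.Context, apiClient client.APIClient, cID, want string) poll.Check {
    return func(t poll.LogT) poll.Result {
        logs, err := apiClient.ContainerLogs(ctx, cID, containertypes.LogsOptions{ShowStdout: true, ShowStderr: true})
        if err != nil {
            return poll.Error(err)
        }
        defer logs.Close()
        scanner := bufio.NewScanner(logs) // line by line; no big buffer
        for scanner.Scan() {
            if strings.Contains(scanner.Text(), want) {
                return poll.Success()
            }
        }
        return poll.Continue("log entry %q not found yet", want)
    }
}

Used as: poll.WaitOn(t, logsContain(ctx, apiClient, cID, "wanted entry"))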
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
- use assert.Check to continue the test even if a check fails
- assert the total number of images returned, not only their RepoTags
- use subtests
Signed-off-by: Djordje Lukic <djordje.lukic@docker.com>
"ro-non-recursive", "ro-force-recursive", and "rro" are
now removed from the legacy mount API.
CLI may still support them via the new mount API (if we want).
Follow-up to PR 45278
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
When resolving a reference that is both a Named and Digested, it could
be resolved to an image that has the same digest, but completely
different repository name.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
- Combine TestAttachWithTTY and TestAttachWithoutTTy to a single test using sub-tests
- Set up and tear-down the test-environment once
- Remove redundant client.ContainerRemove, as it's taken care of by testEnv.Clean()
- Run both tests in parallel
make TEST_FILTER=TestAttach DOCKER_GRAPHDRIVER=overlay2 TESTDEBUG=1 test-integration
Loaded image: busybox:latest
Loaded image: busybox:glibc
Loaded image: debian:bullseye-slim
Loaded image: hello-world:latest
Loaded image: arm32v7/hello-world:latest
INFO: Testing against a local daemon
=== RUN TestAttach
=== RUN TestAttach/without_TTY
=== PAUSE TestAttach/without_TTY
=== RUN TestAttach/with_TTY
=== PAUSE TestAttach/with_TTY
=== CONT TestAttach/without_TTY
=== CONT TestAttach/with_TTY
--- PASS: TestAttach (0.00s)
--- PASS: TestAttach/without_TTY (0.03s)
--- PASS: TestAttach/with_TTY (0.03s)
PASS
DONE 3 tests in 1.347s
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Calling the function returned from setupTest (which calls testEnv.Clean) in
a defer block inside a test that spawns parallel subtests caused the
cleanup function to be called before any of the subtests did anything.
Change the defer expressions to use `t.Cleanup` instead to call it only
after all subtests have also finished.
This only changes tests which have parallel subtests.
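A minimal sketch of the difference:

// Before: the returned cleanup runs when this function returns,
// which is *before* paused parallel subtests get to run:
//     defer setupTest(t)()

// After: cleanup runs only after the test and all its subtests finish:
t.Cleanup(setupTest(t))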
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
The daemon.lazyInitializeVolume() function only handles restoring Volumes
if a Driver is specified. The Container's MountPoints field may also
contain other kinds of mounts (e.g., bind-mounts). Those were ignored, and
don't return an error; 1d9c8619cd/daemon/volumes.go (L243-L252C2)
However, the prepareMountPoints() function assumed each MountPoint was a
volume, and logged an informational message about the volume being restored;
1d9c8619cd/daemon/mounts.go (L18-L25)
This would panic if the MountPoint was not a volume;
github.com/docker/docker/daemon.(*Daemon).prepareMountPoints(0xc00054b7b8?, 0xc0007c2500)
/root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/mounts.go:24 +0x1c0
github.com/docker/docker/daemon.(*Daemon).restore.func5(0xc0007c2500, 0x0?)
/root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/daemon.go:552 +0x271
created by github.com/docker/docker/daemon.(*Daemon).restore
/root/rpmbuild/BUILD/src/engine/.gopath/src/github.com/docker/docker/daemon/daemon.go:530 +0x8d8
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x564e9be4c7c0]
This issue was introduced in 647c2a6cdd
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This adds an additional interval to be used by healthchecks during the
start period.
Typically when a container is just starting you want to check if it is
ready more quickly than a typical healthcheck might run. Without this,
users have to balance between running healthchecks too frequently and
taking a very long time to mark a container as healthy for the first
time.
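For example (hypothetical values; `--health-start-interval` being the
CLI flag for this option):

docker run -d \
  --health-cmd='curl -f http://localhost/ || exit 1' \
  --health-interval=30s \
  --health-start-period=60s \
  --health-start-interval=2s \
  nginx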
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Multiple daemons starting/running concurrently can collide with each
other when editing iptables rules. Most integration tests which opt into
parallelism and start daemons work around this problem by starting the
daemon with the --iptables=false option. However, some of the tests
neglect to pass the option when starting or restarting the daemon,
resulting in those tests being flaky.
Audit the integration tests which call t.Parallel() and (*Daemon).Stop()
and add --iptables=false arguments where needed.
Signed-off-by: Cory Snider <csnider@mirantis.com>
When live-restoring a container the volume driver needs to be notified that
there is an active mount for the volume.
Before this change, the count is zero until the container stops, and
the uint64 overflows, pretty much making it so the volume can never be
removed until another daemon restart.
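In Go terms, the decrement wraps around (a minimal illustration):

var refCount uint64 // 0, because the live-restored mount was never counted
refCount--          // wraps to 18446744073709551615 (2^64 - 1)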
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This field was added in f0e5b3d7d8 to
account for older versions of the engine (Docker EE LTS versions), which
did not yet provide the OSType field in Docker info, and had to be manually
set using the TEST_OSTYPE env-var.
This patch removes the field in favor of the equivalent in DaemonInfo. It's
more verbose, but also less ambiguous what information we're using (i.e.,
the platform the daemon is running on, not the local platform).
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Before 4bafaa00aa, if the daemon was
killed while a container was running and the container shim is killed
before the daemon is restarted, such as if the host system is
hard-rebooted, the daemon would restore the container to the stopped
state and set the exit code to 255. The aforementioned commit introduced
a regression where the container's exit code would instead be set to 0.
Fix the regression so that the exit code is once again set to 255 on
restore.
Signed-off-by: Cory Snider <csnider@mirantis.com>
Commit 90de570cfa passed through the request
context to daemon.ContainerStop(). As a result, cancelling the context would
cancel the "graceful" stop of the container, and would proceed with forcefully
killing the container.
This patch partially reverts the changes from 90de570cfa
and detaches the context to prevent cancelling the context from cancelling the stop.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This makes the output of `docker save` fully OCI compliant.
When using the containerd image store, this code is not used. That
exporter will just use containerd's export method and should give us the
output we want for multi-arch images.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
`docker run -v /foo:/foo:ro` is now recursively read-only on kernel >= 5.12.
Automatically falls back to the legacy non-recursively read-only mount mode on kernel < 5.12.
Use `ro-non-recursive` to disable RRO.
Use `ro-force-recursive` or `rro` to explicitly enable RRO. (Fails on kernel < 5.12)
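For example:

docker run -v /foo:/foo:rro busybox               # force recursive read-only
docker run -v /foo:/foo:ro-non-recursive busybox  # legacy behavior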
Fix issue 44978
Fix docker/for-linux issue 788
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
These changes add basic CDI integration to the docker daemon.
A CDI driver is added to handle CDI device requests. This
is gated by an experimental feature flag and is only supported on Linux.
This change also adds a CDISpecDirs (cdi-spec-dirs) option to the config.
This allows the default values of `/etc/cdi`, `/var/run/cdi` to be overridden,
which is useful for testing.
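For example, a daemon.json for testing might look like this (a sketch,
assuming the experimental gate mentioned above):

{
  "experimental": true,
  "cdi-spec-dirs": ["/etc/cdi", "/var/run/cdi"]
}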
Signed-off-by: Evan Lezar <elezar@nvidia.com>
- use is.ErrorType
- replace uses of client.IsErrNotFound with errdefs.IsNotFound, as
the client no longer returns the old error-type.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Now that most uses of reexec have been replaced with non-reexec
solutions, most of the reexec.Init() calls peppered throughout the test
suites are unnecessary. Furthermore, most of the reexec.Init() calls in
test code neglect to check the return value to determine whether to
exit, which would result in the reexec'ed subprocesses proceeding to run
the tests, which would reexec another subprocess which would proceed to
run the tests, recursively. (That would explain why every reexec
callback used to unconditionally call os.Exit() instead of returning...)
Remove unneeded reexec.Init() calls from test and example code which no
longer needs it, and fix the reexec.Init() calls which are not inert to
exit after a reexec callback is invoked.
Signed-off-by: Cory Snider <csnider@mirantis.com>
This field is deprecated since 1261fe69a3,
and will now be omitted on API v1.44 and up for the `GET /images/json`,
`GET /images/{id}/json`, and `GET /system/df` endpoints.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Since cc19eba (backported to v23.0.4), the PreferredPool for docker0 is
set only when the user provides the bip config parameter or when the
default bridge already exists. That means, if a user provides the
fixed-cidr parameter on a fresh install or reboots their computer/server
without bip set, dockerd throws the following error when it starts:
> failed to start daemon: Error initializing network controller: Error
> creating default "bridge" network: failed to parse pool request for
> address space "LocalDefault" pool "" subpool "100.64.0.0/26": Invalid
> Address SubPool
See #45356.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
In versions of Docker before v1.10, this field was calculated from
the image itself and all of its parent images. Images are now stored
self-contained, and no longer use a parent-chain, making this field
an equivalent of the Size field.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The signatures of functions in containerd's errdefs package are very
similar to those in our own, and it's easy to accidentally use the wrong
package.
This patch uses a consistent alias for all occurrences of this import.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
TestDaemonRestartKillContainers test was always executing the last case
(`container created should not be restarted`) because the iterated
variables were not copied correctly.
Capture the iterated values by value, and rename c to tc.
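For reference, the classic (pre-Go 1.22) loop-variable pitfall and its
fix:

for _, tc := range testCases {
    tc := tc // copy; without this, all parallel subtests observe the
             // last element of testCases
    t.Run(tc.desc, func(t *testing.T) {
        t.Parallel()
        // ... use tc ...
    })
}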
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>