Since cc19eba (backported to v23.0.4), the PreferredPool for docker0 is
set only when the user provides the bip config parameter or when the
default bridge already exists. That means that, if a user provides the
fixed-cidr parameter on a fresh install or reboots their computer/server
without bip set, dockerd throws the following error when it starts:
> failed to start daemon: Error initializing network controller: Error
> creating default "bridge" network: failed to parse pool request for
> address space "LocalDefault" pool "" subpool "100.64.0.0/26": Invalid
> Address SubPool
See #45356.
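For reference, one way to reproduce this on a fresh install (no bip set,
using the same subpool as in the error above) is simply:
dockerd --fixed-cidr=100.64.0.0/26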
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
In versions of Docker before v1.10, this field was calculated from
the image itself and all of its parent images. Images are now stored
self-contained, and no longer use a parent-chain, making this field
equivalent to the Size field.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The signatures of functions in containerd's errdefs package are very
similar to those in our own, and it's easy to accidentally use the wrong
package.
This patch uses a consistent alias for all occurrences of this import.
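As an illustration (the alias shown here is just an example, not necessarily
the one chosen in the patch), giving containerd's package a distinct alias
makes it obvious at each call site which helpers are in play:

import (
	cerrdefs "github.com/containerd/containerd/errdefs"

	"github.com/docker/docker/errdefs"
)

// isNotFound reports whether err is a "not found" error from either package;
// without the alias, the two IsNotFound helpers are easy to confuse.
func isNotFound(err error) bool {
	return cerrdefs.IsNotFound(err) || errdefs.IsNotFound(err)
}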
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The TestDaemonRestartKillContainers test was always executing the last case
(`container created should not be restarted`) because the iterated
variables were not copied correctly.
Capture the iterated values by value and rename c to tc.
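A minimal sketch of the pattern (the test-case fields are illustrative, not
the actual test code):

for _, tc := range testCases {
	tc := tc // copy the loop variable so the closure sees this iteration's value
	t.Run(tc.name, func(t *testing.T) {
		t.Parallel()
		// ... use tc here; without the copy above, every parallel subtest
		// would observe the value from the last iteration.
	})
}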
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Volumes created from the image config were not being pruned because the
volume service did not consider them anonymous: the creation code passed
along a pre-generated name instead of letting the volume service generate
it.
This changes the code path to have the volume service generate the name
instead of doing it ahead of time.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Commit 3246db3755 added handling for removing
cluster volumes, but in some conditions, this resulted in errors not being
returned if the volume was in use. For example:
docker swarm init
docker volume create foo
docker create -v foo:/foo busybox top
docker volume rm foo
This patch changes the logic for ignoring "local" volume errors if swarm
is enabled (and cluster volumes are supported).
While working on this fix, I also discovered that Cluster.RemoveVolume()
did not handle the "force" option correctly: swarm itself handled it as
expected, but the cluster backend first performs a lookup of the volume
(to obtain its ID), which fails if the volume doesn't exist.
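A rough sketch of the corrected behaviour (the helper names are hypothetical,
not the actual Cluster.RemoveVolume code):

func removeClusterVolume(ctx context.Context, nameOrID string, force bool) error {
	v, err := lookupClusterVolume(ctx, nameOrID) // hypothetical: resolve the name to an ID first
	if err != nil {
		if force && errdefs.IsNotFound(err) {
			// With force, removing a non-existing volume is not an error.
			return nil
		}
		return err
	}
	return removeClusterVolumeByID(ctx, v.ID, force) // hypothetical
}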
Before this patch:
make TEST_FILTER=TestVolumesRemoveSwarmEnabled DOCKER_GRAPHDRIVER=vfs test-integration
...
Running /go/src/github.com/docker/docker/integration/volume (arm64.integration.volume) flags=-test.v -test.timeout=10m -test.run TestVolumesRemoveSwarmEnabled
...
=== RUN TestVolumesRemoveSwarmEnabled
=== PAUSE TestVolumesRemoveSwarmEnabled
=== CONT TestVolumesRemoveSwarmEnabled
=== RUN TestVolumesRemoveSwarmEnabled/volume_in_use
volume_test.go:122: assertion failed: error is nil, not errdefs.IsConflict
volume_test.go:123: assertion failed: expected an error, got nil
=== RUN TestVolumesRemoveSwarmEnabled/volume_not_in_use
=== RUN TestVolumesRemoveSwarmEnabled/non-existing_volume
=== RUN TestVolumesRemoveSwarmEnabled/non-existing_volume_force
volume_test.go:143: assertion failed: error is not nil: Error response from daemon: volume no_such_volume not found
--- FAIL: TestVolumesRemoveSwarmEnabled (1.57s)
--- FAIL: TestVolumesRemoveSwarmEnabled/volume_in_use (0.00s)
--- PASS: TestVolumesRemoveSwarmEnabled/volume_not_in_use (0.01s)
--- PASS: TestVolumesRemoveSwarmEnabled/non-existing_volume (0.00s)
--- FAIL: TestVolumesRemoveSwarmEnabled/non-existing_volume_force (0.00s)
FAIL
With this patch:
make TEST_FILTER=TestVolumesRemoveSwarmEnabled DOCKER_GRAPHDRIVER=vfs test-integration
...
Running /go/src/github.com/docker/docker/integration/volume (arm64.integration.volume) flags=-test.v -test.timeout=10m -test.run TestVolumesRemoveSwarmEnabled
...
=== RUN TestVolumesRemoveSwarmEnabled
=== PAUSE TestVolumesRemoveSwarmEnabled
=== CONT TestVolumesRemoveSwarmEnabled
=== RUN TestVolumesRemoveSwarmEnabled/volume_in_use
=== RUN TestVolumesRemoveSwarmEnabled/volume_not_in_use
=== RUN TestVolumesRemoveSwarmEnabled/non-existing_volume
=== RUN TestVolumesRemoveSwarmEnabled/non-existing_volume_force
--- PASS: TestVolumesRemoveSwarmEnabled (1.53s)
--- PASS: TestVolumesRemoveSwarmEnabled/volume_in_use (0.00s)
--- PASS: TestVolumesRemoveSwarmEnabled/volume_not_in_use (0.01s)
--- PASS: TestVolumesRemoveSwarmEnabled/non-existing_volume (0.00s)
--- PASS: TestVolumesRemoveSwarmEnabled/non-existing_volume_force (0.00s)
PASS
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Stopping a container on Windows can sometimes take longer than 10s, which
caused this test to be flaky.
Increase the timeout to 75s when running this test on Windows.
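Illustrative only (the real test uses its own helpers), but the idea boils
down to:

timeout := 10 * time.Second
if runtime.GOOS == "windows" {
	// Stopping a container on Windows can take well over 10 seconds.
	timeout = 75 * time.Second
}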
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
The latest version of containerd-shim-runhcs-v1 (v0.10.0-rc.4), pulled in
with the bump to containerd v1.7.0-rc.3, had several changes to make it
more robust, which had the side effect of increasing the worst-case
amount of time it takes for a container to exit.
Notably, the total timeout for shutting down a task increased from 30
seconds to 60! Increase the timeouts hardcoded in the daemon and
integration tests so that they don't give up too soon.
Signed-off-by: Cory Snider <csnider@mirantis.com>
The CreatedAt date was determined from the volume's `_data`
directory (`/var/lib/docker/volumes/<volumename>/_data`).
However, when initializing a volume, this directory is updated,
causing the date to change.
Instead of using the `_data` directory, use its parent directory,
which is not updated afterwards, and should reflect the time that
the volume was created.
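A minimal sketch of the idea (paths, names, and the exact timestamp used are
simplified here):

// volumeCreatedAt derives the creation time from the volume's directory
// rather than its _data subdirectory, because _data is modified when the
// volume is initialized.
func volumeCreatedAt(volumesRoot, name string) (time.Time, error) {
	dataPath := filepath.Join(volumesRoot, name, "_data")
	fi, err := os.Stat(filepath.Dir(dataPath)) // e.g. /var/lib/docker/volumes/<volumename>
	if err != nil {
		return time.Time{}, err
	}
	return fi.ModTime(), nil
}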
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
The List Images API endpoint has accepted multiple values for the
`since` and `before` filter predicates, but thanks to Go's randomizing
of map iteration order, it would pick an arbitrary image to compare
created timestamps against. In other words, the behaviour was undefined.
Change these filter predicates to have well-defined semantics: the
logical AND of all values for each of the respective predicates. As
timestamps are a totally-ordered relation, this is exactly equivalent to
applying the newest and oldest creation timestamps for the `since` and
`before` predicates, respectively.
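A sketch of the equivalence described above (not the actual implementation):
ANDing every `since` value is the same as comparing against the newest of
them, and ANDing every `before` value is the same as comparing against the
oldest.

// effectiveSince returns the newest of the given timestamps: an image
// created after all of them is exactly an image created after the newest.
func effectiveSince(since []time.Time) time.Time {
	var newest time.Time
	for _, ts := range since {
		if ts.After(newest) {
			newest = ts
		}
	}
	return newest
}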
Signed-off-by: Cory Snider <csnider@mirantis.com>
The migration code is in the 22.06 branch, and if we don't migrate,
the only side-effect is the daemon's ID being regenerated (as a
UUID).
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The existing archive implementation is not easy to reason about by
reading the source. Prepare to rewrite it by covering more edge cases in
tests. The new test cases were determined by black-box characterizing
the existing behaviour.
Signed-off-by: Cory Snider <csnider@mirantis.com>
- On Windows, we don't build and run a local test registry (we're not running
docker-in-docker), so we need to skip this test.
- On rootless, networking doesn't currently support this.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This is accomplished by storing the distribution source in the content
labels. If the distribution source is not found, we check with the
registry to see whether the digest exists in the repo; if it does,
the puller will use it.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This adds a new "all" filter argument to the volume prune endpoint.
When it is not set, or is set to a falsy value, only anonymous
volumes are considered for pruning.
When `all` is set to a truthy value, you get the old behavior.
This is an API change, but I think it's the behavior most people would
want.
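For example, with the Go API client (assuming cli and ctx are set up
elsewhere; the filter key is the new "all" argument):

// Default: only anonymous volumes are considered for pruning.
report, err := cli.VolumesPrune(ctx, filters.NewArgs())

// Opt back into the old behaviour and prune named volumes as well.
report, err = cli.VolumesPrune(ctx, filters.NewArgs(filters.Arg("all", "true")))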
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Before this change, restarting the daemon in live-restore mode with running
containers and a restart policy meant that volume refs were not restored.
This specifically happens when the container is still running *and*
there is a restart policy that would make sure the container was running
again on restart.
The bug allows volumes to be removed even though containers are
referencing them. 😱
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
integration/config/config_test.go:106:31: empty-lines: extra empty line at the end of a block (revive)
integration/secret/secret_test.go:106:31: empty-lines: extra empty line at the end of a block (revive)
integration/network/service_test.go:58:50: empty-lines: extra empty line at the end of a block (revive)
integration/network/service_test.go:401:58: empty-lines: extra empty line at the end of a block (revive)
integration/system/event_test.go:30:38: empty-lines: extra empty line at the end of a block (revive)
integration/plugin/logging/read_test.go:19:41: empty-lines: extra empty line at the end of a block (revive)
integration/service/list_test.go:30:48: empty-lines: extra empty line at the end of a block (revive)
integration/service/create_test.go:400:46: empty-lines: extra empty line at the start of a block (revive)
integration/container/logs_test.go:156:42: empty-lines: extra empty line at the end of a block (revive)
integration/container/daemon_linux_test.go:135:44: empty-lines: extra empty line at the end of a block (revive)
integration/container/restart_test.go:160:62: empty-lines: extra empty line at the end of a block (revive)
integration/container/wait_test.go:181:47: empty-lines: extra empty line at the end of a block (revive)
integration/container/restart_test.go:116:30: empty-lines: extra empty line at the end of a block (revive)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The TODO comment was about allowing graphdriver plugins to
provide their own ContainerFS implementations. The ContainerFS interface
has been removed from Moby, so there is no longer anything which needs
to be figured out.
Signed-off-by: Cory Snider <csnider@mirantis.com>
This updates test-code only: it sets ReadHeaderTimeout for some of the
servers, and suppresses the linter error for the others.
contrib/httpserver/server.go:11:12: G114: Use of net/http serve function that has no support for setting timeouts (gosec)
log.Panic(http.ListenAndServe(":80", nil))
^
integration/plugin/logging/cmd/close_on_start/main.go:42:12: G112: Potential Slowloris Attack because ReadHeaderTimeout is not configured in the http.Server (gosec)
server := http.Server{
Addr: l.Addr().String(),
Handler: mux,
}
integration/plugin/logging/cmd/discard/main.go:17:12: G112: Potential Slowloris Attack because ReadHeaderTimeout is not configured in the http.Server (gosec)
server := http.Server{
Addr: l.Addr().String(),
Handler: mux,
}
integration/plugin/logging/cmd/dummy/main.go:14:12: G112: Potential Slowloris Attack because ReadHeaderTimeout is not configured in the http.Server (gosec)
server := http.Server{
Addr: l.Addr().String(),
Handler: http.NewServeMux(),
}
integration/plugin/volumes/cmd/dummy/main.go:14:12: G112: Potential Slowloris Attack because ReadHeaderTimeout is not configured in the http.Server (gosec)
server := http.Server{
Addr: l.Addr().String(),
Handler: http.NewServeMux(),
}
testutil/fixtures/plugin/basic/basic.go:25:12: G112: Potential Slowloris Attack because ReadHeaderTimeout is not configured in the http.Server (gosec)
server := http.Server{
Addr: l.Addr().String(),
Handler: http.NewServeMux(),
}
volume/testutils/testutils.go:170:5: G114: Use of net/http serve function that has no support for setting timeouts (gosec)
go http.Serve(l, mux)
^
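The shape of the fix for the G112 cases is to configure the timeout on the
server (the value below is illustrative):

server := http.Server{
	Addr:              l.Addr().String(),
	Handler:           mux,
	ReadHeaderTimeout: 2 * time.Second, // satisfies gosec G112 (Slowloris)
}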
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>