Modifying the builtin Windows runtime to send the exited event
immediately upon the container's init process exiting, without first
waiting for the Compute System to shut down, perturbed the timings
enough to make TestWaitConditions flaky on that platform. Make
TestWaitConditions timing-independent by having the container wait
for input on STDIN before exiting.
Signed-off-by: Cory Snider <csnider@mirantis.com>
Add an integration test to verify that health checks are killed on
timeout and that the output is captured.
Co-authored-by: Nicolas De Loof <nicolas.deloof@gmail.com>
Signed-off-by: Cory Snider <csnider@mirantis.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Terminating the exec process when the context is canceled has been
broken since Docker v17.11, so nobody has been able to depend upon that
behaviour in five years of releases. We are thus free from backwards-
compatibility constraints.
Co-authored-by: Nicolas De Loof <nicolas.deloof@gmail.com>
Co-authored-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
Signed-off-by: Cory Snider <csnider@mirantis.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Since runtimes can now just be containerd shims, we need to check if the
reference is possibly a containerd shim.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Contrary to popular belief, the OCI Runtime specification does not
specify the command-line API for runtimes. Looking at containerd's
architecture from the lens of the OCI Runtime spec, the _shim_ is the
OCI Runtime and runC is "just" an implementation detail of the
io.containerd.runc.v2 runtime. When one configures a non-default runtime
in Docker, what they're really doing is instructing Docker to create
containers using the io.containerd.runc.v2 runtime with a configuration
option telling the runtime that the runC binary is at some non-default
path. Consequently, only OCI runtimes which are compatible with the
io.containerd.runc.v2 shim, such as crun, can be used in this manner.
Other OCI runtimes, including kata-containers v2, come with their own
containerd shim and are not compatible with io.containerd.runc.v2.
As Docker has not historically provided a way to select a non-default
runtime which requires its own shim, runtimes such as kata-containers v2
could not be used with Docker.
Allow other containerd shims to be used with Docker; no daemon
configuration required. If the daemon is instructed to create a
container with a runtime name which does not match any of the configured
or stock runtimes, it passes the name along to containerd verbatim. A
user can start a container with the kata-containers runtime, for
example, simply by calling
docker run --runtime io.containerd.kata.v2
Runtime names which containerd would interpret as a path to an arbitrary
binary are disallowed. While handy for development and testing, it is not
strictly necessary and would allow anyone with Engine API access to
trivially execute any binary on the host as root, so we have decided it
would be safest for our users if it was not allowed.
It is not yet possible to set an alternative containerd shim as the
default runtime; it can only be configured per-container.
Signed-off-by: Cory Snider <csnider@mirantis.com>
Older versions of Go don't format comments, so this is committed as
a separate commit, so that we can already make these changes before
we upgrade to Go 1.19.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
WARN [runner] The linter 'golint' is deprecated (since v1.41.0) due to: The repository of the linter has been archived by the owner. Replaced by revive.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Commit 737e8c6ab8 added validation for the wait
condition parameter; however, the default option ("not-running") was not part
of the list of valid options, resulting in a regression if the default value
was explicitly passed;
docker scan --accept-license --version
Error response from daemon: invalid condition: "not-running"
This patch adds the missing option, and adds a test to verify.
With this patch;
make BIND_DIR=. DOCKER_GRAPHDRIVER=vfs TEST_FILTER=TestWaitConditions test-integration
...
--- PASS: TestWaitConditions (0.04s)
--- PASS: TestWaitConditions/removed (1.79s)
--- PASS: TestWaitConditions/default (1.91s)
--- PASS: TestWaitConditions/next-exit (1.97s)
--- PASS: TestWaitConditions/not-running (1.99s)
PASS
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Clients now have the ability to set the console size of the executed
process immediately at creation time. This makes a difference, for example,
when executing commands that output some kind of text user interface
which is bounded by the console dimensions.
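As a rough sketch of how a client could use this (assuming the Go client's types.ExecConfig exposes the new ConsoleSize field in API v1.42+; the field name is taken from this change):
```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	// Create the exec with the console size set up front, so a TUI program
	// sees the right dimensions from its very first write.
	resp, err := cli.ContainerExecCreate(context.Background(), "mycontainer", types.ExecConfig{
		Cmd:          []string{"top"},
		Tty:          true,
		AttachStdout: true,
		ConsoleSize:  &[2]uint{40, 120}, // height, width (assumed field; API v1.42+)
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("exec ID:", resp.ID)
}
```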
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
1. Add integration tests for the ContainerLogs API call
Each test handles a distinct case of ContainerLogs output:
- Muxed stream, when container is started without tty
- Single stream, when container is started with tty
2. Add unit test for LogReader suite that tests concurrent logging
It checks that there are no race conditions when logging concurrently
from multiple goroutines.
Co-authored-by: Cory Snider <csnider@mirantis.com>
Signed-off-by: Cory Snider <csnider@mirantis.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Starting with the 22.06 release, buildx is the default client for
docker build, which uses BuildKit as builder.
This patch changes the default builder version as advertised by
the daemon to "2" (BuildKit), so that pre-22.06 CLIs with BuildKit
support (but no buildx installed) also default to using BuildKit
when interacting with a 22.06 (or up) daemon.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
I noticed I made a mistake in the first ping ("before swarm init"), which
was not specifying the daemon's socket path, and because of that was testing
against the main integration daemon (not the locally spun-up daemon).
While fixing that, I wondered why the test didn't actually use the client
for the requests (to also verify the client converted the response), so
I rewrote the test to use `client.Ping()` and to verify the ping response
has the expected values set.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
These HostConfig properties were not validated until the OCI spec for the container
was created, which meant that `docker run` and `docker create` would accept
invalid values, and the invalid value would not be detected until `start` was
called, returning a 500 "internal server error", as well as errors from containerd
("cleanup: failed to delete container from containerd: no such container") in the
daemon logs.
As a result, a faulty container was created, and the container state remained
in the `created` state.
This patch:
- Updates `oci.WithNamespaces()` to return the correct `errdefs.InvalidParameter`
- Updates `verifyPlatformContainerSettings()` to validate these settings, so that
an error is returned when _creating_ the container.
Before this patch:
docker run -dit --ipc=shared --name foo busybox
2a00d74e9fbb7960c4718def8f6c74fa8ee754030eeb93ee26a516e27d4d029f
docker: Error response from daemon: Invalid IPC mode: shared.
docker ps -a --filter name=foo
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2a00d74e9fbb busybox "sh" About a minute ago Created foo
After this patch:
docker run -dit --ipc=shared --name foo busybox
docker: Error response from daemon: invalid IPC mode: shared.
docker ps -a --filter name=foo
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
An integration test was added to verify the new validation, which can be run with:
make BIND_DIR=. TEST_FILTER=TestCreateInvalidHostConfig DOCKER_GRAPHDRIVER=vfs test-integration
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
On Linux the daemon was not respecting the HostConfig.ConsoleSize
property and relied on the CLI initializing the tty size after the container
was created. This caused a delay between container creation and
the tty actually being resized.
This also includes a small change to the API description, because
HostConfig.ConsoleSize is no longer Windows-only.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
This change is in preparation for deprecating support for old manifests.
Currently the daemon's ID is based on the trust-key ID, which will be
removed once we fully deprecate support for old manifests (the trust
key is currently only used in tests).
This patch:
- looks if a trust-key is present; if so, it migrates the trust-key
ID to the new "engine-id" file within the daemon's root.
- if no trust-key is present (so in case it's a "fresh" install), we
generate a UUID instead and use that as ID.
The migration is to prevent engines from getting a new ID on upgrades;
while we don't provide any guarantees on the engine's ID, users may
expect the ID to be "stable" (not change) between upgrades.
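A rough sketch of the migration logic described above; the file names and the trust-key handling below are hypothetical stand-ins, not the actual implementation:
```go
package main

import (
	"os"
	"path/filepath"

	"github.com/google/uuid"
)

// trustKeyID is a stand-in for deriving the ID from an existing trust-key;
// it returns "" when no trust-key is present. The real logic differs.
func trustKeyID(root string) string {
	if _, err := os.Stat(filepath.Join(root, "key.json")); err != nil {
		return ""
	}
	return "id-derived-from-trust-key"
}

// loadOrCreateID sketches the migration: keep the trust-key based ID when one
// exists, otherwise generate a fresh UUID, and persist the result in an
// "engine-id" file under the daemon's root.
func loadOrCreateID(root string) (string, error) {
	idFile := filepath.Join(root, "engine-id")
	if id, err := os.ReadFile(idFile); err == nil {
		return string(id), nil // already migrated or generated earlier
	}
	id := trustKeyID(root)
	if id == "" {
		id = uuid.New().String() // fresh install: random UUID
	}
	return id, os.WriteFile(idFile, []byte(id), 0o600)
}

func main() {
	id, err := loadOrCreateID("/var/lib/docker")
	if err != nil {
		panic(err)
	}
	println(id)
}
```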
A test has been added, which can be run with;
make DOCKER_GRAPHDRIVER=vfs TEST_FILTER='TestConfigDaemonID' test-integration
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This was added in 93c3e6c91e, at which time only
some basic handling of non-successful status codes was present;
93c3e6c91e/api/client/utils.go (L112-L121)
Given that non-successful status codes have already been handled since
38e6d474af, and a 204 ("no content") status should not be an error,
this special case should no longer be needed.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Implement logic similar to what is used in httputils.ReadJSON(). Before
this patch, endpoints using the ContainerDecoder would incorrectly
return a 500 (internal server error) status.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Implement a ReadJSON() utility to help reduce some code-duplication,
and to make sure we handle JSON requests consistently (e.g. always
check for the content-type).
Differences compared to current handling:
- prevent possible panic if request.Body is nil ("should never happen")
- always require Content-Type to be "application/json"
- be stricter about additional content after JSON (previously ignored)
- but, allow the body to be empty (an empty body is not invalid);
update TestContainerInvalidJSON accordingly, which was testing the
wrong expectation.
- close body after reading (some code did this)
We should consider adding a "max body size" to this function, similar to
7b9275c0da/api/server/middleware/debug.go (L27-L40)
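A minimal sketch of the behaviour listed above (illustrative only, not the actual httputils.ReadJSON implementation):
```go
package jsonsketch

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"mime"
	"net/http"
)

// readJSONSketch illustrates the behaviour described in the list above.
func readJSONSketch(r *http.Request, out interface{}) error {
	// A nil body "should never happen", but don't panic if it does.
	if r.Body == nil {
		return nil
	}
	defer r.Body.Close()

	// Always require an "application/json" Content-Type (when one is set).
	if ct := r.Header.Get("Content-Type"); ct != "" {
		mt, _, err := mime.ParseMediaType(ct)
		if err != nil || mt != "application/json" {
			return fmt.Errorf("unsupported Content-Type header (%s): must be 'application/json'", ct)
		}
	}

	dec := json.NewDecoder(r.Body)
	if err := dec.Decode(out); err != nil {
		if errors.Is(err, io.EOF) {
			return nil // an empty body is not invalid
		}
		return fmt.Errorf("invalid JSON: %w", err)
	}
	// Be strict about additional content after the JSON document.
	if dec.More() {
		return errors.New("unexpected content after JSON")
	}
	return nil
}
```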
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This is a follow-up to 427c7cc5f8, which added
proxy-configuration options ("http-proxy", "https-proxy", "no-proxy") to the
dockerd cli and in `daemon.json`.
While working on documentation changes for this feature, I realised that those
options won't be "next" to each other when formatting the daemon.json JSON, for
example using `jq` (which sorts the fields alphabetically). As it's possible that
additional proxy configuration options are added in the future, I considered that
grouping these options in a struct within the JSON may help when setting these
options, as well as when discovering related options.
This patch introduces a "proxies" field in the JSON, which includes the
"http-proxy", "https-proxy", "no-proxy" options.
Conflict detection continues to work as before; with this patch applied:
mkdir -p /etc/docker/
echo '{"proxies":{"http-proxy":"http-config", "https-proxy":"https-config", "no-proxy": "no-proxy-config"}}' > /etc/docker/daemon.json
dockerd --http-proxy=http-flag --https-proxy=https-flag --no-proxy=no-proxy-flag --validate
unable to configure the Docker daemon with file /etc/docker/daemon.json:
the following directives are specified both as a flag and in the configuration file:
http-proxy: (from flag: http-flag, from file: http-config),
https-proxy: (from flag: https-flag, from file: https-config),
no-proxy: (from flag: no-proxy-flag, from file: no-proxy-config)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Arbitrary here does not include ''; it is best to catch that one early, as it's
almost certainly a mistake (possibly an attempt to pass a POSIX path
through this API).
Signed-off-by: Paul "TBBle" Hampson <Paul.Hampson@Pobox.com>
Since this function is about to get more complicated, and change
behaviour, this establishes tests for the existing implementation.
Signed-off-by: Paul "TBBle" Hampson <Paul.Hampson@Pobox.com>
This adds an additional "Swarm" header to the _ping endpoint response,
which allows a client to detect if Swarm is enabled on the daemon, without
having to call additional endpoints.
This change is not versioned in the API, and will be returned regardless
of the API version that is used. Clients should fall back to using other
endpoints to get this information if the header is not present.
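A minimal sketch of how a client could read the new header (assuming the daemon is reachable over TCP on localhost:2375; the example header values are illustrative):
```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Adjust the address (or use a unix-socket transport) for your setup.
	resp, err := http.Get("http://localhost:2375/_ping")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The new "Swarm" header describes the node's swarm state; an older daemon
	// that doesn't set the header yields an empty string, in which case the
	// client should fall back to other endpoints.
	if swarmState := resp.Header.Get("Swarm"); swarmState != "" {
		fmt.Println("swarm state:", swarmState) // e.g. "inactive" or "active/manager"
	} else {
		fmt.Println("header not present; query /info or /swarm instead")
	}
}
```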
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The wrapResponseError() utility converted some specific errors, but in
doing so, could hide the actual error message returned by the daemon.
In addition, starting with 38e6d474af,
HTTP status codes were already mapped to their corresponding errdefs
types on the client-side, making this conversion redundant.
This patch removes the wrapResponseError() utility; it's worth noting
that some error messages will change slightly (as they now return the
error as returned by the daemon), but they may contain more details than
before, and in some cases this prevents hiding the actual error.
Before this change:
docker container rm nosuchcontainer
Error: No such container: nosuchcontainer
docker container cp mycontainer:/no/such/path .
Error: No such container:path: mycontainer:/no/such/path
docker container cp ./Dockerfile mycontainer:/no/such/path
Error: No such container:path: mycontainer:/no/such
docker image rm nosuchimage
Error: No such image: nosuchimage
docker network rm nosuchnetwork
Error: No such network: nosuchnetwork
docker volume rm nosuchvolume
Error: No such volume: nosuchvolume
docker plugin rm nosuchplugin
Error: No such plugin: nosuchplugin
docker checkpoint rm nosuchcontainer nosuchcheckpoint
Error response from daemon: No such container: nosuchcontainer
docker checkpoint rm mycontainer nosuchcheckpoint
Error response from daemon: checkpoint nosuchcheckpoint does not exist for container mycontainer
docker service rm nosuchservice
Error: No such service: nosuchservice
docker node rm nosuchnode
Error: No such node: nosuchnode
docker config rm nosuschconfig
Error: No such config: nosuschconfig
docker secret rm nosuchsecret
Error: No such secret: nosuchsecret
After this change:
docker container rm nosuchcontainer
Error response from daemon: No such container: nosuchcontainer
docker container cp mycontainer:/no/such/path .
Error response from daemon: Could not find the file /no/such/path in container mycontainer
docker container cp ./Dockerfile mycontainer:/no/such/path
Error response from daemon: Could not find the file /no/such in container mycontainer
docker image rm nosuchimage
Error response from daemon: No such image: nosuchimage:latest
docker network rm nosuchnetwork
Error response from daemon: network nosuchnetwork not found
docker volume rm nosuchvolume
Error response from daemon: get nosuchvolume: no such volume
docker plugin rm nosuchplugin
Error response from daemon: plugin "nosuchplugin" not found
docker checkpoint rm nosuchcontainer nosuchcheckpoint
Error response from daemon: No such container: nosuchcontainer
docker checkpoint rm mycontainer nosuchcheckpoint
Error response from daemon: checkpoint nosuchcheckpoint does not exist for container mycontainer
docker service rm nosuchservice
Error response from daemon: service nosuchservice not found
docker node rm nosuchnode
Error response from daemon: node nosuchnode not found
docker config rm nosuchconfig
Error response from daemon: config nosuchconfig not found
docker secret rm nosuchsecret
Error response from daemon: secret nosuchsecret not found
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Finish the refactor which was partially completed with commit
34536c498d, passing around IdentityMapping structs instead of pairs of
[]IDMap slices.
Existing code which uses []IDMap relies on zero-valued fields to be
valid, empty mappings. So in order to successfully finish the
refactoring without introducing bugs, their replacement therefore also
needs to have a useful zero value which represents an empty mapping.
Change IdentityMapping to be a pass-by-value type so that there are no
nil pointers to worry about.
The functionality provided by the deprecated NewIDMappingsFromMaps
function is required by unit tests to construct arbitrary
IdentityMapping values. And the daemon will always need to access the
mappings to pass them to the Linux kernel. Accommodate these use cases
by exporting the struct fields instead. BuildKit currently depends on
the UIDs and GIDs methods so we cannot get rid of them yet.
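A minimal sketch of the resulting shape (field names assumed from the description above, not copied from the actual package):
```go
package idsketch

// IDMap and IdentityMapping sketch a pass-by-value type whose zero value is
// a valid, empty mapping, as described above.
type IDMap struct {
	ContainerID int
	HostID      int
	Size        int
}

type IdentityMapping struct {
	UIDMaps []IDMap
	GIDMaps []IDMap
}

// Empty reports whether no remapping is configured; it is true for the
// zero value, which is what the existing []IDMap-based code relied on.
func (i IdentityMapping) Empty() bool {
	return len(i.UIDMaps) == 0 && len(i.GIDMaps) == 0
}
```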
Signed-off-by: Cory Snider <csnider@mirantis.com>
This is more in line with other consts that are used for defaults, and makes it
slightly easier to consume than DefaultV2Registry, e.g. see:
https://github.com/oras-project/oras-go/blob/v1.1.0/pkg/auth/docker/resolver.go#L81-L84
Note that both the "index.docker.io" and "registry-1.docker.io" domains
are here for historic reasons and backward-compatibility. These domains
are still supported by Docker Hub (and will continue to be supported), but
there are new domains already in use, and plans to consolidate all legacy
domains to new "canonical" domains. Once those domains are decided on, we
should update these consts (but making sure to preserve compatibility with
existing installs, clients, and user configuration).
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This was added in commits fc21bf280b and
0380fbff37 in support of LCOW, but is
now always set to runtime.GOOS.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Commit 0380fbff37 added the ability to pass a
--platform flag on `docker import` when importing an archive. The intent
of that commit was to allow importing a Linux rootfs on a Windows daemon
(as part of the experimental LCOW feature).
A later commit (337ba71fc1) changed some
of this code to take both OS and Architecture into account (for `docker build`
and `docker pull`), but did not yet update `docker image import`.
This patch updates the import endpoint to allow passing both OS and
Architecture. Note that currently only matching OSes are accepted,
and an error will be produced when (e.g.) specifying `linux` on Windows
and vice-versa.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This test is failing frequently once nodes have less disk space
available. Skipping the test for now, but we can continue looking
for a good solution.
Tracked through https://github.com/moby/moby/issues/42743
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This should help with Jenkins failing to clean up the Workspace:
- make sure "cleanup" is also called in the defer for all daemons; keeping
the daemon's storage around prevented Jenkins from cleaning up.
- close client connections and some readers (just to be sure)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The daemon can print the proxy configuration as part of error-messages,
and when reloading the daemon configuration (SIGHUP). Make sure that
the configuration is sanitized before printing.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The proxy configuration works, but it looks like we're unable to connect to the
test proxy server as part of our test;
level=debug msg="Trying to pull example.org:5000/some/image from https://example.org:5000 v2"
level=warning msg="Error getting v2 registry: Get \"https://example.org:5000/v2/\": proxyconnect tcp: dial tcp 127.0.0.1:45999: connect: connection refused"
level=info msg="Attempting next endpoint for pull after error: Get \"https://example.org:5000/v2/\": proxyconnect tcp: dial tcp 127.0.0.1:45999: connect: connection refused"
level=error msg="Handler for POST /v1.42/images/create returned error: Get \"https://example.org:5000/v2/\": proxyconnect tcp: dial tcp 127.0.0.1:45999: connect: connection refused"
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This allows configuring the daemon's proxy server through the daemon.json
configuration file or command-line flags, in addition to the existing option
(through environment variables).
Configuring environment variables on Windows to configure a service is more
complicated than on Linux, and adding alternatives for this to the daemon
configuration makes the configuration more transparent and easier to use.
The configuration as set through command-line flags or through the daemon.json
configuration file takes precedence over env-vars in the daemon's environment,
which allows the daemon to use a different proxy. If both command-line flags
and a daemon.json configuration option are set, an error is produced when starting
the daemon.
Note that this configuration is not "live reloadable" due to Golang's use of
`sync.Once()` for proxy configuration, which means that changing the proxy
configuration requires a restart of the daemon (reload / SIGHUP will not update
the configuration).
With this patch:
cat /etc/docker/daemon.json
{
  "http-proxy": "http://proxytest.example.com:80",
  "https-proxy": "https://proxytest.example.com:443"
}
docker pull busybox
Using default tag: latest
Error response from daemon: Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp: lookup proxytest.example.com on 127.0.0.11:53: no such host
docker build .
Sending build context to Docker daemon 89.28MB
Step 1/3 : FROM golang:1.16-alpine AS base
Get "https://registry-1.docker.io/v2/": proxyconnect tcp: dial tcp: lookup proxytest.example.com on 127.0.0.11:53: no such host
Integration tests were added to test the behavior:
- verify that the configuration through all means is used (env-vars,
command-line flags, daemon.json), and used in the expected order of
preference.
- verify that conflicting options produce an error.
Signed-off-by: Anca Iordache <anca.iordache@docker.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Looks like this test was broken from the start, and fully relied on a race
condition. (Test was added in 65ee7fff02)
The problem is in the service's command: `ls -l /etc/config || /bin/top`, which
will either:
- exit immediately if the secret is mounted correctly at `/etc/config` (which it should)
- keep running with `/bin/top` if the above failed
After the service is created, the test enters a race-condition, checking for 1
task to be running (which it occasionally is), after which it proceeds, and looks
up the list of tasks of the service, to get the log output of `ls -l /etc/config`.
This is another race: first of all, the original filter for that task lookup did
not filter by `running`, so it would pick "any" task of the service (either failed,
running, or "completed" (successfully exited) tasks).
In the meantime though, SwarmKit kept reconciling the service, and creating new
tasks, so even if the test was able to get the ID of the correct task, that task
may already have exited and been removed (task-limit is 5 by default). Only
if the test was "lucky" would it be able to get the logs; chances were
that it would be "too late", and the task already gone.
The problem can be easily reproduced when running the steps manually:
echo 'CONFIG' | docker config create myconfig -
docker service create --config source=myconfig,target=/etc/config,mode=0777 --name myservice busybox sh -c 'ls -l /etc/config || /bin/top'
The above creates the service, but it keeps retrying, because each task exits
immediately (followed by SwarmKit reconciling and starting a new task);
mjntpfkkyuuc1dpay4h00c4oo
overall progress: 0 out of 1 tasks
1/1: ready [======================================> ]
verify: Detected task failure
^COperation continuing in background.
Use `docker service ps mjntpfkkyuuc1dpay4h00c4oo` to check progress.
And checking the tasks for the service reveals that tasks exit cleanly (no error),
but _do exit_, so swarm just keeps reconciling, and spinning up new tasks;
docker service ps myservice --no-trunc
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
2wmcuv4vffnet8nybg3he4v9n myservice.1 busybox:latest@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57 docker-desktop Ready Ready less than a second ago
5p8b006uec125iq2892lxay64 \_ myservice.1 busybox:latest@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57 docker-desktop Shutdown Complete less than a second ago
k8lpsvlak4b3nil0zfkexw61p \_ myservice.1 busybox:latest@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57 docker-desktop Shutdown Complete 6 seconds ago
vsunl5pi7e2n9ol3p89kvj6pn \_ myservice.1 busybox:latest@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57 docker-desktop Shutdown Complete 11 seconds ago
orxl8b6kt2l6dfznzzd4lij4s \_ myservice.1 busybox:latest@sha256:f7ca5a32c10d51aeda3b4d01c61c6061f497893d7f6628b92f822f7117182a57 docker-desktop Shutdown Complete 17 seconds ago
This patch changes the service's command to `sleep`, so that a successful task
(after successfully performing `ls -l /etc/config`) continues to be running until
the service is deleted. With that change, the service should (usually) reconcile
immediately, which removes the race condition, and should also make it faster :)
This patch changes the tests to use client.ServiceLogs() instead of using the
service's tasklist to directly access container logs. This should also fix some
failures that happened if some tasks failed to start before reconciling, in which
case client.TaskList() (with the current filters), could return more tasks than
anticipated (as it also contained the exited tasks);
=== RUN TestCreateServiceSecretFileMode
create_test.go:291: assertion failed: 2 (int) != 1 (int)
--- FAIL: TestCreateServiceSecretFileMode (7.88s)
=== RUN TestCreateServiceConfigFileMode
create_test.go:355: assertion failed: 2 (int) != 1 (int)
--- FAIL: TestCreateServiceConfigFileMode (7.87s)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Before this change, if you assume that things work the way the test
expects them to (they do not, but let's assume for now), we aren't really
testing anything, because we are testing that a container is healthy
before and after we send a signal. This will give false positives even
if there is a bug in the underlying code. Sending a signal can take any
amount of time to cause a container to exit or to trigger healthchecks
to stop or whatever.
Now let's remove the assumption that things are working as expected,
because they are not.
In this case, `top` (which is what is running in the container) is
actually exiting when it receives `USR1`.
This totally invalidates the test.
We need more control and knowledge as to what is happening in the
container to properly test this.
This change introduces a custom script which traps `USR1` and flips the
health status each time the signal is received.
We then send the signal twice and check that the value has flipped, so
that we know the change has actually occurred.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Commit dae652e2e5 added support for non-privileged
containers to use ICMP_PROTO (used for `ping`). This option cannot be set for
containers that have user-namespaces enabled.
However, the detection looks to be incorrect; HostConfig.UsernsMode was added
in 6993e891d1 / ee2183881b,
and the property only has meaning if the daemon is running with user namespaces
enabled. In other situations, the property has no meaning.
As a result of the above, the sysctl would only be set for containers running
with UsernsMode=host on a daemon running with user-namespaces enabled.
This patch adds a check if the daemon has user-namespaces enabled (RemappedRoot
having a non-empty value), or if the daemon is running inside a user namespace
(e.g. rootless mode) to fix the detection.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The io/ioutil package has been deprecated in Go 1.16. This commit
replaces the existing io/ioutil functions with their new definitions in
io and os packages.
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Update the frozen images to also be based on Debian bullseye, using the "slim"
variant (which looks to have all we're currently using), and remove the
buildpack-deps frozen image.
The buildpack-deps image is quite large, and it looks like we only use it to
compile some C binaries, which should work fine on a regular Debian image;
docker build -t debian:bullseye-slim-gcc -<<EOF
FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y gcc libc6-dev --no-install-recommends
EOF
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
debian bullseye-slim-gcc 1851750242af About a minute ago 255MB
buildpack-deps bullseye fe8fece98de2 2 days ago 834MB
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Otherwise errors within this function will all appear to be at the line
number of the utility, instead of where it failed in the test:
=== RUN TestDaemonDefaultNetworkPools
service_test.go:23: assertion failed:
Command: ip link delete docker0
ExitCode: 127
Error: exec: "ip": executable file not found in $PATH
Stdout:
Stderr:
Failures:
ExitCode was 127 expected 0
Expected no error
=== RUN TestDaemonRestartWithExistingNetwork
service_test.go:23: assertion failed:
Command: ip link delete docker0
ExitCode: 127
Error: exec: "ip": executable file not found in $PATH
Stdout:
Stderr:
Failures:
ExitCode was 127 expected 0
Expected no error
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This allows containers to use the embedded default profile if a different
default is set (e.g. "unconfined") in the daemon configuration. Without this
option, users would have to copy the default profile to a file in order to
use the default.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Commit b237189e6c implemented an option to
set the default seccomp profile in the daemon configuration. When that PR
was reviewed, it was discussed to have the option accept the path to a custom
profile JSON file; https://github.com/moby/moby/pull/26276#issuecomment-253546966
However, in the implementation, the special "unconfined" value was not taken into
account. The "unconfined" value is meant to disable seccomp (more factually:
run with an empty profile).
While it's likely possible to achieve this by creating a file with an empty
(`{}`) profile, and passing the path to that file, it's inconsistent with the
`--security-opt seccomp=unconfined` option on `docker run` and `docker create`,
which is both confusing, and makes it harder to use (especially on Docker Desktop,
where there's no direct access to the VM's filesystem).
This patch adds the missing check for the special "unconfined" value.
Co-authored-by: Tianon Gravi <admwiggin@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Let clients choose object types to compute disk usage of.
Signed-off-by: Roman Volosatovs <roman.volosatovs@docker.com>
Co-authored-by: Sebastiaan van Stijn <github@gone.nl>
The `/containers/<name>/copy` endpoint was deprecated in 1.8 and errors
since 1.12. See https://github.com/moby/moby/pull/22149 for more info.
Signed-off-by: Roman Volosatovs <roman.volosatovs@docker.com>
Discovered a few instances where a loop variable is incorrectly used
within a test closure that is marked as parallel.
A few of these were actually loops over singleton slices, so the issue
might not have surfaced there (yet), but it is good to fix them as
well, as this is an incorrect pattern used across different tests.
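For reference, a minimal sketch of the correct pattern (re-declaring the loop variable before the parallel closure):
```go
package example

import "testing"

func TestParallelSubtests(t *testing.T) {
	testCases := []struct{ name, input string }{
		{name: "a", input: "1"},
		{name: "b", input: "2"},
	}
	for _, tc := range testCases {
		tc := tc // re-declare: each parallel closure gets its own copy
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel()
			// Without the re-declaration above, every subtest would observe
			// the value of tc from the final loop iteration.
			if tc.input == "" {
				t.Fatal("empty input")
			}
		})
	}
}
```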
Signed-off-by: Roman Volosatovs <roman.volosatovs@docker.com>
Fixes #36911
If the config file is invalid we'll exit anyhow, so this just prevents
the daemon from starting if the configuration is fine.
Mainly useful for making config changes and restarting the daemon
iff the config is valid.
Signed-off-by: Rich Horwood <rjhorwood@apple.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Anca Iordache <anca.iordache@docker.com>
pkg/devicemapper/devmapper.go:383:28: S1039: unnecessary use of fmt.Sprintf (gosimple)
if err := task.setMessage(fmt.Sprintf("@cancel_deferred_remove")); err != nil {
^
integration/plugin/graphdriver/external_test.go:321:18: S1039: unnecessary use of fmt.Sprintf (gosimple)
http.Error(w, fmt.Sprintf("missing id"), 409)
^
integration-cli/docker_api_stats_test.go:70:31: S1039: unnecessary use of fmt.Sprintf (gosimple)
_, body, err := request.Get(fmt.Sprintf("/info"))
^
integration-cli/docker_cli_build_test.go:4547:19: S1039: unnecessary use of fmt.Sprintf (gosimple)
"--build-arg", fmt.Sprintf("FOO1=fromcmd"),
^
integration-cli/docker_cli_build_test.go:4548:19: S1039: unnecessary use of fmt.Sprintf (gosimple)
"--build-arg", fmt.Sprintf("FOO2="),
^
integration-cli/docker_cli_build_test.go:4549:19: S1039: unnecessary use of fmt.Sprintf (gosimple)
"--build-arg", fmt.Sprintf("FOO3"), // set in env
^
integration-cli/docker_cli_build_test.go:4668:32: S1039: unnecessary use of fmt.Sprintf (gosimple)
cli.WithFlags("--build-arg", fmt.Sprintf("tag=latest")))
^
integration-cli/docker_cli_build_test.go:4690:32: S1039: unnecessary use of fmt.Sprintf (gosimple)
cli.WithFlags("--build-arg", fmt.Sprintf("baz=abc")))
^
pkg/jsonmessage/jsonmessage_test.go:255:4: S1039: unnecessary use of fmt.Sprintf (gosimple)
fmt.Sprintf("ID: status\n"),
^
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
=== RUN TestServicePlugin
plugin_test.go:42: assertion failed: error is not nil: error building basic plugin bin: no required module provides package github.com/docker/docker/testutil/fixtures/plugin/basic: go.mod file not found in current directory or any parent directory; see 'go help modules'
: exit status 1
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Updates the certificates to account for current versions of Go expecting
SANs to be used instead of the Common Name field:
FAIL: s390x.integration.plugin.authz TestAuthZPluginTLS (0.53s)
[2020-07-26T09:36:58.638Z] authz_plugin_test.go:132: assertion failed:
error is not nil: error during connect: Get "https://localhost:4271/v1.41/version":
x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The VolumesService did not have information on whether or not a volume
was _created_, or if a volume already existed in the driver and
the existing volume was used.
As a result, multiple "create" events could be generated for the
same volume. For example:
1. Run `docker events` in a shell to start listening for events
2. Create a volume:
docker volume create myvolume
3. Start a container that uses that volume:
docker run -dit -v myvolume:/foo busybox
4. Check the events that were generated:
2021-02-15T18:49:55.874621004+01:00 volume create myvolume (driver=local)
2021-02-15T18:50:11.442759052+01:00 volume create myvolume (driver=local)
2021-02-15T18:50:11.487104176+01:00 container create 45112157c8b1382626bf5e01ef18445a4c680f3846c5e32d01775dddee8ca6d1 (image=busybox, name=gracious_hypatia)
2021-02-15T18:50:11.519288102+01:00 network connect a19f6bb8d44ff84d478670fa4e34c5bf5305f42786294d3d90e790ac74b6d3e0 (container=45112157c8b1382626bf5e01ef18445a4c680f3846c5e32d01775dddee8ca6d1, name=bridge, type=bridge)
2021-02-15T18:50:11.526407799+01:00 volume mount myvolume (container=45112157c8b1382626bf5e01ef18445a4c680f3846c5e32d01775dddee8ca6d1, destination=/foo, driver=local, propagation=, read/write=true)
2021-02-15T18:50:11.864134043+01:00 container start 45112157c8b1382626bf5e01ef18445a4c680f3846c5e32d01775dddee8ca6d1 (image=busybox, name=gracious_hypatia)
5. Notice that a "volume create" event is created twice;
- once when `docker volume create` was run
- once when `docker run ...` was run
This patch moves the generation of (most) events to the volume _store_, and only
generates an event if the volume did not yet exist.
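A minimal sketch of the idea; all names below are hypothetical stand-ins, not the actual VolumesService code:
```go
package volumesketch

type Volume struct{ Name, Driver string }

type eventLogger interface {
	LogVolumeEvent(volumeID, action string, attributes map[string]string)
}

type Store struct {
	volumes map[string]*Volume
	events  eventLogger
}

// Create returns the existing volume when one is already present; a "create"
// event is only emitted when the volume did not yet exist.
func (s *Store) Create(name, driver string) *Volume {
	if v, ok := s.volumes[name]; ok {
		return v // pre-existing volume: no event emitted
	}
	if s.volumes == nil {
		s.volumes = map[string]*Volume{}
	}
	v := &Volume{Name: name, Driver: driver}
	s.volumes[name] = v
	s.events.LogVolumeEvent(name, "create", map[string]string{"driver": driver})
	return v
}
```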
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This was changed as part of a refactor to use containerd dist code. The
problem is that the OCI media types are not compatible with older versions of
Docker.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This test has been flaky for a long time, failing with:
--- FAIL: TestInspect (12.04s)
inspect_test.go:39: timeout hit after 10s: waiting for tasks to enter run state. task failed with error: task: non-zero exit (1)
While looking through the logs, I noticed tasks were started, entered the RUNNING
state, and then exited, to be started again.
state.transition="STARTING->RUNNING"
...
msg="fatal task error" error="task: non-zero exit (1)"
...
state.transition="RUNNING->FAILED"
Looking for possible reasons, first considering network issues (possibly we ran
out of IP addresses, or networking was not cleaned up), I then spotted the issue.
The service is started with;
Command: []string{"/bin/top"},
Args: []string{"-u", "root"},
The `-u root` is not an argument for the service, but for `/bin/top`. While the
Ubuntu/Debian/GNU version of `top` has a -u/-U option;
docker run --rm ubuntu:20.04 top -h 2>&1 | grep '\-u'
top -hv | -bcEHiOSs1 -d secs -n max -u|U user -p pid(s) -o field -w [cols]
The *busybox* version of top does not:
docker run --rm busybox top --help 2>&1 | grep '\-u'
So running `top -u root` would cause the task to fail;
docker run --rm busybox top -u root
top: invalid option -- u
...
echo $?
1
As a result, the service went into a crash-loop, and because `poll.WaitOn()`
was running with a short interval, in many cases it would _just_ find the RUNNING
state, perform the `service inspect`, and pass. In other cases it would not
be that lucky, continue polling until we reached the 10-second timeout,
and mark the test as failed.
Looking for the history of this option (was it previously using a different image?),
I found this was added in 6cd6d8646a, but probably
just missed during review.
Given that the option is only set to have "something" to inspect, I replaced
the `-u root` with `-d 5`, which makes top refresh at a 5-second interval.
Note that there is another test (`TestServiceListWithStatuses`) that uses the same
spec; however, that test is skipped based on the API version of the test daemon, and
(to be looked into) when performing that check, no API version is known, causing
the test to (always?) be skipped:
=== RUN TestServiceListWithStatuses
--- SKIP: TestServiceListWithStatuses (0.00s)
list_test.go:34: versions.LessThan(testEnv.DaemonInfo.ServerVersion, "1.41")
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Some tests were using domain names that were intended to be "fake", but are
actually registered domain names (such as domain.com, registry.com, mytest.com).
Even though we were not actually making connections to these domains, it's
better to use domains that are designated for testing/examples in RFC2606:
https://tools.ietf.org/html/rfc2606
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Rootlesskit doesn't currently handle IPv6 addresses, causing TestNetworkLoopbackNat
and TestNetworkNat to fail;
Error starting userland proxy:
error while calling PortManager.AddPort(): listen tcp: address :::8080: too many colons in address
This patch:
- Updates `getExternalAddress()` to pick the IPv4 address if both IPv6 and IPv4 are found
- Updates TestNetworkNat to use net.JoinHostPort(), so that square brackets are used for
IPv6 addresses (e.g. `[::]:8080`)
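For reference, a minimal example of what net.JoinHostPort does with IPv6 literals:
```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// net.JoinHostPort adds the square brackets that IPv6 literals require.
	fmt.Println(net.JoinHostPort("::", "8080"))        // "[::]:8080"
	fmt.Println(net.JoinHostPort("127.0.0.1", "8080")) // "127.0.0.1:8080"
}
```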
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This currently doesn't make a difference, because load.FrozenImagesLinux()
currently loads all frozen images, not just the specified one, but is done in case
that is fixed/implemented at some point.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This fixes a panic when an admin specifies a custom default runtime:
when a plugin is started, the shim config is nil.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Commit f2f5106c92 added this test to verify loading
of images that were built with user-namespaces enabled.
However, this test spins up a new daemon, not the daemon that's set up by
the test-suite's `TestMain()` (which loads the frozen images).
As a result, the `debian:bullseye` image was pulled from Docker Hub when running
the test;
Calling POST /v1.41/images/load?quiet=1
Applying tar in /go/src/github.com/docker/docker/bundles/test-integration/TestBuildUserNamespaceValidateCapabilitiesAreV2/d4d366b15997b/root/165536.165536/overlay2/3f7f9375197667acaf7bc810b34689c21f8fed9c52c6765c032497092ca023d6/diff" storage-driver=overlay
Applied tar sha256:845f0e5159140e9dbcad00c0326c2a506fbe375aa1c229c43f082867d283149c to 3f7f9375197667acaf7bc810b34689c21f8fed9c52c6765c032497092ca023d6, size: 5922359
Calling POST /v1.41/build?buildargs=null&cachefrom=null&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=&labels=null&memory=0&memswap=0&networkmode=&rm=0&shmsize=0&t=capabilities%3A1.0&target=&ulimits=null&version=
Trying to pull debian from https://registry-1.docker.io v2
Fetching manifest from remote" digest="sha256:f169dbadc9021fc0b08e371d50a772809286a167f62a8b6ae86e4745878d283d" error="<nil>" remote="docker.io/library/debian:bullseye
Pulling ref from V2 registry: debian:bullseye
...
This patch updates `TestBuildUserNamespaceValidateCapabilitiesAreV2` to load the
frozen image. `StartWithBusybox` is also changed to `Start`, because the test
is not using the busybox image, so there's no need to load it.
In a follow-up, we should probably add some utilities to make this easier to set up
(and to allow passing the list of frozen images that we want to load, without having
to "hard-code" the image name to load).
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Using `d.Kill()` with rootless mode causes the restarted daemon to not
be able to start containerd (it times out).
Originally this was SIGKILLing the daemon because we were hoping to not
have to manipulate on-disk state, but since we need to anyway, we can
shut it down normally.
I also tested this to ensure the test fails correctly without the fix
that the test was added to check for.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Adds a test case for the case where dockerd gets stuck on startup due to
a hanging `daemon.shutdownContainer`.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
When building images in a user-namespaced container, v3 capabilities are
stored including the root UID of the creator of the user-namespace.
This UID does not make sense outside the build environment however. If
the image is run in a non-user-namespaced runtime, or if a user-namespaced
runtime uses a different UID, the capabilities requested by the effective
bit will not be honoured by `execve(2)` due to this mismatch.
Instead, we convert v3 capabilities to v2, dropping the root UID on the
fly.
Signed-off-by: Eric Mountain <eric.mountain@datadoghq.com>
Capabilities are serialised in VFS_CAP_REVISION_3 when an image is
built in a user-namespaced daemon, instead of VFS_CAP_REVISION_2.
This adds a test for this, though it's currently wired to fail if
the capabilities are serialised in VFS_CAP_REVISION_2 instead in this
situation, since this is unexpected.
Signed-off-by: Eric Mountain <eric.mountain@datadoghq.com>
This adds a regression test for https://github.com/moby/moby/issues/24595
make DOCKER_GRAPHDRIVER=vfs TEST_FILTER='TestNetworkList' test-integration
INFO: Testing against a local daemon
=== RUN TestNetworkList
=== RUN TestNetworkList//networks
=== PAUSE TestNetworkList//networks
=== RUN TestNetworkList//networks/
=== PAUSE TestNetworkList//networks/
=== CONT TestNetworkList//networks
=== CONT TestNetworkList//networks/
--- PASS: TestNetworkList (0.05s)
--- PASS: TestNetworkList//networks/ (0.01s)
--- PASS: TestNetworkList//networks (0.01s)
PASS
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The free disk space on the Windows RS5 CI nodes appears to be just the
right size that the TestBuildWCOWSandboxSize test can generate 21GB of
layers, and then a 21GB sandbox inside a container, and then runs out of
space while committing the layer.
Helpfully, this failure is distinguishable in the logs from a failure
when the sandbox is too small, so we can detect that case and skip the test.
TODO: Revert this if-and-when the Windows-RS5 CI nodes have more free
space.
Signed-off-by: Paul "TBBle" Hampson <Paul.Hampson@Pobox.com>
This test validates that `RUN` and `COPY` both target a read-write
sandbox on Windows that is configured according to the daemon's
`storage-opts` setting.
Sadly, this is a slow test, so we need to bump the timeout to 60 minutes
from the default of 10 minutes.
Signed-off-by: Paul "TBBle" Hampson <Paul.Hampson@Pobox.com>
This fixes a regression based on expectations of the runtime:
```
docker pull arm32v7/alpine
docker run arm32v7/alpine
```
Without this change, the `docker run` will fail due to platform
matching on non-arm32v7 systems, even though the image could run
(assuming the system is set up correctly).
This also emits a warning to make sure that the user is aware that a
platform that does not match the default platform of the system is being
run, for the cases like:
```
docker pull --platform armhf busybox
docker run busybox
```
Not typically an issue if the requests are done together like that, but
if the image was already there and someone did `docker run` without an
explicit `--platform`, they may very well be expecting to run a native
version of the image instead of the armhf one.
This warning does add some extra noise in the case of platform-specific
images being run, such as `arm32v7/alpine`, but this can be suppressed by
explicitly setting the platform.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
After discussing with maintainers, it was decided that putting the burden of
providing the full capability list on the client is not a good design.
Instead, we decided to follow along with the container API and use cap
add/drop.
This brings in the changes already merged into swarmkit.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
In order to run tests on mips64el devices.
Now official-images supports the following images for mips64el:
buildpack-deps:stretch
buildpack-deps:buster
debian:stretch
debian:buster
But official-images does not support the following images for mips64el:
debian:jessie
buildpack-deps:jessie
Signed-off-by: wanghuaiqing <wanghuaiqing@loongson.cn>
If the container specified in `--volumes-from` did not exist, the
API returned a 404 status, which was interpreted by the CLI as the
specified _image_ being missing (even if that was not the case).
This patch changes these errors to return a 400 (bad request);
Before this change:
# make sure the image is present
docker pull busybox
docker create --volumes-from=nosuchcontainer busybox
# Unable to find image 'busybox:latest' locally
# latest: Pulling from library/busybox
# Digest: sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209
# Status: Image is up to date for busybox:latest
# Error response from daemon: No such container: nosuchcontainer
After this change:
# make sure the image is present
docker pull busybox
docker create --volumes-from=nosuchcontainer busybox
# Error response from daemon: No such container: nosuchcontainer
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Usage: DOCKER_BUILD_ARGS="--build-arg CONTAINERD_COMMIT=master --build-arg RUNC_COMMIT=master" DOCKER_EXPERIMENTAL=1 TEST_SKIP_INTEGRATION_CLI=1 make test-integration
Depends on containerd master (v1.4) and runc master (v1.0.0-rc91).
Currently `TEST_SKIP_INTEGRATION_CLI=1` must be specified.
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
The initial implementation followed the Swarm API, where
PidsLimit is located in ContainerSpec. This is not the
desired place for this property, so this moves the field to
TaskTemplate.Resources in our API.
A similar change should be made in the SwarmKit API (likely
keeping the old field for backward compatibility, because
it was merged some releases back).
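A rough sketch of where the property now lives, assuming the Go API types follow the change described above:
```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/swarm"
)

func main() {
	// The pids limit now lives under the task template's resource limits,
	// next to the CPU and memory limits, rather than in ContainerSpec.
	spec := swarm.ServiceSpec{
		TaskTemplate: swarm.TaskSpec{
			Resources: &swarm.ResourceRequirements{
				Limits: &swarm.Limit{
					Pids: 100, // maximum number of PIDs per task
				},
			},
		},
	}
	fmt.Println(spec.TaskTemplate.Resources.Limits.Pids)
}
```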
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The test case panics like this:
> build_test.go:381: assertion failed: 3 (int) != 1 (int)
> panic: runtime error: index out of range [2] with length 1 [recovered]
> panic: runtime error: index out of range [2] with length 1
The fix is trivial.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Support for PidsLimit was added to SwarmKit in docker/swarmkit/pull/2415,
but never exposed through the Docker remote API.
This patch exposes the feature in the remote API.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When a container is left running after the daemon exits (e.g. the daemon
is SIGKILL'd or crashes), it should stop any running containers when the
daemon starts back up.
What actually happens is the daemon only sends the container's
configured stop signal and does not check if it has exited.
If the container does not actually exit then it is left running.
This fixes this unexpected behavior by calling the same function to shut
down the container that the daemon shutdown process does.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This removes the use of the old distribution code in the plugin packages
and replaces it with containerd libraries for plugin pushes and pulls.
Additionally it uses a content store from containerd which seems like
it's compatible with the old "basicBlobStore" in the plugin package.
This is being used locally instead of through the containerd client for
now.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This enables image lookup when creating a container to fail when the
reference exists but it is for the wrong platform. This prevents trying
to run an image for the wrong platform, as can be the case with, for
example, binfmt_misc+qemu.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Switch to moby/sys/mount and mountinfo. Keep the pkg/mount for potential
outside users.
This commit was generated by the following bash script:
```
set -e -u -o pipefail
for file in $(git grep -l 'docker/docker/pkg/mount"' | grep -v ^pkg/mount); do
sed -i -e 's#/docker/docker/pkg/mount"#/moby/sys/mount"#' \
-e 's#mount\.\(GetMounts\|Mounted\|Info\|[A-Za-z]*Filter\)#mountinfo.\1#g' \
$file
goimports -w $file
done
```
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Metrics collectors generally don't need the daemon to prime the stats
with something to compare since they already have something to compare
with.
Before this change, the API does 2 collection cycles (which takes
roughly 2s) in order to provide comparison for CPU usage over 1s. This
was primarily added so that `docker stats --no-stream` had something to
compare against.
Really the CLI should have just made a 2nd call and done the comparison
itself rather than forcing it on all API consumers.
That ship has long sailed, though.
With this change, clients can set an option to just pull a single stat,
which is *at least* a full second faster:
Old:
```
time curl --unix-socket
/go/src/github.com/docker/docker/bundles/test-integration-shell/docker.sock
http://./containers/test/stats?stream=false\&one-shot=false > /dev/null
2>&1
real    0m1.864s
user    0m0.005s
sys     0m0.007s
time curl --unix-socket
/go/src/github.com/docker/docker/bundles/test-integration-shell/docker.sock
http://./containers/test/stats?stream=false\&one-shot=false > /dev/null
2>&1
real    0m1.173s
user    0m0.010s
sys     0m0.006s
```
New:
```
time curl --unix-socket
/go/src/github.com/docker/docker/bundles/test-integration-shell/docker.sock
http://./containers/test/stats?stream=false\&one-shot=true > /dev/null
2>&1
real    0m0.680s
user    0m0.008s
sys     0m0.004s
time curl --unix-socket
/go/src/github.com/docker/docker/bundles/test-integration-shell/docker.sock
http://./containers/test/stats?stream=false\&one-shot=true > /dev/null
2>&1
real    0m0.156s
user    0m0.007s
sys     0m0.007s
```
This fixes issues with downstreams' ability to use the stats API to
collect metrics.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Configuration over the API per container is intentionally left out for
the time being, but configuring the default from the daemon config is
supported.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit cbecf48bc352e680a5390a7ca9cff53098cd16d7)
Signed-off-by: Madhu Venugopal <madhu@docker.com>
This supplements any log driver which does not support reads with a
custom read implementation that uses a local file cache.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit d675e2bf2b75865915c7a4552e00802feeb0847f)
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Don't assign the --bip value directly as the subnet
for the default bridge. Instead, use the network value
from the ParseCIDR output.
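For illustration, a minimal example of how net.ParseCIDR splits a --bip value into the bridge address and the subnet:
```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// A --bip value such as 192.168.1.5/24 carries both the bridge address
	// and the subnet; ParseCIDR separates the two, and the network value
	// (not the raw --bip string) is what the default bridge subnet should use.
	ip, ipNet, err := net.ParseCIDR("192.168.1.5/24")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip)    // 192.168.1.5    -> bridge IP
	fmt.Println(ipNet) // 192.168.1.0/24 -> subnet for the default bridge
}
```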
Addresses: https://github.com/moby/moby/issues/40392
Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
Go 1.14 adds quotes around the url in the error returned:
=== FAIL: arm64.integration.system TestLoginFailsWithBadCredentials (0.27s)
TestLoginFailsWithBadCredentials: login_test.go:27: assertion failed: expected error "Error response from daemon: Get https://registry-1.docker.io/v2/: unauthorized: incorrect username or password", got "Error response from daemon: Get \"https://registry-1.docker.io/v2/\": unauthorized: incorrect username or password"
Error response from daemon: Get "https://registry-1.docker.io/v2/": unauthorized: incorrect username or password
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This makes sure that things like `--tmpfs` mounts over an anonymous
volume don't create volumes unnecessarily.
One method only checks mountpoints, the other checks both mountpoints
and tmpfs... the usage of these should likely be consolidated.
Ideally, processing for `--tmpfs` mounts would get merged in with the
rest of the mount parsing. I opted not to do that for this change so the
fix is minimal and can potentially be backported with fewer chances of
breaking things.
Merging the mount processing for tmpfs can be handled in a followup.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Docker Desktop (on Mac and Windows hosts) allows containers
running inside a Linux VM to connect to the host using
the host.docker.internal DNS name, which is implemented by
VPNkit (DNS proxy on the host)
This PR allows containers to connect to Linux hosts
by appending a special string "host-gateway" to --add-host,
e.g. "--add-host=host.docker.internal:host-gateway", which adds a
host.docker.internal DNS entry to /etc/hosts and maps it to the host-gateway IP.
This PR also adds a daemon flag called host-gateway-ip, which defaults to
the default bridge IP.
Docker Desktop will need to set this field to the host proxy IP
so DNS requests for host.docker.internal can be routed to VPNkit.
Addresses: https://github.com/docker/for-linux/issues/264
Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
Adds support for ReplicatedJob and GlobalJob service modes. These modes
allow running services which execute tasks that exit upon success,
instead of daemon-type tasks.
Signed-off-by: Drew Erny <drew.erny@docker.com>
Bumps the vendoring of github.com/docker/swarmkit to the above commit,
which is the current master at commit time.
Most notably, this includes a change making the ingress network respect
the default address pool. Because of this change, a change to network
integration tests was needed.
Signed-off-by: Drew Erny <drew.erny@docker.com>
The tests start a new daemon, but attempt to run it with overlay2,
and using a unix:// socket, which doesn't really work on Windows.
40155 tried to disable such tests but missed two of them.
They are being disabled with this change.
Signed-off-by: vikrambirsingh <vikrambir.singh@docker.com>
A premature check for the OS type means that the test
will never even get to run on other OS types. This
causes it to always be flagged as a failure on
such OS types.
Signed-off-by: vikrambirsingh <vikrambir.singh@docker.com>
- Updated TestInfoSecurityOptions to not rely on CLI output. Note that this
test should be migrated to the integration suite, but that suite does not yet
have checks for "Seccomp" and "AppArmor"
- TestInfoAPIWarnings: don't start with busybox because we're not running containers in this test
- Migrate TestInfoDebug to integration suite
- Migrate TestInsecureRegistries to integration suite (renamed to TestInfoInsecureRegistries)
- Migrate TestRegistryMirrors to integration suite (renamed to TestInfoRegistryMirrors)
- Migrate TestInfoDiscoveryBackend to integration suite
- Migrate TestInfoDiscoveryInvalidAdvertise to integration suite
- Migrate TestInfoDiscoveryAdvertiseInterfaceName to integration suite
- Remove TestInfoFormat, which is testing the CLI functionality, and there is an
existing test in docker/cli (TestFormatInfo) covering this
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Adds a new ServiceStatus field to the Service object, which includes the
running and desired task counts. This new field is gated behind a
"status" query parameter.
Signed-off-by: Drew Erny <drew.erny@docker.com>
This feature was used by docker build --stream and it was kept experimental.
Users of this endpoint should enable BuildKit anyway by setting Version to BuilderBuildKit.
Signed-off-by: Tibor Vass <tibor@docker.com>
```
integration/plugin/graphdriver/external_test.go:427:2: SA5001: should check returned error before deferring responseReader.Close() (staticcheck)
```
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
```
internal/test/environment/environment.go:37:23: `useing` is a misspelling of `using`(misspell)
integration/container/wait_test.go:49:9: `waitres` is a misspelling of `waiters`(misspell)
integration/container/wait_test.go:95:9: `waitres` is a misspelling of `waiters`(misspell)
integration-cli/docker_api_containers_test.go:1042:7: `waitres` is a misspelling of `waiters`(misspell)
```
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The linter marked an issue because testMacvlanMultiSubnet was not used.
Re-enabling the test showed that there was a typo in the assert, causing the
test to fail:
```
--- FAIL: TestDockerNetworkMacvlan/MultiSubnet (4.74s)
macvlan_test.go:243: assertion failed: 2001:db8:abc4::254 (c3.NetworkSettings.Networks["dualstackbridge"].IPv6Gateway string) != 2001:db8.abc4::254 (string)
```
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Trying to link to a non-existing container is not valid, and should return an
"invalid parameter" (400) error. Returning a "not found" error in this situation
would make the client report the container's image could not be found.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
1. Modify comments added in 5858a99267
The Windows Volume GUID path format is: \\?\Volume{<GUID Value>}\<path>
Rewrote the example given in the comments to conform to the format.
2. Remove two redundant asserts [assert.NilError]. They are redundant
because the last statement will not change the value of err.
Signed-off-by: Vikram bir Singh <vikrambir.singh@docker.com>
1. This commit replaces serviceRunningCount with
swarm.RunningTasksCount to accurately check whether the
service is running with the expected number of instances.
serviceRunningCount was only checking the ServiceList
and was not checking if the tasks were running or not.
This adds a safe barrier for executing docker network inspect
commands for overlay networks, which get created
asynchronously via Swarm.
2. Make sure client connections are closed
3. Make sure every service and network name is unique
4. Make sure services and networks are cleaned up
Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
This fixes a regression introduced in 6d87f19142,
causing `COPY --from` to fail if the target directory does not exist:
```
FROM mcr.microsoft.com/windows/servercore:ltsc2019 as s1
RUN echo "Hello World" > /hello
FROM mcr.microsoft.com/windows/servercore:ltsc2019
COPY --from=s1 /hello /hello/another/world
```
Would produce an error:
```
Step 4/4 : COPY --from=s1 /hello /hello/another/world
failed to copy files: mkdir \\?: The filename, directory name, or volume label syntax is incorrect.
```
The cause for this was that Go's `os.MkdirAll()` does not support/detect volume GUID paths
(`\\?\Volume{dae8d3ac-b9a1-11e9-88eb-e8554b2ba1db}\hello\another}`), and as a result
attempted to create the volume as a directory (`\\?`), causing it to fail.
This patch replaces `os.MkdirAll()` with our own `system.MkdirAll()` function, which
is capable of detecting GUID volumes.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
1. Use `go list` to get the list of integration dirs to build. This means we
do not need to have a valid `.go` file in every subdirectory, and it also
filters out other dirs like "bundles" which may have been created.
2. Add option to specify custom flags for integration and
integration-cli. This is needed so both suites can be run AND set
custom flags... since the cli suite does not support standard go
flags.
3. Add options to skip an entire integration suite.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This test is failing on Windows currently:
```
11:59:47 --- FAIL: TestHealthKillContainer (8.12s)
11:59:47 health_test.go:57: assertion failed: error is not nil: Error response from daemon: Invalid signal: SIGUSR1
```
That test was added recently in https://github.com/moby/moby/pull/39454, but
rewritten in a commit in the same PR:
f8aef6a92f
In that rewrite, there were some changes:
- originally it was skipped on Windows, but the rewritten test doesn't have that skip:
```go
testRequires(c, DaemonIsLinux) // busybox doesn't work on Windows
```
- the original test used `SIGINT`, but the new one uses `SIGUSR1`
Analysis:
- The Error bubbles up from: 8e610b2b55/pkg/signal/signal.go (L29-L44)
- Interestingly, `ContainerKill` should validate if a signal is valid for the given platform, but somehow we don't hit that part; f1b5612f20/daemon/kill.go (L40-L48)
- Windows only looks to support 2 signals currently 8e610b2b55/pkg/signal/signal_windows.go (L17-L26)
- Upstream Golang looks to define `SIGINT` as well; 77f9b2728e/src/runtime/defs_windows.go (L44)
- This looks like the current list of Signals upstream in Go; 3b58ed4ad3/windows/types_windows.go (L52-L67)
```go
const (
	// More invented values for signals
	SIGHUP  = Signal(0x1)
	SIGINT  = Signal(0x2)
	SIGQUIT = Signal(0x3)
	SIGILL  = Signal(0x4)
	SIGTRAP = Signal(0x5)
	SIGABRT = Signal(0x6)
	SIGBUS  = Signal(0x7)
	SIGFPE  = Signal(0x8)
	SIGKILL = Signal(0x9)
	SIGSEGV = Signal(0xb)
	SIGPIPE = Signal(0xd)
	SIGALRM = Signal(0xe)
	SIGTERM = Signal(0xf)
)
```
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The session endpoint is no longer experimental since
01c9e7082e, so we don't
need to start an experimental daemon.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Looks like TestServiceWithDefaultAddressPoolInit is failing
randomly in CI. I am not able to reproduce the issue locally,
but this has been reported a few times. So I tried to modify the
code to see if I can fix the random failure.
Signed-off-by: selansen <elango.siva@docker.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>