We attached the JSON flag to the wrong AST node, causing Docker to treat
the exec form `["binary", "arg"]` as if the shell form `"binary arg"` had
been used. This failed if `ls` was not present.
Added a test to detect this.
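For context, a small sketch of the distinction the JSON attribute controls, with illustrative names rather than the builder's actual types:

```go
package main

import (
	"fmt"
	"strings"
)

// toCommand shows what the JSON attribute decides: exec form keeps the
// argv exactly as written, while shell form wraps the whole string in a
// shell invocation. Names here are illustrative, not the builder's own.
func toCommand(words []string, isJSONForm bool) []string {
	if isJSONForm {
		// exec form: ["binary", "arg"] runs the binary directly
		return words
	}
	// shell form: "binary arg" is handed to the default shell
	return []string{"/bin/sh", "-c", strings.Join(words, " ")}
}

func main() {
	fmt.Println(toCommand([]string{"ls", "-l"}, true))  // [ls -l]
	fmt.Println(toCommand([]string{"ls", "-l"}, false)) // [/bin/sh -c ls -l]
}
```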
Fixes #26174
Signed-off-by: Thomas Leonard <thomas.leonard@docker.com>
This fix tries to address the issue raised in comment:
https://github.com/docker/docker/pull/25943#discussion_r76843081
Previously, the validation for `--ip6` was done by checking `ParseIP().To16()`.
However, if an IPv4 address or an IPv4-mapped IPv6 address was provided,
the validation would pass when it should fail.
This fix first checks that `--ip6` is passed a valid IP address and returns an
error for invalid IP addresses. It then checks whether an IPv4 or IPv4-mapped IPv6
address was passed, and returns an error accordingly.
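A minimal sketch of the intended check using only the standard library (the daemon's actual flag wiring differs):

```go
package main

import (
	"fmt"
	"net"
)

// validateIP6 rejects anything that is not a real IPv6 address:
// unparsable strings, plain IPv4 addresses, and IPv4-mapped IPv6
// addresses (e.g. ::ffff:192.168.0.1) all return an error.
func validateIP6(val string) error {
	ip := net.ParseIP(val)
	if ip == nil {
		return fmt.Errorf("%s is not a valid IP address", val)
	}
	// To4() is non-nil for IPv4 and IPv4-mapped IPv6 addresses,
	// which is exactly what To16() alone fails to catch.
	if ip.To4() != nil {
		return fmt.Errorf("%s is not an IPv6 address", val)
	}
	return nil
}

func main() {
	for _, v := range []string{"2001:db8::1", "10.0.0.1", "::ffff:10.0.0.1", "not-an-ip"} {
		fmt.Println(v, "->", validateIP6(v))
	}
}
```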
This fix adds two more cases to the tests: one for an IPv4 address passed to `--ip6`
and another for an IPv4-mapped IPv6 address passed to `--ip6`. In both cases,
the validation would pass without this fix.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
This fix is a follow-up to 26154. I did a grep on `integration-cli` and
found that there are several tests in `docker_cli_daemon_test.go`
that still use `NewDaemon` instead of `DockerDaemonSuite`.
This fix changes the related tests from DockerSuite to DockerDaemonSuite in
`docker_cli_daemon_test.go`.
With this fix, `NewDaemon` is now only called from `SetUpTest` on the
various DockerXXXSuite suites. That should help maintain the test code base.
This fix is related to the comments in 26115 and 24533.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
This fix tries to add the registry mirrors information in the output
of `docker info`.
In our active deployment, we use registry mirrors to help speed
up `docker pull`. In the current output of `docker info`,
registry mirrors are not present even though they are available in the `/info` API.
I think it makes sense to add the registry mirrors to `docker info`,
similar to insecure registries.
This fix adds the registry mirrors to the output of `docker info`:
```
Registry Mirrors:
 https://192.168.1.2/
 http://registry.mirror.com:5000/
```
An integration test has been added to cover the changes.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
This fix tries to address the issue raised in 26090 where the
remote API `POST /services/(id or name)/update` cannot
use `name` to update. This is not consistent with the
documentation of the remote API.
This fix resolves the issue by performing a lookup with `getService`
when a `name` instead of an `id` is used in the API call.
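Conceptually the handler resolves the path parameter as an ID first and then as a name; a hedged sketch with hypothetical store types (the daemon's real `getService` differs):

```go
package service

import "fmt"

// Hypothetical stand-ins for the daemon's service backend.
type Service struct{ ID, Name string }

type ServiceStore interface {
	GetByID(id string) (*Service, error)
	GetByName(name string) (*Service, error)
}

// resolveService looks a service up by ID first and falls back to a
// name lookup, so `POST /services/{id or name}/update` accepts either.
func resolveService(store ServiceStore, idOrName string) (*Service, error) {
	if s, err := store.GetByID(idOrName); err == nil {
		return s, nil
	}
	s, err := store.GetByName(idOrName)
	if err != nil {
		return nil, fmt.Errorf("service %s not found", idOrName)
	}
	return s, nil
}
```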
This fix adds an integration test to cover the changes.
This fix fixes 26090.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
As explained in the test comment itself, the error message may vary on
different platforms.
This patch adds another condition found while testing without network
access on RHEL-based distros (Fedora, RHEL, CentOS).
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
This fix tries to address the issue raised in 25304 to support
`--group-add` and `--group-rm` in `docker service create`.
This fix adds `--group-add` to `docker service create` and `docker service update`,
and adds `--group-rm` to `docker service update`.
This fix also updates the docs for `docker service create` and `docker service update`:
1. Add `--group-add` to `docker service create` and `docker service update`
2. Add `--group-rm` to `docker service update`
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
In cases such as migrating from the classic overlay network to
swarm-mode networking (without a kv-store), a mechanism to allow
disconnecting a container even when a network isn't available is
useful.
Signed-off-by: Madhu Venugopal <madhu@docker.com>
`TestDaemonDiscoveryBackendConfigReload` was doing some weird things
with files: creating a file (directly in `./integration-cli`), removing
it, then creating a new file.
This is just weird, so I fixed it to use a single file; the file goes into
a temp dir so it doesn't pollute integration-cli.
It was also blindly sending a SIGHUP to the daemon process and then sleeping
for 3 seconds. This is racy, and slow.
Change this to look for the daemon-reload event in the event stream.
Reload logic is moved to a separate function and blocks (w/ a timeout)
waiting for the reload event to fire.
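A hedged sketch of the wait, assuming a hypothetical channel of decoded event actions rather than the suite's real event-stream helpers:

```go
package integration

import (
	"errors"
	"time"
)

// waitForReload blocks until a daemon "reload" event arrives on eventCh
// or the timeout fires, replacing the previous blind 3-second sleep.
// eventCh is a hypothetical stream of event action strings.
func waitForReload(eventCh <-chan string, timeout time.Duration) error {
	deadline := time.After(timeout)
	for {
		select {
		case action := <-eventCh:
			if action == "reload" {
				return nil
			}
		case <-deadline:
			return errors.New("timed out waiting for daemon reload event")
		}
	}
}
```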
Runtime of the test is now ~0.5s on my machine, whereas it was a
minimum of 3s due to the `time.Sleep` before.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This fix tries to fix the issue raised in 25863 where the `--ip` value
is not validated for `docker create`. As a result, the IP address
passed by `--ip` is silently ignored for `docker create`.
This fix adds validation in the daemon so that `--ip` and `--ip6`
are properly validated for `docker create`.
An integration test has been added to cover the changes.
This fix fixes 25863.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Support args to RunCommand
Fix docker help text test.
Fix for ipv6 tests.
Fix TLSverify option.
Fix TestDaemonDiscoveryBackendConfigReload
Use tempfile for another test.
Restore missing flag.
Fix tests for removal of shlex.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
Currently the start command invokes getExitCode - which is based on
the Inspect API - to get the exit code after the container exits.
There are two race conditions here:
if the container is started with a restart policy, there is a chance that the
container is restarted quickly before getExitCode is called; under such
circumstances, the exit code is wrong.
If the container is started with --rm, the container may be removed
before getExitCode is called; in this situation, you can't get the correct
exit code either.
Replacing getExitCode with waitExitOrRemoved solves this problem.
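The replacement works by subscribing to events up front rather than inspecting after the fact; a hedged sketch with hypothetical channels, not the client's actual API:

```go
package cli

// waitExitOrRemoved sketches the idea: subscribe to the container's
// event stream before it can exit, then wait for whichever outcome
// comes first. exitCh carries the status code from a "die" event;
// removedCh fires on "destroy". Both channels are hypothetical
// stand-ins for the real events-API plumbing.
func waitExitOrRemoved(exitCh <-chan int, removedCh <-chan struct{}) int {
	select {
	case code := <-exitCh:
		return code
	case <-removedCh:
		// The container (e.g. run with --rm) was removed before an exit
		// status could be read; a fuller implementation would recover
		// the status from the event attributes.
		return 0
	}
}
```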
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
This fix tries to address the issue raised in 25927 where
the HTTP headers are changed when an AuthZ plugin is in
place.
The issue is that in `FlushAll` (`pkg/authorization/response.go`),
the headers are written (with `WriteHeader`) before all the
headers have been copied.
This fix resolves the issue by calling `WriteHeader` after all the
headers have been copied.
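The underlying rule comes from net/http: header-map changes made after `WriteHeader` are silently ignored. A minimal sketch of the corrected ordering (not the plugin package's actual code):

```go
package authz

import "net/http"

// flushHeaders copies the plugin-modified headers onto the real
// response writer before committing the status code; anything set
// after WriteHeader would be dropped by net/http.
func flushHeaders(w http.ResponseWriter, headers http.Header, status int) {
	for k, vv := range headers {
		for _, v := range vv {
			w.Header().Add(k, v)
		}
	}
	w.WriteHeader(status) // must come last
}
```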
A test has been added to cover the changes.
This fix fixes 25927.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
This fix tries to address the issue in 25000 where `docker stats`
does not show network stats when `NetworkDisabled=true`.
`NetworkDisabled=true` can be set either through the
remote API or through `docker daemon -b none`.
The issue was that when `NetworkDisabled=true`, either by API or
by daemon config, there is no SandboxKey for the container, so an error
is returned.
This fix resolves the issue by skipping the SandboxKey lookup when
`NetworkDisabled=true`.
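In spirit, the stats path now guards the sandbox lookup; a hedged sketch with simplified stand-in types, not the daemon's actual structs:

```go
package stats

// Hypothetical stand-ins for the daemon's container state and the
// libnetwork-backed stats collector.
type InterfaceStats struct{ RxBytes, TxBytes uint64 }

type Container struct {
	NetworkDisabled bool
	SandboxKey      string
}

func collectSandboxStats(sandboxKey string) (map[string]InterfaceStats, error) {
	// ... would query the network sandbox identified by sandboxKey ...
	return map[string]InterfaceStats{}, nil
}

// networkStats skips the sandbox lookup entirely when networking is
// disabled, instead of erroring on the missing SandboxKey.
func networkStats(c *Container) (map[string]InterfaceStats, error) {
	if c.NetworkDisabled {
		return nil, nil
	}
	return collectSandboxStats(c.SandboxKey)
}
```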
An additional test has been added to cover the changes.
This fix fixes 25000.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Fix deleting containers and make sure errors are printed correctly.
Rename Result.Fails to Result.Assert()
Create a constant for the default expected.
Signed-off-by: Daniel Nephin <dnephin@docker.com>
Remove some run functions and replace them with the unified run command.
Remove DockerCmdWithStdoutStderr
Remove many duplicate runCommand functions.
Also add dockerCmdWithResult()
Allow Result.Assert() to ignore the error message if an exit status is expected.
Fix race in DockerSuite.TestDockerInspectMultipleNetwork
Fix flaky test DockerSuite.TestRunInteractiveWithRestartPolicy
Signed-off-by: Daniel Nephin <dnephin@docker.com>
Right now docker puts swarm's control socket into the docker root dir
(e.g. /var/lib/docker).
This can cause some nasty issues with path length being > 108
characters, especially in our CI environment.
Since we already have some other state going in the daemon's exec root
(libcontainerd and libnetwork), I think it makes sense to move the
control socket to this location, especially since there are other unix
sockets being created there by docker, so the path must always be one
that works.
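The 108-character ceiling comes from the kernel's `sun_path` field; a quick sketch showing that binding a longer unix socket path simply fails (the exact error text may vary by platform):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	// sun_path on Linux is 108 bytes; binding a longer path fails
	// (EINVAL) rather than being truncated.
	long := "/tmp/" + strings.Repeat("x", 120) + ".sock"
	if _, err := net.Listen("unix", long); err != nil {
		fmt.Println("bind failed:", err)
	}
}
```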
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This flag was deprecated in a version below 1.10, so it's safe to
remove now, according to our deprecation policy.
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
This fix tries to address the issue raised in #23367 where an out-of-band
volume driver deletion leaves some data in docker. This prevents the
reuse of deleted volume names (by an out-of-band volume driver like flocker).
This fix adds a `--force` flag to `docker volume rm` to forcefully purge
the data of a volume that has already been deleted.
Related documentation has been updated.
This fix was tested manually with flocker, as specified in #23367.
An integration test has also been added for the scenario described.
This fix fixes #23367.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Fix #24803, as this had been failing sometimes.
As the parallel tests are probably genuine failures, and
had already been cut down, I will re-create these specifically
as a parallel execution test with no seccomp to make the
cause clearer.
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
Image inspect doesn't have a "size" query parameter.
The client sent this parameter (when doing `docker inspect --size`),
but it was unused in the daemon.
This removes the unused parameter, and a test that
didn't actually test anything.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
"--restart" and "--rm" are conflict options, if a container is started
with AutoRemove flag, we should forbid the update action for its Restart
Policy.
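A hedged sketch of the validation, with hypothetical field names rather than the daemon's real update path:

```go
package daemon

import "errors"

// Hypothetical view of the fields involved in the check.
type hostConfig struct {
	AutoRemove        bool
	RestartPolicyName string
}

// validateRestartUpdate rejects a restart-policy change for a container
// that was started with --rm, since auto-remove and restart conflict.
func validateRestartUpdate(current hostConfig, newPolicy string) error {
	if current.AutoRemove && newPolicy != "" && newPolicy != "no" {
		return errors.New("restart policy cannot be updated because AutoRemove is enabled for the container")
	}
	return nil
}
```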
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
This device is not required by the OCI spec.
The rationale for this was linked to docker/docker#2393,
so a non-functional /dev/fuse was created, and actual fuse use still
requires adding the device explicitly. However, even old versions of the JVM
on Ubuntu 12.04 no longer require the fuse package, so this is all not
needed.
See also https://github.com/opencontainers/runc/pull/983, although this
change alone stops the fuse device from being created.
Tested; this does not change the actual ability to use fuse.
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
Since we added labels for volumes, it is desirable to have
filter support for volume labels.
Closes: #21416
Signed-off-by: Kai Qiang Wu(Kennan) <wkqwu@cn.ibm.com>
This fix tries to address the issue raised in 25375 where
`service update --publish-add` returns an error if the exact
same value is repeated (it should be idempotent).
This fix uses a map to filter out repeated port configs so
that `--publish-add` does not error out.
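The dedup itself is a small map keyed on the port tuple; a sketch with a simplified stand-in for the swarm `PortConfig` type:

```go
package ports

// PortConfig is a simplified stand-in for the swarm port config type.
type PortConfig struct {
	Protocol      string
	TargetPort    uint32
	PublishedPort uint32
}

// dedupPorts drops exact duplicates so that repeating the same
// --publish-add value is idempotent instead of an error.
func dedupPorts(ports []PortConfig) []PortConfig {
	seen := make(map[PortConfig]struct{}, len(ports))
	out := ports[:0]
	for _, p := range ports {
		if _, ok := seen[p]; ok {
			continue
		}
		seen[p] = struct{}{}
		out = append(out, p)
	}
	return out
}
```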
An integration test has been added.
This fix fixes 25375.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Use `OnTimeout` callback on test timeouts to trigger a stack dump for
running daemons. This will help analyze stuck tests.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
If AutoRemove is set, wait until the client gets a `destroy` event, or a
`detach` event, which implies the container was detached but not stopped.
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
`--rm` is a client-side flag, which caused lots of problems:
1. if the client lost its connection to the daemon, including a client
crash or being killed, there was no way to clean up the garbage container.
2. if you `docker stop` a `--rm` container, this container won't be
auto-removed.
3. if the docker daemon restarts, the container is also left over.
4. bug: `docker run --rm busybox fakecmd` will exit without cleanup.
In a word, the client-side `--rm` flag isn't sufficient for garbage
collection. Moving the `--rm` flag to the daemon is more reasonable.
What this commit does is:
1. implement `--rm` on the daemon side, adding an `AutoRemove` flag to
HostConfig (see the sketch after this list).
2. Allow `run --rm -d`; `--rm` and `-d` no longer conflict, so
auto-remove works in detached mode.
3. `docker restart` of a `--rm` container will succeed, and the container
won't be auto-removed.
This commit will help the daemon a lot in doing garbage collection for
temporary containers.
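A hedged sketch of the daemon-side decision with hypothetical names; the real logic lives in the container-exit handling:

```go
package daemon

// Hypothetical slice of the state the daemon consults when a
// container's process exits.
type containerState struct {
	AutoRemove     bool
	RestartPending bool
}

// shouldAutoRemove reports whether the daemon should delete the
// container after it stops: only when --rm was recorded in HostConfig
// and the restart policy is not about to bring it back.
func shouldAutoRemove(s containerState) bool {
	return s.AutoRemove && !s.RestartPending
}
```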
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
The memory limit should always be smaller than memoryswap.
If the user updates the memory limit to be bigger than the already-set
memory swap, we should error out with a message the user knows how
to act on, rather than just an invalid-argument error.
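A minimal sketch of the check and the kind of actionable message intended (not the daemon's exact wording):

```go
package daemon

import "fmt"

// validateMemoryUpdate returns an actionable error when the requested
// memory limit would exceed the memoryswap limit that is already set,
// instead of surfacing a bare "invalid argument" from the kernel.
func validateMemoryUpdate(newMemory, currentMemorySwap int64) error {
	if currentMemorySwap > 0 && newMemory > currentMemorySwap {
		return fmt.Errorf("memory limit %d should be smaller than the already set memoryswap limit %d; update memoryswap at the same time", newMemory, currentMemorySwap)
	}
	return nil
}
```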
Signed-off-by: Lei Jitang <leijitang@huawei.com>
This fix tries to address the issue raised in #23498 to allow unset
`--entrypoint` in `docker run` or `docker create`.
This fix checks the flag `--entrypoint` and, in case `--entrypoint=` (`""`)
is passed, unsets the Entrypoint during the container run.
Additional integration tests have been created to cover changes in this fix.
This fix fixes #23498.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
This fix tries to address the issue raised in 25374 where the
output of `docker ps --filter` is in random order and
not deterministic.
This fix sorts the list of containers by creation time so that the
output is deterministic.
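A minimal sketch of the sort, assuming a pared-down summary type with a unix `Created` timestamp:

```go
package ps

import "sort"

// containerSummary is a simplified stand-in for the API's container
// summary; Created is a unix timestamp.
type containerSummary struct {
	ID      string
	Created int64
}

// sortByCreated orders containers newest-first so that filtered
// listings come back in a deterministic order.
func sortByCreated(cs []containerSummary) {
	sort.Slice(cs, func(i, j int) bool {
		return cs[i].Created > cs[j].Created
	})
}
```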
An integration test has been added.
This fix fixes 25374.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
This test is already covered by the individual graph driver tests, and it
adds 15s to the test run without adding value. The original idea was to
test the max number of layers; this is fulfilled by the graph driver tests.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Fixes: #25073
Updating kernel memory on running containers without initialized kernel
memory is forbidden only on kernel versions older than 4.6.
Signed-off-by: Qiang Huang <h.huangqiang@huawei.com>
Since 24237 has been merged, it is no longer necessary to require network
access for the swarm integration tests (`integration-cli/docker_api_swarm_test.go`).
This fix removes testRequires(c, Network) from the swarm integration
tests.
This fix can be verified by disabling networking; all related
tests still pass.
This fix is related to 24547, 24490, 24237.
This fix fixes 24547.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
On Ubuntu and Debian there is a sysctl which allows blocking
clone(CLONE_NEWUSER) via "sysctl kernel.unprivileged_userns_clone=0"
for unprivileged users that do not have CAP_SYS_ADMIN.
See: https://lists.ubuntu.com/archives/kernel-team/2016-January/067926.html
The DockerSuite.TestRunSeccompUnconfinedCloneUserns test case fails if
"kernel.unprivileged_userns_clone" is set to 0:
docker_cli_run_unix_test.go:1040:
    c.Fatalf("expected clone userns with --security-opt seccomp=unconfined to succeed, got %s: %v", out, err)
... Error: expected clone userns with --security-opt seccomp=unconfined to succeed, got clone failed: Operation not permitted
: exit status 1
So add a check and skip the test case if kernel.unprivileged_userns_clone is 0.
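The skip condition boils down to reading the procfs knob; a minimal sketch (the sysctl file is simply absent on kernels without the patch):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// unprivilegedUsernsCloneEnabled reports whether unprivileged
// clone(CLONE_NEWUSER) is allowed; if the sysctl file is absent, the
// kernel has no such restriction and we treat it as enabled.
func unprivilegedUsernsCloneEnabled() bool {
	data, err := os.ReadFile("/proc/sys/kernel/unprivileged_userns_clone")
	if err != nil {
		return true
	}
	return string(bytes.TrimSpace(data)) != "0"
}

func main() {
	fmt.Println("unprivileged userns clone allowed:", unprivilegedUsernsCloneEnabled())
}
```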
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Instead of using the bundles dir, which may be mounted in and ultimately
break when using b2d/d4mac/d4win. This makes it much easier to collect
test daemon logs and is more natural for d4mac/d4win users running tests.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Rather than conflict with the unexposed task model, change the names of
the object-oriented task display to `docker <object> ps`. The command
works identically to `docker service tasks`. This change is superficial.
This provides a more sensible docker experience while not trampling on
the task model that may be introduced as a top-level command at a later
date.
The following is an example of the display using `docker service ps`
with a service named `condescending_cori`:
```console
$ docker service ps condescending_cori
ID                         NAME                  SERVICE             IMAGE   LAST STATE              DESIRED STATE  NODE
e2cd9vqb62qjk38lw65uoffd2  condescending_cori.1  condescending_cori  alpine  Running 13 minutes ago  Running        6c6d232a5d0e
```
The following shows the output for the node on which the command is
running:
```console
$ docker node ps self
ID                         NAME                  SERVICE             IMAGE   LAST STATE              DESIRED STATE  NODE
b1tpbi43k1ibevg2e94bmqo0s  mad_kalam.1           mad_kalam           apline  Accepted 2 seconds ago  Accepted       6c6d232a5d0e
e2cd9vqb62qjk38lw65uoffd2  condescending_cori.1  condescending_cori  alpine  Running 12 minutes ago  Running        6c6d232a5d0e
4x609m5o0qyn0kgpzvf0ad8x5  furious_davinci.1     furious_davinci     redis   Running 32 minutes ago  Running        6c6d232a5d0e
```
Signed-off-by: Stephen J Day <stephen.day@docker.com>
While testing #24510 I noticed that 32-bit syscalls were incorrectly being
blocked and we did not have a test for this, so adding one.
This is only tested on amd64, as it is the only architecture that
reliably supports 32-bit code execution; others only do sometimes.
There is no 32-bit libc in buildpack-deps, so we cannot easily build
32-bit C code; instead, use the simplest assembly program, which
just calls the exit syscall.
Signed-off-by: Justin Cormack <justin.cormack@docker.com>
Ensure convergence before moving on with testing leader election
conditions. This reduces the flakiness of this test when run in different
environments.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
If no restart delay is specified for a swarm service, the default
restart delay is 5 seconds. This is a reasonable value for actual
deployments - one example of where it's useful is that if a bad image is
specified, the orchestrator will wait 5 seconds between attempts to
restart it instead of restarting it in a tight loop.
In integration tests, this 5 second delay is dead time. The tests run
faster if the delay is reduced. Set it to 100 ms to avoid the waste of
time.
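The knob is the restart policy's `Delay` (a duration pointer in the swarm service spec); a hedged sketch with a pared-down local type rather than the real API struct:

```go
package swarmtest

import "time"

// restartPolicy mirrors the relevant slice of the swarm service spec:
// Delay is a pointer so "unset" (use the 5s default) is distinguishable
// from an explicit value.
type restartPolicy struct {
	Delay *time.Duration
}

// fastRestart returns a policy with a 100ms delay, so test services
// that fail to start are retried almost immediately instead of
// spending 5 seconds of dead time per attempt.
func fastRestart() restartPolicy {
	d := 100 * time.Millisecond
	return restartPolicy{Delay: &d}
}
```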
This appears to speed up a few tests:
DockerSwarmSuite.TestApiSwarmForceNewCluster 37.241s -> 34.323s
DockerSwarmSuite.TestApiSwarmRestartCluster 22.038s -> 15.545s
DockerSwarmSuite.TestApiSwarmServicesMultipleAgents 24.456s -> 19.853s
DockerSwarmSuite.TestApiSwarmServicesStateReporting 19.240s -> 10.049s
...a small step towards making the Swarm integration tests run in a
reasonable amount of time.
Also, change the update delay for the rolling update test from 8 seconds
to 4 seconds, which should be sufficient to differentiate between
batches of updated tasks. This reduces the runtime for
DockerSwarmSuite.TestApiSwarmServicesUpdate from 28s to 20s.
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
This is an attempt to fix the flaky test TestSwarmNodeTaskListFilter in 25029.
Basically this fix adds a check to wait until 3 containers are already up
before processing `node tasks ...`.
This might fix 25029.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
In `TestApiSwarmRestartCluster`, `checkClusterHealth` is called.
`checkClusterHealth` calls `d.info()`, which will return an error if
there is no cluster leader... the problem is that `checkClusterHealth` does a
nil-error assertion without giving any time for a leader to be elected.
This moves the `d.info()` call into a `waitAndAssert` using the default
reconciliation timeout.
It also moves some other checks into a `waitAndAssert` to give the
cluster enough time to come back up.
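The shape of the helper is a plain poll-until-deadline loop; a generic sketch, not the suite's actual `waitAndAssert` signature:

```go
package testutil

import (
	"fmt"
	"time"
)

// waitFor polls check until it returns true or the timeout elapses,
// giving the cluster time to elect a leader before asserting on
// d.info()-style calls.
func waitFor(timeout time.Duration, check func() bool) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if check() {
			return nil
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("condition not met within %v", timeout)
}
```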
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Remove the swarm inspect command and use docker info instead to display
swarm information if the current node is a manager.
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
This changes the default behavior so that rolling updates will not
proceed once an updated task fails to start, or stops running during the
update. Users can use docker service inspect --pretty servicename to see
the update status, and if it pauses due to a failure, it will explain
that the update is paused, and show the task ID that caused it to pause.
It also shows the time since the update started.
A new --update-on-failure=(pause|continue) flag selects the
behavior. Pause means the update stops once a task fails, continue means
the old behavior of continuing the update anyway.
In the future this will be extended with additional behaviors like
automatic rollback, and flags controlling parameters like how many tasks
need to fail for the update to stop proceeding. This is a minimal
solution for 1.12.
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
There are currently problems with "swarm init" and "swarm join" when an
explicit --listen-addr flag is not provided. swarmkit defaults to
finding the IP address associated with the default route, and in cloud
setups this is often the wrong choice.
Introduce a notion of "advertised address", with the client flag
--advertise-addr, and the daemon flag --swarm-default-advertise-addr to
provide a default. The default listening address is now 0.0.0.0, but a
valid advertised address must be detected or specified.
If no explicit advertised address is specified, error out if there is
more than one usable candidate IP address on the system. This requires a
user to explicitly choose instead of letting swarmkit make the wrong
choice. For the purposes of this autodetection, we ignore certain
interfaces that are unlikely to be relevant (currently docker*).
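A hedged sketch of the detection rule using only the standard library; the real implementation also handles the IPv4/IPv6 tie-break for interface names and other edge cases:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"strings"
)

// detectAdvertiseAddr returns the single usable IP on the host, or an
// error asking the user to pass --advertise-addr when zero or more
// than one candidate exists. Interfaces named docker* are ignored.
func detectAdvertiseAddr() (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	var candidates []net.IP
	for _, iface := range ifaces {
		if strings.HasPrefix(iface.Name, "docker") || iface.Flags&net.FlagLoopback != 0 {
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.IsGlobalUnicast() {
				candidates = append(candidates, ipnet.IP)
			}
		}
	}
	switch len(candidates) {
	case 0:
		return nil, errors.New("no usable addresses found; specify --advertise-addr")
	case 1:
		return candidates[0], nil
	default:
		return nil, fmt.Errorf("%d usable addresses found; choose one with --advertise-addr", len(candidates))
	}
}

func main() {
	ip, err := detectAdvertiseAddr()
	fmt.Println(ip, err)
}
```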
The user is also required to choose a listen address on swarm init if
they specify an explicit advertise address that is a hostname or an IP
address that's not local to the system. This is a requirement for
overlay networking.
Also support specifying interface names to --listen-addr,
--advertise-addr, and the daemon flag --swarm-default-advertise-addr.
This will fail if the interface has multiple IP addresses (unless it has
a single IPv4 address and a single IPv6 address - then we resolve the
tie in favor of IPv4).
This change also exposes the node's externally-reachable address in
docker info, as requested by #24017.
Make corresponding API and CLI docs changes.
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
This fix tries to address the issue raised in 24912 where docker
build only shows the current step without the overall total steps.
This fix adds the overall total steps so that the end user can follow
the progress of the docker build.
An additional test has been added to cover the changes.
This fix fixes 24912.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
This renames the `rotate_xxx` flags to camelBack, for
consistency with other API query-params, such as
`detachKeys`, `noOverwriteDirNonDir`, and `fromImage`.
It also makes these flags accept a wider range of boolean
values ("0", "1", "true", "false"), and throw an error
if an invalid value is passed.
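A minimal sketch of the stricter parsing with `strconv.ParseBool`, which accepts the listed values (plus a few more spellings) and errors on anything else; the handler wiring here is illustrative:

```go
package api

import (
	"fmt"
	"net/http"
	"strconv"
)

// boolQueryParam parses a query parameter such as ?rotateWorkerToken=true,
// returning def when the parameter is absent and an error when the
// value is not a recognizable boolean.
func boolQueryParam(r *http.Request, name string, def bool) (bool, error) {
	val := r.URL.Query().Get(name)
	if val == "" {
		return def, nil
	}
	b, err := strconv.ParseBool(val)
	if err != nil {
		return false, fmt.Errorf("invalid value for %s: %q is not a boolean", name, val)
	}
	return b, nil
}
```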
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>