This caused a race condition where AutoRemove could be restored before the
container was considered for restart, which made autoremove containers
impossible to restart.
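A minimal sketch of the fixed ordering (names and fields here are illustrative stand-ins, not the actual moby code): the AutoRemove flag must only be restored, and acted on, after the restart decision has been made.
```
package main

import "fmt"

// Container is a hypothetical, heavily simplified stand-in for the real type.
type Container struct {
	AutoRemove    bool
	RestartPolicy string
}

// handleExit sketches the corrected ordering: auto-removal is suppressed
// while the restart decision is made, and the flag is only restored when
// the container will not be restarted.
func handleExit(c *Container) {
	autoRemove := c.AutoRemove
	c.AutoRemove = false // suppress auto-removal while deciding

	if c.RestartPolicy == "always" { // stand-in for the real restart check
		fmt.Println("restarting; AutoRemove stays suppressed")
		return
	}

	c.AutoRemove = autoRemove // restore only after the restart decision
	if c.AutoRemove {
		fmt.Println("removing container")
	}
}

func main() {
	handleExit(&Container{AutoRemove: true, RestartPolicy: "always"})
	handleExit(&Container{AutoRemove: true, RestartPolicy: "no"})
}
```
The test run below exercises exactly this path: a container with both a restart policy and auto-remove must survive `kill` and `stop`.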
```
$ make DOCKER_GRAPHDRIVER=vfs BIND_DIR=. TEST_FILTER='TestContainerWithAutoRemoveCanBeRestarted' TESTFLAGS='-test.count 1' test-integration
...
=== RUN TestContainerWithAutoRemoveCanBeRestarted
=== RUN TestContainerWithAutoRemoveCanBeRestarted/kill
=== RUN TestContainerWithAutoRemoveCanBeRestarted/stop
--- PASS: TestContainerWithAutoRemoveCanBeRestarted (1.61s)
--- PASS: TestContainerWithAutoRemoveCanBeRestarted/kill (0.70s)
--- PASS: TestContainerWithAutoRemoveCanBeRestarted/stop (0.86s)
PASS
DONE 3 tests in 3.062s
```
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Before this change, there is no way to know if container (runtime)
resources have been cleaned up unless you actually remove the container.
This change allows callers of the wait API or the events API to know
that all runtime resources for the container are released (e.g. IP
addresses).
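For callers, this surfaces through the existing wait API. A minimal sketch using the Docker Go client (recent client versions; the response type has changed across releases), waiting until the container is reported as no longer running, which after this change also implies its runtime resources have been released:
```
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	id := "my-container" // hypothetical container name/ID

	// Block until the daemon reports the container as not running.
	waitCh, errCh := cli.ContainerWait(context.Background(), id, container.WaitConditionNotRunning)
	select {
	case resp := <-waitCh:
		fmt.Printf("container exited with status %d\n", resp.StatusCode)
	case err := <-errCh:
		fmt.Println("wait error:", err)
	}
}
```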
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Also move c.Lock() below the containerd task delete, as there doesn't seem
to be any necessity to hold the container lock while containerd is deleting
the task.
This fixes a potential edge-case where containerd delete task hangs, and
thereafter all operations on the container would hang forever, as this
function is holding onto the container lock.
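A sketch of that ordering (illustrative; simplified types), where the potentially blocking containerd call is made before the container lock is taken:
```
package sketch

import (
	"context"
	"sync"
)

// Container is a hypothetical, simplified stand-in for the real type.
type Container struct{ sync.Mutex }

// handleExit does the potentially blocking containerd task delete first,
// and only then takes the container lock to update local state. A hung
// delete can then no longer wedge every other operation on the container.
func handleExit(ctx context.Context, c *Container, deleteTask func(context.Context) error) error {
	err := deleteTask(ctx) // may block; no lock held here

	c.Lock()
	defer c.Unlock()
	// ... record exit status / error on the container ...
	return err
}
```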
Signed-off-by: Cam <gh@sparr.email>
In this case, we are sending a signal to the container (typically this
would be SIGKILL or SIGTERM, but it could be any signal), but containerd
reports that the process does not exist.
At the point this code is happening, dockerd thinks that the container
is running, but containerd reports that it is not.
Since containerd reports that it is not running, try to collect the exit
status of the container from containerd, and mark the container as
stopped in dockerd.
Repro this problem like so:
```
id=$(docker run -d busybox top)
pkill containerd && pkill top
docker stop $id
```
Without this change, `docker stop $id` will first try to send SIGTERM,
wait for exit, then try SIGKILL.
Because the process doesn't exist to begin with, no signal is sent, and
so nothing happens.
Since we won't receive any event here to process, the container can
never be marked as stopped until the daemon is restarted.
With the change, `docker stop` succeeds immediately (since the process is
already stopped) and we mark the container as stopped. We handle the
case as if we missed an exit event.
There are definitely some other places in the stack that could use some
improvement here, but this helps people get out of a sticky situation.
With io.containerd.runc.v2, no event is ever received by docker because
the shim quits trying to send the event.
With io.containerd.runtime.v1.linux the TaskExit event is sent before
dockerd can reconnect to the event stream and we miss the event.
No matter what, we shouldn't be reliant on the shim doing the right
thing here, nor can we rely on a steady event stream.
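A sketch of the recovery path (illustrative, not the exact moby code), using the containerd client API: if Kill reports that the process no longer exists, reap the exit status via Delete instead of waiting for an exit event that will never arrive.
```
package sketch

import (
	"context"
	"syscall"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
)

// killOrReap sends sig to the task. If containerd says the process is
// already gone, it collects the exit status itself so the caller can mark
// the container as stopped, as if the exit event had been received.
func killOrReap(ctx context.Context, task containerd.Task, sig syscall.Signal) (reaped bool, err error) {
	err = task.Kill(ctx, sig)
	if err == nil {
		return false, nil // signal delivered; the exit event will follow
	}
	if !errdefs.IsNotFound(err) {
		return false, err
	}
	// The process does not exist: reap its exit status directly.
	status, derr := task.Delete(ctx)
	if derr != nil {
		return false, derr
	}
	_ = status.ExitCode() // e.g. record exit code/time in container state
	return true, nil
}
```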
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
The last check for err != nil is not needed as err is always non-nil
there. Remove the check.
Also, no need to explicitly define `var err error` here.
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Reported by govet linter:
> daemon/monitor.go:57:9: lostcancel: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak (govet)
> ctx, _ := context.WithTimeout(context.Background(), 2*time.Second)
> ^
> daemon/monitor.go:128:9: lostcancel: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak (govet)
> ctx, _ := context.WithTimeout(context.Background(), 2*time.Second)
> ^
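The fix is the standard pattern for this warning: keep the returned CancelFunc and always call it, typically via defer.
```
package sketch

import (
	"context"
	"time"
)

func withTimeout() {
	// Instead of discarding the cancel function with
	// `ctx, _ := context.WithTimeout(...)`, keep it and defer it.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel() // releases the context's resources when done

	_ = ctx // use ctx for the containerd call
}
```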
Fixes: b5f288 ("Handle blocked I/O of exec'd processes")
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Before this change we just accept that any error is "not found", but it
could be something else. Even if it is just a "not found" kind of error,
this should be dealt with by the container store and not the event
processor.
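A sketch of the intended shape (illustrative; moby's errdefs package does provide typed error checks such as IsNotFound):
```
package sketch

import "github.com/docker/docker/errdefs"

// processEvent only ignores errors that are genuinely "not found",
// instead of swallowing every error returned by the lookup.
func processEvent(lookup func(id string) error, id string) error {
	if err := lookup(id); err != nil {
		if errdefs.IsNotFound(err) {
			return nil // unknown container: nothing to do for this event
		}
		return err // anything else is a real failure and must surface
	}
	return nil
}
```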
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Fixes #39427
This always sends the exec exit events even when the exec fails to find
the binary. A standard 127 exit status (the conventional shell code for
"command not found") is sent in this situation.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
This is the second part to
https://github.com/containerd/containerd/pull/3361 and will help process
delete not block forever when the process exits but its I/O was
inherited by a subprocess that lives on.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Signed-off-by: John Howard <jhoward@microsoft.com>
This is the first step in refactoring moby (dockerd) to use containerd on Windows.
Similar to the current model on Linux, this adds the option to enable containerd as the runtime.
It does not switch the graphdriver to containerd snapshotters.
- Refactors libcontainerd into a series of subpackages so that either a
"local" containerd (1) or a "remote" (2) containerd can be loaded, as opposed
to conditional compilation as "local" for Windows and "remote" for Linux.
- Updates libcontainerd such that Windows has an option to allow the use of a
"remote" containerd. Here, it communicates over a named pipe using GRPC.
This is currently guarded behind the experimental flag, an environment variable,
and providing a pipename to connect to containerd.
- Adds infrastructure pieces, such as helper functions under pkg/system, for
determining whether containerd is being used.
(1) "local" containerd is what the daemon on Windows has used since inception.
It's not really containerd at all - it's simply local invocation of HCS APIs
directly in-process from the daemon through the Microsoft/hcsshim library.
(2) "remote" containerd is what docker on Linux uses for it's runtime. It means
that there is a separate containerd service running, and docker communicates over
GRPC to it.
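As an illustration of the "remote" model, this is roughly how a client connects to containerd over the named pipe used in the example below (a hedged sketch using the containerd Go client; dockerd's libcontainerd does the equivalent internally):
```
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
)

func main() {
	// Connect to the containerd service over its named pipe (Windows).
	client, err := containerd.New(`\\.\pipe\containerd-containerd`)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Ask the daemon for its version as a simple liveness check.
	v, err := client.Version(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println("containerd version:", v.Version)
}
```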
To try this out, you will need to start with something like the following:
Window 1:
```
containerd --log-level debug
```
Window 2:
```
$env:DOCKER_WINDOWS_CONTAINERD=1
dockerd --experimental -D --containerd \\.\pipe\containerd-containerd
```
You will need the following binary from github.com/containerd/containerd in your path:
- containerd.exe
You will need the following binaries from github.com/Microsoft/hcsshim in your path:
- runhcs.exe
- containerd-shim-runhcs-v1.exe
For LCOW, it will require an initrd.img and kernel in `C:\Program Files\Linux Containers`.
This is no different from the current requirements. However, you may need updated binaries,
particularly an initrd.img built from Microsoft/opengcs, as (at the time of writing) Linuxkit
binaries are somewhat out of date.
Note that containerd and hcsshim for HCS v2 APIs do not yet support all the required
functionality needed for docker. This will come in time - this is a baby (although large)
step to migrating Docker on Windows to containerd.
Note that the HCS v2 APIs are only called on RS5+ builds. RS1..RS4 will still use
HCS v1 APIs as the v2 APIs were not fully developed enough on these builds to be usable.
This abstraction is done in HCSShim (referring specifically to the runtime).
Note the LCOW graphdriver still uses HCS v1 APIs regardless.
Note also that this does not migrate docker to use containerd snapshotters
rather than graphdrivers. This needs to be done in conjunction with Linux also
doing the same switch.
When manually stopping a container with a restart-policy, the container
would show as "restarting" in `docker ps` whereas its actual state
is "exited".
Stopping a container with a restart policy shows the container as "restarting":
```
docker run -d --name test --restart unless-stopped busybox false
docker stop test
docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED         STATUS                          PORTS   NAMES
7e07409fa1d3   busybox   "false"   5 minutes ago   Restarting (1) 4 minutes ago            test
```
However, inspecting the same container shows that it's exited:
```
docker inspect test --format '{{ json .State }}'
{
  "Status": "exited",
  "Running": false,
  "Paused": false,
  "Restarting": false,
  "OOMKilled": false,
  "Dead": false,
  "Pid": 0,
  "ExitCode": 1,
  "Error": "",
  "StartedAt": "2019-02-14T13:26:27.6091648Z",
  "FinishedAt": "2019-02-14T13:26:27.689427Z"
}
```
And killing the container confirms this:
```
docker kill test
Error response from daemon: Cannot kill container: test: Container 7e07409fa1d36dc8d8cb8f25cf12ee1168ad9040183b85fafa73ee2c1fcf9361 is not running
```
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
In particular, these two:
> daemon/daemon_unix.go:1129: Wrapf format %v reads arg #1, but call has 0 args
> daemon/kill.go:111: Warn call has possible formatting directive %s
and a few more.
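The usual fixes for these warnings look like the following (an illustrative sketch, not the actual moby lines):
```
package sketch

import (
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

func examples(err error, name string) error {
	// Wrapf with a %v directive must be given the matching argument:
	wrapped := errors.Wrapf(err, "failed to handle container %v", name)

	// Warn does not interpret formatting directives; use Warnf instead:
	logrus.Warnf("could not kill %s", name)

	return wrapped
}
```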
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: John Howard <jhoward@microsoft.com>
While debugging #32838, it was found (https://github.com/moby/moby/issues/32838#issuecomment-356005845) that the utility VM in some circumstances was crashing. Unfortunately, this was silently thrown away, and as far as the build step (this also applies to docker run) was concerned, the exit code was zero and the error was thrown away.
Windows containers operate differently to containers on Linux, and there can be legitimate system errors during container shutdown after the init process exits. This PR handles this and passes the error all the way back to the client, and correctly causes a build step running a container which hits a system error to fail, rather than blindly trying to keep going, assuming all is good, and getting a subsequent failure on a commit.
With this change, assuming an error occurs, here's an example of a failure which previously was reported as a commit error:
```
The command 'powershell -Command $ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue'; Install-WindowsFeature -Name Web-App-Dev ; Install-WindowsFeature -Name ADLDS; Install-WindowsFeature -Name Web-Mgmt-Compat; Install-WindowsFeature -Name Web-Mgmt-Service; Install-WindowsFeature -Name Web-Metabase; Install-WindowsFeature -Name Web-Lgcy-Scripting; Install-WindowsFeature -Name Web-WMI; Install-WindowsFeature -Name Web-WHC; Install-WindowsFeature -Name Web-Scripting-Tools; Install-WindowsFeature -Name Web-Net-Ext45; Install-WindowsFeature -Name Web-ASP; Install-WindowsFeature -Name Web-ISAPI-Ext; Install-WindowsFeature -Name Web-ISAPI-Filter; Install-WindowsFeature -Name Web-Default-Doc; Install-WindowsFeature -Name Web-Dir-Browsing; Install-WindowsFeature -Name Web-Http-Errors; Install-WindowsFeature -Name Web-Static-Content; Install-WindowsFeature -Name Web-Http-Redirect; Install-WindowsFeature -Name Web-DAV-Publishing; Install-WindowsFeature -Name Web-Health; Install-WindowsFeature -Name Web-Http-Logging; Install-WindowsFeature -Name Web-Custom-Logging; Install-WindowsFeature -Name Web-Log-Libraries; Install-WindowsFeature -Name Web-Request-Monitor; Install-WindowsFeature -Name Web-Http-Tracing; Install-WindowsFeature -Name Web-Stat-Compression; Install-WindowsFeature -Name Web-Dyn-Compression; Install-WindowsFeature -Name Web-Security; Install-WindowsFeature -Name Web-Windows-Auth; Install-WindowsFeature -Name Web-Basic-Auth; Install-WindowsFeature -Name Web-Url-Auth; Install-WindowsFeature -Name Web-WebSockets; Install-WindowsFeature -Name Web-AppInit; Install-WindowsFeature -Name NET-WCF-HTTP-Activation45; Install-WindowsFeature -Name NET-WCF-Pipe-Activation45; Install-WindowsFeature -Name NET-WCF-TCP-Activation45;' returned a non-zero code: 4294967295: container shutdown failed: container ba9c65054d42d4830fb25ef55e4ab3287550345aa1a2bb265df4e5bfcd79c78a encountered an error during WaitTimeout: failure in a Windows system call: The compute system exited unexpectedly. (0xc0370106)
```
Without this change, it would be incorrectly reported such as in this comment: https://github.com/moby/moby/issues/32838#issuecomment-309621097
```
Step 3/8 : ADD buildtools C:/buildtools
re-exec error: exit status 1: output: time="2017-06-20T11:37:38+10:00" level=error msg="hcsshim::ImportLayer failed in Win32: The system cannot find the path specified. (0x3) layerId=\\\\?\\C:\\ProgramData\\docker\\windowsfilter\\b41d28c95f98368b73fc192cb9205700e216691495c1f9ac79b9b04ec4923ea2 flavour=1 folder=C:\\Windows\\TEMP\\hcs232661915"
hcsshim::ImportLayer failed in Win32: The system cannot find the path specified. (0x3) layerId=\\?\C:\ProgramData\docker\windowsfilter\b41d28c95f98368b73fc192cb9205700e216691495c1f9ac79b9b04ec4923ea2 flavour=1 folder=C:\Windows\TEMP\hcs232661915
```
- Fix OOM event updating healthchecks and persisting container state
without locks
- Fix healthchecks being updated without locks on container stop
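A sketch of the locking pattern these fixes move to (illustrative, simplified types): mutate state and persist it while holding the container lock.
```
package sketch

import "sync"

// Container is a hypothetical, simplified stand-in for the real type.
type Container struct {
	mu     sync.Mutex
	Health string
}

// setHealth updates the health status and persists the container state
// while still holding the lock, so no concurrent writer can interleave.
func (c *Container) setHealth(status string) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.Health = status
	return c.checkpoint() // stand-in for CheckpointTo
}

func (c *Container) checkpoint() error { return nil } // stand-in
```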
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Reuse existing structures and rely on json serialization to deep copy
Container objects.
Also consolidate all "save" operations on container.CheckpointTo, which
now both saves a serialized json to disk, and replicates state to the
ACID in-memory store.
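A minimal sketch of the JSON round-trip deep copy (with a simplified stand-in type; the real container.Container is much larger):
```
package main

import (
	"encoding/json"
	"fmt"
)

// Container is a hypothetical stand-in for the real type.
type Container struct {
	ID    string
	State string
}

// deepCopy clones a Container by round-tripping it through the same JSON
// serialization the daemon already maintains for on-disk checkpoints.
func deepCopy(c *Container) (*Container, error) {
	data, err := json.Marshal(c)
	if err != nil {
		return nil, err
	}
	out := &Container{}
	if err := json.Unmarshal(data, out); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	orig := &Container{ID: "abc123", State: "running"}
	clone, _ := deepCopy(orig)
	clone.State = "exited"
	fmt.Println(orig.State, clone.State) // running exited
}
```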
Signed-off-by: Fabio Kung <fabio.kung@gmail.com>
Replicate relevant mutations to the in-memory ACID store. Readers will
then be able to query container state without locking.
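Assuming a store like hashicorp/go-memdb (which provides ACID, MVCC in-memory transactions), the replicate-on-write, lock-free-read pattern looks roughly like this sketch:
```
package main

import (
	"fmt"

	memdb "github.com/hashicorp/go-memdb"
)

// Snapshot is a hypothetical read-only view of a container's state.
type Snapshot struct {
	ID    string
	State string
}

func main() {
	schema := &memdb.DBSchema{
		Tables: map[string]*memdb.TableSchema{
			"containers": {
				Name: "containers",
				Indexes: map[string]*memdb.IndexSchema{
					"id": {
						Name:    "id",
						Unique:  true,
						Indexer: &memdb.StringFieldIndex{Field: "ID"},
					},
				},
			},
		},
	}
	db, err := memdb.NewMemDB(schema)
	if err != nil {
		panic(err)
	}

	// Writer: replicate a mutation into the store.
	txn := db.Txn(true)
	if err := txn.Insert("containers", &Snapshot{ID: "abc", State: "running"}); err != nil {
		panic(err)
	}
	txn.Commit()

	// Reader: query a consistent snapshot without any container locks.
	read := db.Txn(false)
	raw, _ := read.First("containers", "id", "abc")
	fmt.Println(raw.(*Snapshot).State) // running
}
```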
Signed-off-by: Fabio Kung <fabio.kung@gmail.com>
Description:
When docker is in its startup process and containerd sends a "process exit"
event to docker, and the container is configured with '--restart=always',
the restartmanager will restart the container very soon. But some
initialization is not done yet, e.g. `daemon.netController`; when it is
accessed, docker panics.
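One way to express the fix (an illustrative sketch only, not necessarily the approach taken here) is to gate event-driven restarts on daemon initialization:
```
package sketch

// Daemon is a hypothetical, simplified stand-in for the real type.
type Daemon struct {
	startupDone chan struct{} // closed once initialization completes
}

// handleExitEvent blocks restart handling until the daemon is fully
// initialized, so fields like netController are safe to use.
func (d *Daemon) handleExitEvent(id string) {
	<-d.startupDone
	// ... safe to consult the restart policy and restart the container ...
}
```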
Signed-off-by: Wentao Zhang <zhangwentao234@huawei.com>
If a container mounts the socket the daemon is listening on into the
container while the daemon is being shut down, the socket will not exist
on the host; the daemon will then assume it's a directory and create it
on the host, which prevents the daemon from starting the next time.
Fixes issue https://github.com/moby/moby/issues/30348
To reproduce this issue, you can add the following code:
```
--- a/daemon/oci_linux.go
+++ b/daemon/oci_linux.go
@@ -8,6 +8,7 @@ import (
"sort"
"strconv"
"strings"
+ "time"
"github.com/Sirupsen/logrus"
"github.com/docker/docker/container"
@@ -666,7 +667,8 @@ func (daemon *Daemon) createSpec(c *container.Container) (*libcontainerd.Spec, e
if err := daemon.setupIpcDirs(c); err != nil {
return nil, err
}
-
+ fmt.Printf("===please stop the daemon===\n")
+ time.Sleep(time.Second * 2)
ms, err := daemon.setupMounts(c)
if err != nil {
return nil, err
```
Step 1: run a container which has `--restart always` and `-v /var/run/docker.sock:/sock`
```
$ docker run -ti --restart always -v /var/run/docker.sock:/sock busybox
/ #
```
Step 2: exit the container
```
/ # exit
```
and kill the daemon when you see
```
===please stop the daemon===
```
in the daemon log
The daemon can't start again and fails with `can't create unix socket /var/run/docker.sock: is a directory`.
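The general shape of a guard against this (an illustrative sketch only; the actual fix may differ) is to stop blindly creating a missing bind-mount source as a directory:
```
package sketch

import "os"

// ensureBindSource creates a missing bind-mount source as a directory
// only when that is explicitly requested, instead of guessing that a
// path which vanished during daemon shutdown must have been a directory.
func ensureBindSource(path string, createIfMissing bool) error {
	if _, err := os.Stat(path); err == nil {
		return nil // source exists; nothing to do
	} else if !os.IsNotExist(err) {
		return err
	}
	if !createIfMissing {
		return os.ErrNotExist // let the caller fail instead of creating a bogus dir
	}
	return os.MkdirAll(path, 0o755)
}
```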
Signed-off-by: Lei Jitang <leijitang@huawei.com>
Before this, if `forceRemove` is set, the container data will be removed
no matter what, including if there are issues with removing the container's
on-disk state (rw layer, container root).
In practice this causes a lot of issues with leaked data sitting on
disk that users are not able to clean up themselves.
This is particularly a problem while the `EBUSY` errors on remove are so
prevalent. So for now let's not keep this behavior.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Container state counts are used for reporting in the `/info` endpoint.
Currently when `/info` is called, each container is iterated over and
the container's `StateString()` is called. This is not very efficient
with lots of containers, and is also racy since `StateString()` is not
using a mutex and the mutex is not otherwise locked.
We could just lock the container mutex, but this has proven to be
problematic since there are frequent deadlock scenarios, and we should
always have the `/info` endpoint available since this endpoint is used
to get general information about the docker host.
Really, these metrics on `/info` should be deprecated. But until then,
we can just keep a running tally in memory for each of the reported
states.
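A sketch of such a tally (illustrative, simplified): state transitions update counters under a small dedicated lock, so `/info` never has to touch per-container mutexes.
```
package main

import (
	"fmt"
	"sync"
)

// stateCounter keeps a running tally of container states so /info can
// report counts without iterating over (and locking) every container.
type stateCounter struct {
	mu     sync.Mutex
	states map[string]string // container ID -> current state
	counts map[string]int    // state -> number of containers
}

func newStateCounter() *stateCounter {
	return &stateCounter{states: map[string]string{}, counts: map[string]int{}}
}

// set records a state transition for a container and updates the tally.
func (c *stateCounter) set(id, state string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if old, ok := c.states[id]; ok {
		c.counts[old]--
	}
	c.states[id] = state
	c.counts[state]++
}

func (c *stateCounter) get(state string) int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.counts[state]
}

func main() {
	sc := newStateCounter()
	sc.set("c1", "running")
	sc.set("c2", "running")
	sc.set("c1", "exited")
	fmt.Println(sc.get("running"), sc.get("exited")) // 1 1
}
```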
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
After running the test suite with the race detector enabled I found
these gems that need to be fixed.
This is just round one; sadly, I lost my test results after I built the
binary to test this... (whoops)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
`StreamConfig` carries with it a dependency on libcontainerd; `StreamConfig`
is used by other projects, but libcontainerd doesn't compile on all
platforms, so move it to `github.com/docker/docker/container/stream`.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>