I noticed this message being logged as an error, but the kill logic actually
proceeds after this (doing a "direct" kill instead). While containers are
usually expected to exit within the given timeout, I don't think this
needs to be logged as an error (an error is returned after we fail to
kill the container).
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The github.com/containerd/containerd/log package was moved to a separate
module, which will also be used by upcoming (patch) releases of containerd.
This patch moves our own uses of the package to use the new module.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
While there's not much we can do if we failed to store a snapshot of the
container's state, let's log the error in case it happens instead of discarding it.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Daemon.handleContainerExit() returns an error if snapshotting the container's
state to disk fails. There's not much we can do with the error if it occurs,
but let's log the error if that happens, instead of discarding it.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Define consts for the Actions we use for events, instead of "ad-hoc" strings.
Having these consts makes it easier to find where specific events are triggered,
makes the events less error-prone, and allows documenting each Action (if needed).
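As a rough illustration of the idea (the type and constant names below are made up for the example, not the actual set defined by this change):

```go
// Illustrative sketch only: a typed Action with a few example constants,
// instead of passing ad-hoc strings like "start" or "die" at each call site.
package events

// Action names an event emitted by the daemon.
type Action string

const (
	// ActionCreate is emitted when a container is created (example value).
	ActionCreate Action = "create"
	// ActionStart is emitted when a container starts (example value).
	ActionStart Action = "start"
	// ActionKill is emitted when a signal is sent to a container (example value).
	ActionKill Action = "kill"
	// ActionDie is emitted when a container's main process exits (example value).
	ActionDie Action = "die"
)
```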
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The latest version of containerd-shim-runhcs-v1 (v0.10.0-rc.4) pulled in
with the bump to ContainerD v1.7.0-rc.3 had several changes to make it
more robust, which had the side effect of increasing the amount of time
it can take for a container to exit in the worst case.
Notably, the total timeout for shutting down a task increased from 30
seconds to 60! Increase the timeouts hardcoded in the daemon and
integration tests so that they don't give up too soon.
Signed-off-by: Cory Snider <csnider@mirantis.com>
The containerd client is very chatty at the best of times. Because the
libcontainerd API is stateless and references containers and processes by
string ID for every method call, the implementation is essentially
forced to use the containerd client in a way which amplifies the number
of redundant RPCs invoked to perform any operation. The libcontainerd
remote implementation has to reload the containerd container, task
and/or process metadata for nearly every operation. This in turn
amplifies the number of context switches between dockerd and containerd
to perform any container operation or handle a containerd event,
increasing the load on the system which could otherwise be allocated to
workloads.
Overhaul the libcontainerd interface to reduce the impedance mismatch
with the containerd client so that the containerd client can be used
more efficiently. Split the API out into container, task and process
interfaces which the consumer is expected to retain so that
libcontainerd can retain state---especially the analogous containerd
client objects---without having to manage any state-store inside the
libcontainerd client.
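A rough sketch of the shape such a split API could take; the interface and method names below are illustrative assumptions, not the exact ones introduced by this change:

```go
// Illustrative sketch of a split libcontainerd API: the consumer retains the
// Container/Task/Process handles (and thus the underlying containerd client
// objects) instead of passing string IDs into a stateless API for every call.
package libcontainerdtypes

import (
	"context"
	"syscall"
)

// Container is a handle to a containerd container retained by the caller.
type Container interface {
	// NewTask creates the runtime task for the container; the caller keeps
	// the returned Task and reuses it for later operations.
	NewTask(ctx context.Context) (Task, error)
	// Task returns the existing task, if any.
	Task(ctx context.Context) (Task, error)
	// Delete removes the container metadata from containerd.
	Delete(ctx context.Context) error
}

// Process is a handle to a process running inside a task.
type Process interface {
	Pid() uint32
	Kill(ctx context.Context, signal syscall.Signal) error
	// Delete cleans up the process and returns its exit status.
	Delete(ctx context.Context) (uint32, error)
}

// Task is the container's init process plus task-level operations.
type Task interface {
	Process
	Pause(ctx context.Context) error
	Resume(ctx context.Context) error
}
```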
Signed-off-by: Cory Snider <csnider@mirantis.com>
- remove isErrNoSuchProcess() in favor of a plain errors.As()
- errNoSuchProcess.Error(): remove punctuation
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This allows the postContainersKill() handler to pass values as-is. As part of
the rewrite, I also moved the daemon.GetContainer(name) call later in the
function, so that we can fail early if an invalid signal is passed, before
doing the (heavier) fetching of the container.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This code assumes that we missed an exit event since the container is
still marked as running in Docker, but an attempt to signal the process in
containerd returns a "process not found" error.
There is a case where the event wasn't missed; it just hasn't been
processed yet.
This change tries to work around that possibility by waiting to see if
the container is eventually marked as stopped. It uses the container's
configured stop timeout for this.
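A minimal, self-contained sketch of that wait (the function is illustrative; the real code works against the container's state and its configured stop timeout):

```go
// Illustrative sketch: give the exit-event handler up to the container's
// configured stop timeout to mark the container as stopped before assuming
// the exit event was missed.
package daemon

import (
	"context"
	"errors"
	"time"
)

func awaitStopped(ctx context.Context, isRunning func() bool, stopTimeout time.Duration) error {
	deadline := time.After(stopTimeout)
	tick := time.NewTicker(100 * time.Millisecond)
	defer tick.Stop()
	for {
		if !isRunning() {
			return nil // the pending exit event was processed after all
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-deadline:
			return errors.New("container still marked as running; treat the exit event as missed")
		case <-tick.C:
			// poll again
		}
	}
}
```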
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
In this case, we are sending a signal to the container (typically this
would be SIGKILL or SIGTERM, but could be any signal), but containerd
reports that the process does not exist.
At the point this code is happening, dockerd thinks that the container
is running, but containerd reports that it is not.
Since containerd reports that it is not running, try to collect the exit
status of the container from containerd, and mark the container as
stopped in dockerd.
Repro this problem like so:
```
id=$(docker run -d busybox top)
pkill containerd && pkill top
docker stop $id
```
Without this change, `docker stop $id` will first try to send SIGTERM,
wait for exit, then try SIGKILL.
Because the process doesn't exist to begin with, no signal is sent, and
so nothing happens.
Since we won't receive any event here to process, the container can
never be marked as stopped until the daemon is restarted.
With the change `docker stop` succeeds immediately (since the process is
already stopped) and we mark the container as stopped. We handle the
case as if we missed an exit event.
There are definitely some other places in the stack that could use some
improvement here, but this helps people get out of a sticky situation.
With io.containerd.runc.v2, no event is ever received by docker because
the shim quits trying to send the event.
With io.containerd.runtime.v1.linux the TaskExit event is sent before
dockerd can reconnect to the event stream and we miss the event.
No matter what, we shouldn't be reliant on the shim doing the right
thing here, nor can we rely on a steady event stream.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
The Docker daemon currently always stops the healthcheck before sending a
signal to a container. However, when "docker kill" is used to send signals
other than SIGTERM or SIGKILL to a container, such as SIGINT, the daemon
still stops the container's health check even though the container's process
handles the signal normally and continues to work.
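A minimal sketch of the intended rule (the function name is illustrative, not the daemon's actual code):

```go
// Illustrative sketch: only stop the container's health check for signals
// that are expected to terminate the container; other signals (e.g. SIGINT
// handled by the application) leave the health check running.
package daemon

import "syscall"

func shouldStopHealthcheck(sig syscall.Signal) bool {
	return sig == syscall.SIGTERM || sig == syscall.SIGKILL
}
```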
Signed-off-by: Ruilin Li <liruilin4@huawei.com>
Signed-off-by: John Howard <jhoward@microsoft.com>
This is the first step in refactoring moby (dockerd) to use containerd on Windows.
Similar to the current model on Linux, this adds the option to enable it for the runtime.
It does not switch the graphdriver to containerd snapshotters.
- Refactors libcontainerd into a series of subpackages so that either a
"local" containerd (1) or a "remote" (2) containerd can be loaded, as opposed
to being conditionally compiled as "local" for Windows and "remote" for Linux.
- Updates libcontainerd such that Windows has an option to allow the use of a
"remote" containerd. Here, it communicates over a named pipe using GRPC.
This is currently guarded behind the experimental flag, an environment variable,
and providing a pipe name to connect to containerd.
- Adds infrastructure pieces, such as helper functions under pkg/system, for
determining whether containerd is being used.
(1) "local" containerd is what the daemon on Windows has used since inception.
It's not really containerd at all - it's simply local invocation of HCS APIs
directly in-process from the daemon through the Microsoft/hcsshim library.
(2) "remote" containerd is what docker on Linux uses for its runtime. It means
that there is a separate containerd service running, and docker communicates over
GRPC to it.
To try this out, you will need to start with something like the following:
Window 1:
```
containerd --log-level debug
```
Window 2:
```
$env:DOCKER_WINDOWS_CONTAINERD=1
dockerd --experimental -D --containerd \\.\pipe\containerd-containerd
```
You will need the following binary from github.com/containerd/containerd in your path:
- containerd.exe
You will need the following binaries from github.com/Microsoft/hcsshim in your path:
- runhcs.exe
- containerd-shim-runhcs-v1.exe
For LCOW, it will require an initrd.img and kernel in `C:\Program Files\Linux Containers`.
This is no different from the current requirements. However, you may need updated binaries,
particularly an initrd.img built from Microsoft/opengcs, as (at the time of writing) the
LinuxKit binaries are somewhat out of date.
Note that containerd and hcsshim for HCS v2 APIs do not yet support all the required
functionality needed for docker. This will come in time; this is a baby (although large)
step toward migrating Docker on Windows to containerd.
Note that the HCS v2 APIs are only called on RS5+ builds. RS1..RS4 will still use
HCS v1 APIs as the v2 APIs were not fully developed enough on these builds to be usable.
This abstraction is done in HCSShim (referring specifically to the runtime).
Note the LCOW graphdriver still uses HCS v1 APIs regardless.
Note also that this does not migrate docker to use containerd snapshotters
rather than graphdrivers. This needs to be done in conjunction with Linux also
doing the same switch.
In particular, these two:
> daemon/daemon_unix.go:1129: Wrapf format %v reads arg #1, but call has 0 args
> daemon/kill.go:111: Warn call has possible formatting directive %s
and a few more.
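For illustration, the general shape of these fixes looks something like the following (the call sites below are made up, not the actual lines from daemon/daemon_unix.go or daemon/kill.go):

```go
// Illustrative sketch of the vet fixes: give every formatting directive a
// matching argument, and use the "...f" logging variants when formatting.
package daemon

import (
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

func logKillFailure(id string, err error) error {
	// Before: logrus.Warn("failed to kill container %s", id)
	//         -> vet: "Warn call has possible formatting directive %s"
	logrus.Warnf("failed to kill container %s", id)

	// Before: errors.Wrapf(err, "failed to kill container %v")
	//         -> vet: "Wrapf format %v reads arg #1, but call has 0 args"
	return errors.Wrapf(err, "failed to kill container %v", id)
}
```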
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
With the containerd 1.0 migration we now have strongly typed errors that
we can check for process not found.
We also had some bad error checks looking for `ESRCH` which would only
be returned from `unix.Kill` and never from containerd even though we
were checking containerd responses for it.
Fixes some race conditions around process handling, and error checks that
could lead to errors being propagated up to the user when they should not be.
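Sketched with containerd's errdefs helpers (the surrounding function is illustrative, not the actual daemon code):

```go
// Illustrative sketch: classify "process not found" via containerd's typed
// error helpers instead of matching unix.ESRCH, which containerd never returns.
package daemon

import "github.com/containerd/containerd/errdefs"

// killTask treats a "not found" reply as "already exited" rather than as a
// failure to signal the process.
func killTask(kill func() error) (alreadyExited bool, err error) {
	if err := kill(); err != nil {
		if errdefs.IsNotFound(err) {
			return true, nil
		}
		return false, err
	}
	return false, nil
}
```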
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
- Fix OOM event updating healthchecks and persisting container state
without locks
- Fix healthchecks being updated without locks on container stop
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Use strongly typed errors to set HTTP status codes.
Error interfaces are defined in the api/errors package and errors
returned from controllers are checked against these interfaces.
Errors can be wrapped in a pkg/errors.Causer, as long as somewhere in the
line of causes one of the interfaces is implemented. The special error
interfaces take precedence over Causer, meaning if both Causer and one
of the new error interfaces are implemented, the Causer is not
traversed.
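A minimal sketch of the pattern described above (the interface and helper names are illustrative, not the exact ones in api/errors):

```go
// Illustrative sketch: walk the chain of causes, but check the typed error
// interface at each level first, so an error that implements the interface
// takes precedence over further Cause() traversal.
package httpstatus

type notFound interface {
	NotFound() bool
}

type causer interface {
	Cause() error
}

func isNotFound(err error) bool {
	for err != nil {
		if _, ok := err.(notFound); ok {
			return true
		}
		c, ok := err.(causer)
		if !ok {
			return false
		}
		err = c.Cause()
	}
	return false
}
```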
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
When a container is paused, signals sent to it are only delivered once the
container has been unpaused.
Instead of forcing the user to unpause a container before they can ever
send a signal, allow the user to send the signal and, in the case of a
stop signal, automatically unpause the container afterwards.
This is much safer than unpausing the container first then sending a
signal (what a user is currently forced to do), as the container may be
paused for very good reasons and should not be unpaused except for
stopping.
Note that not even SIGKILL is possible while a process is paused,
but it is killed the instant it is unpaused.
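A sketch of the described ordering with stand-in callbacks (the names and the exact choice of "stop signal" are illustrative):

```go
// Illustrative sketch: deliver the signal while the container is still
// paused; only for a stop signal (the configured stop signal or SIGKILL)
// resume the container afterwards so the pending signal takes effect.
package daemon

import "syscall"

func killPaused(sig, stopSignal syscall.Signal,
	kill func(syscall.Signal) error, resume func() error) error {
	if err := kill(sig); err != nil {
		return err
	}
	if sig == stopSignal || sig == syscall.SIGKILL {
		// The paused process cannot act on the signal; it is delivered the
		// instant the container is resumed.
		return resume()
	}
	return nil
}
```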
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This fixes an issue where if a stop signal is set, and a user sends
SIGKILL, `container.ExitOnNext()` is not set, thus causing the container
to restart.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This patch adds the untilRemoved option to the ContainerWait API which
allows the client to wait until the container is not only exited but
also removed.
This patch also adds some more CLI integration tests for waiting for a
created container and waiting with the new --until-removed flag.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
Handle detach sequence in CLI
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
Update Container Wait Conditions
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
Apply container wait changes to API 1.30
The set of changes to the containerWait API missed the cut for the
Docker 17.05 release (API version 1.29). This patch bumps the version
checks to use 1.30 instead.
This patch also makes a minor update to a testfile which was added to
the builder/dockerfile package.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
Remove wait changes from CLI
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
Address minor nits on wait changes
- Changed the name of the tty Proxy wrapper to `escapeProxy`
- Removed the unnecessary Error() method on container.State
- Fixes a typo in comment (repeated word)
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
Use router.WithCancel in the containerWait handler
This handler previously added this functionality manually but now uses
the existing wrapper which does it for us.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
Add WaitCondition constants to api/types/container
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
Address more ContainerWait review comments
- Update ContainerWait backend interface to not return pointer values
for container.StateStatus type.
- Updated container state's Wait() method comments to clarify that a
context MUST be used for cancelling the request, setting timeouts,
and to avoid goroutine leaks.
- Removed unnecessary buffering when making channels in the client's
ContainerWait methods.
- Renamed result and error channels in client's ContainerWait methods
to clarify that only a single result or error value would be sent
on the channel.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
Move container.WaitCondition type to separate file
... to avoid conflict with swagger-generated code for API response
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
Address more ContainerWait review comments
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
This patch consolidates the two WaitStop and WaitWithContext methods
on the container.State type. Now there is a single method, Wait, which
takes a context and a bool specifying whether to wait for not just a
container exit but also removal.
The behavior has been changed slightly so that a wait call during a
Created state will not return immediately but instead wait for the
container to be started and then exited.
The interface has been changed to no longer block, but instead return
a channel on which the caller can receive a *StateStatus value which
indicates the ExitCode or an error if there was one (like a context
timeout or state transition error).
These changes have been propagated through the rest of the daemon to
preserve all other existing behavior.
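For illustration, a self-contained sketch of how a caller consumes such a channel-based Wait (the types and the wait helper below are simplified stand-ins, not the real container.State API):

```go
// Illustrative, simplified stand-in for the channel-based Wait: the call
// returns immediately, and exactly one StateStatus (exit code or error) is
// sent on the returned channel.
package main

import (
	"context"
	"fmt"
	"time"
)

type StateStatus struct {
	ExitCode int
	Err      error
}

func wait(ctx context.Context, exited <-chan int) <-chan StateStatus {
	resC := make(chan StateStatus, 1)
	go func() {
		select {
		case code := <-exited:
			resC <- StateStatus{ExitCode: code}
		case <-ctx.Done():
			resC <- StateStatus{ExitCode: -1, Err: ctx.Err()}
		}
	}()
	return resC
}

func main() {
	exited := make(chan int, 1)
	go func() { time.Sleep(100 * time.Millisecond); exited <- 0 }()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	status := <-wait(ctx, exited) // the caller blocks only where it chooses to
	fmt.Println("exit code:", status.ExitCode, "err:", status.Err)
}
```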
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
When a user tries to restart a restarting container, the docker client reports
the error "container is already active", and the container is stopped
instead of being restarted, which is seriously wrong.
What's more critical is that when the user tries to start this container
again, it will always fail.
This error can also be reproduced with a `docker stop` followed by a `docker start`.
This commit fixes the bug.
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
This fixes an issue that caused the client to hang forever if the
process died before execution reached the point of exiting the `Kill` function.
Signed-off-by: David Calavera <david.calavera@gmail.com>