Replicate relevant mutations to the in-memory ACID store. Readers will
then be able to query container state without locking.
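A minimal sketch of the idea, assuming hashicorp/go-memdb as the in-memory store (an assumption for illustration, not necessarily the exact store or schema used here): writers replicate mutations inside write transactions, and readers query MVCC snapshots without taking any container lock.
```
package main

import (
	"fmt"

	memdb "github.com/hashicorp/go-memdb"
)

// containerSnapshot is a read-only copy of mutable container state.
type containerSnapshot struct {
	ID    string
	State string
}

func main() {
	// One table of container snapshots, indexed by container ID.
	schema := &memdb.DBSchema{
		Tables: map[string]*memdb.TableSchema{
			"containers": {
				Name: "containers",
				Indexes: map[string]*memdb.IndexSchema{
					"id": {Name: "id", Unique: true, Indexer: &memdb.StringFieldIndex{Field: "ID"}},
				},
			},
		},
	}
	db, err := memdb.NewMemDB(schema)
	if err != nil {
		panic(err)
	}

	// Writer: replicate a mutation in a write transaction.
	txn := db.Txn(true)
	if err := txn.Insert("containers", &containerSnapshot{ID: "c1", State: "running"}); err != nil {
		txn.Abort()
		panic(err)
	}
	txn.Commit()

	// Reader: query a consistent snapshot, no locks held.
	read := db.Txn(false)
	raw, _ := read.First("containers", "id", "c1")
	fmt.Println(raw.(*containerSnapshot).State) // running
}
```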
Signed-off-by: Fabio Kung <fabio.kung@gmail.com>
Description:
When docker is in its startup process and containerd sends a "process exit"
event to docker, if the container is configured with '--restart=always',
the restartmanager will restart this container very soon. But some
initialization is not done yet, e.g. `daemon.netController`; when it is
accessed, docker panics.
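A minimal sketch of one way to guard against this; the names (startupDone, waitForStartupDone, maybeRestart) are assumptions for illustration, not necessarily the actual fix:
```
package daemon

// Sketch only: gate policy-driven restarts on daemon initialization so
// the restart path never touches fields (e.g. netController) that are
// still nil during startup.
type Daemon struct {
	startupDone chan struct{} // closed once initialization finishes
}

func (d *Daemon) waitForStartupDone() {
	<-d.startupDone
}

// Called from the containerd "process exit" event handler.
func (d *Daemon) maybeRestart(restartAlways bool, restart func()) {
	if restartAlways {
		d.waitForStartupDone() // block until init completes
		restart()
	}
}
```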
Signed-off-by: Wentao Zhang <zhangwentao234@huawei.com>
If a container bind-mounts the socket the daemon is listening on into
the container while the daemon is being shut down, the socket will
not exist on the host; the daemon will then assume it is a directory
and create it on the host, which prevents the daemon from starting
the next time.
Fixes https://github.com/moby/moby/issues/30348
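A minimal sketch of the problematic path (assumed shape, not the actual moby code):
```
package daemon

import "os"

// Sketch only: when a bind-mount source is missing, the daemon creates
// it as a directory. That is wrong for /var/run/docker.sock, which only
// disappeared because the daemon is shutting down; on the next start a
// directory is in the way of the unix socket.
func ensureMountSource(source string) error {
	if _, err := os.Stat(source); os.IsNotExist(err) {
		return os.MkdirAll(source, 0755)
	}
	return nil
}
```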
To reproduce this issue, you can apply the following patch:
```
--- a/daemon/oci_linux.go
+++ b/daemon/oci_linux.go
@@ -8,6 +8,7 @@ import (
 	"sort"
 	"strconv"
 	"strings"
+	"time"
 
 	"github.com/Sirupsen/logrus"
 	"github.com/docker/docker/container"
@@ -666,7 +667,8 @@ func (daemon *Daemon) createSpec(c *container.Container) (*libcontainerd.Spec, e
 	if err := daemon.setupIpcDirs(c); err != nil {
 		return nil, err
 	}
-
+	fmt.Printf("===please stop the daemon===\n")
+	time.Sleep(time.Second * 2)
 	ms, err := daemon.setupMounts(c)
 	if err != nil {
 		return nil, err
```
step1: run a container with `--restart always` and `-v /var/run/docker.sock:/sock`
```
$ docker run -ti --restart always -v /var/run/docker.sock:/sock busybox
/ #
```
step2: exit the container
```
/ # exit
```
and kill the daemon when you see
```
===please stop the daemon===
```
in the daemon log.
The daemon can't restart again and fails with `can't create unix socket /var/run/docker.sock: is a directory`.
Signed-off-by: Lei Jitang <leijitang@huawei.com>
Before this, if `forceRemove` was set, the container data would be removed
no matter what, even when there were issues removing the container's
on-disk state (rw layer, container root).
In practice this causes a lot of issues with leaked data sitting on
disk that users are not able to clean up themselves.
This is particularly a problem while the `EBUSY` errors on remove are so
prevalent, so for now let's not keep this behavior.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Container state counts are used for reporting in the `/info` endpoint.
Currently when `/info` is called, each container is iterated over and
the container's `StateString()` is called. This is not very efficient
with lots of containers, and is also racy since `StateString()` does not
use a mutex and the mutex is not otherwise locked.
We could just lock the container mutex, but this has proven to be
problematic since there are frequent deadlock scenarios, and we should
always have the `/info` endpoint available since this endpoint is used
to get general information about the docker host.
Really, these metrics on `/info` should be deprecated. But until then,
we can just keep a running tally in memory for each of the reported
states.
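A minimal sketch of such a tally (assumed names, not the actual moby code): states are recorded on each state change, so `/info` only walks a small map instead of every container.
```
package daemon

import "sync"

// stateCounter keeps a running tally of container states, updated on
// every state change instead of recomputed per /info call.
type stateCounter struct {
	mu     sync.Mutex
	states map[string]string // container ID -> last recorded state
}

func newStateCounter() *stateCounter {
	return &stateCounter{states: make(map[string]string)}
}

func (c *stateCounter) set(id, state string) {
	c.mu.Lock()
	c.states[id] = state
	c.mu.Unlock()
}

func (c *stateCounter) delete(id string) {
	c.mu.Lock()
	delete(c.states, id)
	c.mu.Unlock()
}

// get returns the tallies without touching any container lock.
func (c *stateCounter) get() (running, paused, stopped int) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for _, state := range c.states {
		switch state {
		case "running":
			running++
		case "paused":
			paused++
		default:
			stopped++
		}
	}
	return
}
```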
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
After running the test suite with the race detector enabled I found
these gems that need to be fixed.
This is just round one; sadly I lost my test results after I built the
binary to test this... (whoops)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
`StreamConfig` carries with it a dependency on libcontainerd. `StreamConfig`
is used by other projects, but libcontainerd doesn't compile on all
platforms, so move it to `github.com/docker/docker/container/stream`.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Before this, a container's auto-removal after exit was done in a goroutine;
this commit moves ContainerRm out of that goroutine.
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
During a docker exec, if no TTY is specified, the code would still leave stdin open instead of closing it. This change adds handling of the execConfig TTY bool that mirrors what is already done for the container config. (Updated this change to be Windows-only.)
Signed-off-by: Stefan J. Wernli <swernli@microsoft.com>
This PR adds support for user-defined health-check probes for Docker
containers. It adds a `HEALTHCHECK` instruction to the Dockerfile syntax plus
some corresponding "docker run" options. It can be used with a restart policy
to automatically restart a container if the check fails.
The `HEALTHCHECK` instruction has two forms:
* `HEALTHCHECK [OPTIONS] CMD command` (check container health by running a command inside the container)
* `HEALTHCHECK NONE` (disable any healthcheck inherited from the base image)
The `HEALTHCHECK` instruction tells Docker how to test a container to check that
it is still working. This can detect cases such as a web server that is stuck in
an infinite loop and unable to handle new connections, even though the server
process is still running.
When a container has a healthcheck specified, it has a _health status_ in
addition to its normal status. This status is initially `starting`. Whenever a
health check passes, it becomes `healthy` (whatever state it was previously in).
After a certain number of consecutive failures, it becomes `unhealthy`.
The options that can appear before `CMD` are:
* `--interval=DURATION` (default: `30s`)
* `--timeout=DURATION` (default: `30s`)
* `--retries=N` (default: `1`)
The health check will first run **interval** seconds after the container is
started, and then again **interval** seconds after each previous check completes.
If a single run of the check takes longer than **timeout** seconds then the check
is considered to have failed.
It takes **retries** consecutive failures of the health check for the container
to be considered `unhealthy`.
There can only be one `HEALTHCHECK` instruction in a Dockerfile. If you list
more than one then only the last `HEALTHCHECK` will take effect.
The command after the `CMD` keyword can be either a shell command (e.g. `HEALTHCHECK
CMD /bin/check-running`) or an _exec_ array (as with other Dockerfile commands;
see e.g. `ENTRYPOINT` for details).
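For example, the shell-form probe shown above could equivalently be written in exec form:
```
HEALTHCHECK CMD ["/bin/check-running"]
```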
The command's exit status indicates the health status of the container.
The possible values are:
- 0: success - the container is healthy and ready for use
- 1: unhealthy - the container is not working correctly
- 2: starting - the container is not ready for use yet, but is working correctly
If the probe returns 2 ("starting") when the container has already moved out of the
"starting" state then it is treated as "unhealthy" instead.
For example, to check every five minutes or so that a web-server is able to
serve the site's main page within three seconds:
```
HEALTHCHECK --interval=5m --timeout=3s \
  CMD curl -f http://localhost/ || exit 1
```
To help debug failing probes, any output text (UTF-8 encoded) that the command writes
on stdout or stderr will be stored in the health status and can be queried with
`docker inspect`. Such output should be kept short (only the first 4096 bytes
are stored currently).
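For example, the recorded health state can be queried like this (container name and output illustrative, output abridged):
```
$ docker inspect --format '{{json .State.Health}}' web
{"Status":"healthy","FailingStreak":0,"Log":[...]}
```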
When the health status of a container changes, a `health_status` event is
generated with the new status. The health status is also displayed in the
`docker ps` output.
Signed-off-by: Thomas Leonard <thomas.leonard@docker.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Remove the call to Shutdown from within Signal in order to rely on waitExit to handle the exit of the process.
Signed-off-by: Stefan J. Wernli <swernli@microsoft.com>
When a container is automatically restarted based on its restart policy,
docker events gets only a "die" event and no "start" event; this is
not consistent with previous behavior. This commit adds the "start"
event back.
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
So other packages don't need to import the daemon package when they
want to use this struct.
Signed-off-by: David Calavera <david.calavera@gmail.com>
Signed-off-by: Tibor Vass <tibor@docker.com>
This is a small configuration struct used in two scenarios:
1. To attach I/O pipes to running containers.
2. To attach to exec processes inside running containers.
Although they are similar, keeping the struct in the same package
as exec and container can generate cyclic dependencies if we
move any of them outside the daemon, like we want to do
with the container.
The purpose of this PR is to let users distinguish Docker errors from
contained command errors.
This PR modifies 'docker run' exit codes to follow the chroot standard
for exit codes.
Exit status:
125 if 'docker run' itself fails
126 if the contained command cannot be invoked
127 if the contained command cannot be found
the exit status of the contained command otherwise
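For example (error messages and output illustrative):
```
$ docker run busybox foo; echo $?
exec: "foo": executable file not found in $PATH
127
$ docker run busybox /etc; echo $?
exec: "/etc": permission denied
126
$ docker run --foo busybox; echo $?
flag provided but not defined: --foo
125
```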
Signed-off-by: Sally O'Malley <somalley@redhat.com>
Although having a request ID available throughout the codebase is very
valuable, the impact of requiring a Context as an argument to every
function in the codepath of an API request is too significant and was
not properly understood at the time of the review.
Furthermore, mixing API-layer code with non-API-layer code makes the
latter usable only by API-layer code (one that has a notion of Context).
This reverts commit de41640435, reversing
changes made to 7daeecd42d.
Signed-off-by: Tibor Vass <tibor@docker.com>
Conflicts:
api/server/container.go
builder/internals.go
daemon/container_unix.go
daemon/create.go
This PR adds a "request ID" to each event generated, the 'docker events'
stream now looks like this:
```
2015-09-10T15:02:50.000000000-07:00 [reqid: c01e3534ddca] de7c5d4ca927253cf4e978ee9c4545161e406e9b5a14617efb52c658b249174a: (from ubuntu) create
```
Note the `[reqid: c01e3534ddca]` part, that's new.
Each HTTP request will generate its own unique ID. So, if you do a
`docker build` you'll see a series of events all with the same reqID.
This allows log processing tools to determine which events are all related
to the same HTTP request.
I didn't propagate the context to all possible funcs in the daemon;
I decided to just do the ones that needed it in order to get the reqID
into the events. I'd like to have people review this direction first, and
if we're ok with it then I'll make sure we're consistent about when
we pass around the context; IOW, make sure that all funcs at the same level
have a context passed in even if they don't call the log funcs. This will
ensure we're consistent w/o passing it around for all calls unnecessarily.
ping @icecrime @calavera @crosbymichael
Signed-off-by: Doug Davis <dug@us.ibm.com>
Use @mavenugo's patch for enabling the libcontainer pre-start hook to
be used for network namespace initialization (correcting the conflict
with user namespaces); update the boolean check to the more generic
SupportsHooks() name, and fix the hook state function signature.
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
- some method names were changed to have a 'Locking' suffix, as the
downcased versions already existed, and the existing functions simply
had locks around the already downcased versions
- deleted unused functions
- added a package comment
- replaced magic numbers with golang constants
- added comments all over
Signed-off-by: Morgan Bauer <mbauer@us.ibm.com>
Since the failure count of a container increases by 1 every time it
exits with a non-zero status, the comparison in shouldRestart() would
stop the container from restarting on the last allowed attempt.
Signed-off-by: Hu Keping <hukeping@huawei.com>
Function shouldRestart() checks the restart policy and records the
debug info, and there should be two arguments to the log.Debugf() call.
Prior to this patch, the logs were something like this:
- client: $ docker run --restart=on-failure:3 ubuntu /bin/sh -c 'exit 1'
- daemon: INFO[0168] ...
DEBU[0168] stopping restart of container %!s(int=3) because maximum
failure could of %!d(MISSING) has been reached
INFO[0086] ...
Btw, fix a spelling error in the same file:
- cotnainer -> container
Signed-off-by: Hu Keping <hukeping@huawei.com>
Since containers can handle out-of-memory kernel kills gracefully, docker
will only provide out-of-memory information as additional metadata as part
of the container status.
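For example, the flag can be read back with `docker inspect` (container name illustrative):
```
$ docker inspect --format '{{.State.OOMKilled}}' mycontainer
true
```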
Docker-DCO-1.1-Signed-off-by: Vishnu Kannan <vishnuk@google.com> (github: vishh)
Reset the time increment if the container's execution time is greater
than 10s; otherwise, as a container runs and is restarted, the delay
will keep growing over time.
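A minimal sketch of the rule (names, threshold placement, and the doubling backoff are assumptions for illustration):
```
package main

import (
	"fmt"
	"time"
)

const (
	defaultDelay  = 100 * time.Millisecond
	resetDuration = 10 * time.Second // assumed "ran long enough" threshold
)

// nextDelay resets the backoff when the container ran longer than
// resetDuration before exiting; otherwise it keeps backing off.
func nextDelay(prev, executionTime time.Duration) time.Duration {
	if executionTime > resetDuration {
		return defaultDelay // healthy run: start the increment over
	}
	return prev * 2 // quick failure: keep growing the delay
}

func main() {
	fmt.Println(nextDelay(800*time.Millisecond, 2*time.Second))  // 1.6s
	fmt.Println(nextDelay(800*time.Millisecond, 30*time.Second)) // 100ms
}
```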
Signed-off-by: Michael Crosby <michael@docker.com>
We need to do this so that when a user asks docker to stop a container
that is currently in the restart loop, we don't have to wait for the
duration of the restart time increment before acknowledging the stop.
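A minimal sketch of an interruptible restart delay (assumed shape, not the actual restartmanager code): a stop request cancels the wait instead of letting the full increment elapse.
```
package main

import (
	"fmt"
	"time"
)

// waitRestart blocks until the restart delay elapses or a stop request
// arrives, whichever comes first.
func waitRestart(delay time.Duration, stop <-chan struct{}) bool {
	select {
	case <-time.After(delay):
		return true // delay elapsed, go ahead and restart
	case <-stop:
		return false // user asked to stop; acknowledge immediately
	}
}

func main() {
	stop := make(chan struct{})
	go func() { close(stop) }() // simulate `docker stop` arriving
	fmt.Println(waitRestart(30*time.Second, stop)) // false, returns fast
}
```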
Signed-off-by: Michael Crosby <michael@docker.com>