Since commit e9b9e4ace2 landed, there is a chance that
container.RWLayer is nil (due to a half-removed container). Let's
check the pointer before use to avoid a potential nil pointer
dereference, which would crash the daemon.
Note that even without the above-mentioned commit, it's better to perform
an extra check (even if it's totally redundant) than to risk a daemon
crash. In other words, better safe than sorry.
[v2: add a test case for daemon.getInspectData]
[v3: add a check for container.Dead and a special error for the case]
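A minimal sketch of the guard described above (the type, field, and error
wording are illustrative, not the exact daemon code):
```
package daemon // sketch only

import "fmt"

// container is a stand-in for the real daemon container type.
type container struct {
	ID      string
	Dead    bool
	RWLayer interface{} // nil when the container was only half removed
}

// checkRWLayer shows the shape of the guard: return a clear error instead
// of dereferencing a nil RWLayer.
func checkRWLayer(c *container) error {
	if c.RWLayer != nil {
		return nil
	}
	if c.Dead {
		// the special-cased error for containers already marked for removal
		return fmt.Errorf("container %s is marked for removal and cannot be inspected", c.ID)
	}
	return fmt.Errorf("RWLayer of container %s is unexpectedly nil", c.ID)
}
```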
Fixes: e9b9e4ace2
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
The goal of this refactor is to make it easier to integrate buildkit
and containerd snapshotters.
Commit is called from two places (api and build), each with distinct
arguments. Refactored to pull out the common commit logic and provide a
separate interface for each consumer.
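Roughly, the shape of the split looks like the sketch below (names and
signatures are approximations, not the exact moby code):
```
package daemon // sketch only; stand-in types

type Daemon struct{}
type Container struct{}
type commitConfig struct{} // options shared by both callers

// commit holds the common logic shared by both entry points.
func (d *Daemon) commit(c *Container, cfg commitConfig) (imageID string, err error) {
	// ... shared commit implementation ...
	return "", nil
}

// API-facing entry point: resolves the container by name and translates
// API options into a commitConfig before calling the shared logic.
func (d *Daemon) CreateImageFromContainer(name string, cfg commitConfig) (string, error) {
	c := &Container{} // would be looked up via the container store
	return d.commit(c, cfg)
}

// Builder-facing entry point: the builder already holds the container and
// a prepared config, so it calls the shared logic directly.
func (d *Daemon) CommitBuildStep(c *Container, cfg commitConfig) (string, error) {
	return d.commit(c, cfg)
}
```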
Signed-off-by: Daniel Nephin <dnephin@docker.com>
The current implementation of the docker daemon doesn't pass the
`--oom-kill-disable` option specified by the end user down to containerd
when spawning a new container with the help of the `runc` component. As a
result, `--oom-kill-disable` has no effect regardless of whether the flag
is `true` or `false`.
This PR fixes the issue reported in #36090.
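The gist of the fix is to copy the flag into the OCI runtime spec handed
to containerd/runc. A hedged sketch using the runtime-spec types (the
surrounding helper is illustrative):
```
package daemon // illustrative; only the final assignment mirrors the fix

import specs "github.com/opencontainers/runtime-spec/specs-go"

// applyOOMKillDisable copies the user's --oom-kill-disable setting into
// the OCI spec so that runc actually sees it.
func applyOOMKillDisable(s *specs.Spec, oomKillDisable *bool) {
	if oomKillDisable == nil {
		return
	}
	if s.Linux == nil {
		s.Linux = &specs.Linux{}
	}
	if s.Linux.Resources == nil {
		s.Linux.Resources = &specs.LinuxResources{}
	}
	if s.Linux.Resources.Memory == nil {
		s.Linux.Resources.Memory = &specs.LinuxMemory{}
	}
	s.Linux.Resources.Memory.DisableOOMKiller = oomKillDisable
}
```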
Signed-off-by: Dennis Chen <dennis.chen@arm.com>
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
Attachable networks are networks created on the cluster which can then
be attached to by non-swarm containers. These networks are lazily
created on the node that wants to attach to that network.
When no container is currently attached to one of these networks on a
node, and then multiple containers which want that network are started
concurrently, this can cause a race condition in the network attachment
where essentially we try to attach the same network to the node twice.
To easily reproduce this issue you must use a multi-node cluster with a
worker node that has lots of CPUs (I used a 36 CPU node).
Repro steps:
1. On manager, `docker network create -d overlay --attachable test`
2. On worker, `docker create --restart=always --network test busybox
top`, many times... 200 is a good number (but not much more due to
subnet size restrictions)
3. Restart the daemon
When the daemon restarts, it will attempt to start all those containers
simultaneously. Note that you could try to do this yourself over the API,
but it's harder to trigger due to the added latency from going over
the API.
The error is produced when the daemon tries to start the container and
allocate its network resources:
```
attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded
```
What happens here is the worker makes a network attachment request to
the manager. This is an async call which, in the happy case, causes a
task to be placed on the node; the worker then waits for that task to
get the network configuration.
In the case of this race, the error occurs on the manager like this:
```
task allocation failure" error="failed during network allocation for task n7bwwwbymj2o2h9asqkza8gom: failed to allocate network IP for task n7bwwwbymj2o2h9asqkza8gom network rj4szie2zfauqnpgh4eri1yue: could not find an available IP" module=node node.id=u3489c490fx1df8onlyfo1v6e
```
The task is not created and the worker times out waiting for the task.
---
The mitigation for this is to make sure that only one attachment request
is in flight for a given network at a time *when the network doesn't
already exist on the node*. If the network already exists on the node,
there is no need for synchronization, because the network is already
allocated there and does not need to be requested from the manager.
This basically comes down to a race in `Find(network) ||
Create(network)` without any sort of synchronization.
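A minimal sketch of that synchronization (names are illustrative; the
real change lives in the daemon's network attachment path):
```
package daemon // sketch only

import "sync"

type attachLocks struct {
	mu    sync.Mutex
	locks map[string]*sync.Mutex // one lock per attachable network ID
}

func (a *attachLocks) lockFor(networkID string) *sync.Mutex {
	a.mu.Lock()
	defer a.mu.Unlock()
	if a.locks == nil {
		a.locks = make(map[string]*sync.Mutex)
	}
	l, ok := a.locks[networkID]
	if !ok {
		l = &sync.Mutex{}
		a.locks[networkID] = l
	}
	return l
}

// getOrCreateAttachableNetwork serializes the "find || create" so only one
// attachment request per network is in flight when the network is not yet
// present on the node.
func getOrCreateAttachableNetwork(a *attachLocks, networkID string,
	find func(string) (bool, error), requestFromManager func(string) error) error {

	if ok, err := find(networkID); err != nil || ok {
		return err // already on the node (or lookup failed): no sync needed
	}
	l := a.lockFor(networkID)
	l.Lock()
	defer l.Unlock()
	// Re-check under the lock: another goroutine may have attached it first.
	if ok, err := find(networkID); err != nil || ok {
		return err
	}
	return requestFromManager(networkID)
}
```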
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This fix tries to address the issue raised in 36139 where
ExitCode and PID do not show up in Task.Status.ContainerStatus.
The issue was caused by `json:",omitempty"` on PID and ExitCode,
which treats 0 as empty and drops it. This is confusing, as
ExitCode 0 does have a meaning.
This fix removes `json:",omitempty"` from ExitCode and PID,
and changes ContainerStatus to a pointer so that ContainerStatus
does not show up at all when there is no content. If ContainerStatus
does have content, then ExitCode and PID will show up (even if
they are 0).
This fix fixes 36139.
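In terms of the API types, the change looks roughly like this (a
simplified sketch of the swarm task status types, not the full
definitions):
```
package swarm // simplified sketch

// ContainerStatus is now referenced through a pointer from TaskStatus, so
// it is omitted entirely when there is nothing to report.
type ContainerStatus struct {
	ContainerID string
	PID         int // no omitempty: PID 0 is still emitted
	ExitCode    int // no omitempty: exit code 0 is meaningful
}

type TaskStatus struct {
	// ...
	ContainerStatus *ContainerStatus `json:",omitempty"` // absent when nil
}
```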
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
ReleaseRWLayer can and should only be called once (unless it returns
an error), but might be called twice in case of a failure from
`system.EnsureRemoveAll(container.Root)`. This results in the
following error:
> Error response from daemon: driver "XXX" failed to remove root filesystem for YYY: layer not retained
The obvious fix is to set container.RWLayer to nil as soon as
ReleaseRWLayer() succeeds.
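A sketch of the shape of the fix (stand-in types and helpers, not the
real removal path):
```
package daemon // sketch only

type rwLayer interface{ Name() string }

type ctr struct {
	Root    string
	RWLayer rwLayer
}

// cleanupStorage: once the RWLayer is released, drop the reference so a
// retried removal cannot release it twice.
func cleanupStorage(c *ctr, release func(rwLayer) error, removeAll func(string) error) error {
	if c.RWLayer != nil {
		if err := release(c.RWLayer); err != nil {
			return err
		}
		c.RWLayer = nil // released: forget it before anything below can fail
	}
	// If this fails and the caller retries, the nil check above makes the
	// retry skip the already-released layer instead of failing with
	// "layer not retained".
	return removeAll(c.Root)
}
```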
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
This fix tries to address the issue raised in 36042
where secret and config are not configured with the
specified file mode.
This fix updates the file mode so that it is not impacted
by the umask.
Additional tests have been added.
This fix fixes 36042.
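The essence of the fix, sketched with standard-library calls (not the
exact executor code): the mode passed when writing the file is filtered
by the process umask, so an explicit chmod is needed afterwards.
```
package secrets // sketch only

import (
	"io/ioutil"
	"os"
)

// writeWithExactMode writes data and then chmods the file explicitly,
// because the mode given to WriteFile is masked by the process umask.
func writeWithExactMode(path string, data []byte, mode os.FileMode) error {
	if err := ioutil.WriteFile(path, data, mode); err != nil {
		return err
	}
	return os.Chmod(path, mode)
}
```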
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
You don't need to resolve the symlink for the exec as long as the
process keeps running during execution.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
This change sets an explicit mount propagation for the daemon root.
This is useful for people who need to bind mount the docker daemon root
into a container.
Since bind mounting the daemon root should only ever happen with at
least `rslave` propagation (to prevent the container from holding
references to mounts, which would make it impossible for the daemon to
clean up its resources), we should make sure the user is actually able
to do this.
Most modern systems have shared root (`/`) propagation by default
already, however there are some cases where this may not be so
(e.g. potentially docker-in-docker scenarios, but also other cases).
So this just gives the daemon a little more control here and provides
a more uniform experience across different systems.
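Roughly, the mechanics on Linux look like the sketch below (the real
change inspects the existing propagation first and is configurable; this
is only an illustration):
```
package daemon // sketch only (Linux)

import "golang.org/x/sys/unix"

// ensureSharedRoot bind-mounts the daemon root onto itself (so it is a
// mount point in its own right) and then marks it shared, so that bind
// mounting it into a container can use rshared/rslave propagation.
// The real code first checks /proc/self/mountinfo to avoid redundant mounts.
func ensureSharedRoot(root string) error {
	if err := unix.Mount(root, root, "", unix.MS_BIND|unix.MS_REC, ""); err != nil {
		return err
	}
	return unix.Mount("", root, "none", unix.MS_SHARED|unix.MS_REC, "")
}
```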
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This fix tries to address the issue raised in 36083 where
`network inspect` does not show Created time if the network is
created in swarm scope.
The issue was that Created was not converted from the swarm API.
This fix addresses the issue.
A unit test has been added.
This fix fixes 36083.
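A sketch of the missing conversion, assuming the gogo/protobuf timestamp
that swarmkit uses (field names in the real converter may differ):
```
package convert // sketch only

import (
	"time"

	gogotypes "github.com/gogo/protobuf/types"
)

// createdFromSwarm converts the protobuf timestamp stored by swarmkit into
// the time.Time expected by the engine API network type's Created field.
func createdFromSwarm(ts *gogotypes.Timestamp) time.Time {
	created, err := gogotypes.TimestampFromProto(ts)
	if err != nil {
		return time.Time{}
	}
	return created
}
```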
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
This fix tries to address the issue raised in 33661 where
a network alias does not work when connecting to a network a second time.
This fix addresses the issue.
This fix fixes 33661.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
This fix tries to address the issue raised in 35752
where container start triggers a crash if EndpointSettings is nil.
This fix adds validation to make sure EndpointSettings != nil.
This fix fixes 35752.
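A sketch of the shape of that validation (simplified stand-in types, not
the real moby definitions):
```
package daemon // sketch only

import "errors"

type endpointSettings struct{ /* per-network settings */ }

// validateEndpoints: make sure no entry in the endpoints map is nil before
// the start path dereferences it.
func validateEndpoints(endpoints map[string]*endpointSettings) error {
	for name, ep := range endpoints {
		if ep == nil {
			return errors.New("no endpoint settings found for network " + name)
		}
	}
	return nil
}
```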
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
When successfully reloading the daemon configuration, print a message
in the logs with the active configuration:
INFO[2018-01-15T15:36:20.901688317Z] Got signal to reload configuration, reloading from: /etc/docker/daemon.json
INFO[2018-01-14T02:23:48.782769942Z] Reloaded configuration: {"mtu":1500,"pidfile":"/var/run/docker.pid","data-root":"/var/lib/docker","exec-root":"/var/run/docker","group":"docker","deprecated-key-path":"/etc/docker/key.json","max-concurrent-downloads":3,"max-concurrent-uploads":5,"shutdown-timeout":15,"debug":true,"hosts":["unix:///var/run/docker.sock"],"log-level":"info","swarm-default-advertise-addr":"","metrics-addr":"","log-driver":"json-file","ip":"0.0.0.0","icc":true,"iptables":true,"ip-forward":true,"ip-masq":true,"userland-proxy":true,"disable-legacy-registry":true,"experimental":false,"network-control-plane-mtu":1500,"runtimes":{"runc":{"path":"docker-runc"}},"default-runtime":"runc","oom-score-adjust":-500,"default-shm-size":67108864,"default-ipc-mode":"shareable"}
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
This fix carries PR 34248: Added tag log option to json-logger
This fix changes to use RawAttrs based on review feedback.
This fix fixes 19803, this fix closes 34248.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Fixes #19803
Updated the json-logger to utilize the common log option
'tag' that can define container/image information to include
as part of logging.
When the 'tag' log option is not included, there is no change
to the log content via the json-logger. When the 'tag' log option
is included, the tag will be parsed as a template and the result
will be stored within each log entry as the attribute 'tag'.
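As an illustration of how the tag surfaces in the output, here is a small
sketch using a stand-in for the driver's metadata (the template fields
approximate those documented for log tags):
```
package main // illustration only, not the json-file driver code

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// logInfo is a stand-in for the container metadata the driver receives.
type logInfo struct {
	Name      string
	ID        string
	ImageName string
}

func main() {
	// The value of --log-opt tag=... is parsed as a template against the
	// container's metadata...
	tmpl := template.Must(template.New("tag").Parse("{{.ImageName}}/{{.Name}}/{{.ID}}"))

	var tag bytes.Buffer
	_ = tmpl.Execute(&tag, logInfo{Name: "web", ID: "4d27f2a1b0c3", ImageName: "nginx"})

	// ...and the rendered result is stored on each entry as the "tag"
	// attribute, alongside the log line itself.
	entry := map[string]interface{}{
		"log":    "hello\n",
		"stream": "stdout",
		"attrs":  map[string]string{"tag": tag.String()},
	}
	out, _ := json.Marshal(entry)
	fmt.Println(string(out))
}
```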
Update: Removing test added to integration_cli as those have been deprecated.
Update: Using proper test calls (require and assert) in jsonfilelog_test.go based on review.
Update: Added new unit test configs for logs with tag. Updated unit test error checking.
Update: Cleanup check in jsonlogbytes_test.go to match pending changes in PR #34946.
Update: Merging to correct conflicts from PR #34946.
Signed-off-by: bonczj <josh.bonczkowski@gmail.com>