// Package daemon exposes the functions that occur on the host server
// that the Docker daemon is running.
//
// In implementing the various functions of the daemon, there is often
// a method-specific struct for configuring the runtime behavior.
package daemon // import "github.com/docker/docker/daemon"

import (
	"context"
	"fmt"
	"net"
	"net/url"
	"os"
	"path"
	"path/filepath"
	"runtime"
	"sync"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/defaults"
	"github.com/containerd/containerd/pkg/dialer"
	"github.com/containerd/containerd/pkg/userns"
	"github.com/containerd/containerd/remotes/docker"
	"github.com/docker/docker/api/types"
	containertypes "github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/builder"
	"github.com/docker/docker/container"
	"github.com/docker/docker/daemon/config"
	ctrd "github.com/docker/docker/daemon/containerd"
	"github.com/docker/docker/daemon/events"
	_ "github.com/docker/docker/daemon/graphdriver/register" // register graph drivers
	"github.com/docker/docker/daemon/images"
	"github.com/docker/docker/daemon/logger"
	"github.com/docker/docker/daemon/network"
	"github.com/docker/docker/daemon/stats"
	dmetadata "github.com/docker/docker/distribution/metadata"
	"github.com/docker/docker/dockerversion"
	"github.com/docker/docker/errdefs"
	"github.com/docker/docker/image"
	"github.com/docker/docker/layer"
	libcontainerdtypes "github.com/docker/docker/libcontainerd/types"
	"github.com/docker/docker/libnetwork"
	"github.com/docker/docker/libnetwork/cluster"
	nwconfig "github.com/docker/docker/libnetwork/config"
	"github.com/docker/docker/pkg/fileutils"
	"github.com/docker/docker/pkg/idtools"
	"github.com/docker/docker/pkg/plugingetter"
	"github.com/docker/docker/pkg/sysinfo"
	"github.com/docker/docker/pkg/system"
	"github.com/docker/docker/plugin"
	pluginexec "github.com/docker/docker/plugin/executor/containerd"
	refstore "github.com/docker/docker/reference"
	"github.com/docker/docker/registry"
	"github.com/docker/docker/runconfig"
	volumesservice "github.com/docker/docker/volume/service"
	"github.com/moby/buildkit/util/resolver"
	resolverconfig "github.com/moby/buildkit/util/resolver/config"
	"github.com/moby/locker"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
	"go.etcd.io/bbolt"
	"golang.org/x/sync/semaphore"
	"golang.org/x/sync/singleflight"
	"google.golang.org/grpc"
	"google.golang.org/grpc/backoff"
	"google.golang.org/grpc/credentials/insecure"
)

// Daemon holds information about the Docker daemon.
type Daemon struct {
	id                    string
	repository            string
	containers            container.Store
	containersReplica     *container.ViewDB
	execCommands          *container.ExecStore
	imageService          ImageService
	configStore           *config.Config
	statsCollector        *stats.Collector
	defaultLogConfig      containertypes.LogConfig
	registryService       registry.Service
	EventsService         *events.Events
	netController         libnetwork.NetworkController
	volumes               *volumesservice.VolumesService
	root                  string
	sysInfoOnce           sync.Once
	sysInfo               *sysinfo.SysInfo
	shutdown              bool
	idMapping             idtools.IdentityMapping
	PluginStore           *plugin.Store // TODO: remove
	pluginManager         *plugin.Manager
	linkIndex             *linkIndex
	containerdCli         *containerd.Client
	containerd            libcontainerdtypes.Client
	defaultIsolation      containertypes.Isolation // Default isolation mode on Windows
	clusterProvider       cluster.Provider
	cluster               Cluster
	genericResources      []swarm.GenericResource
	metricsPluginListener net.Listener
	ReferenceStore        refstore.Store

	machineMemory uint64

	seccompProfile     []byte
	seccompProfilePath string

	usage singleflight.Group

	pruneRunning int32
	hosts        map[string]bool // hosts stores the addresses the daemon is listening on
	startupDone  chan struct{}

	attachmentStore       network.AttachmentStore
	attachableNetworkLock *locker.Locker

	// This is used for Windows which doesn't currently support running on containerd
	// It stores metadata for the content store (used for manifest caching)
	// This needs to be closed on daemon exit
	mdDB *bbolt.DB
}

// StoreHosts stores the addresses the daemon is listening on
func (daemon *Daemon) StoreHosts(hosts []string) {
	if daemon.hosts == nil {
		daemon.hosts = make(map[string]bool)
	}
	for _, h := range hosts {
		daemon.hosts[h] = true
	}
}

// HasExperimental returns whether the experimental features of the daemon are enabled or not
func (daemon *Daemon) HasExperimental() bool {
	return daemon.configStore != nil && daemon.configStore.Experimental
}

// Features returns the features map from configStore
func (daemon *Daemon) Features() *map[string]bool {
	return &daemon.configStore.Features
}

// UsesSnapshotter returns true if the feature flag to use the containerd
// snapshotter is enabled.
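//
// The flag is normally enabled through the "features" section of the daemon
// configuration (daemon.json), for example:
//
//	{"features": {"containerd-snapshotter": true}}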
func (daemon *Daemon) UsesSnapshotter() bool {
	if daemon.configStore.Features != nil {
		if b, ok := daemon.configStore.Features["containerd-snapshotter"]; ok {
			return b
		}
	}
	return false
}

// RegistryHosts returns registry configuration in containerd resolvers format
func (daemon *Daemon) RegistryHosts() docker.RegistryHosts {
	var (
		registryKey = "docker.io"
		mirrors     = make([]string, len(daemon.configStore.Mirrors))
		m           = map[string]resolverconfig.RegistryConfig{}
	)
	// must trim "https://" or "http://" prefix
	for i, v := range daemon.configStore.Mirrors {
		if uri, err := url.Parse(v); err == nil {
			v = uri.Host
		}
		mirrors[i] = v
	}
	// set mirrors for default registry
	m[registryKey] = resolverconfig.RegistryConfig{Mirrors: mirrors}

	for _, v := range daemon.configStore.InsecureRegistries {
		u, err := url.Parse(v)
		c := resolverconfig.RegistryConfig{}
		if err == nil {
			v = u.Host
			t := true
			if u.Scheme == "http" {
				c.PlainHTTP = &t
			} else {
				c.Insecure = &t
			}
		}
		m[v] = c
	}

	for k, v := range m {
		v.TLSConfigDir = []string{registry.HostCertsDir(k)}
		m[k] = v
	}

	certsDir := registry.CertsDir()
	if fis, err := os.ReadDir(certsDir); err == nil {
		for _, fi := range fis {
			if _, ok := m[fi.Name()]; !ok {
				m[fi.Name()] = resolverconfig.RegistryConfig{
					TLSConfigDir: []string{filepath.Join(certsDir, fi.Name())},
				}
			}
		}
	}

	return resolver.NewRegistryConfig(m)
}

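// restore loads the containers persisted on disk under the daemon's container
// repository, registers them with the daemon, reconciles their runtime state
// with containerd, and then restarts or removes containers according to their
// restart and auto-remove policies.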
func (daemon *Daemon) restore() error {
	var mapLock sync.Mutex
	containers := make(map[string]*container.Container)

	logrus.Info("Loading containers: start.")

	dir, err := os.ReadDir(daemon.repository)
	if err != nil {
		return err
	}

	// parallelLimit is the maximum number of parallel startup jobs that we
	// allow (this is the limit used for all startup semaphores). The multiplier
	// (128) was chosen after some fairly significant benchmarking -- don't change
	// it unless you've tested it significantly (this value is adjusted if
	// RLIMIT_NOFILE is small to avoid EMFILE).
	parallelLimit := adjustParallelLimit(len(dir), 128*runtime.NumCPU())

	// Re-used for all parallel startup jobs.
	var group sync.WaitGroup
	sem := semaphore.NewWeighted(int64(parallelLimit))

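	// Load every container's on-disk state in parallel, bounded by sem.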
	for _, v := range dir {
		group.Add(1)
		go func(id string) {
			defer group.Done()
			_ = sem.Acquire(context.Background(), 1)
			defer sem.Release(1)

			log := logrus.WithField("container", id)

			c, err := daemon.load(id)
			if err != nil {
				log.WithError(err).Error("failed to load container")
				return
			}
			if c.Driver != daemon.imageService.StorageDriver() {
				// Ignore the container if it wasn't created with the current storage-driver
				log.Debugf("not restoring container because it was created with another storage driver (%s)", c.Driver)
				return
			}
			rwlayer, err := daemon.imageService.GetLayerByID(c.ID)
			if err != nil {
				log.WithError(err).Error("failed to load container mount")
				return
			}
			c.RWLayer = rwlayer
			log.WithFields(logrus.Fields{
				"running": c.IsRunning(),
				"paused":  c.IsPaused(),
			}).Debug("loaded container")

			mapLock.Lock()
			containers[c.ID] = c
			mapLock.Unlock()
		}(v.Name())
	}
	group.Wait()

	removeContainers := make(map[string]*container.Container)
	restartContainers := make(map[*container.Container]chan struct{})
	activeSandboxes := make(map[string]interface{})

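	// Register the loaded containers (names and internal stores) in parallel;
	// containers that fail to register are dropped from the map.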
	for _, c := range containers {
		group.Add(1)
		go func(c *container.Container) {
			defer group.Done()
			_ = sem.Acquire(context.Background(), 1)
			defer sem.Release(1)

			log := logrus.WithField("container", c.ID)

			if err := daemon.registerName(c); err != nil {
				log.WithError(err).Errorf("failed to register container name: %s", c.Name)
				mapLock.Lock()
				delete(containers, c.ID)
				mapLock.Unlock()
				return
			}
			if err := daemon.Register(c); err != nil {
				log.WithError(err).Error("failed to register container")
				mapLock.Lock()
				delete(containers, c.ID)
				mapLock.Unlock()
				return
			}
		}(c)
	}
	group.Wait()

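	// Reconcile each container's recorded state with its containerd task and
	// decide whether it needs to be restarted or removed.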
	for _, c := range containers {
		group.Add(1)
		go func(c *container.Container) {
			defer group.Done()
			_ = sem.Acquire(context.Background(), 1)
			defer sem.Release(1)

			log := logrus.WithField("container", c.ID)

			if err := daemon.checkpointAndSave(c); err != nil {
				log.WithError(err).Error("error saving backported mountspec to disk")
			}

			daemon.setStateCounter(c)

			logger := func(c *container.Container) *logrus.Entry {
				return log.WithFields(logrus.Fields{
					"running":    c.IsRunning(),
					"paused":     c.IsPaused(),
					"restarting": c.IsRestarting(),
				})
			}

			logger(c).Debug("restoring container")

			var es *containerd.ExitStatus

			if err := c.RestoreTask(context.Background(), daemon.containerd); err != nil && !errdefs.IsNotFound(err) {
				logger(c).WithError(err).Error("failed to restore container with containerd")
				return
			}

			alive := false
			status := containerd.Unknown
			if tsk, ok := c.Task(); ok {
				s, err := tsk.Status(context.Background())
				if err != nil {
					logger(c).WithError(err).Error("failed to get task status")
				} else {
					status = s.Status
					alive = status != containerd.Stopped
					if !alive {
						logger(c).Debug("cleaning up dead container process")
						es, err = tsk.Delete(context.Background())
						if err != nil && !errdefs.IsNotFound(err) {
							logger(c).WithError(err).Error("failed to delete task from containerd")
							return
						}
					} else if !daemon.configStore.LiveRestoreEnabled {
						logger(c).Debug("shutting down container considered alive by containerd")
						if err := daemon.shutdownContainer(c); err != nil && !errdefs.IsNotFound(err) {
							log.WithError(err).Error("error shutting down container")
							return
						}
						status = containerd.Stopped
						alive = false
						c.ResetRestartManager(false)
					}
				}
			}
			// If the containerd task for the container was not found, docker's view of the
			// container state will be updated accordingly via SetStopped further down.

			if c.IsRunning() || c.IsPaused() {
				logger(c).Debug("syncing container on disk state with real state")

				c.RestartManager().Cancel() // manually start containers because some need to wait for swarm networking

				switch {
				case c.IsPaused() && alive:
					logger(c).WithField("state", status).Info("restored container paused")
					switch status {
					case containerd.Paused, containerd.Pausing:
						// nothing to do
					case containerd.Unknown, containerd.Stopped, "":
						log.WithField("status", status).Error("unexpected status for paused container during restore")
					default:
						// running
						c.Lock()
						c.Paused = false
						daemon.setStateCounter(c)
						daemon.updateHealthMonitor(c)
						if err := c.CheckpointTo(daemon.containersReplica); err != nil {
							log.WithError(err).Error("failed to update paused container state")
						}
						c.Unlock()
					}
				case !c.IsPaused() && alive:
					logger(c).Debug("restoring healthcheck")
					c.Lock()
					daemon.updateHealthMonitor(c)
					c.Unlock()
				}

				if !alive {
					logger(c).Debug("setting stopped state")
					c.Lock()
					var ces container.ExitStatus
					if es != nil {
						ces.ExitCode = int(es.ExitCode())
						ces.ExitedAt = es.ExitTime()
					}
					c.SetStopped(&ces)
					daemon.Cleanup(c)
					if err := c.CheckpointTo(daemon.containersReplica); err != nil {
						log.WithError(err).Error("failed to update stopped container state")
					}
					c.Unlock()
					logger(c).Debug("set stopped state")
				}

				// we call Mount and then Unmount to get BaseFs of the container
				if err := daemon.Mount(c); err != nil {
					// The mount is unlikely to fail. However, in case mount fails
					// the container should be allowed to restore here. Some functionalities
					// (like docker exec -u user) might be missing, but the container can
					// still be stopped/restarted/removed.
					// See #29365 for related information.
					// The error is only logged here.
					logger(c).WithError(err).Warn("failed to mount container to get BaseFs path")
				} else {
					if err := daemon.Unmount(c); err != nil {
						logger(c).WithError(err).Warn("failed to umount container to get BaseFs path")
					}
				}

				c.ResetRestartManager(false)
				if !c.HostConfig.NetworkMode.IsContainer() && c.IsRunning() {
					options, err := daemon.buildSandboxOptions(c)
					if err != nil {
						logger(c).WithError(err).Warn("failed to build sandbox option to restore container")
					}
					mapLock.Lock()
					activeSandboxes[c.NetworkSettings.SandboxID] = options
					mapLock.Unlock()
				}
			}

			// Get the list of containers we need to restart.
			//
			// Do not autostart containers which have endpoints in a swarm-scope
			// network yet, since the cluster is not initialized at this point.
			// They will be started after the cluster is initialized.
			if daemon.configStore.AutoRestart && c.ShouldRestart() && !c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore {
				mapLock.Lock()
				restartContainers[c] = make(chan struct{})
				mapLock.Unlock()
			} else if c.HostConfig != nil && c.HostConfig.AutoRemove {
				mapLock.Lock()
				removeContainers[c.ID] = c
				mapLock.Unlock()
			}

			c.Lock()
			if c.RemovalInProgress {
				// We probably crashed in the middle of a removal, reset
				// the flag.
				//
				// We DO NOT remove the container here as we do not
				// know if the user had requested for either the
				// associated volumes, network links or both to also
				// be removed. So we put the container in the "dead"
				// state and leave further processing up to them.
				c.RemovalInProgress = false
				c.Dead = true
				if err := c.CheckpointTo(daemon.containersReplica); err != nil {
					log.WithError(err).Error("failed to update RemovalInProgress container state")
				} else {
					log.Debugf("reset RemovalInProgress state for container")
				}
			}
			c.Unlock()
			logger(c).Debug("done restoring container")
		}(c)
	}
	group.Wait()

	// Initialize the network controller and configure network settings.
	//
	// Note that we cannot initialize the network controller earlier, as it
	// needs to know whether there are active sandboxes (running containers).
	if err = daemon.initNetworkController(activeSandboxes); err != nil {
		return fmt.Errorf("Error initializing network controller: %v", err)
	}

	// Now that all the containers are registered, register the links
	for _, c := range containers {
		group.Add(1)
		go func(c *container.Container) {
			_ = sem.Acquire(context.Background(), 1)

			if err := daemon.registerLinks(c, c.HostConfig); err != nil {
				logrus.WithField("container", c.ID).WithError(err).Error("failed to register link for container")
			}

			sem.Release(1)
			group.Done()
		}(c)
	}
	group.Wait()

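	// Start containers whose restart policy requires it, waiting (briefly) for
	// their linked children to come up first.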
	for c, notifier := range restartContainers {
		group.Add(1)
		go func(c *container.Container, chNotify chan struct{}) {
			_ = sem.Acquire(context.Background(), 1)

			log := logrus.WithField("container", c.ID)

			log.Debug("starting container")

			// ignore errors here as this is a best effort to wait for children to be
			// running before we try to start the container
			children := daemon.children(c)
			timeout := time.NewTimer(5 * time.Second)
			defer timeout.Stop()

			for _, child := range children {
				if notifier, exists := restartContainers[child]; exists {
					select {
					case <-notifier:
					case <-timeout.C:
					}
				}
			}

			if err := daemon.prepareMountPoints(c); err != nil {
				log.WithError(err).Error("failed to prepare mount points for container")
			}
			if err := daemon.containerStart(c, "", "", true); err != nil {
				log.WithError(err).Error("failed to start container")
			}
			close(chNotify)

			sem.Release(1)
			group.Done()
		}(c, notifier)
	}
	group.Wait()

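	// Remove containers that were created with auto-remove (--rm) and are
	// therefore scheduled for automatic removal.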
	for id := range removeContainers {
		group.Add(1)
		go func(cid string) {
			_ = sem.Acquire(context.Background(), 1)

			if err := daemon.ContainerRm(cid, &types.ContainerRmConfig{ForceRemove: true, RemoveVolume: true}); err != nil {
				logrus.WithField("container", cid).WithError(err).Error("failed to remove container")
			}

			sem.Release(1)
			group.Done()
		}(id)
	}
	group.Wait()

	// any containers that were started above would already have had this done,
	// however we need to now prepare the mountpoints for the rest of the containers as well.
	// This shouldn't cause any issue running on the containers that already had this run.
	// This must be run after any containers with a restart policy so that containerized plugins
	// can have a chance to be running before we try to initialize them.
	for _, c := range containers {
		// If the container has a restart policy, do not prepare the mountpoints
		// since that has already been done while restarting. This speeds up
		// daemon startup when a restarting container has a volume whose volume
		// driver is not available.
		if _, ok := restartContainers[c]; ok {
			continue
		} else if _, ok := removeContainers[c.ID]; ok {
			// container is automatically removed, skip it.
			continue
		}

		group.Add(1)
		go func(c *container.Container) {
			_ = sem.Acquire(context.Background(), 1)

			if err := daemon.prepareMountPoints(c); err != nil {
				logrus.WithField("container", c.ID).WithError(err).Error("failed to prepare mountpoints for container")
			}

			sem.Release(1)
			group.Done()
		}(c)
	}
	group.Wait()

	logrus.Info("Loading containers: done.")

	return nil
}

// RestartSwarmContainers restarts any autostart container which has a
// swarm endpoint.
func (daemon *Daemon) RestartSwarmContainers() {
	ctx := context.Background()

	// parallelLimit is the maximum number of parallel startup jobs that we
	// allow (this is the limit used for all startup semaphores). The multiplier
	// (128) was chosen after some fairly significant benchmarking -- don't change
	// it unless you've tested it significantly (this value is adjusted if
	// RLIMIT_NOFILE is small to avoid EMFILE).
	parallelLimit := adjustParallelLimit(len(daemon.List()), 128*runtime.NumCPU())

	var group sync.WaitGroup
	sem := semaphore.NewWeighted(int64(parallelLimit))

	for _, c := range daemon.List() {
		if !c.IsRunning() && !c.IsPaused() {
			// Autostart all the containers which have a
			// swarm endpoint now that the cluster is
			// initialized.
			if daemon.configStore.AutoRestart && c.ShouldRestart() && c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore {
				group.Add(1)
				go func(c *container.Container) {
					if err := sem.Acquire(ctx, 1); err != nil {
						// ctx is done.
						group.Done()
						return
					}

					if err := daemon.containerStart(c, "", "", true); err != nil {
						logrus.WithField("container", c.ID).WithError(err).Error("failed to start swarm container")
					}

					sem.Release(1)
					group.Done()
				}(c)
			}
		}
	}
	group.Wait()
}

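// children returns the containers that are linked to c as children.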
func (daemon *Daemon) children(c *container.Container) map[string]*container.Container {
	return daemon.linkIndex.children(c)
}

// parents returns the parent containers of the given container.
func (daemon *Daemon) parents(c *container.Container) map[string]*container.Container {
	return daemon.linkIndex.parents(c)
}

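// registerLink reserves the link's full name (the parent's name joined with
// the alias) for the child container and records the relationship in the
// daemon's link index. An already-reserved name is ignored.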
func (daemon *Daemon) registerLink(parent, child *container.Container, alias string) error {
	fullName := path.Join(parent.Name, alias)
	if err := daemon.containersReplica.ReserveName(fullName, child.ID); err != nil {
		if err == container.ErrNameReserved {
			logrus.Warnf("error registering link for %s, to %s, as alias %s, ignoring: %v", parent.ID, child.ID, alias, err)
			return nil
		}
		return err
	}
	daemon.linkIndex.link(parent, child, fullName)
	return nil
}

// DaemonJoinsCluster informs the daemon that it has joined the cluster and
// provides the handler to query the cluster component.
func (daemon *Daemon) DaemonJoinsCluster(clusterProvider cluster.Provider) {
	daemon.setClusterProvider(clusterProvider)
}

// DaemonLeavesCluster informs the daemon that it has left the cluster.
func (daemon *Daemon) DaemonLeavesCluster() {
	// Daemon is in charge of removing the attachable networks with
	// connected containers when the node leaves the swarm
	daemon.clearAttachableNetworks()
	// We no longer need the cluster provider, stop it now so that
	// the network agent will stop listening to cluster events.
	daemon.setClusterProvider(nil)
	// Wait for the networking cluster agent to stop
	daemon.netController.AgentStopWait()
	// Daemon is in charge of removing the ingress network when the
	// node leaves the swarm. Wait for job to be done or timeout.
	// This is called also on graceful daemon shutdown. We need to
	// wait, because the ingress release has to happen before the
	// network controller is stopped.

	if done, err := daemon.ReleaseIngress(); err == nil {
		timeout := time.NewTimer(5 * time.Second)
		defer timeout.Stop()

		select {
		case <-done:
		case <-timeout.C:
			logrus.Warn("timeout while waiting for ingress network removal")
		}
	} else {
		logrus.Warnf("failed to initiate ingress network removal: %v", err)
	}

	daemon.attachmentStore.ClearAttachments()
}

// setClusterProvider sets a component for querying the current cluster state.
func (daemon *Daemon) setClusterProvider(clusterProvider cluster.Provider) {
	daemon.clusterProvider = clusterProvider
	daemon.netController.SetClusterProvider(clusterProvider)
	daemon.attachableNetworkLock = locker.New()
}

// IsSwarmCompatible verifies if the current daemon
// configuration is compatible with the swarm mode
func (daemon *Daemon) IsSwarmCompatible() error {
	if daemon.configStore == nil {
		return nil
	}
	return daemon.configStore.IsSwarmCompatible()
}

// NewDaemon sets up everything for the daemon to be able to service
// requests from the webserver.
func NewDaemon(ctx context.Context, config *config.Config, pluginStore *plugin.Store) (daemon *Daemon, err error) {
	// Verify the platform is supported as a daemon
	if !platformSupported {
		return nil, errors.New("the Docker daemon is not supported on this platform")
	}

	registryService, err := registry.NewService(config.ServiceOptions)
	if err != nil {
		return nil, err
	}

	// Ensure that we have a correct root key limit for launching containers.
	if err := modifyRootKeyLimit(); err != nil {
		logrus.Warnf("unable to modify root key limit, number of containers could be limited by this quota: %v", err)
	}

	// Ensure we have compatible and valid configuration options
	if err := verifyDaemonSettings(config); err != nil {
		return nil, err
	}

	// Do we have a disabled network?
	config.DisableBridge = isBridgeNetworkDisabled(config)

	// Setup the resolv.conf
	setupResolvConf(config)

	// Validate platform-specific requirements
	if err := checkSystem(); err != nil {
		return nil, err
	}

	idMapping, err := setupRemappedRoot(config)
	if err != nil {
		return nil, err
	}
	rootIDs := idMapping.RootPair()
	if err := setupDaemonProcess(config); err != nil {
		return nil, err
	}

	// set up the tmpDir to use a canonical path
	tmp, err := prepareTempDir(config.Root)
	if err != nil {
		return nil, fmt.Errorf("Unable to get the TempDir under %s: %s", config.Root, err)
	}
	realTmp, err := fileutils.ReadSymlinkedDirectory(tmp)
	if err != nil {
		return nil, fmt.Errorf("Unable to get the full path to the TempDir (%s): %s", tmp, err)
	}
	if isWindows {
		if _, err := os.Stat(realTmp); err != nil && os.IsNotExist(err) {
			if err := system.MkdirAll(realTmp, 0700); err != nil {
				return nil, fmt.Errorf("Unable to create the TempDir (%s): %s", realTmp, err)
			}
		}
		os.Setenv("TEMP", realTmp)
		os.Setenv("TMP", realTmp)
	} else {
		os.Setenv("TMPDIR", realTmp)
	}

	d := &Daemon{
		configStore: config,
		PluginStore: pluginStore,
		startupDone: make(chan struct{}),
	}

	// Ensure the daemon is properly shutdown if there is a failure during
	// initialization
	defer func() {
		if err != nil {
			if err := d.Shutdown(); err != nil {
				logrus.Error(err)
			}
		}
	}()

	if err := d.setGenericResources(config); err != nil {
		return nil, err
	}
	// set up SIGUSR1 handler on Unix-like systems, or a Win32 global event
	// on Windows to dump Go routine stacks
	stackDumpDir := config.Root
	if execRoot := config.GetExecRoot(); execRoot != "" {
		stackDumpDir = execRoot
	}
	d.setupDumpStackTrap(stackDumpDir)

	if err := d.setupSeccompProfile(); err != nil {
		return nil, err
	}

	// Set the default isolation mode (only applicable on Windows)
	if err := d.setDefaultIsolation(); err != nil {
		return nil, fmt.Errorf("error setting default isolation mode: %v", err)
	}

	if err := configureMaxThreads(config); err != nil {
		logrus.Warnf("Failed to configure golang's threads limit: %v", err)
	}

	// ensureDefaultAppArmorProfile does nothing if apparmor is disabled
	if err := ensureDefaultAppArmorProfile(); err != nil {
		logrus.Errorf(err.Error())
	}

	daemonRepo := filepath.Join(config.Root, "containers")
	if err := idtools.MkdirAllAndChown(daemonRepo, 0710, idtools.Identity{
		UID: idtools.CurrentIdentity().UID,
		GID: rootIDs.GID,
	}); err != nil {
		return nil, err
	}

	// Create the directory where we'll store the runtime scripts (i.e. in
	// order to support runtimeArgs)
	daemonRuntimes := filepath.Join(config.Root, "runtimes")
	if err := system.MkdirAll(daemonRuntimes, 0700); err != nil {
		return nil, err
	}
	if err := d.loadRuntimes(); err != nil {
		return nil, err
	}

	if isWindows {
		if err := system.MkdirAll(filepath.Join(config.Root, "credentialspecs"), 0); err != nil {
			return nil, err
		}
	}

	d.registryService = registryService
	logger.RegisterPluginGetter(d.PluginStore)

	metricsSockPath, err := d.listenMetricsSock()
	if err != nil {
		return nil, err
	}
	registerMetricsPluginCallback(d.PluginStore, metricsSockPath)

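	// Build the gRPC dial options shared by the containerd clients below.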
	backoffConfig := backoff.DefaultConfig
	backoffConfig.MaxDelay = 3 * time.Second
	connParams := grpc.ConnectParams{
		Backoff: backoffConfig,
	}
	gopts := []grpc.DialOption{
		// WithBlock makes sure that the following containerd request
		// is reliable.
		//
		// NOTE: In one edge case with high load pressure, the kernel kills
		// dockerd, containerd and containerd-shims because of OOM. When both
		// dockerd and containerd restart, containerd takes time to recover
		// all the existing containers. Until containerd is serving, dockerd
		// fails with a gRPC error, yet the restore action still ignores any
		// non-NotFound errors and reports a running state for containers
		// that have already stopped. That is unexpected behavior, and dockerd
		// has to be restarted to recover from it.
		//
		// Adding WithBlock prevents this edge case. In the common case,
		// containerd will be serving shortly, so blocking on the containerd
		// connection does no harm.
		grpc.WithBlock(),

		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithConnectParams(connParams),
		grpc.WithContextDialer(dialer.ContextDialer),

		// TODO(stevvooe): We may need to allow configuration of this on the client.
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(defaults.DefaultMaxRecvMsgSize)),
		grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(defaults.DefaultMaxSendMsgSize)),
	}

	if config.ContainerdAddr != "" {
		d.containerdCli, err = containerd.New(config.ContainerdAddr, containerd.WithDefaultNamespace(config.ContainerdNamespace), containerd.WithDialOpts(gopts), containerd.WithTimeout(60*time.Second))
		if err != nil {
			return nil, errors.Wrapf(err, "failed to dial %q", config.ContainerdAddr)
		}
	}

createPluginExec := func(m *plugin.Manager) (plugin.Executor, error) {
|
2018-05-23 19:15:21 +00:00
|
|
|
var pluginCli *containerd.Client
|
|
|
|
|
|
|
|
if config.ContainerdAddr != "" {
|
2019-07-11 23:42:16 +00:00
|
|
|
pluginCli, err = containerd.New(config.ContainerdAddr, containerd.WithDefaultNamespace(config.ContainerdPluginNamespace), containerd.WithDialOpts(gopts), containerd.WithTimeout(60*time.Second))
|
2018-05-23 19:15:21 +00:00
|
|
|
if err != nil {
|
|
|
|
return nil, errors.Wrapf(err, "failed to dial %q", config.ContainerdAddr)
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-07-07 20:33:46 +00:00
|
|
|
var rt types.Runtime
|
2021-01-04 19:43:19 +00:00
|
|
|
if runtime.GOOS != "windows" {
|
|
|
|
rtPtr, err := d.getRuntime(config.GetDefaultRuntimeName())
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
rt = *rtPtr
|
2020-07-07 20:33:46 +00:00
|
|
|
}
|
|
|
|
return pluginexec.New(ctx, getPluginExecRoot(config.Root), pluginCli, config.ContainerdPluginNamespace, m, rt)
|
2017-07-14 20:45:32 +00:00
|
|
|
}
|
|
|
|
|
2016-11-18 21:54:11 +00:00
|
|
|
// Plugin system initialization should happen before restore. Do not change order.
|
2016-12-12 23:05:53 +00:00
|
|
|
d.pluginManager, err = plugin.NewManager(plugin.ManagerConfig{
|
|
|
|
Root: filepath.Join(config.Root, "plugins"),
|
2017-02-02 19:22:12 +00:00
|
|
|
ExecRoot: getPluginExecRoot(config.Root),
|
2016-12-12 23:05:53 +00:00
|
|
|
Store: d.PluginStore,
|
2017-07-14 20:45:32 +00:00
|
|
|
CreateExecutor: createPluginExec,
|
2016-12-12 23:05:53 +00:00
|
|
|
RegistryService: registryService,
|
|
|
|
LiveRestoreEnabled: config.LiveRestoreEnabled,
|
|
|
|
LogPluginEvent: d.LogPluginEvent, // todo: make private
|
2017-03-17 21:57:23 +00:00
|
|
|
AuthzMiddleware: config.AuthzMiddleware,
|
2016-12-12 23:05:53 +00:00
|
|
|
})
|
|
|
|
if err != nil {
|
|
|
|
return nil, errors.Wrap(err, "couldn't create plugin manager")
|
2016-11-18 21:54:11 +00:00
|
|
|
}
|
2016-09-08 00:01:10 +00:00
|
|
|
|
2018-02-08 22:16:20 +00:00
|
|
|
if err := d.setupDefaultLogConfig(); err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
2018-03-22 21:11:03 +00:00
|
|
|
d.volumes, err = volumesservice.NewVolumeService(config.Root, d.PluginStore, rootIDs, d)
|
2015-06-12 13:25:32 +00:00
|
|
|
if err != nil {
|
2013-04-06 01:00:10 +00:00
|
|
|
return nil, err
|
|
|
|
}
|
2014-08-28 14:18:08 +00:00
|
|
|
|
2022-05-03 11:52:17 +00:00
|
|
|
// Try to preserve the daemon ID (which is the trust-key's ID) when upgrading
|
|
|
|
// an existing installation; this is a "best-effort".
|
|
|
|
idPath := filepath.Join(config.Root, "engine-id")
|
|
|
|
err = migrateTrustKeyID(config.TrustKeyPath, idPath)
|
2015-01-07 22:59:12 +00:00
|
|
|
if err != nil {
|
2022-05-03 11:52:17 +00:00
|
|
|
logrus.WithError(err).Warnf("unable to migrate engine ID; a new engine ID will be generated")
|
2014-10-02 01:26:06 +00:00
|
|
|
}
|
|
|
|
|
2015-06-19 22:29:47 +00:00
|
|
|
// Check that the devices cgroup is mounted; on Linux it is a hard
// requirement for container security.
|
2022-06-03 15:35:23 +00:00
|
|
|
//
|
|
|
|
// Important: we call getSysInfo() directly here, without storing the results,
|
|
|
|
// as networking has not yet been set up, so we only have partial system info
|
|
|
|
// at this point.
|
|
|
|
//
|
|
|
|
// TODO(thaJeztah) add a utility to only collect the CgroupDevicesEnabled information
|
|
|
|
if runtime.GOOS == "linux" && !userns.RunningInUserNS() && !getSysInfo(d).CgroupDevicesEnabled {
|
2016-12-25 06:37:31 +00:00
|
|
|
return nil, errors.New("Devices cgroup isn't mounted")
|
2015-06-17 02:36:20 +00:00
|
|
|
}
|
|
|
|
|
2022-05-03 11:52:17 +00:00
|
|
|
d.id, err = loadOrCreateID(idPath)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
2015-04-27 21:11:29 +00:00
|
|
|
d.repository = daemonRepo
|
2016-01-15 23:55:46 +00:00
|
|
|
d.containers = container.NewMemoryStore()
|
2017-02-23 23:12:18 +00:00
|
|
|
if d.containersReplica, err = container.NewViewDB(); err != nil {
|
2017-02-22 22:02:20 +00:00
|
|
|
return nil, err
|
|
|
|
}
|
2022-05-10 19:59:00 +00:00
|
|
|
d.execCommands = container.NewExecStore()
|
2015-11-03 19:06:16 +00:00
|
|
|
d.statsCollector = d.newStatsCollector(1 * time.Second)
|
2018-02-08 22:16:20 +00:00
|
|
|
|
2018-02-07 20:52:47 +00:00
|
|
|
d.EventsService = events.New()
|
2015-05-19 20:05:25 +00:00
|
|
|
d.root = config.Root
|
2017-11-16 06:20:33 +00:00
|
|
|
d.idMapping = idMapping
|
2015-04-27 21:11:29 +00:00
|
|
|
|
2015-09-04 00:51:04 +00:00
|
|
|
d.linkIndex = newLinkIndex()
|
|
|
|
|
2022-08-03 09:20:54 +00:00
|
|
|
// On Windows we support neither the environment variable nor a user-supplied
// graphdriver. Unix platforms, however, run a single graphdriver for all
// containers; it can be set through an environment variable, a daemon start
// parameter, or selected during layer-store initialization via the driver
// priority order.
|
|
|
|
driverName := os.Getenv("DOCKER_DRIVER")
|
|
|
|
if isWindows {
|
|
|
|
driverName = "windowsfilter"
|
|
|
|
} else if driverName != "" {
|
|
|
|
logrus.Infof("Setting the storage driver from the $DOCKER_DRIVER environment variable (%s)", driverName)
|
|
|
|
} else {
|
|
|
|
driverName = config.GraphDriver
|
|
|
|
}
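// Illustrative example (not part of the original source): on a Linux host the
// storage driver can be forced from the environment, e.g.
//
//	DOCKER_DRIVER=overlay2 dockerd
//
// which, per the selection logic above, takes precedence over the
// storage-driver value in the daemon configuration.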
|
|
|
|
|
2022-07-05 14:33:39 +00:00
|
|
|
if d.UsesSnapshotter() {
|
2022-08-25 16:41:02 +00:00
|
|
|
// FIXME(thaJeztah): implement automatic snapshotter-selection similar to graph-driver selection; see https://github.com/moby/moby/issues/44076
|
|
|
|
if driverName == "" {
|
|
|
|
driverName = containerd.DefaultSnapshotter
|
|
|
|
}
|
|
|
|
|
2022-08-03 09:20:54 +00:00
|
|
|
// Configure and validate the kernel's security support. Note this is a
// Linux/FreeBSD operation only, so it is safe to pass *just* the runtime
// OS graphdriver.
|
|
|
|
if err := configureKernelSecuritySupport(config, driverName); err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
d.imageService = ctrd.NewService(d.containerdCli, driverName)
|
2022-07-05 14:33:39 +00:00
|
|
|
} else {
|
2022-08-03 09:20:54 +00:00
|
|
|
layerStore, err := layer.NewStoreFromOptions(layer.StoreOptions{
|
|
|
|
Root: config.Root,
|
|
|
|
MetadataStorePathTemplate: filepath.Join(config.Root, "image", "%s", "layerdb"),
|
|
|
|
GraphDriver: driverName,
|
|
|
|
GraphDriverOptions: config.GraphOptions,
|
|
|
|
IDMapping: idMapping,
|
|
|
|
PluginGetter: d.PluginStore,
|
|
|
|
ExperimentalEnabled: config.Experimental,
|
|
|
|
})
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
// Configure and validate the kernel's security support. Note this is a
// Linux/FreeBSD operation only, so it is safe to pass *just* the runtime
// OS graphdriver.
|
|
|
|
if err := configureKernelSecuritySupport(config, layerStore.DriverName()); err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
|
|
|
imageRoot := filepath.Join(config.Root, "image", layerStore.DriverName())
|
2022-07-05 14:33:39 +00:00
|
|
|
ifs, err := image.NewFSStoreBackend(filepath.Join(imageRoot, "imagedb"))
|
2022-05-03 21:10:14 +00:00
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
2022-07-05 14:33:39 +00:00
|
|
|
|
2022-08-03 09:20:54 +00:00
|
|
|
// We have a single tag/reference store for the daemon globally, but it is
// stored under the graphdriver's directory. On host platforms that support
// only a single container OS but multiple selectable graphdrivers, the
// location of the global reference store therefore depends on which
// graphdriver is chosen. Platforms that support multiple container operating
// systems are slightly more problematic, as it is less obvious where the
// global reference store should live. Fortunately, for Windows, which is
// currently the only daemon supporting multiple container operating systems,
// the list of available graphdrivers isn't user-configurable, so for
// backwards compatibility we just put it under the windowsfilter directory
// regardless.
|
|
|
|
refStoreLocation := filepath.Join(imageRoot, `repositories.json`)
|
|
|
|
rs, err := refstore.NewReferenceStore(refStoreLocation)
|
|
|
|
if err != nil {
|
|
|
|
return nil, fmt.Errorf("Couldn't create reference store repository: %s", err)
|
|
|
|
}
|
|
|
|
d.ReferenceStore = rs
|
|
|
|
|
2022-07-05 14:33:39 +00:00
|
|
|
imageStore, err := image.NewImageStore(ifs, layerStore)
|
|
|
|
if err != nil {
|
2022-05-03 21:10:14 +00:00
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
2022-07-05 14:33:39 +00:00
|
|
|
distributionMetadataStore, err := dmetadata.NewFSMetadataStore(filepath.Join(imageRoot, "distribution"))
|
2020-10-30 19:47:06 +00:00
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
|
2022-07-05 14:33:39 +00:00
|
|
|
imgSvcConfig := images.ImageServiceConfig{
|
|
|
|
ContainerStore: d.containers,
|
|
|
|
DistributionMetadataStore: distributionMetadataStore,
|
|
|
|
EventsService: d.EventsService,
|
|
|
|
ImageStore: imageStore,
|
|
|
|
LayerStore: layerStore,
|
|
|
|
MaxConcurrentDownloads: config.MaxConcurrentDownloads,
|
|
|
|
MaxConcurrentUploads: config.MaxConcurrentUploads,
|
|
|
|
MaxDownloadAttempts: config.MaxDownloadAttempts,
|
|
|
|
ReferenceStore: rs,
|
|
|
|
RegistryService: registryService,
|
|
|
|
ContentNamespace: config.ContainerdNamespace,
|
|
|
|
}
|
|
|
|
|
|
|
|
// This is a temporary environment variable used in CI to allow pushing
// manifest v2 schema 1 images to test registries, which are used for
// testing *pulling* these images.
|
|
|
|
if os.Getenv("DOCKER_ALLOW_SCHEMA1_PUSH_DONOTUSE") != "" {
|
|
|
|
imgSvcConfig.TrustKey, err = loadOrCreateTrustKey(config.TrustKeyPath)
|
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
if err = system.MkdirAll(filepath.Join(config.Root, "trust"), 0700); err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// containerd is not currently supported on Windows, so d.containerdCli may
// be nil. In that case we create a local content store; otherwise we use
// containerd's.
|
|
|
|
if d.containerdCli != nil {
|
|
|
|
imgSvcConfig.Leases = d.containerdCli.LeasesService()
|
|
|
|
imgSvcConfig.ContentStore = d.containerdCli.ContentStore()
|
|
|
|
} else {
|
2022-09-06 09:31:45 +00:00
|
|
|
cs, lm, err := d.configureLocalContentStore(config.ContainerdNamespace)
|
2022-07-05 14:33:39 +00:00
|
|
|
if err != nil {
|
|
|
|
return nil, err
|
|
|
|
}
|
|
|
|
imgSvcConfig.ContentStore = cs
|
|
|
|
imgSvcConfig.Leases = lm
|
|
|
|
}
|
|
|
|
|
|
|
|
// TODO: imageStore, distributionMetadataStore, and ReferenceStore are only
|
|
|
|
// used above to run migration. They could be initialized in ImageService
|
|
|
|
// if migration is called from daemon/images. layerStore might move as well.
|
|
|
|
d.imageService = images.NewImageService(imgSvcConfig)
|
|
|
|
|
|
|
|
logrus.Debugf("Max Concurrent Downloads: %d", imgSvcConfig.MaxConcurrentDownloads)
|
|
|
|
logrus.Debugf("Max Concurrent Uploads: %d", imgSvcConfig.MaxConcurrentUploads)
|
|
|
|
logrus.Debugf("Max Download Attempts: %d", imgSvcConfig.MaxDownloadAttempts)
|
|
|
|
}
|
2018-02-02 22:18:46 +00:00
|
|
|
|
2016-03-18 18:50:19 +00:00
|
|
|
go d.execCommandGC()
|
|
|
|
|
2021-02-26 23:23:55 +00:00
|
|
|
if err := d.initLibcontainerd(ctx); err != nil {
|
2015-03-06 20:44:31 +00:00
|
|
|
return nil, err
|
|
|
|
}
|
2015-04-27 21:11:29 +00:00
|
|
|
|
2016-09-06 21:30:55 +00:00
|
|
|
if err := d.restore(); err != nil {
|
2016-07-18 15:02:12 +00:00
|
|
|
return nil, err
|
|
|
|
}
|
2017-06-08 10:55:20 +00:00
|
|
|
close(d.startupDone)
|
2016-07-18 15:02:12 +00:00
|
|
|
|
2019-08-28 23:44:39 +00:00
|
|
|
info := d.SystemInfo()
|
2022-06-03 15:35:23 +00:00
|
|
|
for _, w := range info.Warnings {
|
|
|
|
logrus.Warn(w)
|
|
|
|
}
|
2016-07-20 23:11:28 +00:00
|
|
|
|
2017-04-24 11:32:01 +00:00
|
|
|
engineInfo.WithValues(
|
2016-07-20 23:11:28 +00:00
|
|
|
dockerversion.Version,
|
|
|
|
dockerversion.GitCommit,
|
|
|
|
info.Architecture,
|
|
|
|
info.Driver,
|
|
|
|
info.KernelVersion,
|
|
|
|
info.OperatingSystem,
|
2017-04-24 11:32:01 +00:00
|
|
|
info.OSType,
|
2019-05-30 16:51:41 +00:00
|
|
|
info.OSVersion,
|
2017-04-24 11:32:01 +00:00
|
|
|
info.ID,
|
2016-07-20 23:11:28 +00:00
|
|
|
).Set(1)
|
|
|
|
engineCpus.Set(float64(info.NCPU))
|
|
|
|
engineMemory.Set(float64(info.MemTotal))
|
|
|
|
|
2017-05-16 23:56:56 +00:00
|
|
|
logrus.WithFields(logrus.Fields{
|
2021-06-10 08:06:04 +00:00
|
|
|
"version": dockerversion.Version,
|
|
|
|
"commit": dockerversion.GitCommit,
|
2022-08-09 09:11:52 +00:00
|
|
|
"graphdriver": d.ImageService().StorageDriver(),
|
2017-05-16 23:56:56 +00:00
|
|
|
}).Info("Docker daemon")
|
|
|
|
|
2015-05-06 22:39:29 +00:00
|
|
|
return d, nil
|
|
|
|
}
|
|
|
|
|
2018-05-23 22:53:14 +00:00
|
|
|
// DistributionServices returns services controlling daemon storage
|
2018-04-17 18:56:28 +00:00
|
|
|
func (daemon *Daemon) DistributionServices() images.DistributionServices {
|
|
|
|
return daemon.imageService.DistributionServices()
|
|
|
|
}
|
|
|
|
|
2017-06-08 10:55:20 +00:00
|
|
|
func (daemon *Daemon) waitForStartupDone() {
|
|
|
|
<-daemon.startupDone
|
|
|
|
}
|
|
|
|
|
2015-11-12 19:55:17 +00:00
|
|
|
func (daemon *Daemon) shutdownContainer(c *container.Container) error {
|
2016-06-07 03:29:05 +00:00
|
|
|
// If the container fails to exit within its stop timeout after SIGTERM, it is forcibly killed.
|
2021-08-20 22:23:26 +00:00
|
|
|
if err := daemon.containerStop(context.TODO(), c, containertypes.StopOptions{}); err != nil {
|
2016-07-03 12:47:39 +00:00
|
|
|
return fmt.Errorf("Failed to stop container %s with error: %v", c.ID, err)
|
2015-10-29 22:11:35 +00:00
|
|
|
}
|
|
|
|
|
2017-03-30 20:52:40 +00:00
|
|
|
// Wait without timeout for the container to exit.
|
|
|
|
// Ignore the result.
|
2017-09-06 08:54:24 +00:00
|
|
|
<-c.Wait(context.Background(), container.WaitConditionNotRunning)
|
2015-10-29 22:11:35 +00:00
|
|
|
return nil
|
|
|
|
}
|
|
|
|
|
2017-09-27 06:19:19 +00:00
|
|
|
// ShutdownTimeout returns the timeout (in seconds) before containers are forcibly
|
|
|
|
// killed during shutdown. The default timeout can be configured both on the daemon
|
|
|
|
// and per container, and the longest timeout will be used. A grace-period of
|
|
|
|
// 5 seconds is added to the configured timeout.
|
|
|
|
//
|
|
|
|
// A negative (-1) timeout means "indefinitely", which means that containers
|
|
|
|
// are not forcibly killed, and the daemon shuts down after all containers exit.
|
2016-06-07 03:29:05 +00:00
|
|
|
func (daemon *Daemon) ShutdownTimeout() int {
|
2016-05-26 21:07:30 +00:00
|
|
|
shutdownTimeout := daemon.configStore.ShutdownTimeout
|
2017-09-27 06:19:19 +00:00
|
|
|
if shutdownTimeout < 0 {
|
|
|
|
return -1
|
|
|
|
}
|
|
|
|
if daemon.containers == nil {
|
|
|
|
return shutdownTimeout
|
|
|
|
}
|
2016-05-26 21:07:30 +00:00
|
|
|
|
2016-06-07 03:29:05 +00:00
|
|
|
graceTimeout := 5
|
2017-09-27 06:19:19 +00:00
|
|
|
for _, c := range daemon.containers.List() {
|
|
|
|
stopTimeout := c.StopTimeout()
|
|
|
|
if stopTimeout < 0 {
|
|
|
|
return -1
|
|
|
|
}
|
|
|
|
if stopTimeout+graceTimeout > shutdownTimeout {
|
|
|
|
shutdownTimeout = stopTimeout + graceTimeout
|
2016-06-07 03:29:05 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
return shutdownTimeout
|
|
|
|
}
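// Worked example (hypothetical values, not part of the original source): with
// a daemon-level ShutdownTimeout of 15 seconds and one running container whose
// StopTimeout() returns 20, the effective shutdown timeout is 20+5 = 25
// seconds; if any container's StopTimeout() is negative, the daemon waits
// indefinitely (-1).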
|
|
|
|
|
2015-07-30 21:01:53 +00:00
|
|
|
// Shutdown stops the daemon.
|
2015-09-29 17:51:40 +00:00
|
|
|
func (daemon *Daemon) Shutdown() error {
|
2015-08-05 21:09:08 +00:00
|
|
|
daemon.shutdown = true
|
2016-06-02 18:10:55 +00:00
|
|
|
// Keep mounts and networking running on daemon shutdown if
|
|
|
|
// we are to keep containers running and restore them.
|
2016-07-22 15:53:26 +00:00
|
|
|
|
2016-07-27 15:30:15 +00:00
|
|
|
if daemon.configStore.LiveRestoreEnabled && daemon.containers != nil {
|
2016-07-08 02:19:48 +00:00
|
|
|
// If there are still running containers (or listing them failed), skip the
// full shutdown so they keep running, and only clean up the metrics plugins.
|
|
|
|
if ls, err := daemon.Containers(&types.ContainerListOptions{}); len(ls) != 0 || err != nil {
|
2017-04-14 01:56:50 +00:00
|
|
|
// metrics plugins still need some cleanup
|
|
|
|
daemon.cleanupMetricsPlugins()
|
2016-07-08 02:19:48 +00:00
|
|
|
return nil
|
|
|
|
}
|
2016-06-02 18:10:55 +00:00
|
|
|
}
|
2016-07-08 02:19:48 +00:00
|
|
|
|
2015-04-27 21:11:29 +00:00
|
|
|
if daemon.containers != nil {
|
2017-10-12 23:31:33 +00:00
|
|
|
logrus.Debugf("daemon configured with a %d seconds minimum shutdown timeout", daemon.configStore.ShutdownTimeout)
|
|
|
|
logrus.Debugf("start clean shutdown of all containers with a %d seconds timeout...", daemon.ShutdownTimeout())
|
2016-01-15 23:55:46 +00:00
|
|
|
daemon.containers.ApplyAll(func(c *container.Container) {
|
|
|
|
if !c.IsRunning() {
|
|
|
|
return
|
2015-04-27 21:11:29 +00:00
|
|
|
}
|
2020-12-15 12:06:39 +00:00
|
|
|
log := logrus.WithField("container", c.ID)
|
|
|
|
log.Debug("shutting down container")
|
2016-01-15 23:55:46 +00:00
|
|
|
if err := daemon.shutdownContainer(c); err != nil {
|
2020-12-15 12:06:39 +00:00
|
|
|
log.WithError(err).Error("failed to shut down container")
|
2016-01-15 23:55:46 +00:00
|
|
|
return
|
|
|
|
}
|
2021-03-19 14:34:08 +00:00
|
|
|
if mountid, err := daemon.imageService.GetLayerMountID(c.ID); err == nil {
|
2016-03-18 18:50:19 +00:00
|
|
|
daemon.cleanupMountsByID(mountid)
|
|
|
|
}
|
2020-12-15 12:06:39 +00:00
|
|
|
log.Debugf("shut down container")
|
2016-01-15 23:55:46 +00:00
|
|
|
})
|
2015-10-29 22:11:35 +00:00
|
|
|
}
|
2015-06-05 22:02:56 +00:00
|
|
|
|
2016-12-01 22:17:07 +00:00
|
|
|
if daemon.volumes != nil {
|
|
|
|
if err := daemon.volumes.Shutdown(); err != nil {
|
|
|
|
logrus.Errorf("Error shutting down volume store: %v", err)
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2018-02-07 20:52:47 +00:00
|
|
|
if daemon.imageService != nil {
|
2022-01-21 19:25:01 +00:00
|
|
|
if err := daemon.imageService.Cleanup(); err != nil {
|
|
|
|
logrus.Error(err)
|
|
|
|
}
|
2018-02-07 20:52:47 +00:00
|
|
|
}
|
2016-11-30 01:00:02 +00:00
|
|
|
|
2017-03-31 21:07:55 +00:00
|
|
|
// If we are part of a cluster, clean up the cluster's resources.
|
|
|
|
if daemon.clusterProvider != nil {
|
|
|
|
logrus.Debugf("start clean shutdown of cluster resources...")
|
|
|
|
daemon.DaemonLeavesCluster()
|
|
|
|
}
|
|
|
|
|
2017-04-14 01:56:50 +00:00
|
|
|
daemon.cleanupMetricsPlugins()
|
|
|
|
|
2016-11-30 01:00:02 +00:00
|
|
|
// Shutdown plugins after containers and layerstore. Don't change the order.
|
2016-10-06 14:09:54 +00:00
|
|
|
daemon.pluginShutdown()
|
2016-10-03 22:42:46 +00:00
|
|
|
|
2015-10-29 22:11:35 +00:00
|
|
|
// trigger libnetwork Stop only if it's initialized
|
|
|
|
if daemon.netController != nil {
|
|
|
|
daemon.netController.Stop()
|
2015-04-27 21:11:29 +00:00
|
|
|
}
|
2014-03-25 23:21:07 +00:00
|
|
|
|
2018-05-23 19:15:21 +00:00
|
|
|
if daemon.containerdCli != nil {
|
|
|
|
daemon.containerdCli.Close()
|
|
|
|
}
|
|
|
|
|
2020-10-30 19:47:06 +00:00
|
|
|
if daemon.mdDB != nil {
|
|
|
|
daemon.mdDB.Close()
|
|
|
|
}
|
|
|
|
|
2018-01-14 23:42:25 +00:00
|
|
|
return daemon.cleanupMounts()
|
2014-03-25 23:21:07 +00:00
|
|
|
}
|
|
|
|
|
2015-11-12 19:55:17 +00:00
|
|
|
// Mount sets container.BaseFS
|
2015-07-30 21:01:53 +00:00
|
|
|
// (is it not set coming in? why is it unset?)
|
2015-11-12 19:55:17 +00:00
|
|
|
func (daemon *Daemon) Mount(container *container.Container) error {
|
2018-02-07 23:25:50 +00:00
|
|
|
if container.RWLayer == nil {
|
|
|
|
return errors.New("RWLayer of container " + container.ID + " is unexpectedly nil")
|
|
|
|
}
|
2015-12-16 22:13:50 +00:00
|
|
|
dir, err := container.RWLayer.Mount(container.GetMountLabel())
|
2015-11-18 22:20:54 +00:00
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
2020-12-15 12:06:39 +00:00
|
|
|
logrus.WithField("container", container.ID).Debugf("container mounted via layerStore: %v", dir)
|
2015-05-15 23:34:26 +00:00
|
|
|
|
2022-09-23 16:25:19 +00:00
|
|
|
if container.BaseFS != "" && container.BaseFS != dir {
|
2015-05-15 23:34:26 +00:00
|
|
|
// The mount path reported by the graph driver should always be trusted on Windows, since the
|
|
|
|
// volume path for a given mounted layer may change over time. This should only be an error
|
|
|
|
// on non-Windows operating systems.
|
2017-08-04 00:22:00 +00:00
|
|
|
if runtime.GOOS != "windows" {
|
2015-11-18 22:20:54 +00:00
|
|
|
daemon.Unmount(container)
|
2022-08-09 15:25:48 +00:00
|
|
|
return fmt.Errorf("driver %s is returning inconsistent paths for container %s ('%s' then '%s')",
|
|
|
|
container.Driver, container.ID, container.BaseFS, dir)
|
2015-05-15 23:34:26 +00:00
|
|
|
}
|
2013-11-01 01:07:54 +00:00
|
|
|
}
|
2015-11-12 19:55:17 +00:00
|
|
|
container.BaseFS = dir // TODO: combine these fields
|
2013-11-07 20:34:01 +00:00
|
|
|
return nil
|
2013-11-01 01:07:54 +00:00
|
|
|
}
|
|
|
|
|
2015-11-03 01:06:09 +00:00
|
|
|
// Unmount unsets the container base filesystem
|
2016-03-18 18:50:19 +00:00
|
|
|
func (daemon *Daemon) Unmount(container *container.Container) error {
|
2018-02-07 23:25:50 +00:00
|
|
|
if container.RWLayer == nil {
|
|
|
|
return errors.New("RWLayer of container " + container.ID + " is unexpectedly nil")
|
|
|
|
}
|
2015-12-16 22:13:50 +00:00
|
|
|
if err := container.RWLayer.Unmount(); err != nil {
|
2020-12-15 12:06:39 +00:00
|
|
|
logrus.WithField("container", container.ID).WithError(err).Error("error unmounting container")
|
2016-03-18 18:50:19 +00:00
|
|
|
return err
|
2015-11-18 22:20:54 +00:00
|
|
|
}
|
2016-10-19 16:22:02 +00:00
|
|
|
|
2016-03-18 18:50:19 +00:00
|
|
|
return nil
|
2014-01-10 22:26:29 +00:00
|
|
|
}
|
|
|
|
|
2017-02-28 09:51:40 +00:00
|
|
|
// Subnets returns the IPv4 and IPv6 subnets of networks that are managed by Docker.
|
|
|
|
func (daemon *Daemon) Subnets() ([]net.IPNet, []net.IPNet) {
|
|
|
|
var v4Subnets []net.IPNet
|
|
|
|
var v6Subnets []net.IPNet
|
2016-07-01 01:07:35 +00:00
|
|
|
|
|
|
|
managedNetworks := daemon.netController.Networks()
|
|
|
|
|
|
|
|
for _, managedNetwork := range managedNetworks {
|
2017-02-28 09:51:40 +00:00
|
|
|
v4infos, v6infos := managedNetwork.Info().IpamInfo()
|
|
|
|
for _, info := range v4infos {
|
|
|
|
if info.IPAMData.Pool != nil {
|
|
|
|
v4Subnets = append(v4Subnets, *info.IPAMData.Pool)
|
2016-07-01 01:07:35 +00:00
|
|
|
}
|
|
|
|
}
|
2017-02-28 09:51:40 +00:00
|
|
|
for _, info := range v6infos {
|
|
|
|
if info.IPAMData.Pool != nil {
|
|
|
|
v6Subnets = append(v6Subnets, *info.IPAMData.Pool)
|
2016-07-01 01:07:35 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-02-28 09:51:40 +00:00
|
|
|
return v4Subnets, v6Subnets
|
2016-07-01 01:07:35 +00:00
|
|
|
}
|
|
|
|
|
2017-03-10 17:47:25 +00:00
|
|
|
// prepareTempDir prepares and returns the default directory to use
|
|
|
|
// for temporary files.
|
|
|
|
// If it doesn't exist, it is created. If it exists, its content is removed.
|
2020-10-06 19:40:30 +00:00
|
|
|
func prepareTempDir(rootDir string) (string, error) {
|
2015-03-29 18:51:17 +00:00
|
|
|
var tmpDir string
|
|
|
|
if tmpDir = os.Getenv("DOCKER_TMPDIR"); tmpDir == "" {
|
|
|
|
tmpDir = filepath.Join(rootDir, "tmp")
|
2017-03-10 17:47:25 +00:00
|
|
|
newName := tmpDir + "-old"
|
2017-04-17 18:59:23 +00:00
|
|
|
if err := os.Rename(tmpDir, newName); err == nil {
|
2017-03-10 17:47:25 +00:00
|
|
|
go func() {
|
|
|
|
if err := os.RemoveAll(newName); err != nil {
|
|
|
|
logrus.Warnf("failed to delete old tmp directory: %s", newName)
|
|
|
|
}
|
|
|
|
}()
|
2017-09-26 09:45:54 +00:00
|
|
|
} else if !os.IsNotExist(err) {
|
2017-03-10 17:47:25 +00:00
|
|
|
logrus.Warnf("failed to rename %s for background deletion: %s. Deleting synchronously", tmpDir, err)
|
|
|
|
if err := os.RemoveAll(tmpDir); err != nil {
|
|
|
|
logrus.Warnf("failed to delete old tmp directory: %s", tmpDir)
|
|
|
|
}
|
|
|
|
}
|
2015-03-29 18:51:17 +00:00
|
|
|
}
|
2020-10-06 19:40:30 +00:00
|
|
|
return tmpDir, idtools.MkdirAllAndChown(tmpDir, 0700, idtools.CurrentIdentity())
|
2015-04-16 06:31:52 +00:00
|
|
|
}
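// Illustrative example (assumed defaults, not part of the original source):
// with DOCKER_TMPDIR unset and a root directory of /var/lib/docker, this
// prepares /var/lib/docker/tmp, renaming any existing directory to
// /var/lib/docker/tmp-old and removing the old contents in the background.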
|
2015-04-23 02:23:02 +00:00
|
|
|
|
2017-05-31 00:02:11 +00:00
|
|
|
func (daemon *Daemon) setGenericResources(conf *config.Config) error {
|
2017-08-29 19:28:33 +00:00
|
|
|
genericResources, err := config.ParseGenericResources(conf.NodeGenericResources)
|
2017-05-31 00:02:11 +00:00
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
|
|
|
|
|
|
|
daemon.genericResources = genericResources
|
|
|
|
|
|
|
|
return nil
|
|
|
|
}
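// Illustrative example (hypothetical values, not part of the original source):
// conf.NodeGenericResources usually comes from the daemon configuration, e.g.
// in daemon.json:
//
//	"node-generic-resources": ["NVIDIA-GPU=UUID1", "NVIDIA-GPU=UUID2"]
//
// which ParseGenericResources converts into the generic resources this node
// advertises to the swarm.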
|
|
|
|
|
2015-11-03 19:25:22 +00:00
|
|
|
// IsShuttingDown tells whether the daemon is shutting down or not
|
|
|
|
func (daemon *Daemon) IsShuttingDown() bool {
|
|
|
|
return daemon.shutdown
|
|
|
|
}
|
|
|
|
|
2017-01-23 11:23:07 +00:00
|
|
|
func isBridgeNetworkDisabled(conf *config.Config) bool {
|
|
|
|
return conf.BridgeConfig.Iface == config.DisableNetworkBridge
|
2016-03-10 04:33:21 +00:00
|
|
|
}
|
|
|
|
|
2022-04-23 21:12:55 +00:00
|
|
|
func (daemon *Daemon) networkOptions(pg plugingetter.PluginGetter, activeSandboxes map[string]interface{}) ([]nwconfig.Option, error) {
|
2016-03-10 04:33:21 +00:00
|
|
|
options := []nwconfig.Option{}
|
2022-04-23 21:12:55 +00:00
|
|
|
if daemon.configStore == nil {
|
2016-03-10 04:33:21 +00:00
|
|
|
return options, nil
|
|
|
|
}
|
2022-04-23 21:12:55 +00:00
|
|
|
conf := daemon.configStore
|
2016-03-10 04:33:21 +00:00
|
|
|
dd := runconfig.DefaultDaemonNetworkMode()
|
2016-06-14 16:13:53 +00:00
|
|
|
|
2022-04-23 21:12:55 +00:00
|
|
|
options = []nwconfig.Option{
|
|
|
|
nwconfig.OptionDataDir(conf.Root),
|
|
|
|
nwconfig.OptionExecRoot(conf.GetExecRoot()),
|
|
|
|
nwconfig.OptionDefaultDriver(string(dd)),
|
|
|
|
nwconfig.OptionDefaultNetwork(dd.NetworkName()),
|
|
|
|
nwconfig.OptionLabels(conf.Labels),
|
|
|
|
nwconfig.OptionNetworkControlPlaneMTU(conf.NetworkControlPlaneMTU),
|
|
|
|
driverOptions(conf),
|
2016-12-13 23:04:59 +00:00
|
|
|
}
|
|
|
|
|
2022-04-23 21:12:55 +00:00
|
|
|
if len(conf.NetworkConfig.DefaultAddressPools.Value()) > 0 {
|
|
|
|
options = append(options, nwconfig.OptionDefaultAddressPoolConfig(conf.NetworkConfig.DefaultAddressPools.Value()))
|
|
|
|
}
|
|
|
|
if conf.LiveRestoreEnabled && len(activeSandboxes) != 0 {
|
2016-06-14 16:13:53 +00:00
|
|
|
options = append(options, nwconfig.OptionActiveSandboxes(activeSandboxes))
|
|
|
|
}
|
2016-09-26 17:08:52 +00:00
|
|
|
if pg != nil {
|
|
|
|
options = append(options, nwconfig.OptionPluginGetter(pg))
|
|
|
|
}
|
|
|
|
|
2016-03-10 04:33:21 +00:00
|
|
|
return options, nil
|
|
|
|
}
|
2016-03-18 18:50:19 +00:00
|
|
|
|
2016-10-18 04:36:52 +00:00
|
|
|
// GetCluster returns the cluster
|
|
|
|
func (daemon *Daemon) GetCluster() Cluster {
|
|
|
|
return daemon.cluster
|
|
|
|
}
|
|
|
|
|
|
|
|
// SetCluster sets the cluster
|
|
|
|
func (daemon *Daemon) SetCluster(cluster Cluster) {
|
|
|
|
daemon.cluster = cluster
|
|
|
|
}
|
2016-11-10 01:49:09 +00:00
|
|
|
|
|
|
|
func (daemon *Daemon) pluginShutdown() {
|
2016-12-12 23:05:53 +00:00
|
|
|
manager := daemon.pluginManager
|
2016-11-10 01:49:09 +00:00
|
|
|
// Check for a valid manager object. In error conditions, daemon init can
// fail and Shutdown be called before the plugin manager is initialized.
|
|
|
|
if manager != nil {
|
|
|
|
manager.Shutdown()
|
|
|
|
}
|
|
|
|
}
|
2016-11-04 19:42:21 +00:00
|
|
|
|
2016-12-12 23:05:53 +00:00
|
|
|
// PluginManager returns the current pluginManager associated with the daemon
|
|
|
|
func (daemon *Daemon) PluginManager() *plugin.Manager { // set up before daemon to avoid this method
|
|
|
|
return daemon.pluginManager
|
|
|
|
}
|
|
|
|
|
2017-01-20 01:09:37 +00:00
|
|
|
// PluginGetter returns the current pluginStore associated with the daemon
|
|
|
|
func (daemon *Daemon) PluginGetter() *plugin.Store {
|
|
|
|
return daemon.PluginStore
|
|
|
|
}
|
|
|
|
|
2016-11-04 19:42:21 +00:00
|
|
|
// CreateDaemonRoot creates the root for the daemon
|
2017-01-23 11:23:07 +00:00
|
|
|
func CreateDaemonRoot(config *config.Config) error {
|
2016-11-04 19:42:21 +00:00
|
|
|
// get the canonical path to the Docker root directory
|
|
|
|
var realRoot string
|
|
|
|
if _, err := os.Stat(config.Root); err != nil && os.IsNotExist(err) {
|
|
|
|
realRoot = config.Root
|
|
|
|
} else {
|
2018-10-11 14:35:50 +00:00
|
|
|
realRoot, err = fileutils.ReadSymlinkedDirectory(config.Root)
|
2016-11-04 19:42:21 +00:00
|
|
|
if err != nil {
|
|
|
|
return fmt.Errorf("Unable to get the full path to root (%s): %s", config.Root, err)
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2017-11-16 06:20:33 +00:00
|
|
|
idMapping, err := setupRemappedRoot(config)
|
2016-11-04 19:42:21 +00:00
|
|
|
if err != nil {
|
|
|
|
return err
|
|
|
|
}
|
2017-11-16 06:20:33 +00:00
|
|
|
return setupDaemonRoot(config, realRoot, idMapping.RootPair())
|
2016-11-04 19:42:21 +00:00
|
|
|
}
|
2017-06-22 14:46:26 +00:00
|
|
|
|
|
|
|
// checkpointAndSave grabs a container lock to safely call container.CheckpointTo
|
|
|
|
func (daemon *Daemon) checkpointAndSave(container *container.Container) error {
|
|
|
|
container.Lock()
|
|
|
|
defer container.Unlock()
|
|
|
|
if err := container.CheckpointTo(daemon.containersReplica); err != nil {
|
|
|
|
return fmt.Errorf("Error saving container state: %v", err)
|
|
|
|
}
|
|
|
|
return nil
|
|
|
|
}
|
2017-06-30 17:34:40 +00:00
|
|
|
|
|
|
|
// fixMemorySwappiness clears MemorySwappiness on the server side, because
// the CLI sends -1 when it wants to unset the swappiness value.
|
|
|
|
func fixMemorySwappiness(resources *containertypes.Resources) {
|
|
|
|
if resources.MemorySwappiness != nil && *resources.MemorySwappiness == -1 {
|
|
|
|
resources.MemorySwappiness = nil
|
|
|
|
}
|
|
|
|
}
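// Minimal usage sketch (hypothetical values, not part of the original source):
//
//	swappiness := int64(-1)
//	resources := &containertypes.Resources{MemorySwappiness: &swappiness}
//	fixMemorySwappiness(resources)
//	// resources.MemorySwappiness is now nil, i.e. "use the default".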
|
2017-08-29 06:49:26 +00:00
|
|
|
|
2017-09-22 06:04:34 +00:00
|
|
|
// GetAttachmentStore returns the current attachment store associated with the daemon
|
|
|
|
func (daemon *Daemon) GetAttachmentStore() *network.AttachmentStore {
|
|
|
|
return &daemon.attachmentStore
|
2017-08-29 06:49:26 +00:00
|
|
|
}
|
2018-02-02 22:18:46 +00:00
|
|
|
|
2017-11-16 06:20:33 +00:00
|
|
|
// IdentityMapping returns uid/gid mapping or a SID (in the case of Windows) for the builder
|
2022-03-14 19:24:29 +00:00
|
|
|
func (daemon *Daemon) IdentityMapping() idtools.IdentityMapping {
|
2017-11-16 06:20:33 +00:00
|
|
|
return daemon.idMapping
|
2018-02-02 22:18:46 +00:00
|
|
|
}
|
|
|
|
|
2018-02-07 20:52:47 +00:00
|
|
|
// ImageService returns the Daemon's ImageService
|
2022-06-28 12:09:10 +00:00
|
|
|
func (daemon *Daemon) ImageService() ImageService {
|
2018-02-02 22:18:46 +00:00
|
|
|
return daemon.imageService
|
|
|
|
}
|
|
|
|
|
2018-02-07 20:52:47 +00:00
|
|
|
// BuilderBackend returns the backend used by builder
|
2018-02-02 22:18:46 +00:00
|
|
|
func (daemon *Daemon) BuilderBackend() builder.Backend {
|
|
|
|
return struct {
|
|
|
|
*Daemon
|
2022-06-28 12:09:10 +00:00
|
|
|
ImageService
|
2018-02-02 22:18:46 +00:00
|
|
|
}{daemon, daemon.imageService}
|
|
|
|
}
|
2022-01-07 11:54:47 +00:00
|
|
|
|
|
|
|
// RawSysInfo returns *sysinfo.SysInfo.
|
|
|
|
func (daemon *Daemon) RawSysInfo() *sysinfo.SysInfo {
|
|
|
|
daemon.sysInfoOnce.Do(func() {
|
|
|
|
// We check if sysInfo is not set here, to allow some test to
|
|
|
|
// override the actual sysInfo.
|
|
|
|
if daemon.sysInfo == nil {
|
2022-06-03 15:35:23 +00:00
|
|
|
daemon.sysInfo = getSysInfo(daemon)
|
2022-01-07 11:54:47 +00:00
|
|
|
}
|
|
|
|
})
|
|
|
|
|
|
|
|
return daemon.sysInfo
|
|
|
|
}
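// Minimal sketch of the caching pattern used above (illustrative only; the
// probe function name is hypothetical): sync.Once guarantees the expensive
// system probe runs at most once per daemon lifetime, even with concurrent
// callers.
//
//	var (
//		once   sync.Once
//		cached *sysinfo.SysInfo
//	)
//
//	func cachedSysInfo() *sysinfo.SysInfo {
//		once.Do(func() {
//			cached = probeSysInfo() // hypothetical expensive probe
//		})
//		return cached
//	}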
|