moby/daemon/daemon.go


// Package daemon exposes the functions that occur on the host server
// that the Docker daemon is running.
//
// In implementing the various functions of the daemon, there is often
// a method-specific struct for configuring the runtime behavior.
package daemon // import "github.com/docker/docker/daemon"

import (
	"context"
"fmt"
"net"
"net/url"
"os"
"path"
"path/filepath"
"runtime"
"strings"
"sync"
"time"
"github.com/containerd/containerd"
"github.com/containerd/containerd/defaults"
"github.com/containerd/containerd/pkg/dialer"
"github.com/containerd/containerd/pkg/userns"
"github.com/containerd/containerd/remotes/docker"
"github.com/docker/docker/api/types"
containertypes "github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/api/types/volume"
"github.com/docker/docker/builder"
"github.com/docker/docker/container"
"github.com/docker/docker/daemon/config"
ctrd "github.com/docker/docker/daemon/containerd"
"github.com/docker/docker/daemon/events"
_ "github.com/docker/docker/daemon/graphdriver/register" // register graph drivers
"github.com/docker/docker/daemon/images"
dlogger "github.com/docker/docker/daemon/logger"
"github.com/docker/docker/daemon/network"
"github.com/docker/docker/daemon/stats"
dmetadata "github.com/docker/docker/distribution/metadata"
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/image"
"github.com/docker/docker/layer"
	libcontainerdtypes "github.com/docker/docker/libcontainerd/types"
	"github.com/docker/docker/libnetwork"
	"github.com/docker/docker/libnetwork/cluster"
	nwconfig "github.com/docker/docker/libnetwork/config"
	"github.com/docker/docker/pkg/authorization"
	"github.com/docker/docker/pkg/fileutils"
	"github.com/docker/docker/pkg/idtools"
	"github.com/docker/docker/pkg/plugingetter"
	"github.com/docker/docker/pkg/sysinfo"
	"github.com/docker/docker/pkg/system"
	"github.com/docker/docker/plugin"
	pluginexec "github.com/docker/docker/plugin/executor/containerd"
	refstore "github.com/docker/docker/reference"
	"github.com/docker/docker/registry"
	"github.com/docker/docker/runconfig"
	volumesservice "github.com/docker/docker/volume/service"
	"github.com/moby/buildkit/util/resolver"
	resolverconfig "github.com/moby/buildkit/util/resolver/config"
	"github.com/moby/locker"
	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
	"go.etcd.io/bbolt"
	"golang.org/x/sync/semaphore"
	"google.golang.org/grpc"
	"google.golang.org/grpc/backoff"
	"google.golang.org/grpc/credentials/insecure"
	"resenje.org/singleflight"
)

// Daemon holds information about the Docker daemon.
type Daemon struct {
	id                    string
	repository            string
	containers            container.Store
	containersReplica     *container.ViewDB
	execCommands          *container.ExecStore
	imageService          ImageService
	configStore           *config.Config
	statsCollector        *stats.Collector
	defaultLogConfig      containertypes.LogConfig
	registryService       registry.Service
	EventsService         *events.Events
	netController         *libnetwork.Controller
	volumes               *volumesservice.VolumesService
	root                  string
	sysInfoOnce           sync.Once
	sysInfo               *sysinfo.SysInfo
	shutdown              bool
	idMapping             idtools.IdentityMapping
	PluginStore           *plugin.Store // TODO: remove
	pluginManager         *plugin.Manager
	linkIndex             *linkIndex
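
	// containerdCli is the raw client for the containerd daemon, while
	// containerd is the libcontainerd abstraction the daemon uses to
	// manage container lifecycles.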
	containerdCli         *containerd.Client
	containerd            libcontainerdtypes.Client
	defaultIsolation      containertypes.Isolation // Default isolation mode on Windows
	clusterProvider       cluster.Provider
	cluster               Cluster
	genericResources      []swarm.GenericResource
	metricsPluginListener net.Listener
	ReferenceStore        refstore.Store
	machineMemory         uint64
	seccompProfile        []byte
	seccompProfilePath    string
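
	// Single-flight groups used to de-duplicate concurrent disk-usage
	// calculations for containers, images, volumes, and layers.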
	usageContainers singleflight.Group[struct{}, []*types.Container]
	usageImages     singleflight.Group[struct{}, []*types.ImageSummary]
	usageVolumes    singleflight.Group[struct{}, []*volume.Volume]
	usageLayer      singleflight.Group[struct{}, int64]

	pruneRunning int32
	hosts        map[string]bool // hosts stores the addresses the daemon is listening on
	startupDone  chan struct{}

	attachmentStore       network.AttachmentStore
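	// attachableNetworkLock serializes the lazy creation of attachable
	// (swarm-scoped) networks on this node, so that concurrent container
	// starts cannot request the same network from the manager twice.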
	attachableNetworkLock *locker.Locker

	// This is used for Windows which doesn't currently support running on containerd.
	// It stores metadata for the content store (used for manifest caching).
	// This needs to be closed on daemon exit.
	mdDB *bbolt.DB
}

// StoreHosts stores the addresses the daemon is listening on
func (daemon *Daemon) StoreHosts(hosts []string) {
	if daemon.hosts == nil {
		daemon.hosts = make(map[string]bool)
	}
	for _, h := range hosts {
		daemon.hosts[h] = true
	}
}

// HasExperimental returns whether the experimental features of the daemon are enabled or not
func (daemon *Daemon) HasExperimental() bool {
	return daemon.configStore != nil && daemon.configStore.Experimental
}

// Features returns the features map from configStore
func (daemon *Daemon) Features() *map[string]bool {
	return &daemon.configStore.Features
}

// UsesSnapshotter returns true if the feature flag to use the containerd snapshotter is enabled
func (daemon *Daemon) UsesSnapshotter() bool {
	// TEST_INTEGRATION_USE_SNAPSHOTTER is used for integration tests only.
	if os.Getenv("TEST_INTEGRATION_USE_SNAPSHOTTER") != "" {
		return true
	}
	if daemon.configStore.Features != nil {
		if b, ok := daemon.configStore.Features["containerd-snapshotter"]; ok {
			return b
		}
	}
	return false
}
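
// The flag checked above is normally set through the "features" key in the
// daemon configuration; an illustrative daemon.json fragment:
//
//	{
//	  "features": { "containerd-snapshotter": true }
//	}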

// RegistryHosts returns registry configuration in containerd resolvers format
func (daemon *Daemon) RegistryHosts() docker.RegistryHosts {
	var (
		registryKey = "docker.io"
		mirrors     = make([]string, len(daemon.configStore.Mirrors))
		m           = map[string]resolverconfig.RegistryConfig{}
	)
	// must trim "https://" or "http://" prefix
	for i, v := range daemon.configStore.Mirrors {
		if uri, err := url.Parse(v); err == nil {
			v = uri.Host
		}
		mirrors[i] = v
	}
	// set mirrors for default registry
	m[registryKey] = resolverconfig.RegistryConfig{Mirrors: mirrors}

	for _, v := range daemon.configStore.InsecureRegistries {
		u, err := url.Parse(v)
		if err != nil && !strings.HasPrefix(v, "http://") && !strings.HasPrefix(v, "https://") {
			originalErr := err
			u, err = url.Parse("http://" + v)
			if err != nil {
				err = originalErr
			}
		}
		c := resolverconfig.RegistryConfig{}
		if err == nil {
			v = u.Host
			t := true
			if u.Scheme == "http" {
				c.PlainHTTP = &t
			} else {
				c.Insecure = &t
			}
		}
		m[v] = c
	}

	for k, v := range m {
		v.TLSConfigDir = []string{registry.HostCertsDir(k)}
		m[k] = v
	}

	certsDir := registry.CertsDir()
	if fis, err := os.ReadDir(certsDir); err == nil {
		for _, fi := range fis {
			if _, ok := m[fi.Name()]; !ok {
				m[fi.Name()] = resolverconfig.RegistryConfig{
					TLSConfigDir: []string{filepath.Join(certsDir, fi.Name())},
				}
			}
		}
	}

	return resolver.NewRegistryConfig(m)
}
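
// Illustrative example (hostnames are placeholders): a daemon configured with
//
//	{
//	  "registry-mirrors": ["https://mirror.example.com"],
//	  "insecure-registries": ["http://registry.internal:5000"]
//	}
//
// produces a resolver configuration in which the mirror is attached to the
// default "docker.io" registry and plain HTTP is permitted for the insecure
// registry.

// restore loads the containers persisted under the daemon's repository
// directory, registers them with the daemon, reconciles their on-disk state
// with the state known to containerd, and queues eligible containers for
// auto-restart or auto-removal.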
func (daemon *Daemon) restore() error {
	var mapLock sync.Mutex
	containers := make(map[string]*container.Container)

	logrus.Info("Loading containers: start.")
	dir, err := os.ReadDir(daemon.repository)
	if err != nil {
		return err
	}

	// parallelLimit is the maximum number of parallel startup jobs that we
	// allow (this is the limit used for all startup semaphores). The multiplier
	// (128) was chosen after some fairly significant benchmarking -- don't change
	// it unless you've tested it significantly (this value is adjusted if
	// RLIMIT_NOFILE is small to avoid EMFILE).
	parallelLimit := adjustParallelLimit(len(dir), 128*runtime.NumCPU())

	// Re-used for all parallel startup jobs.
	var group sync.WaitGroup
	sem := semaphore.NewWeighted(int64(parallelLimit))

	for _, v := range dir {
		group.Add(1)
		go func(id string) {
			defer group.Done()
			_ = sem.Acquire(context.Background(), 1)
			defer sem.Release(1)

			log := logrus.WithField("container", id)

			c, err := daemon.load(id)
			if err != nil {
log.WithError(err).Error("failed to load container")
return
}
if c.Driver != daemon.imageService.StorageDriver() {
// Ignore the container if it wasn't created with the current storage-driver
log.Debugf("not restoring container because it was created with another storage driver (%s)", c.Driver)
return
}
rwlayer, err := daemon.imageService.GetLayerByID(c.ID)
if err != nil {
log.WithError(err).Error("failed to load container mount")
return
}
c.RWLayer = rwlayer
log.WithFields(logrus.Fields{
"running": c.IsRunning(),
"paused": c.IsPaused(),
}).Debug("loaded container")
mapLock.Lock()
containers[c.ID] = c
mapLock.Unlock()
}(v.Name())
}
group.Wait()

	removeContainers := make(map[string]*container.Container)
	restartContainers := make(map[*container.Container]chan struct{})
	activeSandboxes := make(map[string]interface{})
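
	// Register the loaded containers: reserve each container's name and add
	// it to the daemon's container store. Containers that fail either step
	// are dropped from further restore processing.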
	for _, c := range containers {
		group.Add(1)
		go func(c *container.Container) {
			defer group.Done()
			_ = sem.Acquire(context.Background(), 1)
			defer sem.Release(1)

			log := logrus.WithField("container", c.ID)
			if err := daemon.registerName(c); err != nil {
				log.WithError(err).Errorf("failed to register container name: %s", c.Name)
				mapLock.Lock()
				delete(containers, c.ID)
				mapLock.Unlock()
				return
			}
			if err := daemon.Register(c); err != nil {
log.WithError(err).Error("failed to register container")
mapLock.Lock()
delete(containers, c.ID)
mapLock.Unlock()
return
}
}(c)
}
group.Wait()
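
	// Reconcile each container's recorded state with the live state known to
	// containerd: restore tasks, clean up dead task processes, and mark
	// containers stopped when no live task remains.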
	for _, c := range containers {
		group.Add(1)
		go func(c *container.Container) {
			defer group.Done()
			_ = sem.Acquire(context.Background(), 1)
			defer sem.Release(1)

			log := logrus.WithField("container", c.ID)
			if err := daemon.checkpointAndSave(c); err != nil {
log.WithError(err).Error("error saving backported mountspec to disk")
}
daemon.setStateCounter(c)
logger := func(c *container.Container) *logrus.Entry {
return log.WithFields(logrus.Fields{
"running": c.IsRunning(),
"paused": c.IsPaused(),
"restarting": c.IsRestarting(),
})
}
logger(c).Debug("restoring container")
var es *containerd.ExitStatus
if err := c.RestoreTask(context.Background(), daemon.containerd); err != nil && !errdefs.IsNotFound(err) {
logger(c).WithError(err).Error("failed to restore container with containerd")
return
}
alive := false
status := containerd.Unknown
if tsk, ok := c.Task(); ok {
s, err := tsk.Status(context.Background())
if err != nil {
logger(c).WithError(err).Error("failed to get task status")
} else {
status = s.Status
alive = status != containerd.Stopped
if !alive {
logger(c).Debug("cleaning up dead container process")
es, err = tsk.Delete(context.Background())
if err != nil && !errdefs.IsNotFound(err) {
logger(c).WithError(err).Error("failed to delete task from containerd")
return
}
} else if !daemon.configStore.LiveRestoreEnabled {
logger(c).Debug("shutting down container considered alive by containerd")
if err := daemon.shutdownContainer(c); err != nil && !errdefs.IsNotFound(err) {
log.WithError(err).Error("error shutting down container")
return
}
status = containerd.Stopped
alive = false
c.ResetRestartManager(false)
}
}
}
// If the containerd task for the container was not found, docker's view of the
// container state will be updated accordingly via SetStopped further down.
if c.IsRunning() || c.IsPaused() {
logger(c).Debug("syncing container on disk state with real state")
c.RestartManager().Cancel() // manually start containers because some need to wait for swarm networking
switch {
case c.IsPaused() && alive:
logger(c).WithField("state", status).Info("restored container paused")
switch status {
case containerd.Paused, containerd.Pausing:
// nothing to do
case containerd.Unknown, containerd.Stopped, "":
log.WithField("status", status).Error("unexpected status for paused container during restore")
default:
// running
c.Lock()
c.Paused = false
daemon.setStateCounter(c)
daemon.updateHealthMonitor(c)
if err := c.CheckpointTo(daemon.containersReplica); err != nil {
log.WithError(err).Error("failed to update paused container state")
}
c.Unlock()
}
case !c.IsPaused() && alive:
logger(c).Debug("restoring healthcheck")
c.Lock()
daemon.updateHealthMonitor(c)
c.Unlock()
}
if !alive {
logger(c).Debug("setting stopped state")
c.Lock()
var ces container.ExitStatus
if es != nil {
ces.ExitCode = int(es.ExitCode())
ces.ExitedAt = es.ExitTime()
}
c.SetStopped(&ces)
daemon.Cleanup(c)
if err := c.CheckpointTo(daemon.containersReplica); err != nil {
log.WithError(err).Error("failed to update stopped container state")
}
c.Unlock()
logger(c).Debug("set stopped state")
}
// we call Mount and then Unmount to get BaseFs of the container
if err := daemon.Mount(c); err != nil {
// The mount is unlikely to fail. However, in case mount fails
// the container should be allowed to restore here. Some functionalities
// (like docker exec -u user) might be missing but container is able to be
// stopped/restarted/removed.
// See #29365 for related information.
// The error is only logged here.
logger(c).WithError(err).Warn("failed to mount container to get BaseFs path")
} else {
if err := daemon.Unmount(c); err != nil {
logger(c).WithError(err).Warn("failed to umount container to get BaseFs path")
}
}
c.ResetRestartManager(false)
if !c.HostConfig.NetworkMode.IsContainer() && c.IsRunning() {
options, err := daemon.buildSandboxOptions(c)
if err != nil {
logger(c).WithError(err).Warn("failed to build sandbox option to restore container")
}
mapLock.Lock()
activeSandboxes[c.NetworkSettings.SandboxID] = options
mapLock.Unlock()
}
}
// get list of containers we need to restart
// Do not autostart containers which
// has endpoints in a swarm scope
// network yet since the cluster is
// not initialized yet. We will start
// it after the cluster is
// initialized.
if daemon.configStore.AutoRestart && c.ShouldRestart() && !c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore {
mapLock.Lock()
restartContainers[c] = make(chan struct{})
mapLock.Unlock()
} else if c.HostConfig != nil && c.HostConfig.AutoRemove {
mapLock.Lock()
removeContainers[c.ID] = c
mapLock.Unlock()
}
c.Lock()
if c.RemovalInProgress {
// We probably crashed in the middle of a removal, reset
// the flag.
//
// We DO NOT remove the container here as we do not
// know if the user had requested for either the
// associated volumes, network links or both to also
// be removed. So we put the container in the "dead"
// state and leave further processing up to them.
c.RemovalInProgress = false
c.Dead = true
if err := c.CheckpointTo(daemon.containersReplica); err != nil {
log.WithError(err).Error("failed to update RemovalInProgress container state")
} else {
log.Debugf("reset RemovalInProgress state for container")
}
}
c.Unlock()
logger(c).Debug("done restoring container")
}(c)
}
group.Wait()
// Initialize the network controller and configure network settings.
//
// Note that we cannot initialize the network controller earlier, as it
// needs to know if there's active sandboxes (running containers).
if err = daemon.initNetworkController(activeSandboxes); err != nil {
return fmt.Errorf("Error initializing network controller: %v", err)
}
// Now that all the containers are registered, register the links
for _, c := range containers {
group.Add(1)
go func(c *container.Container) {
_ = sem.Acquire(context.Background(), 1)
if err := daemon.registerLinks(c, c.HostConfig); err != nil {
logrus.WithField("container", c.ID).WithError(err).Error("failed to register link for container")
}
sem.Release(1)
group.Done()
}(c)
}
group.Wait()
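
	// Start the containers that qualified for auto-restart. Each container
	// waits, bounded by a short timeout, for any linked children that are
	// also restarting, so best-effort start ordering is preserved.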
	for c, notifier := range restartContainers {
		group.Add(1)
		go func(c *container.Container, chNotify chan struct{}) {
			_ = sem.Acquire(context.Background(), 1)
log := logrus.WithField("container", c.ID)
log.Debug("starting container")
// ignore errors here as this is a best effort to wait for children to be
// running before we try to start the container
children := daemon.children(c)
timeout := time.NewTimer(5 * time.Second)
defer timeout.Stop()
for _, child := range children {
if notifier, exists := restartContainers[child]; exists {
select {
case <-notifier:
case <-timeout.C:
}
}
}
if err := daemon.prepareMountPoints(c); err != nil {
log.WithError(err).Error("failed to prepare mount points for container")
}
if err := daemon.containerStart(context.Background(), c, "", "", true); err != nil {
log.WithError(err).Error("failed to start container")
}
close(chNotify)
sem.Release(1)
group.Done()
}(c, notifier)
}
group.Wait()
for id := range removeContainers {
group.Add(1)
go func(cid string) {
_ = sem.Acquire(context.Background(), 1)
if err := daemon.ContainerRm(cid, &types.ContainerRmConfig{ForceRemove: true, RemoveVolume: true}); err != nil {
logrus.WithField("container", cid).WithError(err).Error("failed to remove container")
}
sem.Release(1)
group.Done()
}(id)
}
group.Wait()
// Any containers that were started above have already had their mount points
// prepared; now prepare the mount points for the remaining containers as well.
// Running this on containers that already had it done causes no issues.
// This must be run after any containers with a restart policy, so that
// containerized plugins have a chance to be running before we try to initialize them.
for _, c := range containers {
// If the container has a restart policy, do not prepare its mount points,
// since that was already done when restarting it. This speeds up daemon
// startup when a restarting container has a volume whose volume driver
// is not available.
if _, ok := restartContainers[c]; ok {
continue
} else if _, ok := removeContainers[c.ID]; ok {
// container is automatically removed, skip it.
continue
}
group.Add(1)
go func(c *container.Container) {
_ = sem.Acquire(context.Background(), 1)
if err := daemon.prepareMountPoints(c); err != nil {
logrus.WithField("container", c.ID).WithError(err).Error("failed to prepare mountpoints for container")
}
sem.Release(1)
group.Done()
}(c)
}
group.Wait()
logrus.Info("Loading containers: done.")
return nil
}
// RestartSwarmContainers restarts any autostart container which has a
// swarm endpoint.
func (daemon *Daemon) RestartSwarmContainers() {
ctx := context.Background()
// parallelLimit is the maximum number of parallel startup jobs that we
// allow (this is the limit used for all startup semaphores). The multiplier
// (128) was chosen after some fairly significant benchmarking -- don't change
// it unless you've tested it significantly (this value is adjusted if
// RLIMIT_NOFILE is small to avoid EMFILE).
parallelLimit := adjustParallelLimit(len(daemon.List()), 128*runtime.NumCPU())
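// For illustration (figures hypothetical): on an 8-CPU machine this starts
// from 128*8 = 1024 concurrent startup jobs, which adjustParallelLimit may
// lower if RLIMIT_NOFILE is too small to accommodate it.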
var group sync.WaitGroup
sem := semaphore.NewWeighted(int64(parallelLimit))
for _, c := range daemon.List() {
if !c.IsRunning() && !c.IsPaused() {
// Autostart all the containers which have a
// swarm endpoint now that the cluster is
// initialized.
if daemon.configStore.AutoRestart && c.ShouldRestart() && c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore {
group.Add(1)
go func(c *container.Container) {
if err := sem.Acquire(ctx, 1); err != nil {
// ctx is done.
group.Done()
return
}
if err := daemon.containerStart(ctx, c, "", "", true); err != nil {
logrus.WithField("container", c.ID).WithError(err).Error("failed to start swarm container")
}
sem.Release(1)
group.Done()
}(c)
}
}
}
group.Wait()
}
func (daemon *Daemon) children(c *container.Container) map[string]*container.Container {
return daemon.linkIndex.children(c)
}
// parents returns the parent containers of the given container.
func (daemon *Daemon) parents(c *container.Container) map[string]*container.Container {
return daemon.linkIndex.parents(c)
}
func (daemon *Daemon) registerLink(parent, child *container.Container, alias string) error {
fullName := path.Join(parent.Name, alias)
if err := daemon.containersReplica.ReserveName(fullName, child.ID); err != nil {
if errors.Is(err, container.ErrNameReserved) {
logrus.Warnf("error registering link for %s, to %s, as alias %s, ignoring: %v", parent.ID, child.ID, alias, err)
return nil
}
return err
}
daemon.linkIndex.link(parent, child, fullName)
return nil
}
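// For example (names hypothetical): linking a child to a parent named
// "/web" under the alias "db" reserves the name path.Join("/web", "db"),
// i.e. "/web/db", for the child's ID before recording the link.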
// DaemonJoinsCluster informs the daemon that it has joined the cluster, and
// provides the handler to query the cluster component.
func (daemon *Daemon) DaemonJoinsCluster(clusterProvider cluster.Provider) {
daemon.setClusterProvider(clusterProvider)
}
// DaemonLeavesCluster informs the daemon that it has left the cluster.
func (daemon *Daemon) DaemonLeavesCluster() {
// Daemon is in charge of removing the attachable networks with
// connected containers when the node leaves the swarm
daemon.clearAttachableNetworks()
// We no longer need the cluster provider, stop it now so that
// the network agent will stop listening to cluster events.
daemon.setClusterProvider(nil)
// Wait for the networking cluster agent to stop
daemon.netController.AgentStopWait()
// Daemon is in charge of removing the ingress network when the
// node leaves the swarm. Wait for job to be done or timeout.
// This is called also on graceful daemon shutdown. We need to
// wait, because the ingress release has to happen before the
// network controller is stopped.
if done, err := daemon.ReleaseIngress(); err == nil {
timeout := time.NewTimer(5 * time.Second)
defer timeout.Stop()
select {
case <-done:
case <-timeout.C:
logrus.Warn("timeout while waiting for ingress network removal")
}
} else {
logrus.Warnf("failed to initiate ingress network removal: %v", err)
}
daemon.attachmentStore.ClearAttachments()
}
// setClusterProvider sets a component for querying the current cluster state.
func (daemon *Daemon) setClusterProvider(clusterProvider cluster.Provider) {
daemon.clusterProvider = clusterProvider
daemon.netController.SetClusterProvider(clusterProvider)
daemon.attachableNetworkLock = locker.New()
}
// IsSwarmCompatible verifies whether the current daemon
// configuration is compatible with swarm mode.
func (daemon *Daemon) IsSwarmCompatible() error {
if daemon.configStore == nil {
return nil
}
return daemon.configStore.IsSwarmCompatible()
}
// NewDaemon sets up everything for the daemon to be able to service
// requests from the webserver.
func NewDaemon(ctx context.Context, config *config.Config, pluginStore *plugin.Store, authzMiddleware *authorization.Middleware) (daemon *Daemon, err error) {
// Verify platform-specific requirements.
// TODO(thaJeztah): this should be called before we try to create the daemon; perhaps together with the config validation.
if err := checkSystem(); err != nil {
return nil, err
}
registryService, err := registry.NewService(config.ServiceOptions)
if err != nil {
return nil, err
}
// Ensure that we have a correct root key limit for launching containers.
if err := modifyRootKeyLimit(); err != nil {
logrus.Warnf("unable to modify root key limit, number of containers could be limited by this quota: %v", err)
}
// Ensure we have compatible and valid configuration options
if err := verifyDaemonSettings(config); err != nil {
return nil, err
}
// Do we have a disabled network?
config.DisableBridge = isBridgeNetworkDisabled(config)
// Setup the resolv.conf
setupResolvConf(config)
idMapping, err := setupRemappedRoot(config)
if err != nil {
return nil, err
}
rootIDs := idMapping.RootPair()
if err := setupDaemonProcess(config); err != nil {
return nil, err
}
// set up the tmpDir to use a canonical path
tmp, err := prepareTempDir(config.Root)
if err != nil {
return nil, fmt.Errorf("Unable to get the TempDir under %s: %s", config.Root, err)
}
realTmp, err := fileutils.ReadSymlinkedDirectory(tmp)
if err != nil {
return nil, fmt.Errorf("Unable to get the full path to the TempDir (%s): %s", tmp, err)
}
if isWindows {
if err := system.MkdirAll(realTmp, 0); err != nil {
return nil, fmt.Errorf("Unable to create the TempDir (%s): %s", realTmp, err)
}
os.Setenv("TEMP", realTmp)
os.Setenv("TMP", realTmp)
} else {
os.Setenv("TMPDIR", realTmp)
}
d := &Daemon{
configStore: config,
PluginStore: pluginStore,
startupDone: make(chan struct{}),
}
// Ensure the daemon is properly shutdown if there is a failure during
// initialization
defer func() {
if err != nil {
// Use a fresh context here. Passed context could be cancelled.
if err := d.Shutdown(context.Background()); err != nil {
logrus.Error(err)
}
}
}()
if err := d.setGenericResources(config); err != nil {
return nil, err
}
// set up SIGUSR1 handler on Unix-like systems, or a Win32 global event
// on Windows to dump Go routine stacks
stackDumpDir := config.Root
if execRoot := config.GetExecRoot(); execRoot != "" {
stackDumpDir = execRoot
}
d.setupDumpStackTrap(stackDumpDir)
if err := d.setupSeccompProfile(); err != nil {
return nil, err
}
// Set the default isolation mode (only applicable on Windows)
if err := d.setDefaultIsolation(); err != nil {
return nil, fmt.Errorf("error setting default isolation mode: %v", err)
}
if err := configureMaxThreads(config); err != nil {
logrus.Warnf("Failed to configure golang's threads limit: %v", err)
}
// ensureDefaultAppArmorProfile does nothing if apparmor is disabled
if err := ensureDefaultAppArmorProfile(); err != nil {
logrus.Errorf(err.Error())
}
daemonRepo := filepath.Join(config.Root, "containers")
if err := idtools.MkdirAllAndChown(daemonRepo, 0o710, idtools.Identity{
UID: idtools.CurrentIdentity().UID,
GID: rootIDs.GID,
}); err != nil {
return nil, err
}
// Create the directory where we'll store the runtime scripts (i.e. in
// order to support runtimeArgs)
if err = os.Mkdir(filepath.Join(config.Root, "runtimes"), 0o700); err != nil && !errors.Is(err, os.ErrExist) {
return nil, err
}
if err := d.loadRuntimes(); err != nil {
return nil, err
}
if isWindows {
// Note that permissions (0o700) are ignored on Windows; passing them to
// show intent only. We could consider using idtools.MkdirAndChown here
// to apply an ACL.
if err = os.Mkdir(filepath.Join(config.Root, "credentialspecs"), 0o700); err != nil && !errors.Is(err, os.ErrExist) {
return nil, err
}
}
d.registryService = registryService
dlogger.RegisterPluginGetter(d.PluginStore)
metricsSockPath, err := d.listenMetricsSock()
if err != nil {
return nil, err
}
registerMetricsPluginCallback(d.PluginStore, metricsSockPath)
backoffConfig := backoff.DefaultConfig
backoffConfig.MaxDelay = 3 * time.Second
connParams := grpc.ConnectParams{
Backoff: backoffConfig,
}
gopts := []grpc.DialOption{
// WithBlock makes sure that the following containerd request
// is reliable.
//
// NOTE: In one edge case with high load pressure, the kernel kills
// dockerd, containerd and containerd-shims due to OOM. When both
// dockerd and containerd restart, containerd will take time to
// recover all the existing containers. Before containerd is serving,
// dockerd will fail with a gRPC error. Worse, the restore action
// will still ignore any non-NotFound errors and return a running
// state for an already-stopped container, which is unexpected
// behavior, and we need to restart dockerd to recover.
//
// It is painful. Adding WithBlock prevents that edge case, and in
// the common case containerd will be serving shortly. It does no
// harm to add WithBlock for the containerd connection.
grpc.WithBlock(),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithConnectParams(connParams),
grpc.WithContextDialer(dialer.ContextDialer),
// TODO(stevvooe): We may need to allow configuration of this on the client.
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(defaults.DefaultMaxRecvMsgSize)),
grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(defaults.DefaultMaxSendMsgSize)),
}
if config.ContainerdAddr != "" {
d.containerdCli, err = containerd.New(config.ContainerdAddr, containerd.WithDefaultNamespace(config.ContainerdNamespace), containerd.WithDialOpts(gopts), containerd.WithTimeout(60*time.Second))
if err != nil {
return nil, errors.Wrapf(err, "failed to dial %q", config.ContainerdAddr)
}
}
createPluginExec := func(m *plugin.Manager) (plugin.Executor, error) {
var pluginCli *containerd.Client
if config.ContainerdAddr != "" {
pluginCli, err = containerd.New(config.ContainerdAddr, containerd.WithDefaultNamespace(config.ContainerdPluginNamespace), containerd.WithDialOpts(gopts), containerd.WithTimeout(60*time.Second))
if err != nil {
return nil, errors.Wrapf(err, "failed to dial %q", config.ContainerdAddr)
}
}
var (
shim string
shimOpts interface{}
)
if runtime.GOOS != "windows" {
shim, shimOpts, err = d.getRuntime(config.GetDefaultRuntimeName())
if err != nil {
return nil, err
}
}
return pluginexec.New(ctx, getPluginExecRoot(config), pluginCli, config.ContainerdPluginNamespace, m, shim, shimOpts)
}
// Plugin system initialization should happen before restore. Do not change order.
d.pluginManager, err = plugin.NewManager(plugin.ManagerConfig{
Root: filepath.Join(config.Root, "plugins"),
ExecRoot: getPluginExecRoot(config),
Store: d.PluginStore,
CreateExecutor: createPluginExec,
RegistryService: registryService,
LiveRestoreEnabled: config.LiveRestoreEnabled,
LogPluginEvent: d.LogPluginEvent, // todo: make private
AuthzMiddleware: authzMiddleware,
})
if err != nil {
return nil, errors.Wrap(err, "couldn't create plugin manager")
}
if err := d.setupDefaultLogConfig(); err != nil {
return nil, err
}
d.volumes, err = volumesservice.NewVolumeService(config.Root, d.PluginStore, rootIDs, d)
if err != nil {
return nil, err
}
// Check if the devices cgroup is mounted; it is a hard requirement for
// container security on Linux.
//
// Important: we call getSysInfo() directly here, without storing the results,
// as networking has not yet been set up, so we only have partial system info
// at this point.
//
// TODO(thaJeztah) add a utility to only collect the CgroupDevicesEnabled information
if runtime.GOOS == "linux" && !userns.RunningInUserNS() && !getSysInfo(d).CgroupDevicesEnabled {
return nil, errors.New("Devices cgroup isn't mounted")
}
d.id, err = loadOrCreateID(filepath.Join(config.Root, "engine-id"))
if err != nil {
return nil, err
}
d.repository = daemonRepo
d.containers = container.NewMemoryStore()
if d.containersReplica, err = container.NewViewDB(); err != nil {
return nil, err
}
d.execCommands = container.NewExecStore()
d.statsCollector = d.newStatsCollector(1 * time.Second)
d.EventsService = events.New()
d.root = config.Root
d.idMapping = idMapping
d.linkIndex = newLinkIndex()
// On Windows we don't support the environment variable or a user-supplied
// graphdriver. Unix platforms, however, run a single graphdriver for all
// containers, and it can be set through an environment variable, a daemon
// start parameter, or chosen through initialization of the layerstore
// (for example, via driver priority order).
driverName := os.Getenv("DOCKER_DRIVER")
if isWindows {
driverName = "windowsfilter"
} else if driverName != "" {
logrus.Infof("Setting the storage driver from the $DOCKER_DRIVER environment variable (%s)", driverName)
} else {
driverName = config.GraphDriver
}
if d.UsesSnapshotter() {
if os.Getenv("TEST_INTEGRATION_USE_SNAPSHOTTER") != "" {
logrus.Warn("Enabling containerd snapshotter through the $TEST_INTEGRATION_USE_SNAPSHOTTER environment variable. This should only be used for testing.")
}
logrus.Info("Starting daemon with containerd snapshotter integration enabled")
// FIXME(thaJeztah): implement automatic snapshotter-selection similar to graph-driver selection; see https://github.com/moby/moby/issues/44076
if driverName == "" {
driverName = containerd.DefaultSnapshotter
}
// Configure and validate the kernel's security support. Note this is a Linux/FreeBSD
// operation only, so it is safe to pass *just* the runtime OS graphdriver.
if err := configureKernelSecuritySupport(config, driverName); err != nil {
return nil, err
}
d.imageService = ctrd.NewService(d.containerdCli, d.containers, driverName, d, d.registryService)
} else {
layerStore, err := layer.NewStoreFromOptions(layer.StoreOptions{
Root: config.Root,
MetadataStorePathTemplate: filepath.Join(config.Root, "image", "%s", "layerdb"),
GraphDriver: driverName,
GraphDriverOptions: config.GraphOptions,
IDMapping: idMapping,
PluginGetter: d.PluginStore,
ExperimentalEnabled: config.Experimental,
})
if err != nil {
return nil, err
}
// Configure and validate the kernel's security support. Note this is a Linux/FreeBSD
// operation only, so it is safe to pass *just* the runtime OS graphdriver.
if err := configureKernelSecuritySupport(config, layerStore.DriverName()); err != nil {
return nil, err
}
imageRoot := filepath.Join(config.Root, "image", layerStore.DriverName())
ifs, err := image.NewFSStoreBackend(filepath.Join(imageRoot, "imagedb"))
if err != nil {
return nil, err
}
// We have a single tag/reference store for the daemon globally. However, it's
// stored under the graphdriver. On host platforms which only support a single
// container OS, but multiple selectable graphdrivers, this means the location
// of the global reference store depends on which graphdriver is chosen. For
// platforms which support multiple container operating systems, this is
// slightly more problematic, as it is unclear where the global ref store
// should be located. Fortunately, for Windows, which is currently the only
// daemon supporting multiple container operating systems, the list of
// graphdrivers available isn't user configurable. For backwards
// compatibility, we just put it under the windowsfilter directory regardless.
refStoreLocation := filepath.Join(imageRoot, `repositories.json`)
rs, err := refstore.NewReferenceStore(refStoreLocation)
if err != nil {
return nil, fmt.Errorf("Couldn't create reference store repository: %s", err)
}
d.ReferenceStore = rs
imageStore, err := image.NewImageStore(ifs, layerStore)
if err != nil {
return nil, err
}
distributionMetadataStore, err := dmetadata.NewFSMetadataStore(filepath.Join(imageRoot, "distribution"))
if err != nil {
return nil, err
}
imgSvcConfig := images.ImageServiceConfig{
ContainerStore: d.containers,
DistributionMetadataStore: distributionMetadataStore,
EventsService: d.EventsService,
ImageStore: imageStore,
LayerStore: layerStore,
MaxConcurrentDownloads: config.MaxConcurrentDownloads,
MaxConcurrentUploads: config.MaxConcurrentUploads,
MaxDownloadAttempts: config.MaxDownloadAttempts,
ReferenceStore: rs,
RegistryService: registryService,
ContentNamespace: config.ContainerdNamespace,
}
// containerd is not currently supported on Windows, so sometimes
// d.containerdCli will be nil. In that case we'll create a local
// content store... but otherwise we'll use containerd.
if d.containerdCli != nil {
imgSvcConfig.Leases = d.containerdCli.LeasesService()
imgSvcConfig.ContentStore = d.containerdCli.ContentStore()
} else {
cs, lm, err := d.configureLocalContentStore(config.ContainerdNamespace)
if err != nil {
return nil, err
}
imgSvcConfig.ContentStore = cs
imgSvcConfig.Leases = lm
}
// TODO: imageStore, distributionMetadataStore, and ReferenceStore are only
// used above to run migration. They could be initialized in ImageService
// if migration is called from daemon/images. layerStore might move as well.
d.imageService = images.NewImageService(imgSvcConfig)
logrus.Debugf("Max Concurrent Downloads: %d", imgSvcConfig.MaxConcurrentDownloads)
logrus.Debugf("Max Concurrent Uploads: %d", imgSvcConfig.MaxConcurrentUploads)
logrus.Debugf("Max Download Attempts: %d", imgSvcConfig.MaxDownloadAttempts)
}
go d.execCommandGC()
if err := d.initLibcontainerd(ctx); err != nil {
return nil, err
}
if err := d.restore(); err != nil {
return nil, err
}
close(d.startupDone)
info := d.SystemInfo()
for _, w := range info.Warnings {
logrus.Warn(w)
}
engineInfo.WithValues(
dockerversion.Version,
dockerversion.GitCommit,
info.Architecture,
info.Driver,
info.KernelVersion,
info.OperatingSystem,
info.OSType,
info.OSVersion,
info.ID,
).Set(1)
engineCpus.Set(float64(info.NCPU))
engineMemory.Set(float64(info.MemTotal))
logrus.WithFields(logrus.Fields{
"version": dockerversion.Version,
"commit": dockerversion.GitCommit,
"graphdriver": d.ImageService().StorageDriver(),
}).Info("Docker daemon")
return d, nil
}
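// A minimal caller sketch (variable names hypothetical):
//
//	d, err := NewDaemon(ctx, cfg, pluginStore, authzMiddleware)
//	if err != nil {
//		return err
//	}
//	defer d.Shutdown(context.Background())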
// DistributionServices returns services controlling daemon storage
func (daemon *Daemon) DistributionServices() images.DistributionServices {
return daemon.imageService.DistributionServices()
}
func (daemon *Daemon) waitForStartupDone() {
<-daemon.startupDone
}
func (daemon *Daemon) shutdownContainer(c *container.Container) error {
// If the container fails to exit within its stop timeout after SIGTERM, force-kill it.
if err := daemon.containerStop(context.TODO(), c, containertypes.StopOptions{}); err != nil {
return fmt.Errorf("Failed to stop container %s with error: %v", c.ID, err)
}
// Wait without timeout for the container to exit.
// Ignore the result.
<-c.Wait(context.Background(), container.WaitConditionNotRunning)
return nil
}
// ShutdownTimeout returns the timeout (in seconds) before containers are forcibly
// killed during shutdown. The default timeout can be configured both on the daemon
// and per container, and the longest timeout will be used. A grace-period of
// 5 seconds is added to the configured timeout.
//
// A negative (-1) timeout means "indefinitely", which means that containers
// are not forcibly killed, and the daemon shuts down after all containers exit.
func (daemon *Daemon) ShutdownTimeout() int {
shutdownTimeout := daemon.configStore.ShutdownTimeout
if shutdownTimeout < 0 {
return -1
}
if daemon.containers == nil {
return shutdownTimeout
}
graceTimeout := 5
for _, c := range daemon.containers.List() {
stopTimeout := c.StopTimeout()
if stopTimeout < 0 {
return -1
}
if stopTimeout+graceTimeout > shutdownTimeout {
shutdownTimeout = stopTimeout + graceTimeout
}
}
return shutdownTimeout
}
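// Worked example (values hypothetical): with a daemon-level ShutdownTimeout
// of 15 seconds and one container whose StopTimeout() is 20 seconds,
// ShutdownTimeout() returns 25 (20 plus the 5-second grace period), since
// the longest per-container timeout wins.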
// Shutdown stops the daemon.
func (daemon *Daemon) Shutdown(ctx context.Context) error {
daemon.shutdown = true
// Keep mounts and networking running on daemon shutdown if
// we are to keep containers running and restore them.
if daemon.configStore.LiveRestoreEnabled && daemon.containers != nil {
// Check if there are any running containers; if there are none, we should do some cleanup.
if ls, err := daemon.Containers(ctx, &types.ContainerListOptions{}); len(ls) != 0 || err != nil {
// metrics plugins still need some cleanup
daemon.cleanupMetricsPlugins()
return err
}
}
if daemon.containers != nil {
logrus.Debugf("daemon configured with a %d seconds minimum shutdown timeout", daemon.configStore.ShutdownTimeout)
logrus.Debugf("start clean shutdown of all containers with a %d seconds timeout...", daemon.ShutdownTimeout())
daemon.containers.ApplyAll(func(c *container.Container) {
if !c.IsRunning() {
return
}
log := logrus.WithField("container", c.ID)
log.Debug("shutting down container")
if err := daemon.shutdownContainer(c); err != nil {
log.WithError(err).Error("failed to shut down container")
return
}
if mountid, err := daemon.imageService.GetLayerMountID(c.ID); err == nil {
daemon.cleanupMountsByID(mountid)
}
log.Debugf("shut down container")
})
}
if daemon.volumes != nil {
if err := daemon.volumes.Shutdown(); err != nil {
logrus.Errorf("Error shutting down volume store: %v", err)
}
}
if daemon.imageService != nil {
if err := daemon.imageService.Cleanup(); err != nil {
logrus.Error(err)
}
}
// If we are part of a cluster, clean up the cluster's resources.
if daemon.clusterProvider != nil {
logrus.Debugf("start clean shutdown of cluster resources...")
daemon.DaemonLeavesCluster()
}
daemon.cleanupMetricsPlugins()
// Shutdown plugins after containers and layerstore. Don't change the order.
daemon.pluginShutdown()
// trigger libnetwork Stop only if it's initialized
if daemon.netController != nil {
daemon.netController.Stop()
}
if daemon.containerdCli != nil {
daemon.containerdCli.Close()
}
if daemon.mdDB != nil {
daemon.mdDB.Close()
}
return daemon.cleanupMounts()
}
// Mount sets container.BaseFS
func (daemon *Daemon) Mount(container *container.Container) error {
return daemon.imageService.Mount(context.Background(), container)
}
// Unmount unsets the container base filesystem
func (daemon *Daemon) Unmount(container *container.Container) error {
return daemon.imageService.Unmount(context.Background(), container)
}
// Subnets returns the IPv4 and IPv6 subnets of networks that are managed by Docker.
func (daemon *Daemon) Subnets() ([]net.IPNet, []net.IPNet) {
var v4Subnets []net.IPNet
var v6Subnets []net.IPNet
managedNetworks := daemon.netController.Networks()
for _, managedNetwork := range managedNetworks {
v4infos, v6infos := managedNetwork.Info().IpamInfo()
for _, info := range v4infos {
if info.IPAMData.Pool != nil {
v4Subnets = append(v4Subnets, *info.IPAMData.Pool)
}
}
for _, info := range v6infos {
if info.IPAMData.Pool != nil {
v6Subnets = append(v6Subnets, *info.IPAMData.Pool)
}
}
}
return v4Subnets, v6Subnets
}
// prepareTempDir prepares and returns the default directory to use
// for temporary files.
// If it doesn't exist, it is created. If it exists, its content is removed.
func prepareTempDir(rootDir string) (string, error) {
var tmpDir string
if tmpDir = os.Getenv("DOCKER_TMPDIR"); tmpDir == "" {
tmpDir = filepath.Join(rootDir, "tmp")
newName := tmpDir + "-old"
if err := os.Rename(tmpDir, newName); err == nil {
go func() {
if err := os.RemoveAll(newName); err != nil {
logrus.Warnf("failed to delete old tmp directory: %s", newName)
}
}()
} else if !os.IsNotExist(err) {
logrus.Warnf("failed to rename %s for background deletion: %s. Deleting synchronously", tmpDir, err)
if err := os.RemoveAll(tmpDir); err != nil {
logrus.Warnf("failed to delete old tmp directory: %s", tmpDir)
}
}
}
return tmpDir, idtools.MkdirAllAndChown(tmpDir, 0o700, idtools.CurrentIdentity())
}
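// The rename-then-background-delete above lets the daemon get a fresh tmp
// dir immediately instead of blocking on a potentially slow RemoveAll of the
// previous contents. A minimal standalone sketch of the same pattern
// (illustrative only; recreateTempDir is a hypothetical name):
//
//	func recreateTempDir(dir string) error {
//		old := dir + "-old"
//		if err := os.Rename(dir, old); err == nil {
//			// Best-effort cleanup of the renamed dir in the background.
//			go os.RemoveAll(old)
//		} else if !os.IsNotExist(err) {
//			// Rename failed for some other reason; fall back to a
//			// synchronous delete.
//			if err := os.RemoveAll(dir); err != nil {
//				return err
//			}
//		}
//		return os.MkdirAll(dir, 0o700)
//	}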
// setGenericResources parses the node generic resources from the daemon
// configuration (resources a swarm node advertises, such as GPUs) and
// stores them on the daemon.
func (daemon *Daemon) setGenericResources(conf *config.Config) error {
genericResources, err := config.ParseGenericResources(conf.NodeGenericResources)
if err != nil {
return err
}
daemon.genericResources = genericResources
return nil
}
// IsShuttingDown reports whether the daemon is shutting down.
func (daemon *Daemon) IsShuttingDown() bool {
return daemon.shutdown
}
// isBridgeNetworkDisabled reports whether bridge networking is disabled in
// the daemon configuration (the bridge interface is set to "none").
func isBridgeNetworkDisabled(conf *config.Config) bool {
return conf.BridgeConfig.Iface == config.DisableNetworkBridge
}
// networkOptions builds the libnetwork configuration options from the
// current daemon configuration.
func (daemon *Daemon) networkOptions(pg plugingetter.PluginGetter, activeSandboxes map[string]interface{}) ([]nwconfig.Option, error) {
options := []nwconfig.Option{}
if daemon.configStore == nil {
return options, nil
}
conf := daemon.configStore
dd := runconfig.DefaultDaemonNetworkMode()
options = []nwconfig.Option{
nwconfig.OptionDataDir(conf.Root),
nwconfig.OptionExecRoot(conf.GetExecRoot()),
nwconfig.OptionDefaultDriver(string(dd)),
nwconfig.OptionDefaultNetwork(dd.NetworkName()),
nwconfig.OptionLabels(conf.Labels),
nwconfig.OptionNetworkControlPlaneMTU(conf.NetworkControlPlaneMTU),
driverOptions(conf),
}
if len(conf.NetworkConfig.DefaultAddressPools.Value()) > 0 {
options = append(options, nwconfig.OptionDefaultAddressPoolConfig(conf.NetworkConfig.DefaultAddressPools.Value()))
}
if conf.LiveRestoreEnabled && len(activeSandboxes) != 0 {
options = append(options, nwconfig.OptionActiveSandboxes(activeSandboxes))
}
if pg != nil {
options = append(options, nwconfig.OptionPluginGetter(pg))
}
return options, nil
}
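// For context, the options assembled above are consumed when the network
// controller is created. A rough sketch (assuming the variadic
// libnetwork.New constructor; the actual call site lives in the
// platform-specific network-controller initialization):
//
//	netOptions, err := daemon.networkOptions(daemon.PluginStore, activeSandboxes)
//	if err != nil {
//		return err
//	}
//	controller, err := libnetwork.New(netOptions...)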
// GetCluster returns the cluster
func (daemon *Daemon) GetCluster() Cluster {
return daemon.cluster
}
// SetCluster sets the cluster
func (daemon *Daemon) SetCluster(cluster Cluster) {
daemon.cluster = cluster
}
func (daemon *Daemon) pluginShutdown() {
manager := daemon.pluginManager
// Check for a valid manager object. In error conditions, daemon init can
// fail and shutdown can be called before the plugin manager is initialized.
if manager != nil {
manager.Shutdown()
}
}
// PluginManager returns the current plugin manager associated with the daemon.
func (daemon *Daemon) PluginManager() *plugin.Manager { // the manager is set up before the daemon; ideally callers would use it directly rather than via this method
return daemon.pluginManager
}
// PluginGetter returns the current plugin store associated with the daemon.
func (daemon *Daemon) PluginGetter() *plugin.Store {
return daemon.PluginStore
}
// CreateDaemonRoot creates the root directory for the daemon.
func CreateDaemonRoot(config *config.Config) error {
// get the canonical path to the Docker root directory
var realRoot string
if _, err := os.Stat(config.Root); err != nil && os.IsNotExist(err) {
realRoot = config.Root
} else {
realRoot, err = fileutils.ReadSymlinkedDirectory(config.Root)
if err != nil {
return fmt.Errorf("Unable to get the full path to root (%s): %s", config.Root, err)
}
}
idMapping, err := setupRemappedRoot(config)
if err != nil {
return err
}
return setupDaemonRoot(config, realRoot, idMapping.RootPair())
}
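// Illustrative example: if config.Root is /var/lib/docker and that path is a
// symlink to /mnt/ssd/docker, realRoot resolves to /mnt/ssd/docker above, so
// the daemon root (and its ownership via idMapping.RootPair()) is set up at
// the real location rather than through the symlink.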
// checkpointAndSave grabs a container lock to safely call container.CheckpointTo
func (daemon *Daemon) checkpointAndSave(container *container.Container) error {
container.Lock()
defer container.Unlock()
if err := container.CheckpointTo(daemon.containersReplica); err != nil {
return fmt.Errorf("Error saving container state: %v", err)
}
return nil
}
// fixMemorySwappiness clears the swappiness value on the server side,
// because the CLI sends -1 when it wants to unset it.
func fixMemorySwappiness(resources *containertypes.Resources) {
if resources.MemorySwappiness != nil && *resources.MemorySwappiness == -1 {
resources.MemorySwappiness = nil
}
}
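// Example of the -1 sentinel being cleared (illustrative):
//
//	s := int64(-1)
//	res := containertypes.Resources{MemorySwappiness: &s}
//	fixMemorySwappiness(&res)
//	// res.MemorySwappiness is now nil, so the platform default applies.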
// GetAttachmentStore returns the current attachment store associated with the daemon.
func (daemon *Daemon) GetAttachmentStore() *network.AttachmentStore {
return &daemon.attachmentStore
}
// IdentityMapping returns the uid/gid mapping, or a SID on Windows, for the builder.
func (daemon *Daemon) IdentityMapping() idtools.IdentityMapping {
return daemon.idMapping
}
// ImageService returns the Daemon's ImageService
func (daemon *Daemon) ImageService() ImageService {
return daemon.imageService
}
// BuilderBackend returns the backend used by the builder.
func (daemon *Daemon) BuilderBackend() builder.Backend {
return struct {
*Daemon
ImageService
}{daemon, daemon.imageService}
}
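// The anonymous struct above relies on Go's embedding: the method sets of
// both *Daemon and ImageService are promoted, so the combined value can
// satisfy builder.Backend even though neither half implements it alone.
// The same technique in isolation (hypothetical interfaces):
//
//	type Pinger interface{ Ping() error }
//	type Lister interface{ List() []string }
//	type Backend interface {
//		Pinger
//		Lister
//	}
//
//	func compose(p Pinger, l Lister) Backend {
//		return struct {
//			Pinger
//			Lister
//		}{p, l}
//	}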
// RawSysInfo returns *sysinfo.SysInfo.
func (daemon *Daemon) RawSysInfo() *sysinfo.SysInfo {
daemon.sysInfoOnce.Do(func() {
// We check whether sysInfo is already set, to allow tests to
// override the actual sysInfo.
if daemon.sysInfo == nil {
daemon.sysInfo = getSysInfo(daemon)
}
})
return daemon.sysInfo
}
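// sysInfoOnce makes the (comparatively expensive) system probe run at most
// once per daemon; subsequent callers receive the cached value. The same
// memoization pattern in isolation (hypothetical names; probeSysInfo stands
// in for the actual collection function):
//
//	var (
//		infoOnce sync.Once
//		info     *sysinfo.SysInfo
//	)
//
//	func cachedSysInfo() *sysinfo.SysInfo {
//		infoOnce.Do(func() { info = probeSysInfo() })
//		return info
//	}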