// moby/daemon/daemon.go


// Package daemon exposes the functions that occur on the host server
// that the Docker daemon is running.
//
// In implementing the various functions of the daemon, there is often
// a method-specific struct for configuring the runtime behavior.
package daemon // import "github.com/docker/docker/daemon"
import (
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/url"
	"os"
	"path"
	"path/filepath"
	"runtime"
"strings"
"sync"
"time"
"github.com/docker/docker/pkg/fileutils"
"go.etcd.io/bbolt"
"google.golang.org/grpc"
"google.golang.org/grpc/backoff"
"github.com/containerd/containerd"
"github.com/containerd/containerd/defaults"
"github.com/containerd/containerd/pkg/dialer"
"github.com/containerd/containerd/remotes/docker"
"github.com/containerd/containerd/sys"
"github.com/docker/docker/api/types"
containertypes "github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/builder"
"github.com/docker/docker/container"
"github.com/docker/docker/daemon/config"
"github.com/docker/docker/daemon/discovery"
"github.com/docker/docker/daemon/events"
"github.com/docker/docker/daemon/exec"
"github.com/docker/docker/daemon/images"
"github.com/docker/docker/daemon/logger"
"github.com/docker/docker/daemon/network"
"github.com/docker/docker/errdefs"
bkconfig "github.com/moby/buildkit/cmd/buildkitd/config"
"github.com/moby/buildkit/util/resolver"
"github.com/sirupsen/logrus"
	// register graph drivers
	_ "github.com/docker/docker/daemon/graphdriver/register"
	"github.com/docker/docker/daemon/stats"
	dmetadata "github.com/docker/docker/distribution/metadata"
	"github.com/docker/docker/dockerversion"
	"github.com/docker/docker/image"
	"github.com/docker/docker/layer"
	"github.com/docker/docker/libcontainerd"
	libcontainerdtypes "github.com/docker/docker/libcontainerd/types"
	"github.com/docker/docker/pkg/idtools"
	"github.com/docker/docker/pkg/plugingetter"
	"github.com/docker/docker/pkg/system"
	"github.com/docker/docker/pkg/truncindex"
	"github.com/docker/docker/plugin"
	pluginexec "github.com/docker/docker/plugin/executor/containerd"
	refstore "github.com/docker/docker/reference"
	"github.com/docker/docker/registry"
	"github.com/docker/docker/runconfig"
	volumesservice "github.com/docker/docker/volume/service"
	"github.com/docker/libnetwork"
	"github.com/docker/libnetwork/cluster"
	nwconfig "github.com/docker/libnetwork/config"
	"github.com/moby/locker"
	"github.com/pkg/errors"
	"golang.org/x/sync/semaphore"
)

// ContainersNamespace is the name of the namespace used for users' containers.
const (
	ContainersNamespace = "moby"
)

var (
	errSystemNotSupported = errors.New("the Docker daemon is not supported on this platform")
)

// Daemon holds information about the Docker daemon.
type Daemon struct {
	ID                string
	repository        string
	containers        container.Store
	containersReplica container.ViewDB
	execCommands      *exec.Store
	imageService      *images.ImageService
	idIndex           *truncindex.TruncIndex
	configStore       *config.Config
	statsCollector    *stats.Collector
	defaultLogConfig  containertypes.LogConfig
	RegistryService   registry.Service
	EventsService     *events.Events
	netController     libnetwork.NetworkController
	volumes           *volumesservice.VolumesService
	discoveryWatcher  discovery.Reloader
	root              string
	seccompEnabled    bool
	apparmorEnabled   bool
	shutdown          bool
	idMapping         *idtools.IdentityMapping
	// TODO: move graphDrivers field to an InfoService
	graphDrivers  map[string]string // By operating system
	PluginStore   *plugin.Store     // todo: remove
	pluginManager *plugin.Manager
	linkIndex     *linkIndex
	containerdCli *containerd.Client
	containerd            libcontainerdtypes.Client
	defaultIsolation      containertypes.Isolation // Default isolation mode on Windows
	clusterProvider       cluster.Provider
	cluster               Cluster
	genericResources      []swarm.GenericResource
	metricsPluginListener net.Listener
	machineMemory         uint64
	seccompProfile        []byte
	seccompProfilePath    string
	diskUsageRunning      int32
	pruneRunning          int32
	hosts                 map[string]bool // hosts stores the addresses the daemon is listening on
	startupDone           chan struct{}
	attachmentStore       network.AttachmentStore
	attachableNetworkLock *locker.Locker

	// This is used for Windows which doesn't currently support running on containerd.
	// It stores metadata for the content store (used for manifest caching).
	// This needs to be closed on daemon exit.
	mdDB *bbolt.DB
}

// StoreHosts stores the addresses the daemon is listening on.
func (daemon *Daemon) StoreHosts(hosts []string) {
	if daemon.hosts == nil {
		daemon.hosts = make(map[string]bool)
	}
	for _, h := range hosts {
		daemon.hosts[h] = true
	}
}
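StoreHosts treats daemon.hosts as a set keyed by address: the map is allocated lazily, and repeated addresses collapse to a single entry. A minimal standalone sketch of the same idiom (the storeHosts helper below is hypothetical, not part of the daemon):

```go
package main

import "fmt"

// storeHosts mirrors the map-as-set idiom used by Daemon.StoreHosts:
// a nil map is allocated lazily, and duplicate addresses collapse.
func storeHosts(set map[string]bool, hosts []string) map[string]bool {
	if set == nil {
		set = make(map[string]bool)
	}
	for _, h := range hosts {
		set[h] = true
	}
	return set
}

func main() {
	set := storeHosts(nil, []string{
		"tcp://0.0.0.0:2375",
		"unix:///var/run/docker.sock",
		"tcp://0.0.0.0:2375", // duplicate, collapses into the first entry
	})
	fmt.Println(len(set)) // prints 2
}
```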

// HasExperimental returns whether the experimental features of the daemon are enabled or not.
func (daemon *Daemon) HasExperimental() bool {
	return daemon.configStore != nil && daemon.configStore.Experimental
}

// Features returns the features map from configStore.
func (daemon *Daemon) Features() *map[string]bool {
	return &daemon.configStore.Features
}

// RegistryHosts returns registry configuration in containerd resolvers format.
func (daemon *Daemon) RegistryHosts() docker.RegistryHosts {
	var (
		registryKey = "docker.io"
		mirrors     = make([]string, len(daemon.configStore.Mirrors))
		m           = map[string]bkconfig.RegistryConfig{}
	)
	// must trim "https://" or "http://" prefix
	for i, v := range daemon.configStore.Mirrors {
		if uri, err := url.Parse(v); err == nil {
			v = uri.Host
		}
		mirrors[i] = v
	}
	// set mirrors for default registry
	m[registryKey] = bkconfig.RegistryConfig{Mirrors: mirrors}

	for _, v := range daemon.configStore.InsecureRegistries {
		u, err := url.Parse(v)
		c := bkconfig.RegistryConfig{}
		if err == nil {
			v = u.Host
			t := true
			if u.Scheme == "http" {
				c.PlainHTTP = &t
			} else {
				c.Insecure = &t
			}
		}
		m[v] = c
	}

	for k, v := range m {
		if d, err := registry.HostCertsDir(k); err == nil {
			v.TLSConfigDir = []string{d}
			m[k] = v
		}
	}

	certsDir := registry.CertsDir()
	if fis, err := ioutil.ReadDir(certsDir); err == nil {
		for _, fi := range fis {
			if _, ok := m[fi.Name()]; !ok {
				m[fi.Name()] = bkconfig.RegistryConfig{
					TLSConfigDir: []string{filepath.Join(certsDir, fi.Name())},
				}
			}
		}
	}
	return resolver.NewRegistryConfig(m)
}
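The insecure-registry loop above parses each configured entry, keeps only the host part, and then distinguishes plain-HTTP registries (http scheme) from TLS-but-unverified ones (anything else). A standalone sketch of that classification using only net/url; the registrySketch struct is a hypothetical stand-in for bkconfig.RegistryConfig, and the guard on u.Host is an assumption added so scheme-less entries pass through unchanged:

```go
package main

import (
	"fmt"
	"net/url"
)

// registrySketch is a hypothetical stand-in for bkconfig.RegistryConfig.
type registrySketch struct {
	Host      string
	PlainHTTP bool
	Insecure  bool
}

// classify parses a registry entry, trims it down to the host, and marks
// http entries as plain-HTTP versus other schemes as TLS-but-insecure.
func classify(entry string) registrySketch {
	c := registrySketch{Host: entry}
	if u, err := url.Parse(entry); err == nil && u.Host != "" {
		c.Host = u.Host
		if u.Scheme == "http" {
			c.PlainHTTP = true
		} else {
			c.Insecure = true
		}
	}
	return c
}

func main() {
	fmt.Println(classify("http://registry.local:5000"))  // host kept, PlainHTTP set
	fmt.Println(classify("https://mirror.example.com"))  // host kept, Insecure set
}
```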

func (daemon *Daemon) restore() error {
	var mapLock sync.Mutex
	containers := make(map[string]*container.Container)

	logrus.Info("Loading containers: start.")

	dir, err := ioutil.ReadDir(daemon.repository)
	if err != nil {
		return err
	}

	// parallelLimit is the maximum number of parallel startup jobs that we
	// allow (this is the limit used for all startup semaphores). The multiplier
	// (128) was chosen after some fairly significant benchmarking -- don't change
	// it unless you've tested it significantly (this value is adjusted if
	// RLIMIT_NOFILE is small to avoid EMFILE).
	parallelLimit := adjustParallelLimit(len(dir), 128*runtime.NumCPU())

	// Re-used for all parallel startup jobs.
	var group sync.WaitGroup
	sem := semaphore.NewWeighted(int64(parallelLimit))

	for _, v := range dir {
		group.Add(1)
		go func(id string) {
			defer group.Done()
			_ = sem.Acquire(context.Background(), 1)
			defer sem.Release(1)
			log := logrus.WithField("container", id)

			c, err := daemon.load(id)
			if err != nil {
				log.WithError(err).Error("failed to load container")
				return
			}
			if !system.IsOSSupported(c.OS) {
				log.Errorf("failed to load container: %s (%q)", system.ErrNotSupportedOperatingSystem, c.OS)
				return
			}
			// Ignore the container if it does not support the current driver being used by the graph
currentDriverForContainerOS := daemon.graphDrivers[c.OS]
if (c.Driver == "" && currentDriverForContainerOS == "aufs") || c.Driver == currentDriverForContainerOS {
rwlayer, err := daemon.imageService.GetLayerByID(c.ID, c.OS)
if err != nil {
log.WithError(err).Error("failed to load container mount")
return
}
c.RWLayer = rwlayer
log.WithFields(logrus.Fields{
"running": c.IsRunning(),
"paused": c.IsPaused(),
}).Debug("loaded container")
mapLock.Lock()
containers[c.ID] = c
mapLock.Unlock()
} else {
				log.Debug("cannot load container because it was created with another storage driver")
}
}(v.Name())
}
group.Wait()
removeContainers := make(map[string]*container.Container)
restartContainers := make(map[*container.Container]chan struct{})
activeSandboxes := make(map[string]interface{})
for _, c := range containers {
group.Add(1)
go func(c *container.Container) {
defer group.Done()
_ = sem.Acquire(context.Background(), 1)
defer sem.Release(1)
log := logrus.WithField("container", c.ID)
if err := daemon.registerName(c); err != nil {
log.WithError(err).Errorf("failed to register container name: %s", c.Name)
mapLock.Lock()
delete(containers, c.ID)
mapLock.Unlock()
return
}
if err := daemon.Register(c); err != nil {
log.WithError(err).Error("failed to register container")
mapLock.Lock()
delete(containers, c.ID)
mapLock.Unlock()
return
}
// The LogConfig.Type is empty if the container was created before docker 1.12 with default log driver.
// We should rewrite it to use the daemon defaults.
// Fixes https://github.com/docker/docker/issues/22536
if c.HostConfig.LogConfig.Type == "" {
if err := daemon.mergeAndVerifyLogConfig(&c.HostConfig.LogConfig); err != nil {
log.WithError(err).Error("failed to verify log config for container")
}
}
}(c)
}
group.Wait()
for _, c := range containers {
group.Add(1)
go func(c *container.Container) {
defer group.Done()
_ = sem.Acquire(context.Background(), 1)
defer sem.Release(1)
log := logrus.WithField("container", c.ID)
daemon.backportMountSpec(c)
if err := daemon.checkpointAndSave(c); err != nil {
log.WithError(err).Error("error saving backported mountspec to disk")
}
daemon.setStateCounter(c)
logger := func(c *container.Container) *logrus.Entry {
return log.WithFields(logrus.Fields{
"running": c.IsRunning(),
"paused": c.IsPaused(),
"restarting": c.IsRestarting(),
})
}
logger(c).Debug("restoring container")
var (
err error
alive bool
ec uint32
exitedAt time.Time
process libcontainerdtypes.Process
)
alive, _, process, err = daemon.containerd.Restore(context.Background(), c.ID, c.InitializeStdio)
if err != nil && !errdefs.IsNotFound(err) {
logger(c).WithError(err).Error("failed to restore container with containerd")
return
}
logger(c).Debugf("alive: %v", alive)
if !alive {
// If process is not nil, cleanup dead container from containerd.
// If process is nil then the above `containerd.Restore` returned an errdefs.NotFoundError,
// and docker's view of the container state will be updated accordingly via SetStopped further down.
if process != nil {
logger(c).Debug("cleaning up dead container process")
ec, exitedAt, err = process.Delete(context.Background())
if err != nil && !errdefs.IsNotFound(err) {
logger(c).WithError(err).Error("failed to delete container from containerd")
return
}
}
} else if !daemon.configStore.LiveRestoreEnabled {
logger(c).Debug("shutting down container considered alive by containerd")
if err := daemon.shutdownContainer(c); err != nil && !errdefs.IsNotFound(err) {
log.WithError(err).Error("error shutting down container")
return
}
c.ResetRestartManager(false)
}
if c.IsRunning() || c.IsPaused() {
logger(c).Debug("syncing container on disk state with real state")
c.RestartManager().Cancel() // manually start containers because some need to wait for swarm networking
if c.IsPaused() && alive {
s, err := daemon.containerd.Status(context.Background(), c.ID)
if err != nil {
logger(c).WithError(err).Error("failed to get container status")
} else {
logger(c).WithField("state", s).Info("restored container paused")
switch s {
case containerd.Paused, containerd.Pausing:
// nothing to do
case containerd.Stopped:
alive = false
case containerd.Unknown:
log.Error("unknown status for paused container during restore")
default:
// running
c.Lock()
c.Paused = false
daemon.setStateCounter(c)
if err := c.CheckpointTo(daemon.containersReplica); err != nil {
log.WithError(err).Error("failed to update paused container state")
}
c.Unlock()
}
}
}
if !alive {
logger(c).Debug("setting stopped state")
c.Lock()
c.SetStopped(&container.ExitStatus{ExitCode: int(ec), ExitedAt: exitedAt})
daemon.Cleanup(c)
if err := c.CheckpointTo(daemon.containersReplica); err != nil {
log.WithError(err).Error("failed to update stopped container state")
}
c.Unlock()
logger(c).Debug("set stopped state")
}
// we call Mount and then Unmount to get BaseFs of the container
if err := daemon.Mount(c); err != nil {
// The mount is unlikely to fail. However, in case the mount fails,
// the container should still be allowed to restore here. Some
// functionality (like docker exec -u user) might be missing, but the
// container can still be stopped/restarted/removed.
// See #29365 for related information.
// The error is only logged here.
logger(c).WithError(err).Warn("failed to mount container to get BaseFs path")
} else {
if err := daemon.Unmount(c); err != nil {
logger(c).WithError(err).Warn("failed to umount container to get BaseFs path")
}
}
c.ResetRestartManager(false)
if !c.HostConfig.NetworkMode.IsContainer() && c.IsRunning() {
options, err := daemon.buildSandboxOptions(c)
if err != nil {
logger(c).WithError(err).Warn("failed to build sandbox option to restore container")
}
mapLock.Lock()
activeSandboxes[c.NetworkSettings.SandboxID] = options
mapLock.Unlock()
}
}
// get list of containers we need to restart
// Do not autostart containers that have endpoints in a swarm-scope
// network yet, since the cluster is not initialized. We will start
// them after the cluster is initialized.
if daemon.configStore.AutoRestart && c.ShouldRestart() && !c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore {
mapLock.Lock()
restartContainers[c] = make(chan struct{})
mapLock.Unlock()
} else if c.HostConfig != nil && c.HostConfig.AutoRemove {
mapLock.Lock()
removeContainers[c.ID] = c
mapLock.Unlock()
}
c.Lock()
if c.RemovalInProgress {
// We probably crashed in the middle of a removal, reset
// the flag.
//
// We DO NOT remove the container here as we do not
// know whether the user wanted the associated volumes,
// network links, or both to also be removed. So we put
// the container in the "dead" state and leave further
// processing up to them.
c.RemovalInProgress = false
c.Dead = true
if err := c.CheckpointTo(daemon.containersReplica); err != nil {
log.WithError(err).Error("failed to update RemovalInProgress container state")
} else {
log.Debug("reset RemovalInProgress state for container")
}
}
c.Unlock()
logger(c).Debug("done restoring container")
}(c)
}
group.Wait()
daemon.netController, err = daemon.initNetworkController(daemon.configStore, activeSandboxes)
if err != nil {
return fmt.Errorf("Error initializing network controller: %v", err)
}
// Now that all the containers are registered, register the links
for _, c := range containers {
group.Add(1)
go func(c *container.Container) {
_ = sem.Acquire(context.Background(), 1)
if err := daemon.registerLinks(c, c.HostConfig); err != nil {
logrus.WithField("container", c.ID).WithError(err).Error("failed to register link for container")
}
sem.Release(1)
group.Done()
}(c)
}
group.Wait()
for c, notifier := range restartContainers {
group.Add(1)
go func(c *container.Container, chNotify chan struct{}) {
_ = sem.Acquire(context.Background(), 1)
log := logrus.WithField("container", c.ID)
log.Debug("starting container")
// ignore errors here as this is a best effort to wait for children to be
// running before we try to start the container
children := daemon.children(c)
timeout := time.NewTimer(5 * time.Second)
defer timeout.Stop()
for _, child := range children {
if notifier, exists := restartContainers[child]; exists {
select {
case <-notifier:
case <-timeout.C:
}
}
}
// Make sure networks are available before starting
daemon.waitForNetworks(c)
if err := daemon.containerStart(c, "", "", true); err != nil {
log.WithError(err).Error("failed to start container")
}
close(chNotify)
sem.Release(1)
group.Done()
}(c, notifier)
}
group.Wait()
for id := range removeContainers {
group.Add(1)
go func(cid string) {
_ = sem.Acquire(context.Background(), 1)
if err := daemon.ContainerRm(cid, &types.ContainerRmConfig{ForceRemove: true, RemoveVolume: true}); err != nil {
logrus.WithField("container", cid).WithError(err).Error("failed to remove container")
}
sem.Release(1)
group.Done()
}(id)
}
group.Wait()
// any containers that were started above would already have had this done,
// however we now need to prepare the mountpoints for the rest of the containers as well.
// This shouldn't cause any issue running on the containers that already had this run.
// This must be run after any containers with a restart policy so that containerized plugins
// can have a chance to be running before we try to initialize them.
for _, c := range containers {
// if the container has a restart policy, do not
// prepare the mountpoints since that was already done on restarting.
// This is to speed up the daemon start when a restarting container
// has a volume and the volume driver is not available.
if _, ok := restartContainers[c]; ok {
continue
} else if _, ok := removeContainers[c.ID]; ok {
// container is automatically removed, skip it.
continue
}
group.Add(1)
go func(c *container.Container) {
_ = sem.Acquire(context.Background(), 1)
if err := daemon.prepareMountPoints(c); err != nil {
logrus.WithField("container", c.ID).WithError(err).Error("failed to prepare mountpoints for container")
}
sem.Release(1)
group.Done()
}(c)
}
group.Wait()
logrus.Info("Loading containers: done.")
return nil
}
// RestartSwarmContainers restarts any autostart container which has a
// swarm endpoint.
func (daemon *Daemon) RestartSwarmContainers() {
ctx := context.Background()
// parallelLimit is the maximum number of parallel startup jobs that we
// allow (this is the limit used for all startup semaphores). The multiplier
// (128) was chosen after some fairly significant benchmarking -- don't change
// it unless you've tested it significantly (this value is adjusted if
// RLIMIT_NOFILE is small to avoid EMFILE).
parallelLimit := adjustParallelLimit(len(daemon.List()), 128*runtime.NumCPU())
var group sync.WaitGroup
sem := semaphore.NewWeighted(int64(parallelLimit))
for _, c := range daemon.List() {
if !c.IsRunning() && !c.IsPaused() {
// Autostart all the containers which have a
// swarm endpoint now that the cluster is
// initialized.
if daemon.configStore.AutoRestart && c.ShouldRestart() && c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore {
group.Add(1)
go func(c *container.Container) {
if err := sem.Acquire(ctx, 1); err != nil {
// ctx is done.
group.Done()
return
}
if err := daemon.containerStart(c, "", "", true); err != nil {
logrus.WithField("container", c.ID).WithError(err).Error("failed to start swarm container")
}
sem.Release(1)
group.Done()
}(c)
}
}
}
group.Wait()
}
// waitForNetworks is used during daemon initialization when starting up containers.
// It ensures that all of a container's networks are available before the daemon tries to start the container.
// In practice it just makes sure the discovery service is available for containers which use a network that requires discovery.
func (daemon *Daemon) waitForNetworks(c *container.Container) {
if daemon.discoveryWatcher == nil {
return
}
// Make sure if the container has a network that requires discovery that the discovery service is available before starting
for netName := range c.NetworkSettings.Networks {
// If we get `ErrNoSuchNetwork` here, we can assume that it is due to discovery not being ready
// Most likely this is because the K/V store used for discovery is in a container and needs to be started
if _, err := daemon.netController.NetworkByName(netName); err != nil {
if _, ok := err.(libnetwork.ErrNoSuchNetwork); !ok {
continue
}
// use a longish timeout here due to some slowdowns in libnetwork if the k/v store is on anything other than --net=host
// FIXME: why is this slow???
dur := 60 * time.Second
timer := time.NewTimer(dur)
logrus.WithField("container", c.ID).Debugf("Container %s waiting for network to be ready", c.Name)
select {
case <-daemon.discoveryWatcher.ReadyCh():
case <-timer.C:
}
timer.Stop()
return
}
}
}
func (daemon *Daemon) children(c *container.Container) map[string]*container.Container {
return daemon.linkIndex.children(c)
}
// parents returns the parent containers of the
// container with the given name, keyed by name.
func (daemon *Daemon) parents(c *container.Container) map[string]*container.Container {
return daemon.linkIndex.parents(c)
}
func (daemon *Daemon) registerLink(parent, child *container.Container, alias string) error {
fullName := path.Join(parent.Name, alias)
if err := daemon.containersReplica.ReserveName(fullName, child.ID); err != nil {
if err == container.ErrNameReserved {
logrus.Warnf("error registering link for %s, to %s, as alias %s, ignoring: %v", parent.ID, child.ID, alias, err)
return nil
}
return err
}
daemon.linkIndex.link(parent, child, fullName)
return nil
}
// DaemonJoinsCluster informs the daemon that it has joined the cluster and
// provides the handler to query the cluster component
func (daemon *Daemon) DaemonJoinsCluster(clusterProvider cluster.Provider) {
daemon.setClusterProvider(clusterProvider)
}
// DaemonLeavesCluster informs the daemon that it has left the cluster
func (daemon *Daemon) DaemonLeavesCluster() {
// Daemon is in charge of removing the attachable networks with
// connected containers when the node leaves the swarm
daemon.clearAttachableNetworks()
// We no longer need the cluster provider, stop it now so that
// the network agent will stop listening to cluster events.
daemon.setClusterProvider(nil)
// Wait for the networking cluster agent to stop
daemon.netController.AgentStopWait()
// Daemon is in charge of removing the ingress network when the
// node leaves the swarm. Wait for job to be done or timeout.
// This is called also on graceful daemon shutdown. We need to
// wait, because the ingress release has to happen before the
// network controller is stopped.
if done, err := daemon.ReleaseIngress(); err == nil {
timeout := time.NewTimer(5 * time.Second)
defer timeout.Stop()
select {
case <-done:
case <-timeout.C:
logrus.Warn("timeout while waiting for ingress network removal")
}
} else {
logrus.Warnf("failed to initiate ingress network removal: %v", err)
}
daemon.attachmentStore.ClearAttachments()
}
// setClusterProvider sets a component for querying the current cluster state.
func (daemon *Daemon) setClusterProvider(clusterProvider cluster.Provider) {
daemon.clusterProvider = clusterProvider
daemon.netController.SetClusterProvider(clusterProvider)
daemon.attachableNetworkLock = locker.New()
}
// IsSwarmCompatible verifies if the current daemon
// configuration is compatible with the swarm mode
func (daemon *Daemon) IsSwarmCompatible() error {
if daemon.configStore == nil {
return nil
}
return daemon.configStore.IsSwarmCompatible()
}
// NewDaemon sets up everything for the daemon to be able to service
// requests from the webserver.
func NewDaemon(ctx context.Context, config *config.Config, pluginStore *plugin.Store) (daemon *Daemon, err error) {
setDefaultMtu(config)
registryService, err := registry.NewService(config.ServiceOptions)
if err != nil {
return nil, err
}
// Ensure that we have a correct root key limit for launching containers.
if err := ModifyRootKeyLimit(); err != nil {
logrus.Warnf("unable to modify root key limit, number of containers could be limited by this quota: %v", err)
}
// Ensure we have compatible and valid configuration options
if err := verifyDaemonSettings(config); err != nil {
return nil, err
}
// Do we have a disabled network?
config.DisableBridge = isBridgeNetworkDisabled(config)
// Setup the resolv.conf
setupResolvConf(config)
// Verify the platform is supported as a daemon
if !platformSupported {
return nil, errSystemNotSupported
}
// Validate platform-specific requirements
if err := checkSystem(); err != nil {
return nil, err
}
idMapping, err := setupRemappedRoot(config)
if err != nil {
return nil, err
}
rootIDs := idMapping.RootPair()
if err := setupDaemonProcess(config); err != nil {
return nil, err
}
// set up the tmpDir to use a canonical path
tmp, err := prepareTempDir(config.Root)
if err != nil {
return nil, fmt.Errorf("Unable to get the TempDir under %s: %s", config.Root, err)
}
realTmp, err := fileutils.ReadSymlinkedDirectory(tmp)
if err != nil {
return nil, fmt.Errorf("Unable to get the full path to the TempDir (%s): %s", tmp, err)
}
if isWindows {
if _, err := os.Stat(realTmp); err != nil && os.IsNotExist(err) {
if err := system.MkdirAll(realTmp, 0700); err != nil {
return nil, fmt.Errorf("Unable to create the TempDir (%s): %s", realTmp, err)
}
}
os.Setenv("TEMP", realTmp)
os.Setenv("TMP", realTmp)
} else {
os.Setenv("TMPDIR", realTmp)
}
d := &Daemon{
configStore: config,
PluginStore: pluginStore,
startupDone: make(chan struct{}),
}
// Ensure the daemon is properly shutdown if there is a failure during
// initialization
defer func() {
if err != nil {
if err := d.Shutdown(); err != nil {
logrus.Error(err)
}
}
}()
if err := d.setGenericResources(config); err != nil {
return nil, err
}
// set up SIGUSR1 handler on Unix-like systems, or a Win32 global event
// on Windows to dump Go routine stacks
stackDumpDir := config.Root
if execRoot := config.GetExecRoot(); execRoot != "" {
stackDumpDir = execRoot
}
d.setupDumpStackTrap(stackDumpDir)
if err := d.setupSeccompProfile(); err != nil {
return nil, err
}
// Set the default isolation mode (only applicable on Windows)
if err := d.setDefaultIsolation(); err != nil {
return nil, fmt.Errorf("error setting default isolation mode: %v", err)
}
if err := configureMaxThreads(config); err != nil {
logrus.Warnf("Failed to configure golang's threads limit: %v", err)
}
// ensureDefaultAppArmorProfile does nothing if apparmor is disabled
if err := ensureDefaultAppArmorProfile(); err != nil {
logrus.Errorf(err.Error())
}
daemonRepo := filepath.Join(config.Root, "containers")
if err := idtools.MkdirAllAndChown(daemonRepo, 0701, idtools.CurrentIdentity()); err != nil {
return nil, err
}
// Create the directory where we'll store the runtime scripts (i.e. in
// order to support runtimeArgs)
daemonRuntimes := filepath.Join(config.Root, "runtimes")
if err := system.MkdirAll(daemonRuntimes, 0700); err != nil {
return nil, err
}
if err := d.loadRuntimes(); err != nil {
return nil, err
}
if isWindows {
if err := system.MkdirAll(filepath.Join(config.Root, "credentialspecs"), 0); err != nil {
return nil, err
}
}
// On Windows we don't support the environment variable, or a user supplied graphdriver
// as Windows has no choice in terms of which graphdrivers to use. It's a case of
// running Windows containers on Windows - windowsfilter, running Linux containers on Windows,
// lcow. Unix platforms however run a single graphdriver for all containers, and it can
// be set through an environment variable, a daemon start parameter, or chosen through
// initialization of the layerstore through driver priority order for example.
d.graphDrivers = make(map[string]string)
layerStores := make(map[string]layer.Store)
if isWindows {
d.graphDrivers[runtime.GOOS] = "windowsfilter"
if system.LCOWSupported() {
d.graphDrivers["linux"] = "lcow"
}
} else {
driverName := os.Getenv("DOCKER_DRIVER")
if driverName == "" {
driverName = config.GraphDriver
} else {
logrus.Infof("Setting the storage driver from the $DOCKER_DRIVER environment variable (%s)", driverName)
}
d.graphDrivers[runtime.GOOS] = driverName // May still be empty. Layerstore init determines instead.
}
d.RegistryService = registryService
logger.RegisterPluginGetter(d.PluginStore)
metricsSockPath, err := d.listenMetricsSock()
if err != nil {
return nil, err
}
registerMetricsPluginCallback(d.PluginStore, metricsSockPath)
backoffConfig := backoff.DefaultConfig
backoffConfig.MaxDelay = 3 * time.Second
connParams := grpc.ConnectParams{
Backoff: backoffConfig,
}
gopts := []grpc.DialOption{
// WithBlock makes sure that the following containerd request
// is reliable.
//
// NOTE: In one edge case with high load pressure, the kernel kills
// dockerd, containerd and containerd-shims because of OOM. When both
// dockerd and containerd restart, containerd will take time to recover
// all the existing containers. Before containerd is serving, dockerd
// will fail with a gRPC error. Worse, the restore action will still
// ignore any non-NotFound errors and return a running state for an
// already stopped container. That is unexpected behavior, and we need
// to restart dockerd to make sure everything is OK.
//
// It is painful. Adding WithBlock prevents the edge case, and in the
// common case containerd will be serving shortly. It does no harm to
// add WithBlock for the containerd connection.
grpc.WithBlock(),
grpc.WithInsecure(),
grpc.WithConnectParams(connParams),
grpc.WithContextDialer(dialer.ContextDialer),
// TODO(stevvooe): We may need to allow configuration of this on the client.
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(defaults.DefaultMaxRecvMsgSize)),
grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(defaults.DefaultMaxSendMsgSize)),
}
if config.ContainerdAddr != "" {
d.containerdCli, err = containerd.New(config.ContainerdAddr, containerd.WithDefaultNamespace(config.ContainerdNamespace), containerd.WithDialOpts(gopts), containerd.WithTimeout(60*time.Second))
if err != nil {
return nil, errors.Wrapf(err, "failed to dial %q", config.ContainerdAddr)
}
}
createPluginExec := func(m *plugin.Manager) (plugin.Executor, error) {
var pluginCli *containerd.Client
// Windows is not currently using containerd, keep the
// client as nil
if config.ContainerdAddr != "" {
pluginCli, err = containerd.New(config.ContainerdAddr, containerd.WithDefaultNamespace(config.ContainerdPluginNamespace), containerd.WithDialOpts(gopts), containerd.WithTimeout(60*time.Second))
if err != nil {
return nil, errors.Wrapf(err, "failed to dial %q", config.ContainerdAddr)
}
}
var rt types.Runtime
if runtime.GOOS != "windows" {
rtPtr, err := d.getRuntime(config.GetDefaultRuntimeName())
if err != nil {
return nil, err
}
rt = *rtPtr
}
return pluginexec.New(ctx, getPluginExecRoot(config.Root), pluginCli, config.ContainerdPluginNamespace, m, rt)
}
// Plugin system initialization should happen before restore. Do not change order.
d.pluginManager, err = plugin.NewManager(plugin.ManagerConfig{
Root: filepath.Join(config.Root, "plugins"),
ExecRoot: getPluginExecRoot(config.Root),
Store: d.PluginStore,
CreateExecutor: createPluginExec,
RegistryService: registryService,
LiveRestoreEnabled: config.LiveRestoreEnabled,
LogPluginEvent: d.LogPluginEvent, // todo: make private
AuthzMiddleware: config.AuthzMiddleware,
})
if err != nil {
return nil, errors.Wrap(err, "couldn't create plugin manager")
}
if err := d.setupDefaultLogConfig(); err != nil {
return nil, err
}
for operatingSystem, gd := range d.graphDrivers {
layerStores[operatingSystem], err = layer.NewStoreFromOptions(layer.StoreOptions{
Root: config.Root,
MetadataStorePathTemplate: filepath.Join(config.Root, "image", "%s", "layerdb"),
GraphDriver: gd,
GraphDriverOptions: config.GraphOptions,
IDMapping: idMapping,
PluginGetter: d.PluginStore,
ExperimentalEnabled: config.Experimental,
OS: operatingSystem,
})
if err != nil {
return nil, err
}
// As layerstore initialization may set the driver
d.graphDrivers[operatingSystem] = layerStores[operatingSystem].DriverName()
}
// Configure and validate the kernel's security support. Note this is a Linux/FreeBSD
// operation only, so it is safe to pass *just* the runtime OS graphdriver.
if err := configureKernelSecuritySupport(config, d.graphDrivers[runtime.GOOS]); err != nil {
return nil, err
}
imageRoot := filepath.Join(config.Root, "image", d.graphDrivers[runtime.GOOS])
ifs, err := image.NewFSStoreBackend(filepath.Join(imageRoot, "imagedb"))
if err != nil {
return nil, err
}
lgrMap := make(map[string]image.LayerGetReleaser)
for los, ls := range layerStores {
lgrMap[los] = ls
}
imageStore, err := image.NewImageStore(ifs, lgrMap)
if err != nil {
return nil, err
}
d.volumes, err = volumesservice.NewVolumeService(config.Root, d.PluginStore, rootIDs, d)
if err != nil {
return nil, err
}
trustKey, err := loadOrCreateTrustKey(config.TrustKeyPath)
if err != nil {
return nil, err
}
trustDir := filepath.Join(config.Root, "trust")
if err := system.MkdirAll(trustDir, 0700); err != nil {
return nil, err
}
// We have a single tag/reference store for the daemon globally. However, it's
// stored under the graphdriver. On host platforms which only support a single
// container OS, but multiple selectable graphdrivers, this means depending on which
// graphdriver is chosen, the global reference store is under there. For
// platforms which support multiple container operating systems, this is slightly
// more problematic as where does the global ref store get located? Fortunately,
// for Windows, which is currently the only daemon supporting multiple container
// operating systems, the list of graphdrivers available isn't user configurable.
// For backwards compatibility, we just put it under the windowsfilter
// directory regardless.
refStoreLocation := filepath.Join(imageRoot, `repositories.json`)
rs, err := refstore.NewReferenceStore(refStoreLocation)
if err != nil {
return nil, fmt.Errorf("Couldn't create reference store repository: %s", err)
}
distributionMetadataStore, err := dmetadata.NewFSMetadataStore(filepath.Join(imageRoot, "distribution"))
if err != nil {
return nil, err
}
// Discovery is only enabled when the daemon is launched with an address to advertise. When
// initialized, the daemon is registered and we can store the discovery backend as it's read-only
if err := d.initDiscovery(config); err != nil {
return nil, err
}
sysInfo := d.RawSysInfo(false)
// Check if Devices cgroup is mounted, it is hard requirement for container security,
// on Linux.
if runtime.GOOS == "linux" && !sysInfo.CgroupDevicesEnabled && !sys.RunningInUserNS() {
return nil, errors.New("Devices cgroup isn't mounted")
}
d.ID = trustKey.PublicKey().KeyID()
d.repository = daemonRepo
d.containers = container.NewMemoryStore()
if d.containersReplica, err = container.NewViewDB(); err != nil {
return nil, err
}
d.execCommands = exec.NewStore()
d.idIndex = truncindex.NewTruncIndex([]string{})
d.statsCollector = d.newStatsCollector(1 * time.Second)
d.EventsService = events.New()
d.root = config.Root
d.idMapping = idMapping
d.seccompEnabled = sysInfo.Seccomp
d.apparmorEnabled = sysInfo.AppArmor
d.linkIndex = newLinkIndex()
imgSvcConfig := images.ImageServiceConfig{
ContainerStore: d.containers,
DistributionMetadataStore: distributionMetadataStore,
EventsService: d.EventsService,
ImageStore: imageStore,
LayerStores: layerStores,
MaxConcurrentDownloads: *config.MaxConcurrentDownloads,
MaxConcurrentUploads: *config.MaxConcurrentUploads,
Adding ability to change max download attempts Moby works perfectly when you are in a situation when one has a good and stable internet connection. Operating in area's where internet connectivity is likely to be lost in undetermined intervals, like a satellite connection or 4G/LTE in rural area's, can become a problem when pulling a new image. When connection is lost while image layers are being pulled, Moby will try to reconnect up to 5 times. If this fails, the incompletely downloaded layers are lost will need to be completely downloaded again during the next pull request. This means that we are using more data than we might have to. Pulling a layer multiple times from the start can become costly over a satellite or 4G/LTE connection. As these techniques (especially 4G) quite common in IoT and Moby is used to run Azure IoT Edge devices, I would like to add a settable maximum download attempts. The maximum download attempts is currently set at 5 (distribution/xfer/download.go). I would like to change this constant to a variable that the user can set. The default will still be 5, so nothing will change from the current version unless specified when starting the daemon with the added flag or in the config file. I added a default value of 5 for DefaultMaxDownloadAttempts and a settable max-download-attempts in the daemon config file. It is also added to the config of dockerd so it can be set with a flag when starting the daemon. This value gets stored in the imageService of the daemon when it is initiated and can be passed to the NewLayerDownloadManager as a parameter. It will be stored in the LayerDownloadManager when initiated. This enables us to set the max amount of retries in makeDownoadFunc equal to the max download attempts. I also added some tests that are based on maxConcurrentDownloads/maxConcurrentUploads. You can pull this version and test in a development container. 
Either create a config `file /etc/docker/daemon.json` with `{"max-download-attempts"=3}``, or use `dockerd --max-download-attempts=3 -D &` to start up the dockerd. Start downloading a container and disconnect from the internet whilst downloading. The result would be that it stops pulling after three attempts. Signed-off-by: Lukas Heeren <lukas-heeren@hotmail.com> Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-25 13:26:36 +00:00
MaxDownloadAttempts: *config.MaxDownloadAttempts,
ReferenceStore: rs,
RegistryService: registryService,
TrustKey: trustKey,
ContentNamespace: config.ContainerdNamespace,
}
// containerd is not currently supported on Windows, so d.containerdCli may
// be nil. In that case, create a local content store; otherwise use
// containerd's content store and leases service.
if d.containerdCli != nil {
imgSvcConfig.Leases = d.containerdCli.LeasesService()
imgSvcConfig.ContentStore = d.containerdCli.ContentStore()
} else {
cs, lm, err := d.configureLocalContentStore()
if err != nil {
return nil, err
}
imgSvcConfig.ContentStore = cs
imgSvcConfig.Leases = lm
}
// TODO: imageStore, distributionMetadataStore, and ReferenceStore are only
// used above to run migration. They could be initialized in ImageService
// if migration is called from daemon/images. layerStore might move as well.
d.imageService = images.NewImageService(imgSvcConfig)
go d.execCommandGC()
d.containerd, err = libcontainerd.NewClient(ctx, d.containerdCli, filepath.Join(config.ExecRoot, "containerd"), config.ContainerdNamespace, d)
if err != nil {
return nil, err
}
if err := d.restore(); err != nil {
return nil, err
}
close(d.startupDone)
info := d.SystemInfo()
engineInfo.WithValues(
dockerversion.Version,
dockerversion.GitCommit,
info.Architecture,
info.Driver,
info.KernelVersion,
info.OperatingSystem,
info.OSType,
info.OSVersion,
info.ID,
).Set(1)
engineCpus.Set(float64(info.NCPU))
engineMemory.Set(float64(info.MemTotal))
gd := ""
for os, driver := range d.graphDrivers {
if len(gd) > 0 {
gd += ", "
}
gd += driver
if len(d.graphDrivers) > 1 {
gd = fmt.Sprintf("%s (%s)", gd, os)
}
}
logrus.WithFields(logrus.Fields{
"version": dockerversion.Version,
"commit": dockerversion.GitCommit,
"graphdriver(s)": gd,
}).Info("Docker daemon")
return d, nil
}
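The logging loop above builds a comma-separated graph-driver summary, tagging each driver with its OS in parentheses only when more than one driver is registered. A standalone sketch of that formatting (the function name, the sorted iteration, and the driver names are illustrative; the daemon iterates its graphDrivers map directly):

```go
package main

import (
	"fmt"
	"sort"
)

// graphDriverSummary formats a map of OS -> driver name the same way the
// startup log does: names are comma-separated, and the OS is appended only
// when more than one driver is registered. Sorting makes output deterministic.
func graphDriverSummary(drivers map[string]string) string {
	oses := make([]string, 0, len(drivers))
	for os := range drivers {
		oses = append(oses, os)
	}
	sort.Strings(oses)
	gd := ""
	for _, os := range oses {
		if len(gd) > 0 {
			gd += ", "
		}
		gd += drivers[os]
		if len(drivers) > 1 {
			gd = fmt.Sprintf("%s (%s)", gd, os)
		}
	}
	return gd
}

func main() {
	fmt.Println(graphDriverSummary(map[string]string{"linux": "overlay2"}))
	// → overlay2
	fmt.Println(graphDriverSummary(map[string]string{"linux": "overlay2", "windows": "windowsfilter"}))
	// → overlay2 (linux), windowsfilter (windows)
}
```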
// DistributionServices returns services controlling daemon storage
func (daemon *Daemon) DistributionServices() images.DistributionServices {
return daemon.imageService.DistributionServices()
}
func (daemon *Daemon) waitForStartupDone() {
<-daemon.startupDone
}
func (daemon *Daemon) shutdownContainer(c *container.Container) error {
stopTimeout := c.StopTimeout()
// If the container fails to exit within stopTimeout seconds of SIGTERM, it is force-killed
if err := daemon.containerStop(c, stopTimeout); err != nil {
return fmt.Errorf("Failed to stop container %s with error: %v", c.ID, err)
}
// Wait without timeout for the container to exit.
// Ignore the result.
<-c.Wait(context.Background(), container.WaitConditionNotRunning)
return nil
}
// ShutdownTimeout returns the timeout (in seconds) before containers are forcibly
// killed during shutdown. The default timeout can be configured both on the daemon
// and per container, and the longest timeout will be used. A grace-period of
// 5 seconds is added to the configured timeout.
//
// A negative (-1) timeout means "indefinitely", which means that containers
// are not forcibly killed, and the daemon shuts down after all containers exit.
func (daemon *Daemon) ShutdownTimeout() int {
shutdownTimeout := daemon.configStore.ShutdownTimeout
if shutdownTimeout < 0 {
return -1
}
if daemon.containers == nil {
return shutdownTimeout
}
graceTimeout := 5
for _, c := range daemon.containers.List() {
stopTimeout := c.StopTimeout()
if stopTimeout < 0 {
return -1
}
if stopTimeout+graceTimeout > shutdownTimeout {
shutdownTimeout = stopTimeout + graceTimeout
}
}
return shutdownTimeout
}
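The timeout selection above can be sketched in isolation. This is a minimal sketch with the same semantics (a negative value means "wait indefinitely"; otherwise the longest per-container stop timeout plus the 5-second grace period wins if it exceeds the daemon's configured timeout); the function name and inputs are illustrative:

```go
package main

import "fmt"

// effectiveShutdownTimeout mirrors ShutdownTimeout: -1 (indefinite) if the
// daemon timeout or any container's stop timeout is negative; otherwise the
// maximum of the daemon timeout and each stop timeout plus a 5s grace period.
func effectiveShutdownTimeout(daemonTimeout int, containerStopTimeouts []int) int {
	if daemonTimeout < 0 {
		return -1
	}
	const graceTimeout = 5
	shutdownTimeout := daemonTimeout
	for _, stopTimeout := range containerStopTimeouts {
		if stopTimeout < 0 {
			return -1
		}
		if stopTimeout+graceTimeout > shutdownTimeout {
			shutdownTimeout = stopTimeout + graceTimeout
		}
	}
	return shutdownTimeout
}

func main() {
	fmt.Println(effectiveShutdownTimeout(15, []int{10, 30})) // → 35 (30 + 5s grace)
	fmt.Println(effectiveShutdownTimeout(15, []int{10, -1})) // → -1 (one container waits indefinitely)
}
```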
// Shutdown stops the daemon.
func (daemon *Daemon) Shutdown() error {
daemon.shutdown = true
// Keep mounts and networking running on daemon shutdown if
// we are to keep containers running and restore them.
if daemon.configStore.LiveRestoreEnabled && daemon.containers != nil {
// check if there are any running containers; if so, skip the full shutdown so they can be restored later
if ls, err := daemon.Containers(&types.ContainerListOptions{}); len(ls) != 0 || err != nil {
// metrics plugins still need some cleanup
daemon.cleanupMetricsPlugins()
return nil
}
}
if daemon.containers != nil {
logrus.Debugf("daemon configured with a %d seconds minimum shutdown timeout", daemon.configStore.ShutdownTimeout)
logrus.Debugf("start clean shutdown of all containers with a %d seconds timeout...", daemon.ShutdownTimeout())
daemon.containers.ApplyAll(func(c *container.Container) {
if !c.IsRunning() {
return
}
log := logrus.WithField("container", c.ID)
log.Debug("shutting down container")
if err := daemon.shutdownContainer(c); err != nil {
log.WithError(err).Error("failed to shut down container")
return
}
if mountid, err := daemon.imageService.GetLayerMountID(c.ID, c.OS); err == nil {
daemon.cleanupMountsByID(mountid)
}
log.Debug("shut down container")
})
}
if daemon.volumes != nil {
if err := daemon.volumes.Shutdown(); err != nil {
logrus.Errorf("Error shutting down volume store: %v", err)
}
}
if daemon.imageService != nil {
daemon.imageService.Cleanup()
}
// If we are part of a cluster, clean up cluster's stuff
if daemon.clusterProvider != nil {
logrus.Debug("start clean shutdown of cluster resources...")
daemon.DaemonLeavesCluster()
}
daemon.cleanupMetricsPlugins()
// Shutdown plugins after containers and layerstore. Don't change the order.
daemon.pluginShutdown()
// trigger libnetwork Stop only if it's initialized
if daemon.netController != nil {
daemon.netController.Stop()
}
if daemon.containerdCli != nil {
daemon.containerdCli.Close()
}
if daemon.mdDB != nil {
daemon.mdDB.Close()
}
return daemon.cleanupMounts()
}
// Mount sets container.BaseFS
// (is it not set coming in? why is it unset?)
func (daemon *Daemon) Mount(container *container.Container) error {
if container.RWLayer == nil {
return errors.New("RWLayer of container " + container.ID + " is unexpectedly nil")
}
dir, err := container.RWLayer.Mount(container.GetMountLabel())
if err != nil {
return err
}
logrus.WithField("container", container.ID).Debugf("container mounted via layerStore: %v", dir)
if container.BaseFS != nil && container.BaseFS.Path() != dir.Path() {
// The mount path reported by the graph driver should always be trusted on Windows, since the
// volume path for a given mounted layer may change over time. This should only be an error
// on non-Windows operating systems.
if runtime.GOOS != "windows" {
daemon.Unmount(container)
return fmt.Errorf("Error: driver %s is returning inconsistent paths for container %s ('%s' then '%s')",
daemon.imageService.GraphDriverForOS(container.OS), container.ID, container.BaseFS, dir)
}
}
container.BaseFS = dir // TODO: combine these fields
return nil
}
// Unmount unsets the container base filesystem
func (daemon *Daemon) Unmount(container *container.Container) error {
if container.RWLayer == nil {
return errors.New("RWLayer of container " + container.ID + " is unexpectedly nil")
}
if err := container.RWLayer.Unmount(); err != nil {
logrus.WithField("container", container.ID).WithError(err).Error("error unmounting container")
return err
}
return nil
}
// Subnets returns the IPv4 and IPv6 subnets of networks that are managed by Docker.
func (daemon *Daemon) Subnets() ([]net.IPNet, []net.IPNet) {
var v4Subnets []net.IPNet
var v6Subnets []net.IPNet
managedNetworks := daemon.netController.Networks()
for _, managedNetwork := range managedNetworks {
v4infos, v6infos := managedNetwork.Info().IpamInfo()
for _, info := range v4infos {
if info.IPAMData.Pool != nil {
v4Subnets = append(v4Subnets, *info.IPAMData.Pool)
}
}
for _, info := range v6infos {
if info.IPAMData.Pool != nil {
v6Subnets = append(v6Subnets, *info.IPAMData.Pool)
}
}
}
return v4Subnets, v6Subnets
}
// prepareTempDir prepares and returns the default directory to use
// for temporary files.
// If it doesn't exist, it is created. If it exists, its content is removed.
func prepareTempDir(rootDir string) (string, error) {
var tmpDir string
if tmpDir = os.Getenv("DOCKER_TMPDIR"); tmpDir == "" {
tmpDir = filepath.Join(rootDir, "tmp")
newName := tmpDir + "-old"
if err := os.Rename(tmpDir, newName); err == nil {
go func() {
if err := os.RemoveAll(newName); err != nil {
logrus.Warnf("failed to delete old tmp directory: %s", newName)
}
}()
} else if !os.IsNotExist(err) {
logrus.Warnf("failed to rename %s for background deletion: %s. Deleting synchronously", tmpDir, err)
if err := os.RemoveAll(tmpDir); err != nil {
logrus.Warnf("failed to delete old tmp directory: %s", tmpDir)
}
}
}
return tmpDir, idtools.MkdirAllAndChown(tmpDir, 0700, idtools.CurrentIdentity())
}
func (daemon *Daemon) setGenericResources(conf *config.Config) error {
genericResources, err := config.ParseGenericResources(conf.NodeGenericResources)
if err != nil {
return err
}
daemon.genericResources = genericResources
return nil
}
func setDefaultMtu(conf *config.Config) {
// do nothing if the MTU was explicitly configured (i.e. is not the default 0 value).
if conf.Mtu != 0 {
return
}
conf.Mtu = config.DefaultNetworkMtu
}
// IsShuttingDown reports whether the daemon is shutting down.
func (daemon *Daemon) IsShuttingDown() bool {
return daemon.shutdown
}
// initDiscovery initializes the discovery watcher for this daemon.
func (daemon *Daemon) initDiscovery(conf *config.Config) error {
advertise, err := config.ParseClusterAdvertiseSettings(conf.ClusterStore, conf.ClusterAdvertise)
if err != nil {
if err == discovery.ErrDiscoveryDisabled {
return nil
}
return err
}
conf.ClusterAdvertise = advertise
discoveryWatcher, err := discovery.Init(conf.ClusterStore, conf.ClusterAdvertise, conf.ClusterOpts)
if err != nil {
return fmt.Errorf("discovery initialization failed (%v)", err)
}
daemon.discoveryWatcher = discoveryWatcher
return nil
}
func isBridgeNetworkDisabled(conf *config.Config) bool {
return conf.BridgeConfig.Iface == config.DisableNetworkBridge
}
func (daemon *Daemon) networkOptions(dconfig *config.Config, pg plugingetter.PluginGetter, activeSandboxes map[string]interface{}) ([]nwconfig.Option, error) {
options := []nwconfig.Option{}
if dconfig == nil {
return options, nil
}
options = append(options, nwconfig.OptionExperimental(dconfig.Experimental))
options = append(options, nwconfig.OptionDataDir(dconfig.Root))
options = append(options, nwconfig.OptionExecRoot(dconfig.GetExecRoot()))
dd := runconfig.DefaultDaemonNetworkMode()
dn := runconfig.DefaultDaemonNetworkMode().NetworkName()
options = append(options, nwconfig.OptionDefaultDriver(string(dd)))
options = append(options, nwconfig.OptionDefaultNetwork(dn))
if strings.TrimSpace(dconfig.ClusterStore) != "" {
kv := strings.Split(dconfig.ClusterStore, "://")
if len(kv) != 2 {
return nil, errors.New("kv store daemon config must be of the form KV-PROVIDER://KV-URL")
}
options = append(options, nwconfig.OptionKVProvider(kv[0]))
options = append(options, nwconfig.OptionKVProviderURL(kv[1]))
}
if len(dconfig.ClusterOpts) > 0 {
options = append(options, nwconfig.OptionKVOpts(dconfig.ClusterOpts))
}
if daemon.discoveryWatcher != nil {
options = append(options, nwconfig.OptionDiscoveryWatcher(daemon.discoveryWatcher))
}
if dconfig.ClusterAdvertise != "" {
options = append(options, nwconfig.OptionDiscoveryAddress(dconfig.ClusterAdvertise))
}
options = append(options, nwconfig.OptionLabels(dconfig.Labels))
options = append(options, driverOptions(dconfig)...)
if len(dconfig.NetworkConfig.DefaultAddressPools.Value()) > 0 {
options = append(options, nwconfig.OptionDefaultAddressPoolConfig(dconfig.NetworkConfig.DefaultAddressPools.Value()))
}
if daemon.configStore != nil && daemon.configStore.LiveRestoreEnabled && len(activeSandboxes) != 0 {
options = append(options, nwconfig.OptionActiveSandboxes(activeSandboxes))
}
if pg != nil {
options = append(options, nwconfig.OptionPluginGetter(pg))
}
options = append(options, nwconfig.OptionNetworkControlPlaneMTU(dconfig.NetworkControlPlaneMTU))
return options, nil
}

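// The cluster-store option above must be of the form KV-PROVIDER://KV-URL.
// A short sketch of the parsing (illustrative only; "consul" and the
// address are made-up example values, not defaults):
//
//	kv := strings.Split("consul://10.0.0.1:8500", "://")
//	// kv[0] == "consul"        — passed to nwconfig.OptionKVProvider
//	// kv[1] == "10.0.0.1:8500" — passed to nwconfig.OptionKVProviderURL
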
// GetCluster returns the cluster
func (daemon *Daemon) GetCluster() Cluster {
return daemon.cluster
}

// SetCluster sets the cluster
func (daemon *Daemon) SetCluster(cluster Cluster) {
daemon.cluster = cluster
}

func (daemon *Daemon) pluginShutdown() {
manager := daemon.pluginManager
// Check for a valid manager object. In error conditions, daemon init can
// fail and shutdown can be called before the plugin manager is initialized.
if manager != nil {
manager.Shutdown()
}
}

// PluginManager returns current pluginManager associated with the daemon
func (daemon *Daemon) PluginManager() *plugin.Manager { // set up before daemon to avoid this method
return daemon.pluginManager
}

// PluginGetter returns current pluginStore associated with the daemon
func (daemon *Daemon) PluginGetter() *plugin.Store {
return daemon.PluginStore
}

// CreateDaemonRoot creates the root for the daemon
func CreateDaemonRoot(config *config.Config) error {
// get the canonical path to the Docker root directory
var realRoot string
if _, err := os.Stat(config.Root); err != nil && os.IsNotExist(err) {
realRoot = config.Root
} else {
realRoot, err = fileutils.ReadSymlinkedDirectory(config.Root)
if err != nil {
return fmt.Errorf("Unable to get the full path to root (%s): %s", config.Root, err)
}
}
idMapping, err := setupRemappedRoot(config)
if err != nil {
return err
}
return setupDaemonRoot(config, realRoot, idMapping.RootPair())
}

// checkpointAndSave grabs a container lock to safely call container.CheckpointTo
func (daemon *Daemon) checkpointAndSave(container *container.Container) error {
container.Lock()
defer container.Unlock()
if err := container.CheckpointTo(daemon.containersReplica); err != nil {
return fmt.Errorf("Error saving container state: %v", err)
}
return nil
}

// fixMemorySwappiness clears the swappiness value on the server side, because
// the CLI sends -1 when it wants to unset it.
func fixMemorySwappiness(resources *containertypes.Resources) {
if resources.MemorySwappiness != nil && *resources.MemorySwappiness == -1 {
resources.MemorySwappiness = nil
}
}

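// The effect of fixMemorySwappiness can be sketched as follows (illustrative
// only; the values are made up):
//
//	s := int64(-1)
//	res := containertypes.Resources{MemorySwappiness: &s}
//	fixMemorySwappiness(&res)
//	// res.MemorySwappiness is now nil, so the kernel default applies
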
// GetAttachmentStore returns current attachment store associated with the daemon
func (daemon *Daemon) GetAttachmentStore() *network.AttachmentStore {
return &daemon.attachmentStore
}

// IdentityMapping returns uid/gid mapping or a SID (in the case of Windows) for the builder
func (daemon *Daemon) IdentityMapping() *idtools.IdentityMapping {
return daemon.idMapping
}

// ImageService returns the Daemon's ImageService
func (daemon *Daemon) ImageService() *images.ImageService {
return daemon.imageService
}

// BuilderBackend returns the backend used by builder
func (daemon *Daemon) BuilderBackend() builder.Backend {
return struct {
*Daemon
*images.ImageService
}{daemon, daemon.imageService}
}