moby/daemon/daemon.go

// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.19
// Package daemon exposes the functions that occur on the host server
// that the Docker daemon is running.
//
// In implementing the various functions of the daemon, there is often
// a method-specific struct for configuring the runtime behavior.
package daemon // import "github.com/docker/docker/daemon"
import (
"context"
"fmt"
"net"
"os"
"path"
"path/filepath"
"runtime"
"sync"
"sync/atomic"
"time"
"github.com/containerd/containerd"
"github.com/containerd/containerd/defaults"
"github.com/containerd/containerd/pkg/dialer"
"github.com/containerd/containerd/pkg/userns"
"github.com/containerd/containerd/remotes/docker"
"github.com/containerd/log"
"github.com/distribution/reference"
dist "github.com/docker/distribution"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
containertypes "github.com/docker/docker/api/types/container"
imagetypes "github.com/docker/docker/api/types/image"
networktypes "github.com/docker/docker/api/types/network"
registrytypes "github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/api/types/volume"
"github.com/docker/docker/builder"
"github.com/docker/docker/container"
executorpkg "github.com/docker/docker/daemon/cluster/executor"
"github.com/docker/docker/daemon/config"
ctrd "github.com/docker/docker/daemon/containerd"
"github.com/docker/docker/daemon/events"
_ "github.com/docker/docker/daemon/graphdriver/register" // register graph drivers
"github.com/docker/docker/daemon/images"
dlogger "github.com/docker/docker/daemon/logger"
"github.com/docker/docker/daemon/logger/local"
"github.com/docker/docker/daemon/network"
"github.com/docker/docker/daemon/snapshotter"
"github.com/docker/docker/daemon/stats"
"github.com/docker/docker/distribution"
dmetadata "github.com/docker/docker/distribution/metadata"
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/image"
"github.com/docker/docker/internal/compatcontext"
"github.com/docker/docker/layer"
libcontainerdtypes "github.com/docker/docker/libcontainerd/types"
"github.com/docker/docker/libnetwork"
"github.com/docker/docker/libnetwork/cluster"
nwconfig "github.com/docker/docker/libnetwork/config"
"github.com/docker/docker/pkg/authorization"
"github.com/docker/docker/pkg/fileutils"
"github.com/docker/docker/pkg/idtools"
"github.com/docker/docker/pkg/plugingetter"
"github.com/docker/docker/pkg/sysinfo"
"github.com/docker/docker/pkg/system"
"github.com/docker/docker/plugin"
pluginexec "github.com/docker/docker/plugin/executor/containerd"
refstore "github.com/docker/docker/reference"
"github.com/docker/docker/registry"
"github.com/docker/docker/runconfig"
volumesservice "github.com/docker/docker/volume/service"
"github.com/moby/buildkit/util/resolver"
resolverconfig "github.com/moby/buildkit/util/resolver/config"
"github.com/moby/locker"
"github.com/pkg/errors"
"go.etcd.io/bbolt"
"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
"golang.org/x/sync/semaphore"
"google.golang.org/grpc"
"google.golang.org/grpc/backoff"
"google.golang.org/grpc/credentials/insecure"
"resenje.org/singleflight"
)
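
// configStore bundles the user-supplied daemon configuration with state
// derived from it (such as the derived runtimes configuration), so that both
// can be snapshotted and swapped out as a single unit when the config is reloaded.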
type configStore struct {
config.Config
Runtimes runtimes
}
// Daemon holds information about the Docker daemon.
type Daemon struct {
id string
repository string
containers container.Store
containersReplica *container.ViewDB
execCommands *container.ExecStore
imageService ImageService
configStore atomic.Pointer[configStore]
configReload sync.Mutex
statsCollector *stats.Collector
defaultLogConfig containertypes.LogConfig
registryService *registry.Service
EventsService *events.Events
netController *libnetwork.Controller
volumes *volumesservice.VolumesService
root string
sysInfoOnce sync.Once
sysInfo *sysinfo.SysInfo
shutdown bool
idMapping idtools.IdentityMapping
PluginStore *plugin.Store // TODO: remove
pluginManager *plugin.Manager
linkIndex *linkIndex
containerdClient *containerd.Client
containerd libcontainerdtypes.Client
defaultIsolation containertypes.Isolation // Default isolation mode on Windows
clusterProvider cluster.Provider
cluster Cluster
genericResources []swarm.GenericResource
metricsPluginListener net.Listener
ReferenceStore refstore.Store
machineMemory uint64
seccompProfile []byte
seccompProfilePath string
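
// These singleflight groups coalesce concurrent disk-usage calculations
// (as used by the /system/df endpoint) so that each kind of object is only
// scanned once at a time.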
usageContainers singleflight.Group[struct{}, []*types.Container]
usageImages singleflight.Group[struct{}, []*imagetypes.Summary]
usageVolumes singleflight.Group[struct{}, []*volume.Volume]
usageLayer singleflight.Group[struct{}, int64]
pruneRunning int32
hosts map[string]bool // hosts stores the addresses the daemon is listening on
startupDone chan struct{}
attachmentStore network.AttachmentStore
attachableNetworkLock *locker.Locker
// This is used for Windows, which doesn't currently support running on
// containerd. It stores metadata for the content store (used for manifest
// caching), and needs to be closed on daemon exit.
mdDB *bbolt.DB
usesSnapshotter bool
}
// ID returns the daemon id
func (daemon *Daemon) ID() string {
return daemon.id
}
// StoreHosts stores the addresses the daemon is listening on
func (daemon *Daemon) StoreHosts(hosts []string) {
if daemon.hosts == nil {
daemon.hosts = make(map[string]bool)
}
for _, h := range hosts {
daemon.hosts[h] = true
}
}
// config returns an immutable snapshot of the current daemon configuration.
// Multiple calls to this function will return the same pointer until the
// configuration is reloaded so callers must take care not to modify the
// returned value.
//
// To ensure that the configuration used remains consistent throughout the
// lifetime of an operation, the configuration pointer should be passed down the
// call stack, like one would a [context.Context] value. Only the entrypoints
// for operations, the outermost functions, should call this function.
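//
// An illustrative sketch of that pattern (the helper names below are
// hypothetical and not part of this package):
//
//	func (daemon *Daemon) someOperation(ctx context.Context) error {
//		cfg := daemon.config()          // snapshot taken once, at the entrypoint
//		return doWork(ctx, &cfg.Config) // pass the snapshot down explicitly
//	}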
func (daemon *Daemon) config() *configStore {
cfg := daemon.configStore.Load()
if cfg == nil {
return &configStore{}
}
return cfg
}
// Config returns the daemon's config.
func (daemon *Daemon) Config() config.Config {
return daemon.config().Config
}
// HasExperimental returns whether the experimental features of the daemon are enabled or not
func (daemon *Daemon) HasExperimental() bool {
return daemon.config().Experimental
}
// Features returns the features map from configStore
func (daemon *Daemon) Features() map[string]bool {
return daemon.config().Features
}
// UsesSnapshotter returns true if the feature flag to use the containerd snapshotter is enabled
func (daemon *Daemon) UsesSnapshotter() bool {
return daemon.usesSnapshotter
}
// RegistryHosts returns the registry hosts configuration for the host component
// of a distribution image reference.
func (daemon *Daemon) RegistryHosts(host string) ([]docker.RegistryHost, error) {
m := map[string]resolverconfig.RegistryConfig{
"docker.io": {Mirrors: daemon.registryService.ServiceConfig().Mirrors},
}
conf := daemon.registryService.ServiceConfig().IndexConfigs
for k, v := range conf {
c := m[k]
if !v.Secure {
t := true
c.PlainHTTP = &t
c.Insecure = &t
}
m[k] = c
}
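// If the host has no explicit index configuration but is registered as an
// insecure registry, allow plain HTTP and skip TLS verification for it.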
if c, ok := m[host]; !ok && daemon.registryService.IsInsecureRegistry(host) {
t := true
c.PlainHTTP = &t
c.Insecure = &t
m[host] = c
}
for k, v := range m {
v.TLSConfigDir = []string{registry.HostCertsDir(k)}
m[k] = v
}
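// Also pick up TLS material from per-host subdirectories of the daemon's
// certificates directory (certs.d) for hosts not configured above.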
certsDir := registry.CertsDir()
if fis, err := os.ReadDir(certsDir); err == nil {
for _, fi := range fis {
if _, ok := m[fi.Name()]; !ok {
m[fi.Name()] = resolverconfig.RegistryConfig{
TLSConfigDir: []string{filepath.Join(certsDir, fi.Name())},
}
}
}
}
return resolver.NewRegistryConfig(m)(host)
}
// layerAccessor may be implemented by ImageService
type layerAccessor interface {
GetLayerByID(cid string) (layer.RWLayer, error)
}
func (daemon *Daemon) restore(cfg *configStore) error {
var mapLock sync.Mutex
containers := make(map[string]*container.Container)
log.G(context.TODO()).Info("Loading containers: start.")
dir, err := os.ReadDir(daemon.repository)
if err != nil {
return err
}
// parallelLimit is the maximum number of parallel startup jobs that we
// allow (this is the limit used for all startup semaphores). The multiplier
// (128) was chosen after some fairly significant benchmarking -- don't change
// it unless you've tested it significantly (this value is adjusted if
// RLIMIT_NOFILE is small to avoid EMFILE).
parallelLimit := adjustParallelLimit(len(dir), 128*runtime.NumCPU())
// Re-used for all parallel startup jobs.
var group sync.WaitGroup
sem := semaphore.NewWeighted(int64(parallelLimit))
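// First pass: load each container's on-disk state concurrently, bounded
// by the startup semaphore.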
for _, v := range dir {
group.Add(1)
go func(id string) {
defer group.Done()
_ = sem.Acquire(context.Background(), 1)
defer sem.Release(1)
logger := log.G(context.TODO()).WithField("container", id)
c, err := daemon.load(id)
if err != nil {
logger.WithError(err).Error("failed to load container")
return
}
if c.Driver != daemon.imageService.StorageDriver() {
// Ignore the container if it wasn't created with the current storage-driver
logger.Debugf("not restoring container because it was created with another storage driver (%s)", c.Driver)
return
}
if accessor, ok := daemon.imageService.(layerAccessor); ok {
rwlayer, err := accessor.GetLayerByID(c.ID)
if err != nil {
logger.WithError(err).Error("failed to load container mount")
return
}
c.RWLayer = rwlayer
}
logger.WithFields(log.Fields{
"running": c.IsRunning(),
"paused": c.IsPaused(),
}).Debug("loaded container")
mapLock.Lock()
containers[c.ID] = c
mapLock.Unlock()
}(v.Name())
}
group.Wait()
removeContainers := make(map[string]*container.Container)
restartContainers := make(map[*container.Container]chan struct{})
activeSandboxes := make(map[string]interface{})
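// Second pass: register container names and add the containers to the
// daemon's stores; containers that fail to register are dropped from the
// restore set.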
for _, c := range containers {
group.Add(1)
go func(c *container.Container) {
defer group.Done()
_ = sem.Acquire(context.Background(), 1)
defer sem.Release(1)
logger := log.G(context.TODO()).WithField("container", c.ID)
if err := daemon.registerName(c); err != nil {
logger.WithError(err).Errorf("failed to register container name: %s", c.Name)
mapLock.Lock()
delete(containers, c.ID)
mapLock.Unlock()
return
}
if err := daemon.Register(c); err != nil {
logger.WithError(err).Error("failed to register container")
mapLock.Lock()
delete(containers, c.ID)
mapLock.Unlock()
return
}
}(c)
}
group.Wait()
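// Third pass: migrate legacy container configuration, reconnect to any
// containerd tasks that survived the daemon restart, and reconcile the
// on-disk container state with the live state.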
for _, c := range containers {
group.Add(1)
go func(c *container.Container) {
defer group.Done()
_ = sem.Acquire(context.Background(), 1)
defer sem.Release(1)
baseLogger := log.G(context.TODO()).WithField("container", c.ID)
if c.HostConfig != nil {
// Migrate containers that don't have the default ("no") restart-policy set.
// The RestartPolicy.Name field may be empty for containers that were
// created with versions before v25.0.0.
//
// We also need to set the MaximumRetryCount to 0, to prevent
// validation from failing (MaximumRetryCount is not allowed when
// the restart-policy is disabled ("no")).
if c.HostConfig.RestartPolicy.Name == "" {
baseLogger.Debug("migrated restart-policy")
c.HostConfig.RestartPolicy.Name = containertypes.RestartPolicyDisabled
c.HostConfig.RestartPolicy.MaximumRetryCount = 0
}
// Migrate containers that use the deprecated (and now non-functional)
// logentries driver. Update them to use the "local" logging driver
// instead.
//
// TODO(thaJeztah): remove logentries check and migration code in release v26.0.0.
if c.HostConfig.LogConfig.Type == "logentries" {
baseLogger.Warn("migrated deprecated logentries logging driver")
c.HostConfig.LogConfig = containertypes.LogConfig{
Type: local.Name,
}
}
// Normalize the "default" network mode into the network mode
// it aliases ("bridge" on Linux and "nat" on Windows). This is
// also done by the container router, for new containers. But
// we need to do it here too to handle containers that were
// created prior to v26.0.
//
// TODO(aker): remove this migration code once the next LTM version of MCR is released.
if c.HostConfig.NetworkMode.IsDefault() {
c.HostConfig.NetworkMode = runconfig.DefaultDaemonNetworkMode()
if nw, ok := c.NetworkSettings.Networks[networktypes.NetworkDefault]; ok {
c.NetworkSettings.Networks[c.HostConfig.NetworkMode.NetworkName()] = nw
delete(c.NetworkSettings.Networks, networktypes.NetworkDefault)
}
}
}
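// Persist the (possibly migrated) container configuration back to disk.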
if err := daemon.checkpointAndSave(c); err != nil {
baseLogger.WithError(err).Error("failed to save migrated container config to disk")
}
daemon.setStateCounter(c)
logger := func(c *container.Container) *log.Entry {
return baseLogger.WithFields(log.Fields{
"running": c.IsRunning(),
"paused": c.IsPaused(),
"restarting": c.IsRestarting(),
})
}
logger(c).Debug("restoring container")
var es *containerd.ExitStatus
if err := c.RestoreTask(context.Background(), daemon.containerd); err != nil && !errdefs.IsNotFound(err) {
logger(c).WithError(err).Error("failed to restore container with containerd")
return
}
alive := false
status := containerd.Unknown
if tsk, ok := c.Task(); ok {
s, err := tsk.Status(context.Background())
if err != nil {
logger(c).WithError(err).Error("failed to get task status")
} else {
status = s.Status
alive = status != containerd.Stopped
if !alive {
logger(c).Debug("cleaning up dead container process")
es, err = tsk.Delete(context.Background())
if err != nil && !errdefs.IsNotFound(err) {
logger(c).WithError(err).Error("failed to delete task from containerd")
return
}
} else if !cfg.LiveRestoreEnabled {
logger(c).Debug("shutting down container considered alive by containerd")
if err := daemon.shutdownContainer(c); err != nil && !errdefs.IsNotFound(err) {
baseLogger.WithError(err).Error("error shutting down container")
return
}
status = containerd.Stopped
alive = false
c.ResetRestartManager(false)
}
}
}
// If the containerd task for the container was not found, docker's view of the
// container state will be updated accordingly via SetStopped further down.
if c.IsRunning() || c.IsPaused() {
logger(c).Debug("syncing container on disk state with real state")
c.RestartManager().Cancel() // manually start containers because some need to wait for swarm networking
switch {
case c.IsPaused() && alive:
logger(c).WithField("state", status).Info("restored container paused")
switch status {
case containerd.Paused, containerd.Pausing:
// nothing to do
case containerd.Unknown, containerd.Stopped, "":
baseLogger.WithField("status", status).Error("unexpected status for paused container during restore")
default:
// running
c.Lock()
c.Paused = false
daemon.setStateCounter(c)
daemon.initHealthMonitor(c)
if err := c.CheckpointTo(daemon.containersReplica); err != nil {
baseLogger.WithError(err).Error("failed to update paused container state")
}
c.Unlock()
}
case !c.IsPaused() && alive:
logger(c).Debug("restoring healthcheck")
c.Lock()
daemon.initHealthMonitor(c)
c.Unlock()
}
if !alive {
logger(c).Debug("setting stopped state")
c.Lock()
var ces container.ExitStatus
if es != nil {
ces.ExitCode = int(es.ExitCode())
ces.ExitedAt = es.ExitTime()
} else {
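// No exit status was recovered from containerd; fall back to 255 to
// signal that the real exit code is unknown.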
ces.ExitCode = 255
}
c.SetStopped(&ces)
daemon.Cleanup(context.TODO(), c)
if err := c.CheckpointTo(daemon.containersReplica); err != nil {
baseLogger.WithError(err).Error("failed to update stopped container state")
}
c.Unlock()
logger(c).Debug("set stopped state")
}
// We call Mount and then Unmount to get the container's BaseFs path.
if err := daemon.Mount(c); err != nil {
// The mount is unlikely to fail. However, if it does fail,
// the container should still be allowed to restore here. Some
// functionality (like docker exec -u user) might be missing, but
// the container can still be stopped/restarted/removed.
// See #29365 for related information.
// The error is only logged here.
logger(c).WithError(err).Warn("failed to mount container to get BaseFs path")
} else {
if err := daemon.Unmount(c); err != nil {
logger(c).WithError(err).Warn("failed to umount container to get BaseFs path")
}
}
c.ResetRestartManager(false)
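// For containers that are still running and own their network sandbox,
// collect the libnetwork sandbox options so the network controller
// (initialized below) can restore the sandbox.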
if !c.HostConfig.NetworkMode.IsContainer() && c.IsRunning() {
options, err := daemon.buildSandboxOptions(&cfg.Config, c)
if err != nil {
logger(c).WithError(err).Warn("failed to build sandbox option to restore container")
}
mapLock.Lock()
activeSandboxes[c.NetworkSettings.SandboxID] = options
mapLock.Unlock()
}
}
// Get the list of containers we need to restart.
//
// Do not autostart containers that have endpoints in a
// swarm-scope network yet, since the cluster is not yet
// initialized. We will start them after the cluster is
// initialized.
if cfg.AutoRestart && c.ShouldRestart() && !c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore {
mapLock.Lock()
restartContainers[c] = make(chan struct{})
mapLock.Unlock()
} else if c.HostConfig != nil && c.HostConfig.AutoRemove {
// Remove the container if live-restore is disabled or if the container has already exited.
if !cfg.LiveRestoreEnabled || !alive {
mapLock.Lock()
removeContainers[c.ID] = c
mapLock.Unlock()
}
}
c.Lock()
if c.RemovalInProgress {
// We probably crashed in the middle of a removal; reset
// the flag.
//
// We DO NOT remove the container here, as we do not know
// whether the user requested that the associated volumes,
// network links, or both also be removed. So we put the
// container in the "dead" state and leave further
// processing up to them.
c.RemovalInProgress = false
c.Dead = true
if err := c.CheckpointTo(daemon.containersReplica); err != nil {
baseLogger.WithError(err).Error("failed to update RemovalInProgress container state")
} else {
baseLogger.Debug("reset RemovalInProgress state for container")
}
}
c.Unlock()
logger(c).Debug("done restoring container")
}(c)
}
group.Wait()
// Initialize the network controller and configure network settings.
//
// Note that we cannot initialize the network controller earlier, as it
// needs to know whether there are active sandboxes (running containers).
if err = daemon.initNetworkController(&cfg.Config, activeSandboxes); err != nil {
return fmt.Errorf("Error initializing network controller: %v", err)
}
// Now that all the containers are registered, register the links
for _, c := range containers {
group.Add(1)
go func(c *container.Container) {
_ = sem.Acquire(context.Background(), 1)
if err := daemon.registerLinks(c, c.HostConfig); err != nil {
log.G(context.TODO()).WithField("container", c.ID).WithError(err).Error("failed to register link for container")
}
sem.Release(1)
group.Done()
}(c)
}
group.Wait()
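// Restart the containers collected above in parallel, waiting up to five
// seconds for each container's linked children to come up first.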
for c, notifyChan := range restartContainers {
group.Add(1)
go func(c *container.Container, chNotify chan struct{}) {
_ = sem.Acquire(context.Background(), 1)
logger := log.G(context.TODO()).WithField("container", c.ID)
logger.Debug("starting container")
// Ignore errors here, as this is a best-effort wait for children to be
// running before we try to start the container.
children := daemon.children(c)
timeout := time.NewTimer(5 * time.Second)
defer timeout.Stop()
for _, child := range children {
if notifier, exists := restartContainers[child]; exists {
select {
case <-notifier:
case <-timeout.C:
}
}
}
if err := daemon.prepareMountPoints(c); err != nil {
logger.WithError(err).Error("failed to prepare mount points for container")
}
if err := daemon.containerStart(context.Background(), cfg, c, "", "", true); err != nil {
logger.WithError(err).Error("failed to start container")
}
close(chNotify)
sem.Release(1)
group.Done()
}(c, notifyChan)
}
group.Wait()
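// Force-remove (including volumes) the containers flagged for auto-removal
// above: AutoRemove containers with live-restore disabled, or ones that had
// already exited.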
for id := range removeContainers {
group.Add(1)
go func(cid string) {
_ = sem.Acquire(context.Background(), 1)
if err := daemon.containerRm(&cfg.Config, cid, &backend.ContainerRmConfig{ForceRemove: true, RemoveVolume: true}); err != nil {
log.G(context.TODO()).WithField("container", cid).WithError(err).Error("failed to remove container")
}
sem.Release(1)
group.Done()
}(id)
}
group.Wait()
// Any containers that were started above have already had this done;
// however, we now need to prepare the mount points for the rest of the containers as well.
// This shouldn't cause any issue for containers that already had this run.
// This must run after any containers with a restart policy, so that containerized plugins
// have a chance to be running before we try to initialize them.
for _, c := range containers {
// If the container has a restart policy, do not prepare the
// mount points, since that was already done when restarting it above.
// This speeds up daemon startup when a restarting container
// has a volume and the volume driver is not available.
if _, ok := restartContainers[c]; ok {
continue
} else if _, ok := removeContainers[c.ID]; ok {
// container is automatically removed, skip it.
continue
}
group.Add(1)
go func(c *container.Container) {
_ = sem.Acquire(context.Background(), 1)
if err := daemon.prepareMountPoints(c); err != nil {
log.G(context.TODO()).WithField("container", c.ID).WithError(err).Error("failed to prepare mountpoints for container")
}
sem.Release(1)
group.Done()
}(c)
}
group.Wait()
log.G(context.TODO()).Info("Loading containers: done.")
return nil
}
// RestartSwarmContainers restarts any autostart container which has a
// swarm endpoint.
func (daemon *Daemon) RestartSwarmContainers() {
daemon.restartSwarmContainers(context.Background(), daemon.config())
}
func (daemon *Daemon) restartSwarmContainers(ctx context.Context, cfg *configStore) {
// parallelLimit is the maximum number of parallel startup jobs that we
// allow (this is the limit used for all startup semaphores). The multiplier
// (128) was chosen after some fairly significant benchmarking -- don't change
// it unless you've tested it significantly (this value is adjusted if
// RLIMIT_NOFILE is small to avoid EMFILE).
parallelLimit := adjustParallelLimit(len(daemon.List()), 128*runtime.NumCPU())
var group sync.WaitGroup
sem := semaphore.NewWeighted(int64(parallelLimit))
for _, c := range daemon.List() {
if !c.IsRunning() && !c.IsPaused() {
// Autostart all the containers that have a
// swarm endpoint, now that the cluster is
// initialized.
if cfg.AutoRestart && c.ShouldRestart() && c.NetworkSettings.HasSwarmEndpoint && c.HasBeenStartedBefore {
group.Add(1)
go func(c *container.Container) {
if err := sem.Acquire(ctx, 1); err != nil {
// ctx is done.
group.Done()
return
}
if err := daemon.containerStart(ctx, cfg, c, "", "", true); err != nil {
log.G(ctx).WithField("container", c.ID).WithError(err).Error("failed to start swarm container")
}
sem.Release(1)
group.Done()
}(c)
}
}
}
group.Wait()
}
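// children returns the child containers linked to the given container,
// keyed by link name.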
func (daemon *Daemon) children(c *container.Container) map[string]*container.Container {
return daemon.linkIndex.children(c)
}
// parents returns the parent containers of the given container, keyed by
// link name.
func (daemon *Daemon) parents(c *container.Container) map[string]*container.Container {
return daemon.linkIndex.parents(c)
}
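// registerLink reserves the link's fully-qualified name (the parent's name
// joined with the alias) for the child container, and records the link in
// the link index. A name that is already reserved is logged and ignored.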
func (daemon *Daemon) registerLink(parent, child *container.Container, alias string) error {
fullName := path.Join(parent.Name, alias)
if err := daemon.containersReplica.ReserveName(fullName, child.ID); err != nil {
if errors.Is(err, container.ErrNameReserved) {
log.G(context.TODO()).Warnf("error registering link for %s, to %s, as alias %s, ignoring: %v", parent.ID, child.ID, alias, err)
return nil
}
return err
}
daemon.linkIndex.link(parent, child, fullName)
return nil
}
// DaemonJoinsCluster informs the daemon that it has joined the cluster and
// provides the handler to query the cluster component.
func (daemon *Daemon) DaemonJoinsCluster(clusterProvider cluster.Provider) {
daemon.setClusterProvider(clusterProvider)
}
// DaemonLeavesCluster informs the daemon that it has left the cluster.
func (daemon *Daemon) DaemonLeavesCluster() {
// Daemon is in charge of removing the attachable networks with
// connected containers when the node leaves the swarm
daemon.clearAttachableNetworks()
// We no longer need the cluster provider, stop it now so that
// the network agent will stop listening to cluster events.
daemon.setClusterProvider(nil)
// Wait for the networking cluster agent to stop
daemon.netController.AgentStopWait()
// Daemon is in charge of removing the ingress network when the
// node leaves the swarm. Wait for the job to be done or time out.
// This is also called on graceful daemon shutdown. We need to
// wait, because the ingress release has to happen before the
// network controller is stopped.
if done, err := daemon.ReleaseIngress(); err == nil {
timeout := time.NewTimer(5 * time.Second)
defer timeout.Stop()
select {
case <-done:
case <-timeout.C:
log.G(context.TODO()).Warn("timeout while waiting for ingress network removal")
}
} else {
log.G(context.TODO()).Warnf("failed to initiate ingress network removal: %v", err)
}
daemon.attachmentStore.ClearAttachments()
}
// setClusterProvider sets a component for querying the current cluster state.
func (daemon *Daemon) setClusterProvider(clusterProvider cluster.Provider) {
daemon.clusterProvider = clusterProvider
daemon.netController.SetClusterProvider(clusterProvider)
daemon.attachableNetworkLock = locker.New()
}
// IsSwarmCompatible verifies whether the current daemon
// configuration is compatible with swarm mode.
func (daemon *Daemon) IsSwarmCompatible() error {
return daemon.config().IsSwarmCompatible()
}
// NewDaemon sets up everything for the daemon to be able to service
// requests from the webserver.
func NewDaemon(ctx context.Context, config *config.Config, pluginStore *plugin.Store, authzMiddleware *authorization.Middleware) (daemon *Daemon, err error) {
// Verify platform-specific requirements.
// TODO(thaJeztah): this should be called before we try to create the daemon; perhaps together with the config validation.
if err := checkSystem(); err != nil {
return nil, err
}
registryService, err := registry.NewService(config.ServiceOptions)
if err != nil {
return nil, err
}
// Ensure that we have a correct root key limit for launching containers.
if err := modifyRootKeyLimit(); err != nil {
log.G(ctx).Warnf("unable to modify root key limit, number of containers could be limited by this quota: %v", err)
}
// Ensure we have compatible and valid configuration options
if err := verifyDaemonSettings(config); err != nil {
return nil, err
}
// Check whether the bridge network is disabled.
config.DisableBridge = isBridgeNetworkDisabled(config)
// Setup the resolv.conf
setupResolvConf(config)
idMapping, err := setupRemappedRoot(config)
if err != nil {
return nil, err
}
rootIDs := idMapping.RootPair()
if err := setMayDetachMounts(); err != nil {
log.G(ctx).WithError(err).Warn("Could not set may_detach_mounts kernel parameter")
}
// set up the tmpDir to use a canonical path
tmp, err := prepareTempDir(config.Root)
if err != nil {
return nil, fmt.Errorf("Unable to get the TempDir under %s: %s", config.Root, err)
}
realTmp, err := fileutils.ReadSymlinkedDirectory(tmp)
if err != nil {
return nil, fmt.Errorf("Unable to get the full path to the TempDir (%s): %s", tmp, err)
}
if isWindows {
if err := system.MkdirAll(realTmp, 0); err != nil {
return nil, fmt.Errorf("Unable to create the TempDir (%s): %s", realTmp, err)
}
os.Setenv("TEMP", realTmp)
os.Setenv("TMP", realTmp)
} else {
os.Setenv("TMPDIR", realTmp)
}
if err := initRuntimesDir(config); err != nil {
return nil, err
}
rts, err := setupRuntimes(config)
if err != nil {
return nil, err
}
d := &Daemon{
PluginStore: pluginStore,
startupDone: make(chan struct{}),
}
cfgStore := &configStore{
Config: *config,
Runtimes: rts,
}
d.configStore.Store(cfgStore)
// TEST_INTEGRATION_USE_SNAPSHOTTER is used for integration tests only.
if os.Getenv("TEST_INTEGRATION_USE_SNAPSHOTTER") != "" {
d.usesSnapshotter = true
} else {
d.usesSnapshotter = config.Features["containerd-snapshotter"]
}
// Ensure the daemon is properly shut down if there is a failure during
// initialization.
defer func() {
if err != nil {
// Use a fresh context here. Passed context could be cancelled.
if err := d.Shutdown(context.Background()); err != nil {
log.G(ctx).Error(err)
}
}
}()
if err := d.setGenericResources(&cfgStore.Config); err != nil {
return nil, err
}
// Set up a SIGUSR1 handler on Unix-like systems, or a Win32 global event
// on Windows, to dump goroutine stacks.
stackDumpDir := cfgStore.Root
if execRoot := cfgStore.GetExecRoot(); execRoot != "" {
stackDumpDir = execRoot
}
d.setupDumpStackTrap(stackDumpDir)
if err := d.setupSeccompProfile(&cfgStore.Config); err != nil {
return nil, err
}
// Set the default isolation mode (only applicable on Windows)
if err := d.setDefaultIsolation(&cfgStore.Config); err != nil {
return nil, fmt.Errorf("error setting default isolation mode: %v", err)
}
if err := configureMaxThreads(&cfgStore.Config); err != nil {
log.G(ctx).Warnf("Failed to configure golang's threads limit: %v", err)
}
// ensureDefaultAppArmorProfile does nothing if apparmor is disabled
if err := ensureDefaultAppArmorProfile(); err != nil {
log.G(ctx).Errorf(err.Error())
}
daemonRepo := filepath.Join(cfgStore.Root, "containers")
if err := idtools.MkdirAllAndChown(daemonRepo, 0o710, idtools.Identity{
UID: idtools.CurrentIdentity().UID,
GID: rootIDs.GID,
}); err != nil {
return nil, err
}
if isWindows {
// Note that permissions (0o700) are ignored on Windows; they are passed
// only to show intent. We could consider using idtools.MkdirAndChown here
// to apply an ACL.
if err = os.Mkdir(filepath.Join(cfgStore.Root, "credentialspecs"), 0o700); err != nil && !errors.Is(err, os.ErrExist) {
return nil, err
}
}
d.registryService = registryService
dlogger.RegisterPluginGetter(d.PluginStore)
metricsSockPath, err := d.listenMetricsSock(&cfgStore.Config)
if err != nil {
return nil, err
}
registerMetricsPluginCallback(d.PluginStore, metricsSockPath)
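// gRPC dial options shared by the containerd clients created below: a
// connection backoff capped at 3 seconds, an insecure transport, the
// containerd dialer, message-size limits, and OTel tracing interceptors.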
backoffConfig := backoff.DefaultConfig
backoffConfig.MaxDelay = 3 * time.Second
connParams := grpc.ConnectParams{
Backoff: backoffConfig,
}
gopts := []grpc.DialOption{
// WithBlock makes sure that the following containerd request
// is reliable.
//
// NOTE: In one edge case, under high load pressure, the kernel
// OOM-kills dockerd, containerd, and the containerd-shims. When
// dockerd and containerd both restart, containerd takes time to
// recover all the existing containers. Until containerd is
// serving again, dockerd's requests fail with gRPC errors. Worse,
// the restore action still ignores any non-NotFound errors and
// reports a running state for containers that have already
// stopped, which is unexpected behavior, and dockerd has to be
// restarted to recover.
//
// Adding WithBlock prevents this edge case. In the common case,
// containerd starts serving shortly, so blocking on the
// connection does no harm.
grpc.WithBlock(),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithConnectParams(connParams),
grpc.WithContextDialer(dialer.ContextDialer),
// TODO(stevvooe): We may need to allow configuration of this on the client.
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(defaults.DefaultMaxRecvMsgSize)),
grpc.WithDefaultCallOptions(grpc.MaxCallSendMsgSize(defaults.DefaultMaxSendMsgSize)),
grpc.WithUnaryInterceptor(otelgrpc.UnaryClientInterceptor()), //nolint:staticcheck // TODO(thaJeztah): ignore SA1019 for deprecated options: see https://github.com/moby/moby/issues/47437
grpc.WithStreamInterceptor(otelgrpc.StreamClientInterceptor()), //nolint:staticcheck // TODO(thaJeztah): ignore SA1019 for deprecated options: see https://github.com/moby/moby/issues/47437
}
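// Only create a containerd client when an address is configured;
// otherwise d.containerdClient stays nil (for example on Windows
// without containerd).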
if cfgStore.ContainerdAddr != "" {
d.containerdClient, err = containerd.New(
cfgStore.ContainerdAddr,
containerd.WithDefaultNamespace(cfgStore.ContainerdNamespace),
containerd.WithDialOpts(gopts),
containerd.WithTimeout(60*time.Second),
)
if err != nil {
return nil, errors.Wrapf(err, "failed to dial %q", cfgStore.ContainerdAddr)
}
}
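// createPluginExec builds the executor used by the plugin manager. It gets
// its own containerd client in the plugin namespace and, on non-Windows
// platforms, the default runtime shim.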
createPluginExec := func(m *plugin.Manager) (plugin.Executor, error) {
var pluginCli *containerd.Client
if cfgStore.ContainerdAddr != "" {
pluginCli, err = containerd.New(
cfgStore.ContainerdAddr,
containerd.WithDefaultNamespace(cfgStore.ContainerdPluginNamespace),
containerd.WithDialOpts(gopts),
containerd.WithTimeout(60*time.Second),
)
if err != nil {
return nil, errors.Wrapf(err, "failed to dial %q", cfgStore.ContainerdAddr)
}
}
var (
shim string
shimOpts interface{}
)
if runtime.GOOS != "windows" {
shim, shimOpts, err = rts.Get("")
if err != nil {
return nil, err
}
}
return pluginexec.New(ctx, getPluginExecRoot(&cfgStore.Config), pluginCli, cfgStore.ContainerdPluginNamespace, m, shim, shimOpts)
}
// Plugin system initialization should happen before restore. Do not change order.
d.pluginManager, err = plugin.NewManager(plugin.ManagerConfig{
Root: filepath.Join(cfgStore.Root, "plugins"),
ExecRoot: getPluginExecRoot(&cfgStore.Config),
Store: d.PluginStore,
CreateExecutor: createPluginExec,
RegistryService: registryService,
LiveRestoreEnabled: cfgStore.LiveRestoreEnabled,
LogPluginEvent: d.LogPluginEvent, // todo: make private
AuthzMiddleware: authzMiddleware,
})
if err != nil {
return nil, errors.Wrap(err, "couldn't create plugin manager")
}
d.defaultLogConfig, err = defaultLogConfig(&cfgStore.Config)
if err != nil {
return nil, errors.Wrap(err, "failed to set log opts")
}
log.G(ctx).Debugf("Using default logging driver %s", d.defaultLogConfig.Type)
d.volumes, err = volumesservice.NewVolumeService(cfgStore.Root, d.PluginStore, rootIDs, d)
if err != nil {
return nil, err
}
// Check if the devices cgroup is mounted; it is a hard requirement for
// container security on Linux.
//
// Important: we call getSysInfo() directly here, without storing the results,
// as networking has not yet been set up, so we only have partial system info
// at this point.
//
// TODO(thaJeztah) add a utility to only collect the CgroupDevicesEnabled information
if runtime.GOOS == "linux" && !userns.RunningInUserNS() && !getSysInfo(&cfgStore.Config).CgroupDevicesEnabled {
return nil, errors.New("Devices cgroup isn't mounted")
}
d.id, err = LoadOrCreateID(cfgStore.Root)
if err != nil {
return nil, err
}
d.repository = daemonRepo
d.containers = container.NewMemoryStore()
if d.containersReplica, err = container.NewViewDB(); err != nil {
return nil, err
}
d.execCommands = container.NewExecStore()
d.statsCollector = d.newStatsCollector(1 * time.Second)
d.EventsService = events.New()
d.root = cfgStore.Root
d.idMapping = idMapping
d.linkIndex = newLinkIndex()
// On Windows we don't support the environment variable, or a user-supplied
// graphdriver. Unix platforms, however, run a single graphdriver for all
// containers, and it can be set through an environment variable, a daemon
// start parameter, or chosen through initialization of the layerstore
// through driver priority order, for example.
driverName := os.Getenv("DOCKER_DRIVER")
if isWindows && d.UsesSnapshotter() {
// Containerd WCOW snapshotter
driverName = "windows"
} else if isWindows {
// Docker WCOW graphdriver
driverName = "windowsfilter"
} else if driverName != "" {
log.G(ctx).Infof("Setting the storage driver from the $DOCKER_DRIVER environment variable (%s)", driverName)
} else {
driverName = cfgStore.GraphDriver
}
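// Choose between the containerd-snapshotter-backed image service and the
// classic graphdriver-based image service.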
if d.UsesSnapshotter() {
if os.Getenv("TEST_INTEGRATION_USE_SNAPSHOTTER") != "" {
log.G(ctx).Warn("Enabling containerd snapshotter through the $TEST_INTEGRATION_USE_SNAPSHOTTER environment variable. This should only be used for testing.")
}
log.G(ctx).Info("Starting daemon with containerd snapshotter integration enabled")
// FIXME(thaJeztah): implement automatic snapshotter-selection similar to graph-driver selection; see https://github.com/moby/moby/issues/44076
if driverName == "" {
driverName = containerd.DefaultSnapshotter
}
// Configure and validate the kernel's security support. Note this is a Linux/FreeBSD
// operation only, so it is safe to pass *just* the runtime OS graphdriver.
if err := configureKernelSecuritySupport(&cfgStore.Config, driverName); err != nil {
return nil, err
}
d.imageService = ctrd.NewService(ctrd.ImageServiceConfig{
Client: d.containerdClient,
Containers: d.containers,
Snapshotter: driverName,
RegistryHosts: d.RegistryHosts,
Registry: d.registryService,
EventsService: d.EventsService,
IDMapping: idMapping,
RefCountMounter: snapshotter.NewMounter(config.Root, driverName, idMapping),
})
} else {
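// Graphdriver path: set up the layer store, image store, reference store,
// and distribution metadata store under the daemon root.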
layerStore, err := layer.NewStoreFromOptions(layer.StoreOptions{
Root: cfgStore.Root,
MetadataStorePathTemplate: filepath.Join(cfgStore.Root, "image", "%s", "layerdb"),
GraphDriver: driverName,
GraphDriverOptions: cfgStore.GraphOptions,
IDMapping: idMapping,
PluginGetter: d.PluginStore,
ExperimentalEnabled: cfgStore.Experimental,
})
if err != nil {
return nil, err
}
// Configure and validate the kernel's security support. Note this is a Linux/FreeBSD
// operation only, so it is safe to pass *just* the runtime OS graphdriver.
if err := configureKernelSecuritySupport(&cfgStore.Config, layerStore.DriverName()); err != nil {
return nil, err
}
imageRoot := filepath.Join(cfgStore.Root, "image", layerStore.DriverName())
ifs, err := image.NewFSStoreBackend(filepath.Join(imageRoot, "imagedb"))
if err != nil {
return nil, err
}
// We have a single tag/reference store for the daemon globally. However, it's
// stored under the graphdriver. On host platforms which only support a single
// container OS, but multiple selectable graphdrivers, this means the location
// of the global reference store depends on which graphdriver is chosen. For
// platforms which support multiple container operating systems, this is more
// problematic, as it is unclear where the global ref store should be located.
// Fortunately, for Windows, which is currently the only daemon supporting
// multiple container operating systems, the list of graphdrivers available
// isn't user configurable. For backwards compatibility, we just put it under
// the windowsfilter directory regardless.
refStoreLocation := filepath.Join(imageRoot, `repositories.json`)
rs, err := refstore.NewReferenceStore(refStoreLocation)
if err != nil {
return nil, fmt.Errorf("Couldn't create reference store repository: %s", err)
}
d.ReferenceStore = rs
imageStore, err := image.NewImageStore(ifs, layerStore)
if err != nil {
return nil, err
}
distributionMetadataStore, err := dmetadata.NewFSMetadataStore(filepath.Join(imageRoot, "distribution"))
if err != nil {
return nil, err
}
imgSvcConfig := images.ImageServiceConfig{
ContainerStore: d.containers,
DistributionMetadataStore: distributionMetadataStore,
EventsService: d.EventsService,
ImageStore: imageStore,
LayerStore: layerStore,
MaxConcurrentDownloads: config.MaxConcurrentDownloads,
MaxConcurrentUploads: config.MaxConcurrentUploads,
MaxDownloadAttempts: config.MaxDownloadAttempts,
ReferenceStore: rs,
RegistryService: registryService,
ContentNamespace: config.ContainerdNamespace,
}
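// The three concurrency limits above come from the daemon configuration; as an
// illustrative sketch (keys assumed to match the corresponding daemon.json
// options), they map to:
//
//	{
//	  "max-concurrent-downloads": 3,
//	  "max-concurrent-uploads": 5,
//	  "max-download-attempts": 5
//	}
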
// containerd is not currently supported on Windows, so d.containerdClient
// may be nil. In that case we create a local content store; otherwise we
// use containerd's content store and leases service.
if d.containerdClient != nil {
imgSvcConfig.Leases = d.containerdClient.LeasesService()
imgSvcConfig.ContentStore = d.containerdClient.ContentStore()
} else {
imgSvcConfig.ContentStore, imgSvcConfig.Leases, err = d.configureLocalContentStore(config.ContainerdNamespace)
if err != nil {
return nil, err
}
}
// TODO: imageStore, distributionMetadataStore, and ReferenceStore are only
// used above to run migration. They could be initialized in ImageService
// if migration is called from daemon/images. layerStore might move as well.
d.imageService = images.NewImageService(imgSvcConfig)
log.G(ctx).Debugf("Max Concurrent Downloads: %d", imgSvcConfig.MaxConcurrentDownloads)
log.G(ctx).Debugf("Max Concurrent Uploads: %d", imgSvcConfig.MaxConcurrentUploads)
log.G(ctx).Debugf("Max Download Attempts: %d", imgSvcConfig.MaxDownloadAttempts)
}
go d.execCommandGC()
if err := d.initLibcontainerd(ctx, &cfgStore.Config); err != nil {
return nil, err
}
if err := d.restore(cfgStore); err != nil {
return nil, err
}
close(d.startupDone)
info, err := d.SystemInfo(ctx)
if err != nil {
return nil, err
}
for _, w := range info.Warnings {
log.G(ctx).Warn(w)
}
engineInfo.WithValues(
dockerversion.Version,
dockerversion.GitCommit,
info.Architecture,
info.Driver,
info.KernelVersion,
info.OperatingSystem,
info.OSType,
info.OSVersion,
info.ID,
).Set(1)
engineCpus.Set(float64(info.NCPU))
engineMemory.Set(float64(info.MemTotal))
log.G(ctx).WithFields(log.Fields{
"version": dockerversion.Version,
"commit": dockerversion.GitCommit,
"storage-driver": d.ImageService().StorageDriver(),
"containerd-snapshotter": d.UsesSnapshotter(),
}).Info("Docker daemon")
return d, nil
}
// DistributionServices returns services controlling daemon storage
func (daemon *Daemon) DistributionServices() images.DistributionServices {
return daemon.imageService.DistributionServices()
}
func (daemon *Daemon) waitForStartupDone() {
<-daemon.startupDone
}
func (daemon *Daemon) shutdownContainer(c *container.Container) error {
ctx := compatcontext.WithoutCancel(context.TODO())
// If the container fails to exit within stopTimeout seconds after SIGTERM, force is used to stop it.
if err := daemon.containerStop(ctx, c, containertypes.StopOptions{}); err != nil {
return fmt.Errorf("Failed to stop container %s with error: %v", c.ID, err)
}
// Wait without timeout for the container to exit.
// Ignore the result.
<-c.Wait(ctx, container.WaitConditionNotRunning)
return nil
}
// ShutdownTimeout returns the timeout (in seconds) before containers are forcibly
// killed during shutdown. The default timeout can be configured both on the daemon
// and per container, and the longest timeout will be used. A grace-period of
// 5 seconds is added to the configured timeout.
//
// A negative (-1) timeout means "indefinitely", which means that containers
// are not forcibly killed, and the daemon shuts down after all containers exit.
func (daemon *Daemon) ShutdownTimeout() int {
return daemon.shutdownTimeout(&daemon.config().Config)
}
func (daemon *Daemon) shutdownTimeout(cfg *config.Config) int {
shutdownTimeout := cfg.ShutdownTimeout
if shutdownTimeout < 0 {
return -1
}
if daemon.containers == nil {
return shutdownTimeout
}
graceTimeout := 5
for _, c := range daemon.containers.List() {
stopTimeout := c.StopTimeout()
if stopTimeout < 0 {
return -1
}
if stopTimeout+graceTimeout > shutdownTimeout {
shutdownTimeout = stopTimeout + graceTimeout
}
}
return shutdownTimeout
}
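// Worked example (illustrative values): with cfg.ShutdownTimeout set to 15
// and a running container whose StopTimeout() is 20, the effective timeout
// becomes 20+5 = 25 seconds. Any container with a negative stop timeout
// makes the daemon wait indefinitely (-1).
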
// Shutdown stops the daemon.
func (daemon *Daemon) Shutdown(ctx context.Context) error {
daemon.shutdown = true
// Keep mounts and networking running on daemon shutdown if
// we are to keep containers running and restore them.
cfg := &daemon.config().Config
if cfg.LiveRestoreEnabled && daemon.containers != nil {
// check if there are any running containers; if none, we should do some cleanup
if ls, err := daemon.Containers(ctx, &containertypes.ListOptions{}); len(ls) != 0 || err != nil {
// metrics plugins still need some cleanup
daemon.cleanupMetricsPlugins()
return err
}
}
if daemon.containers != nil {
log.G(ctx).Debugf("daemon configured with a %d seconds minimum shutdown timeout", cfg.ShutdownTimeout)
log.G(ctx).Debugf("start clean shutdown of all containers with a %d seconds timeout...", daemon.shutdownTimeout(cfg))
daemon.containers.ApplyAll(func(c *container.Container) {
if !c.IsRunning() {
return
}
logger := log.G(ctx).WithField("container", c.ID)
logger.Debug("shutting down container")
if err := daemon.shutdownContainer(c); err != nil {
logger.WithError(err).Error("failed to shut down container")
return
}
if mountid, err := daemon.imageService.GetLayerMountID(c.ID); err == nil {
daemon.cleanupMountsByID(mountid)
}
logger.Debugf("shut down container")
})
}
if daemon.volumes != nil {
if err := daemon.volumes.Shutdown(); err != nil {
log.G(ctx).Errorf("Error shutting down volume store: %v", err)
}
}
if daemon.imageService != nil {
if err := daemon.imageService.Cleanup(); err != nil {
log.G(ctx).Error(err)
}
}
// If we are part of a cluster, clean up cluster's stuff
if daemon.clusterProvider != nil {
log.G(ctx).Debugf("start clean shutdown of cluster resources...")
daemon.DaemonLeavesCluster()
}
daemon.cleanupMetricsPlugins()
// Shutdown plugins after containers and layerstore. Don't change the order.
daemon.pluginShutdown()
// trigger libnetwork Stop only if it's initialized
if daemon.netController != nil {
daemon.netController.Stop()
}
if daemon.containerdClient != nil {
daemon.containerdClient.Close()
}
if daemon.mdDB != nil {
daemon.mdDB.Close()
}
return daemon.cleanupMounts(cfg)
}
// Mount sets container.BaseFS
func (daemon *Daemon) Mount(container *container.Container) error {
return daemon.imageService.Mount(context.Background(), container)
}
// Unmount unsets the container base filesystem
func (daemon *Daemon) Unmount(container *container.Container) error {
return daemon.imageService.Unmount(context.Background(), container)
}
// Subnets returns the IPv4 and IPv6 subnets of networks that are managed by Docker.
func (daemon *Daemon) Subnets() ([]net.IPNet, []net.IPNet) {
var v4Subnets []net.IPNet
var v6Subnets []net.IPNet
for _, managedNetwork := range daemon.netController.Networks(context.TODO()) {
v4infos, v6infos := managedNetwork.IpamInfo()
for _, info := range v4infos {
if info.IPAMData.Pool != nil {
v4Subnets = append(v4Subnets, *info.IPAMData.Pool)
}
}
for _, info := range v6infos {
if info.IPAMData.Pool != nil {
v6Subnets = append(v6Subnets, *info.IPAMData.Pool)
}
}
}
return v4Subnets, v6Subnets
}
// prepareTempDir prepares and returns the default directory to use
// for temporary files.
// If it doesn't exist, it is created. If it exists, its content is removed.
func prepareTempDir(rootDir string) (string, error) {
var tmpDir string
if tmpDir = os.Getenv("DOCKER_TMPDIR"); tmpDir == "" {
tmpDir = filepath.Join(rootDir, "tmp")
newName := tmpDir + "-old"
if err := os.Rename(tmpDir, newName); err == nil {
go func() {
if err := os.RemoveAll(newName); err != nil {
log.G(context.TODO()).Warnf("failed to delete old tmp directory: %s", newName)
}
}()
} else if !os.IsNotExist(err) {
log.G(context.TODO()).Warnf("failed to rename %s for background deletion: %s. Deleting synchronously", tmpDir, err)
if err := os.RemoveAll(tmpDir); err != nil {
log.G(context.TODO()).Warnf("failed to delete old tmp directory: %s", tmpDir)
}
}
}
return tmpDir, idtools.MkdirAllAndChown(tmpDir, 0o700, idtools.CurrentIdentity())
}
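// For illustration (assuming DOCKER_TMPDIR is unset and the default Linux
// root directory): prepareTempDir("/var/lib/docker") returns
// "/var/lib/docker/tmp", after renaming any existing tmp directory to
// "/var/lib/docker/tmp-old" and deleting it in the background. When
// DOCKER_TMPDIR is set, that location is used as-is and no cleanup is done.
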
func (daemon *Daemon) setGenericResources(conf *config.Config) error {
genericResources, err := config.ParseGenericResources(conf.NodeGenericResources)
if err != nil {
return err
}
daemon.genericResources = genericResources
return nil
}
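// A sketch of the corresponding daemon configuration (key and format assumed
// from the documented "node-generic-resources" option):
//
//	{
//	  "node-generic-resources": ["NVIDIA-GPU=UUID1", "NVIDIA-GPU=UUID2"]
//	}
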
// IsShuttingDown tells whether the daemon is shutting down or not
func (daemon *Daemon) IsShuttingDown() bool {
return daemon.shutdown
}
func isBridgeNetworkDisabled(conf *config.Config) bool {
return conf.BridgeConfig.Iface == config.DisableNetworkBridge
}
func (daemon *Daemon) networkOptions(conf *config.Config, pg plugingetter.PluginGetter, activeSandboxes map[string]interface{}) ([]nwconfig.Option, error) {
dd := runconfig.DefaultDaemonNetworkMode()
options := []nwconfig.Option{
nwconfig.OptionDataDir(conf.Root),
nwconfig.OptionExecRoot(conf.GetExecRoot()),
nwconfig.OptionDefaultDriver(string(dd)),
nwconfig.OptionDefaultNetwork(dd.NetworkName()),
nwconfig.OptionLabels(conf.Labels),
nwconfig.OptionNetworkControlPlaneMTU(conf.NetworkControlPlaneMTU),
driverOptions(conf),
}
if len(conf.NetworkConfig.DefaultAddressPools.Value()) > 0 {
options = append(options, nwconfig.OptionDefaultAddressPoolConfig(conf.NetworkConfig.DefaultAddressPools.Value()))
}
if conf.LiveRestoreEnabled && len(activeSandboxes) != 0 {
options = append(options, nwconfig.OptionActiveSandboxes(activeSandboxes))
}
if pg != nil {
options = append(options, nwconfig.OptionPluginGetter(pg))
}
return options, nil
}
// GetCluster returns the cluster
func (daemon *Daemon) GetCluster() Cluster {
return daemon.cluster
}
// SetCluster sets the cluster
func (daemon *Daemon) SetCluster(cluster Cluster) {
daemon.cluster = cluster
}
func (daemon *Daemon) pluginShutdown() {
manager := daemon.pluginManager
// Check for a valid manager object. In error conditions, daemon init can fail
// and shutdown can be called before the plugin manager is initialized.
if manager != nil {
manager.Shutdown()
}
}
// PluginManager returns the current pluginManager associated with the daemon
func (daemon *Daemon) PluginManager() *plugin.Manager { // set up before daemon to avoid this method
return daemon.pluginManager
}
// PluginGetter returns the current pluginStore associated with the daemon
func (daemon *Daemon) PluginGetter() *plugin.Store {
return daemon.PluginStore
}
// CreateDaemonRoot creates the root for the daemon
func CreateDaemonRoot(config *config.Config) error {
// get the canonical path to the Docker root directory
var realRoot string
if _, err := os.Stat(config.Root); err != nil && os.IsNotExist(err) {
realRoot = config.Root
} else {
realRoot, err = fileutils.ReadSymlinkedDirectory(config.Root)
if err != nil {
return fmt.Errorf("Unable to get the full path to root (%s): %s", config.Root, err)
}
}
idMapping, err := setupRemappedRoot(config)
if err != nil {
return err
}
return setupDaemonRoot(config, realRoot, idMapping.RootPair())
}
// RemapContainerdNamespaces returns the right containerd namespaces to use:
// - if they are not already set in the config file
// - and the daemon is running with user namespace remapping enabled
// then it returns new (remapped) namespace names; otherwise it returns the
// existing namespaces.
func RemapContainerdNamespaces(config *config.Config) (ns string, pluginNs string, err error) {
idMapping, err := setupRemappedRoot(config)
if err != nil {
return "", "", err
}
if idMapping.Empty() {
return config.ContainerdNamespace, config.ContainerdPluginNamespace, nil
}
root := idMapping.RootPair()
ns = config.ContainerdNamespace
if _, ok := config.ValuesSet["containerd-namespace"]; !ok {
ns = fmt.Sprintf("%s-%d.%d", config.ContainerdNamespace, root.UID, root.GID)
}
pluginNs = config.ContainerdPluginNamespace
if _, ok := config.ValuesSet["containerd-plugin-namespace"]; !ok {
pluginNs = fmt.Sprintf("%s-%d.%d", config.ContainerdPluginNamespace, root.UID, root.GID)
}
return
}
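// For example (illustrative values): with user namespace remapping mapping
// container root to host UID/GID 100000, and neither namespace set explicitly
// in the config, the defaults (assumed to be "moby" and "plugins.moby")
// become "moby-100000.100000" and "plugins.moby-100000.100000" respectively.
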
// checkpointAndSave grabs a container lock to safely call container.CheckpointTo
func (daemon *Daemon) checkpointAndSave(container *container.Container) error {
container.Lock()
defer container.Unlock()
if err := container.CheckpointTo(daemon.containersReplica); err != nil {
return fmt.Errorf("Error saving container state: %v", err)
}
return nil
}
// Because the CLI sends a -1 when it wants to unset the swappiness value,
// we need to clear it on the server side.
func fixMemorySwappiness(resources *containertypes.Resources) {
if resources.MemorySwappiness != nil && *resources.MemorySwappiness == -1 {
resources.MemorySwappiness = nil
}
}
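// Example: a request carrying MemorySwappiness=-1 (the CLI's "unset"
// sentinel) ends up with MemorySwappiness=nil, so no explicit swappiness
// value is applied to the container.
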
// GetAttachmentStore returns current attachment store associated with the daemon
func (daemon *Daemon) GetAttachmentStore() *network.AttachmentStore {
return &daemon.attachmentStore
}
// IdentityMapping returns uid/gid mapping or a SID (in the case of Windows) for the builder
func (daemon *Daemon) IdentityMapping() idtools.IdentityMapping {
return daemon.idMapping
}
// ImageService returns the Daemon's ImageService
func (daemon *Daemon) ImageService() ImageService {
return daemon.imageService
}
// ImageBackend returns an image-backend for Swarm and the distribution router.
func (daemon *Daemon) ImageBackend() executorpkg.ImageBackend {
return &imageBackend{
ImageService: daemon.imageService,
registryService: daemon.registryService,
}
}
// RegistryService returns the Daemon's RegistryService
func (daemon *Daemon) RegistryService() *registry.Service {
return daemon.registryService
}
// BuilderBackend returns the backend used by the builder
func (daemon *Daemon) BuilderBackend() builder.Backend {
return struct {
*Daemon
ImageService
}{daemon, daemon.imageService}
}
// RawSysInfo returns *sysinfo.SysInfo.
func (daemon *Daemon) RawSysInfo() *sysinfo.SysInfo {
daemon.sysInfoOnce.Do(func() {
// We check if sysInfo is not set here, to allow some tests to
// override the actual sysInfo.
if daemon.sysInfo == nil {
daemon.sysInfo = getSysInfo(&daemon.config().Config)
}
})
return daemon.sysInfo
}
// imageBackend is used to satisfy the [executorpkg.ImageBackend] and
// [github.com/docker/docker/api/server/router/distribution.Backend]
// interfaces.
type imageBackend struct {
ImageService
registryService *registry.Service
}
api: fix "GET /distribution" endpoint ignoring mirrors If the daemon is configured to use a mirror for the default (Docker Hub) registry, the endpoint did not fall back to querying the upstream if the mirror did not contain the given reference. If the daemon is configured to use a mirror for the default (Docker Hub) registry, did not fall back to querying the upstream if the mirror did not contain the given reference. For pull-through registry-mirrors, this was not a problem, as in that case the registry would forward the request, but for other mirrors, no fallback would happen. This was inconsistent with how "pulling" images handled this situation; when pulling images, both the mirror and upstream would be tried. This problem was caused by the logic used in GetRepository, which had an optimization to only return the first registry it was successfully able to configure (and connect to), with the assumption that the mirror either contained all images used, or to be configured as a pull-through mirror. This patch: - Introduces a GetRepositories method, which returns all candidates (both mirror(s) and upstream). - Updates the endpoint to try all Before this patch: # the daemon is configured to use a mirror for Docker Hub cat /etc/docker/daemon.json { "registry-mirrors": ["http://localhost:5000"]} # start the mirror (empty registry, not configured as pull-through mirror) docker run -d --name registry -p 127.0.0.1:5000:5000 registry:2 # querying the endpoint fails, because the image-manifest is not found in the mirror: curl -s --unix-socket /var/run/docker.sock http://localhost/v1.43/distribution/docker.io/library/hello-world:latest/json { "message": "manifest unknown: manifest unknown" } With this patch applied: # the daemon is configured to use a mirror for Docker Hub cat /etc/docker/daemon.json { "registry-mirrors": ["http://localhost:5000"]} # start the mirror (empty registry, not configured as pull-through mirror) docker run -d --name registry -p 127.0.0.1:5000:5000 registry:2 # querying the endpoint succeeds (manifest is fetched from the upstream Docker Hub registry): curl -s --unix-socket /var/run/docker.sock http://localhost/v1.43/distribution/docker.io/library/hello-world:latest/json | jq . { "Descriptor": { "mediaType": "application/vnd.oci.image.index.v1+json", "digest": "sha256:1b9844d846ce3a6a6af7013e999a373112c3c0450aca49e155ae444526a2c45e", "size": 3849 }, "Platforms": [ { "architecture": "amd64", "os": "linux" } ] } Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-04 12:20:35 +00:00
// GetRepositories returns a list of repositories configured for the given
// reference. Multiple repositories can be returned if the reference is for
// the default (Docker Hub) registry and a mirror is configured, but it omits
// registries that were not reachable (pinging the /v2/ endpoint failed).
//
// It returns an error if it was unable to reach any of the registries for
// the given reference, or if the provided reference is invalid.
func (i *imageBackend) GetRepositories(ctx context.Context, ref reference.Named, authConfig *registrytypes.AuthConfig) ([]dist.Repository, error) {
return distribution.GetRepositories(ctx, ref, &distribution.ImagePullConfig{
Config: distribution.Config{
AuthConfig: authConfig,
RegistryService: i.registryService,
},
})
}
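// A minimal usage sketch (hypothetical caller, assuming the standard
// distribution.Repository interface): try each returned repository in order,
// e.g. a configured mirror first, then the upstream registry:
//
//	repos, err := backend.GetRepositories(ctx, ref, authConfig)
//	if err != nil {
//		return err
//	}
//	for _, repo := range repos {
//		if ms, err := repo.Manifests(ctx); err == nil {
//			_ = ms // use the manifest service
//			break
//		}
//	}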