moby/daemon/info.go

package daemon // import "github.com/docker/docker/daemon"

import (
	"context"
	"fmt"
	"os"
	"runtime"
	"strings"
	"time"

	"github.com/containerd/log"
	"github.com/docker/docker/api"
	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/system"
	"github.com/docker/docker/cli/debug"
	"github.com/docker/docker/daemon/config"
	"github.com/docker/docker/daemon/logger"
	"github.com/docker/docker/dockerversion"
	"github.com/docker/docker/pkg/fileutils"
	"github.com/docker/docker/pkg/meminfo"
	"github.com/docker/docker/pkg/parsers/kernel"
	"github.com/docker/docker/pkg/parsers/operatingsystem"
	"github.com/docker/docker/pkg/platform"
	"github.com/docker/docker/pkg/sysinfo"
	"github.com/docker/docker/registry"
	metrics "github.com/docker/go-metrics"
	"github.com/opencontainers/selinux/go-selinux"
)

// SystemInfo returns information about the host server the daemon is running on.
func (daemon *Daemon) SystemInfo() *system.Info {
	defer metrics.StartTimer(hostInfoFunctions.WithValues("system_info"))()

	sysInfo := daemon.RawSysInfo()
	cfg := daemon.config()

	v := &system.Info{
		ID:                 daemon.id,
		Images:             daemon.imageService.CountImages(),
		IPv4Forwarding:     !sysInfo.IPv4ForwardingDisabled,
		BridgeNfIptables:   !sysInfo.BridgeNFCallIPTablesDisabled,
		BridgeNfIP6tables:  !sysInfo.BridgeNFCallIP6TablesDisabled,
		Name:               hostName(),
		SystemTime:         time.Now().Format(time.RFC3339Nano),
		LoggingDriver:      daemon.defaultLogConfig.Type,
		KernelVersion:      kernelVersion(),
		OperatingSystem:    operatingSystem(),
		OSVersion:          osVersion(),
		IndexServerAddress: registry.IndexServer,
		OSType:             runtime.GOOS,
		Architecture:       platform.Architecture,
		RegistryConfig:     daemon.registryService.ServiceConfig(),
		NCPU:               sysinfo.NumCPU(),
		MemTotal:           memInfo().MemTotal,
		GenericResources:   daemon.genericResources,
		DockerRootDir:      cfg.Root,
		Labels:             cfg.Labels,
		ExperimentalBuild:  cfg.Experimental,
		ServerVersion:      dockerversion.Version,
		HTTPProxy:          config.MaskCredentials(getConfigOrEnv(cfg.HTTPProxy, "HTTP_PROXY", "http_proxy")),
		HTTPSProxy:         config.MaskCredentials(getConfigOrEnv(cfg.HTTPSProxy, "HTTPS_PROXY", "https_proxy")),
		NoProxy:            getConfigOrEnv(cfg.NoProxy, "NO_PROXY", "no_proxy"),
		LiveRestoreEnabled: cfg.LiveRestoreEnabled,
		Isolation:          daemon.defaultIsolation,
		CDISpecDirs:        promoteNil(cfg.CDISpecDirs),
	}

	daemon.fillContainerStates(v)
	daemon.fillDebugInfo(v)
	daemon.fillAPIInfo(v, &cfg.Config)

	// Retrieve platform-specific info
	daemon.fillPlatformInfo(v, sysInfo, cfg)
	daemon.fillDriverInfo(v)
	daemon.fillPluginsInfo(v, &cfg.Config)
	daemon.fillSecurityOptions(v, sysInfo, &cfg.Config)
	daemon.fillLicense(v)
	daemon.fillDefaultAddressPools(v, &cfg.Config)

	return v
}

// SystemVersion returns version information about the daemon.
func (daemon *Daemon) SystemVersion() types.Version {
	defer metrics.StartTimer(hostInfoFunctions.WithValues("system_version"))()

	kernelVersion := kernelVersion()
	cfg := daemon.config()

	v := types.Version{
		Components: []types.ComponentVersion{
			{
				Name:    "Engine",
				Version: dockerversion.Version,
				Details: map[string]string{
					"GitCommit":     dockerversion.GitCommit,
					"ApiVersion":    api.DefaultVersion,
					"MinAPIVersion": api.MinVersion,
					"GoVersion":     runtime.Version(),
					"Os":            runtime.GOOS,
					"Arch":          runtime.GOARCH,
					"BuildTime":     dockerversion.BuildTime,
					"KernelVersion": kernelVersion,
					"Experimental":  fmt.Sprintf("%t", cfg.Experimental),
				},
			},
		},

		// Populate deprecated fields for older clients
		Version:       dockerversion.Version,
		GitCommit:     dockerversion.GitCommit,
		APIVersion:    api.DefaultVersion,
		MinAPIVersion: api.MinVersion,
		GoVersion:     runtime.Version(),
		Os:            runtime.GOOS,
		Arch:          runtime.GOARCH,
		BuildTime:     dockerversion.BuildTime,
		KernelVersion: kernelVersion,
		Experimental:  cfg.Experimental,
	}

	v.Platform.Name = dockerversion.PlatformName

	daemon.fillPlatformVersion(&v, cfg)
	return v
}
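
// fillDriverInfo populates the storage-driver name and status on the Info
// struct, and appends a deprecation warning for storage drivers that are
// scheduled for removal.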
func (daemon *Daemon) fillDriverInfo(v *system.Info) {
	v.Driver = daemon.imageService.StorageDriver()
	v.DriverStatus = daemon.imageService.LayerStoreStatus()

	const warnMsg = `
WARNING: The %s storage-driver is deprecated, and will be removed in a future release.
         Refer to the documentation for more information: https://docs.docker.com/go/storage-driver/`
	switch v.Driver {
	case "overlay":
		v.Warnings = append(v.Warnings, fmt.Sprintf(warnMsg, v.Driver))
	}
Add "Warnings" to /info endpoint, and move detection to the daemon When requesting information about the daemon's configuration through the `/info` endpoint, missing features (or non-recommended settings) may have to be presented to the user. Detecting these situations, and printing warnings currently is handled by the cli, which results in some complications: - duplicated effort: each client has to re-implement detection and warnings. - it's not possible to generate warnings for reasons outside of the information returned in the `/info` response. - cli-side detection has to be updated for new conditions. This means that an older cli connecting to a new daemon may not print all warnings (due to it not detecting the new conditions) - some warnings (in particular, warnings about storage-drivers) depend on driver-status (`DriverStatus`) information. The format of the information returned in this field is not part of the API specification and can change over time, resulting in cli-side detection no longer being functional. This patch adds a new `Warnings` field to the `/info` response. This field is to return warnings to be presented by the user. Existing warnings that are currently handled by the CLI are copied to the daemon as part of this patch; This change is backward-compatible with existing clients; old client can continue to use the client-side warnings, whereas new clients can skip client-side detection, and print warnings that are returned by the daemon. Example response with this patch applied; ```bash curl --unix-socket /var/run/docker.sock http://localhost/info | jq .Warnings ``` ```json [ "WARNING: bridge-nf-call-iptables is disabled", "WARNING: bridge-nf-call-ip6tables is disabled" ] ``` Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-19 11:45:32 +00:00
	fillDriverWarnings(v)
}
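
// fillPluginsInfo collects the volume, network, authorization, and log
// plugins known to the daemon.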
func (daemon *Daemon) fillPluginsInfo(v *system.Info, cfg *config.Config) {
	v.Plugins = system.PluginsInfo{
		Volume:  daemon.volumes.GetDriverList(),
		Network: daemon.GetNetworkDriverList(),

		// The authorization plugins are returned in the order they are
		// used as they constitute a request/response modification chain.
		Authorization: cfg.AuthorizationPlugins,
		Log:           logger.ListDrivers(),
	}
}
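
// fillSecurityOptions reports which security facilities are active for the
// daemon, such as AppArmor, seccomp, SELinux, user namespaces, rootless mode,
// cgroup namespaces, and no-new-privileges.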
func (daemon *Daemon) fillSecurityOptions(v *system.Info, sysInfo *sysinfo.SysInfo, cfg *config.Config) {
	var securityOptions []string
	if sysInfo.AppArmor {
		securityOptions = append(securityOptions, "name=apparmor")
	}
	if sysInfo.Seccomp && supportsSeccomp {
		if daemon.seccompProfilePath != config.SeccompProfileDefault {
			v.Warnings = append(v.Warnings, "WARNING: daemon is not using the default seccomp profile")
		}
		securityOptions = append(securityOptions, "name=seccomp,profile="+daemon.seccompProfilePath)
	}
	if selinux.GetEnabled() {
		securityOptions = append(securityOptions, "name=selinux")
	}
	if rootIDs := daemon.idMapping.RootPair(); rootIDs.UID != 0 || rootIDs.GID != 0 {
		securityOptions = append(securityOptions, "name=userns")
	}
	if Rootless(cfg) {
		securityOptions = append(securityOptions, "name=rootless")
	}
	if cgroupNamespacesEnabled(sysInfo, cfg) {
		securityOptions = append(securityOptions, "name=cgroupns")
	}
	if noNewPrivileges(cfg) {
		securityOptions = append(securityOptions, "name=no-new-privileges")
	}

	v.SecurityOptions = securityOptions
}
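
// fillContainerStates sets the number of running, paused, and stopped
// containers, as well as the overall total.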
func (daemon *Daemon) fillContainerStates(v *system.Info) {
	cRunning, cPaused, cStopped := stateCtr.get()
	v.Containers = cRunning + cPaused + cStopped
	v.ContainersPaused = cPaused
	v.ContainersRunning = cRunning
	v.ContainersStopped = cStopped
}

// fillDebugInfo sets the current debugging state of the daemon, and additional
// debugging information, such as the number of Go-routines, and file descriptors.
//
// Note that this currently always collects the information, but the CLI only
// prints it if the daemon has debug enabled. We should consider either making
// this information optional (cli to request "with debugging information"), or
// only collecting it if the daemon has debug enabled. For the CLI code, see
// https://github.com/docker/cli/blob/v20.10.12/cli/command/system/info.go#L239-L244
func (daemon *Daemon) fillDebugInfo(v *system.Info) {
	v.Debug = debug.IsEnabled()
	v.NFd = fileutils.GetTotalUsedFds()
	v.NGoroutines = runtime.NumGoroutine()
	v.NEventsListener = daemon.EventsService.SubscribersCount()
}
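
// fillAPIInfo appends warnings to the Info struct for daemon API endpoints
// that are reachable over TCP without TLS, or without TLS client verification.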
func (daemon *Daemon) fillAPIInfo(v *system.Info, cfg *config.Config) {
const warn string = `
Access to the remote API is equivalent to root access on the host. Refer
to the 'Docker daemon attack surface' section in the documentation for
more information: https://docs.docker.com/go/attack-surface/`
for _, host := range cfg.Hosts {
// cnf.Hosts is normalized during startup, so should always have a scheme/proto
proto, addr, _ := strings.Cut(host, "://")
if proto != "tcp" {
continue
}
if cfg.TLS == nil || !*cfg.TLS {
v.Warnings = append(v.Warnings, fmt.Sprintf("WARNING: API is accessible on http://%s without encryption.%s", addr, warn))
continue
}
if cfg.TLSVerify == nil || !*cfg.TLSVerify {
v.Warnings = append(v.Warnings, fmt.Sprintf("WARNING: API is accessible on https://%s without TLS client verification.%s", addr, warn))
continue
}
}
}
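
// fillDefaultAddressPools copies the daemon's configured default address
// pools into the Info struct.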
func (daemon *Daemon) fillDefaultAddressPools(v *system.Info, cfg *config.Config) {
	for _, pool := range cfg.DefaultAddressPools.Value() {
		v.DefaultAddressPools = append(v.DefaultAddressPools, system.NetworkAddressPool{
			Base: pool.Base,
			Size: pool.Size,
		})
	}
}
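
// hostName returns the hostname of the machine, or an empty string (after
// logging a warning) if it cannot be determined.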
func hostName() string {
	hostname := ""
	if hn, err := os.Hostname(); err != nil {
		log.G(context.TODO()).Warnf("Could not get hostname: %v", err)
	} else {
		hostname = hn
	}
	return hostname
}
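
// kernelVersion returns the host's kernel version, or an empty string (after
// logging a warning) if it cannot be determined.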
func kernelVersion() string {
	var kernelVersion string
	if kv, err := kernel.GetKernelVersion(); err != nil {
		log.G(context.TODO()).Warnf("Could not get kernel version: %v", err)
	} else {
		kernelVersion = kv.String()
	}
	return kernelVersion
}
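
// memInfo reads the system's memory statistics. If they cannot be read, an
// error is logged and an empty Memory struct is returned so that callers
// never receive nil.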
func memInfo() *meminfo.Memory {
	memInfo, err := meminfo.Read()
	if err != nil {
		log.G(context.TODO()).Errorf("Could not read system memory info: %v", err)
		memInfo = &meminfo.Memory{}
	}
	return memInfo
}
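
// operatingSystem returns the name of the host's operating system, suffixed
// with "(containerized)" when the daemon is detected to be running inside a
// container.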
func operatingSystem() (operatingSystem string) {
	defer metrics.StartTimer(hostInfoFunctions.WithValues("operating_system"))()

	if s, err := operatingsystem.GetOperatingSystem(); err != nil {
		log.G(context.TODO()).Warnf("Could not get operating system name: %v", err)
	} else {
		operatingSystem = s
	}
	if inContainer, err := operatingsystem.IsContainerized(); err != nil {
		log.G(context.TODO()).Errorf("Could not determine if daemon is containerized: %v", err)
		operatingSystem += " (error determining if containerized)"
	} else if inContainer {
		operatingSystem += " (containerized)"
	}
	return operatingSystem
}
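
// osVersion returns the version of the host's operating system, or an empty
// string (after logging a warning) if it cannot be determined.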
func osVersion() (version string) {
	defer metrics.StartTimer(hostInfoFunctions.WithValues("os_version"))()

	version, err := operatingsystem.GetOperatingSystemVersion()
	if err != nil {
		log.G(context.TODO()).Warnf("Could not get operating system version: %v", err)
	}
	return version
}
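
// getEnvAny returns the value of the first environment variable in names
// that is set to a non-empty value, or an empty string if none are set.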
func getEnvAny(names ...string) string {
	for _, n := range names {
		if val := os.Getenv(n); val != "" {
			return val
		}
	}
	return ""
}
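
// getConfigOrEnv returns config if it is non-empty, and otherwise falls back
// to the first non-empty environment variable in env; for example,
// getConfigOrEnv(cfg.HTTPProxy, "HTTP_PROXY", "http_proxy") prefers the
// value from the daemon configuration over the environment.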
func getConfigOrEnv(config string, env ...string) string {
	if config != "" {
		return config
	}
	return getEnvAny(env...)
}

// promoteNil converts a nil slice to an empty slice of that type.
// A non-nil slice is returned as is.
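// For example, promoteNil[[]string](nil) returns []string{}, while
// promoteNil([]string{"a"}) returns its argument unchanged.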
func promoteNil[S ~[]E, E any](s S) S {
	if s == nil {
		return S{}
	}
	return s
}