moby/daemon/info.go

// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.19
package daemon // import "github.com/docker/docker/daemon"
import (
"context"
"fmt"
"os"
"runtime"
"strings"
"time"
"github.com/containerd/containerd/tracing"
"github.com/containerd/log"
"github.com/docker/docker/api"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/system"
"github.com/docker/docker/cli/debug"
"github.com/docker/docker/daemon/config"
"github.com/docker/docker/daemon/logger"
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/pkg/fileutils"
"github.com/docker/docker/pkg/meminfo"
"github.com/docker/docker/pkg/parsers/kernel"
"github.com/docker/docker/pkg/parsers/operatingsystem"
"github.com/docker/docker/pkg/platform"
"github.com/docker/docker/pkg/sysinfo"
"github.com/docker/docker/registry"
metrics "github.com/docker/go-metrics"
"github.com/opencontainers/selinux/go-selinux"
)
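
// doWithTrace runs f inside a tracing span with the given name and returns
// its result. It is used to instrument the individual lookups performed when
// collecting system information.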
func doWithTrace[T any](ctx context.Context, name string, f func() T) T {
_, span := tracing.StartSpan(ctx, name)
defer span.End()
return f()
}
// SystemInfo returns information about the host server the daemon is running on.
//
// The only error this should return is due to context cancellation/deadline.
// Anything else should be logged and ignored because this is looking up
// multiple things and is often used for debugging.
// The only valid case for an early return is when the caller no longer wants the result (i.e. the context was cancelled).
func (daemon *Daemon) SystemInfo(ctx context.Context) (*system.Info, error) {
defer metrics.StartTimer(hostInfoFunctions.WithValues("system_info"))()
sysInfo := daemon.RawSysInfo()
cfg := daemon.config()
v := &system.Info{
ID: daemon.id,
Images: daemon.imageService.CountImages(ctx),
IPv4Forwarding: !sysInfo.IPv4ForwardingDisabled,
BridgeNfIptables: !sysInfo.BridgeNFCallIPTablesDisabled,
BridgeNfIP6tables: !sysInfo.BridgeNFCallIP6TablesDisabled,
Name: hostName(ctx),
SystemTime: time.Now().Format(time.RFC3339Nano),
LoggingDriver: daemon.defaultLogConfig.Type,
KernelVersion: kernelVersion(ctx),
OperatingSystem: operatingSystem(ctx),
OSVersion: osVersion(ctx),
IndexServerAddress: registry.IndexServer,
OSType: runtime.GOOS,
Architecture: platform.Architecture,
RegistryConfig: doWithTrace(ctx, "registry.ServiceConfig", daemon.registryService.ServiceConfig),
NCPU: doWithTrace(ctx, "sysinfo.NumCPU", sysinfo.NumCPU),
MemTotal: memInfo(ctx).MemTotal,
GenericResources: daemon.genericResources,
DockerRootDir: cfg.Root,
Labels: cfg.Labels,
ExperimentalBuild: cfg.Experimental,
ServerVersion: dockerversion.Version,
HTTPProxy: config.MaskCredentials(getConfigOrEnv(cfg.HTTPProxy, "HTTP_PROXY", "http_proxy")),
HTTPSProxy: config.MaskCredentials(getConfigOrEnv(cfg.HTTPSProxy, "HTTPS_PROXY", "https_proxy")),
NoProxy: getConfigOrEnv(cfg.NoProxy, "NO_PROXY", "no_proxy"),
LiveRestoreEnabled: cfg.LiveRestoreEnabled,
Isolation: daemon.defaultIsolation,
CDISpecDirs: promoteNil(cfg.CDISpecDirs),
}
daemon.fillContainerStates(v)
daemon.fillDebugInfo(ctx, v)
daemon.fillAPIInfo(v, &cfg.Config)
// Retrieve platform specific info
if err := daemon.fillPlatformInfo(ctx, v, sysInfo, cfg); err != nil {
return nil, err
}
daemon.fillDriverInfo(v)
daemon.fillPluginsInfo(ctx, v, &cfg.Config)
daemon.fillSecurityOptions(v, sysInfo, &cfg.Config)
daemon.fillLicense(v)
daemon.fillDefaultAddressPools(ctx, v, &cfg.Config)
return v, nil
}
// SystemVersion returns version information about the daemon.
//
// The only error this should return is due to context cancellation/deadline.
// Anything else should be logged and ignored because this is looking up
// multiple things and is often used for debugging.
// The only valid case for an early return is when the caller no longer wants the result (i.e. the context was cancelled).
func (daemon *Daemon) SystemVersion(ctx context.Context) (types.Version, error) {
defer metrics.StartTimer(hostInfoFunctions.WithValues("system_version"))()
kernelVersion := kernelVersion(ctx)
cfg := daemon.config()
v := types.Version{
Components: []types.ComponentVersion{
{
Name: "Engine",
Version: dockerversion.Version,
Details: map[string]string{
"GitCommit": dockerversion.GitCommit,
"ApiVersion": api.DefaultVersion,
"MinAPIVersion": cfg.MinAPIVersion,
"GoVersion": runtime.Version(),
"Os": runtime.GOOS,
"Arch": runtime.GOARCH,
"BuildTime": dockerversion.BuildTime,
"KernelVersion": kernelVersion,
"Experimental": fmt.Sprintf("%t", cfg.Experimental),
},
},
},
// Populate deprecated fields for older clients
Version: dockerversion.Version,
GitCommit: dockerversion.GitCommit,
APIVersion: api.DefaultVersion,
MinAPIVersion: cfg.MinAPIVersion,
GoVersion: runtime.Version(),
Os: runtime.GOOS,
Arch: runtime.GOARCH,
BuildTime: dockerversion.BuildTime,
KernelVersion: kernelVersion,
Experimental: cfg.Experimental,
}
v.Platform.Name = dockerversion.PlatformName
if err := daemon.fillPlatformVersion(ctx, &v, cfg); err != nil {
return v, err
}
return v, nil
}
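
// fillDriverInfo populates the storage-driver name and status, and appends a
// warning when a deprecated storage driver is in use.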
func (daemon *Daemon) fillDriverInfo(v *system.Info) {
v.Driver = daemon.imageService.StorageDriver()
v.DriverStatus = daemon.imageService.LayerStoreStatus()
const warnMsg = `
WARNING: The %s storage-driver is deprecated, and will be removed in a future release.
Refer to the documentation for more information: https://docs.docker.com/go/storage-driver/`
switch v.Driver {
case "overlay":
v.Warnings = append(v.Warnings, fmt.Sprintf(warnMsg, v.Driver))
}
Add "Warnings" to /info endpoint, and move detection to the daemon When requesting information about the daemon's configuration through the `/info` endpoint, missing features (or non-recommended settings) may have to be presented to the user. Detecting these situations, and printing warnings currently is handled by the cli, which results in some complications: - duplicated effort: each client has to re-implement detection and warnings. - it's not possible to generate warnings for reasons outside of the information returned in the `/info` response. - cli-side detection has to be updated for new conditions. This means that an older cli connecting to a new daemon may not print all warnings (due to it not detecting the new conditions) - some warnings (in particular, warnings about storage-drivers) depend on driver-status (`DriverStatus`) information. The format of the information returned in this field is not part of the API specification and can change over time, resulting in cli-side detection no longer being functional. This patch adds a new `Warnings` field to the `/info` response. This field is to return warnings to be presented by the user. Existing warnings that are currently handled by the CLI are copied to the daemon as part of this patch; This change is backward-compatible with existing clients; old client can continue to use the client-side warnings, whereas new clients can skip client-side detection, and print warnings that are returned by the daemon. Example response with this patch applied; ```bash curl --unix-socket /var/run/docker.sock http://localhost/info | jq .Warnings ``` ```json [ "WARNING: bridge-nf-call-iptables is disabled", "WARNING: bridge-nf-call-ip6tables is disabled" ] ``` Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-19 11:45:32 +00:00
fillDriverWarnings(v)
}
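
// fillPluginsInfo populates the lists of volume, network, authorization, and
// logging drivers/plugins known to the daemon.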
func (daemon *Daemon) fillPluginsInfo(ctx context.Context, v *system.Info, cfg *config.Config) {
v.Plugins = system.PluginsInfo{
Volume: daemon.volumes.GetDriverList(),
Network: daemon.GetNetworkDriverList(ctx),
// The authorization plugins are returned in the order they are
// used as they constitute a request/response modification chain.
Authorization: cfg.AuthorizationPlugins,
Log: logger.ListDrivers(),
}
}
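
// fillSecurityOptions populates the security options enabled on the daemon,
// such as apparmor, seccomp, selinux, userns, rootless, cgroupns, and
// no-new-privileges.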
func (daemon *Daemon) fillSecurityOptions(v *system.Info, sysInfo *sysinfo.SysInfo, cfg *config.Config) {
var securityOptions []string
if sysInfo.AppArmor {
securityOptions = append(securityOptions, "name=apparmor")
}
if sysInfo.Seccomp && supportsSeccomp {
if daemon.seccompProfilePath != config.SeccompProfileDefault {
v.Warnings = append(v.Warnings, "WARNING: daemon is not using the default seccomp profile")
}
securityOptions = append(securityOptions, "name=seccomp,profile="+daemon.seccompProfilePath)
}
if selinux.GetEnabled() {
securityOptions = append(securityOptions, "name=selinux")
}
if rootIDs := daemon.idMapping.RootPair(); rootIDs.UID != 0 || rootIDs.GID != 0 {
securityOptions = append(securityOptions, "name=userns")
}
if Rootless(cfg) {
securityOptions = append(securityOptions, "name=rootless")
}
if cgroupNamespacesEnabled(sysInfo, cfg) {
securityOptions = append(securityOptions, "name=cgroupns")
}
if noNewPrivileges(cfg) {
securityOptions = append(securityOptions, "name=no-new-privileges")
}
v.SecurityOptions = securityOptions
}
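
// fillContainerStates populates the total container count and the number of
// running, paused, and stopped containers.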
func (daemon *Daemon) fillContainerStates(v *system.Info) {
cRunning, cPaused, cStopped := stateCtr.get()
v.Containers = cRunning + cPaused + cStopped
v.ContainersPaused = cPaused
v.ContainersRunning = cRunning
v.ContainersStopped = cStopped
}
// fillDebugInfo sets the current debugging state of the daemon, and additional
// debugging information, such as the number of Go-routines, and file descriptors.
//
// Note that this currently always collects the information, but the CLI only
// prints it if the daemon has debug enabled. We should consider either making
// this information optional (the CLI requesting "with debugging information"),
// or only collecting it if the daemon has debug enabled. For the CLI code, see
// https://github.com/docker/cli/blob/v20.10.12/cli/command/system/info.go#L239-L244
func (daemon *Daemon) fillDebugInfo(ctx context.Context, v *system.Info) {
v.Debug = debug.IsEnabled()
v.NFd = fileutils.GetTotalUsedFds(ctx)
v.NGoroutines = runtime.NumGoroutine()
v.NEventsListener = daemon.EventsService.SubscribersCount()
}
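
// fillAPIInfo appends deprecation warnings for API endpoints that listen on a
// TCP address without TLS, or without TLS client verification.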
func (daemon *Daemon) fillAPIInfo(v *system.Info, cfg *config.Config) {
const warn string = `
Access to the remote API is equivalent to root access on the host. Refer
to the 'Docker daemon attack surface' section in the documentation for
more information: https://docs.docker.com/go/attack-surface/`
for _, host := range cfg.Hosts {
// cfg.Hosts is normalized during startup, so it should always have a scheme/proto
proto, addr, _ := strings.Cut(host, "://")
if proto != "tcp" {
continue
}
const removal = "In future versions this will be a hard failure preventing the daemon from starting! Learn more at: https://docs.docker.com/go/api-security/"
if cfg.TLS == nil || !*cfg.TLS {
v.Warnings = append(v.Warnings, fmt.Sprintf("[DEPRECATION NOTICE]: API is accessible on http://%s without encryption.%s\n%s", addr, warn, removal))
continue
}
if cfg.TLSVerify == nil || !*cfg.TLSVerify {
v.Warnings = append(v.Warnings, fmt.Sprintf("[DEPRECATION NOTICE]: API is accessible on https://%s without TLS client verification.%s\n%s", addr, warn, removal))
continue
}
}
}
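
// fillDefaultAddressPools populates the default address pools configured for
// the daemon's networks.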
func (daemon *Daemon) fillDefaultAddressPools(ctx context.Context, v *system.Info, cfg *config.Config) {
_, span := tracing.StartSpan(ctx, "fillDefaultAddressPools")
defer span.End()
for _, pool := range cfg.DefaultAddressPools.Value() {
v.DefaultAddressPools = append(v.DefaultAddressPools, system.NetworkAddressPool{
Base: pool.Base,
Size: pool.Size,
})
}
}
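
// hostName returns the hostname of the host, or an empty string if it cannot
// be determined. Errors are logged, not returned.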
func hostName(ctx context.Context) string {
ctx, span := tracing.StartSpan(ctx, "hostName")
defer span.End()
hostname := ""
if hn, err := os.Hostname(); err != nil {
log.G(ctx).Warnf("Could not get hostname: %v", err)
} else {
hostname = hn
}
return hostname
}
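
// kernelVersion returns the kernel version of the host, or an empty string if
// it cannot be determined. Errors are logged, not returned.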
func kernelVersion(ctx context.Context) string {
ctx, span := tracing.StartSpan(ctx, "kernelVersion")
defer span.End()
var kernelVersion string
if kv, err := kernel.GetKernelVersion(); err != nil {
log.G(ctx).Warnf("Could not get kernel version: %v", err)
} else {
kernelVersion = kv.String()
}
return kernelVersion
}
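
// memInfo returns the host's memory statistics, or zero values if they cannot
// be read. Errors are logged, not returned.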
func memInfo(ctx context.Context) *meminfo.Memory {
ctx, span := tracing.StartSpan(ctx, "memInfo")
defer span.End()
memInfo, err := meminfo.Read()
if err != nil {
log.G(ctx).Errorf("Could not read system memory info: %v", err)
memInfo = &meminfo.Memory{}
}
return memInfo
}
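
// operatingSystem returns the name of the host's operating system, with a
// "(containerized)" suffix when the daemon is detected to run inside a
// container.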
func operatingSystem(ctx context.Context) (operatingSystem string) {
ctx, span := tracing.StartSpan(ctx, "operatingSystem")
defer span.End()
defer metrics.StartTimer(hostInfoFunctions.WithValues("operating_system"))()
if s, err := operatingsystem.GetOperatingSystem(); err != nil {
log.G(ctx).Warnf("Could not get operating system name: %v", err)
} else {
operatingSystem = s
}
if inContainer, err := operatingsystem.IsContainerized(); err != nil {
log.G(ctx).Errorf("Could not determine if daemon is containerized: %v", err)
operatingSystem += " (error determining if containerized)"
} else if inContainer {
operatingSystem += " (containerized)"
}
return operatingSystem
}
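
// osVersion returns the version of the host's operating system, or an empty
// string if it cannot be determined. Errors are logged, not returned.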
func osVersion(ctx context.Context) (version string) {
ctx, span := tracing.StartSpan(ctx, "osVersion")
defer span.End()
defer metrics.StartTimer(hostInfoFunctions.WithValues("os_version"))()
version, err := operatingsystem.GetOperatingSystemVersion()
if err != nil {
log.G(ctx).Warnf("Could not get operating system version: %v", err)
}
return version
}
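
// getEnvAny returns the value of the first of the given environment variables
// that is set to a non-empty value, or an empty string if none are set.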
func getEnvAny(names ...string) string {
for _, n := range names {
if val := os.Getenv(n); val != "" {
return val
}
}
return ""
}
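
// getConfigOrEnv returns config if it is non-empty, otherwise it falls back
// to the first non-empty value among the given environment variables.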
func getConfigOrEnv(config string, env ...string) string {
if config != "" {
return config
}
return getEnvAny(env...)
}
// promoteNil converts a nil slice to an empty slice.
// A non-nil slice is returned as is.
//
// TODO: make generic again once we are a go module,
// go.dev/issue/64759 is fixed, or we drop support for Go 1.21.
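//
// A possible generic form once those constraints are lifted (a sketch, not
// the current implementation):
//
//	func promoteNil[S ~[]E, E any](s S) S {
//		if s == nil {
//			return S{}
//		}
//		return s
//	}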
func promoteNil(s []string) []string {
if s == nil {
return []string{}
}
return s
}