Compare commits


921 commits

Author SHA1 Message Date
Sebastiaan van Stijn
69f9c8c906
Merge pull request #41948 from AkihiroSuda/cherrypick-41892-1903
[19.03 backport] pkg/archive: allow mknodding FIFO inside userns
2021-02-12 11:58:29 +01:00
Brian Goff
420b1d3625 pull: Validate layer digest format
Otherwise a malformed or empty digest may cause a panic.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit a7d4af84bd)
Signed-off-by: Tibor Vass <tibor@docker.com>
2021-01-28 21:43:36 +00:00
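For illustration, a minimal sketch of this kind of validation using the go-digest library (`digest.Parse`); the helper name and error text are hypothetical, not the actual pull code:

```
package main

import (
	"fmt"

	"github.com/opencontainers/go-digest"
)

// validateLayerDigest is a hypothetical helper: it rejects malformed or
// empty digests up front instead of letting them surface later as a panic.
func validateLayerDigest(s string) (digest.Digest, error) {
	dgst, err := digest.Parse(s)
	if err != nil {
		return "", fmt.Errorf("invalid layer digest %q: %w", s, err)
	}
	return dgst, nil
}

func main() {
	for _, s := range []string{
		"sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
		"",                     // empty digest
		"sha256:not-a-digest",  // malformed digest
	} {
		if _, err := validateLayerDigest(s); err != nil {
			fmt.Println("rejected:", err)
			continue
		}
		fmt.Println("ok:", s)
	}
}
```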
Brian Goff
5472f39022 buildkit: Apply apparmor profile
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 611eb6ffb3)

Renamed constant defaultAppArmorProfile to defaultApparmorProfile.

Signed-off-by: Tibor Vass <tibor@docker.com>
2021-01-28 21:43:09 +00:00
Tibor Vass
b96fb8837b vendor buildkit 396bfe20b590914cd77945ef0d70d976a0ed093c
Signed-off-by: Tibor Vass <tibor@docker.com>
2021-01-28 21:43:06 +00:00
Brian Goff
67de83e70b Use real root with 0701 perms
Various dirs in /var/lib/docker contain data that needs to be mounted
into a container. For this reason, these dirs are set to be owned by the
remapped root user, otherwise there can be permissions issues.
However, this unnecessarily exposes these dirs to an unprivileged user
on the host.

Instead, set the ownership of these dirs to the real root (or rather the
UID/GID of dockerd) with 0701 permissions, which allows the remapped
root to enter the directories but not read/write to them.
The remapped root needs to enter these dirs so the container's rootfs
can be configured... e.g. to mount /etc/resolv.conf.

This prevents an unprivileged user from having read/write access to
these dirs on the host.
The flip side of this is now any user can enter these directories.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit e908cc3901)

Cherry-pick conflict with eb14d936bf:
Kept old `container` variable name.
Signed-off-by: Tibor Vass <tibor@docker.com>
2021-01-28 21:42:41 +00:00
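A minimal sketch of the idea described above, assuming the daemon's own UID/GID stands in for "real root"; the path and helper are illustrative, not the actual daemon code:

```
package main

import (
	"fmt"
	"os"
)

// ensureDir creates a directory owned by the given UID/GID with mode 0701,
// so a remapped root can traverse into it but cannot read or write it.
func ensureDir(path string, uid, gid int) error {
	if err := os.MkdirAll(path, 0701); err != nil {
		return err
	}
	if err := os.Chown(path, uid, gid); err != nil {
		return err
	}
	// MkdirAll does not change the mode of a pre-existing directory,
	// so enforce it explicitly.
	return os.Chmod(path, 0701)
}

func main() {
	if err := ensureDir("/tmp/example-containers", os.Getuid(), os.Getgid()); err != nil {
		fmt.Println("error:", err)
	}
}
```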
Brian Goff
5eff67a2c2 Do not set DOCKER_TMP to be owned by remapped root
The remapped root does not need access to this dir.
Having this owned by the remapped root opens the host up to an
unprivileged user on the host being able to escalate privileges.

While it would not be normal for the remapped UID to be used outside of
the container context, it could happen.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit bfedd27259)
Signed-off-by: Tibor Vass <tibor@docker.com>
2021-01-28 21:42:20 +00:00
Brian Goff
1342c51d5e Ensure MkdirAllAndChown also sets perms
Generally, if we ever need to change the perms of a dir between versions,
this ensures the permissions actually change when we expect them to,
without having to handle special cases if the dir already existed.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit edb62a3ace)
Signed-off-by: Tibor Vass <tibor@docker.com>
2021-01-28 21:42:01 +00:00
Akihiro Suda
df6c53c924
pkg/archive: allow mknodding FIFO inside userns
Fix #41803

Also attempt to mknod devices.
Mknodding devices is likely to fail, but it is still worth trying when
running with a seccomp user notification.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit d5d5cccb7e)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-01-28 16:46:37 +09:00
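A small sketch of the FIFO case, assuming golang.org/x/sys/unix; the path, mode, and helper name are illustrative:

```
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// mkfifo creates a FIFO with mknod(2), which is permitted inside a user
// namespace, whereas creating real device nodes usually fails there unless
// something like a seccomp user notification handles the call.
func mkfifo(path string, mode uint32) error {
	return unix.Mknod(path, unix.S_IFIFO|mode, 0)
}

func main() {
	if err := mkfifo("/tmp/example-fifo", 0600); err != nil {
		fmt.Println("mknod failed:", err)
		return
	}
	fmt.Println("FIFO created")
}
```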
Akihiro Suda
7d75c1d40d
Merge pull request #41731 from thaJeztah/19.03_container_1.3.9
[19.03] update containerd binary to v1.3.9 (address CVE-2020-15257)
2020-12-01 12:45:08 +09:00
Sebastiaan van Stijn
d3c5506330
update containerd binary to v1.3.9 (address CVE-2020-15257)
full diff: https://github.com/containerd/containerd/compare/v1.3.8...v1.3.9

Release notes:

containerd 1.3.9
---------------------

Welcome to the v1.3.9 release of containerd!

The ninth patch release for containerd 1.3 is a security release to address
CVE-2020-15257. See GHSA-36xw-fx78-c5r4 for more details:
https://github.com/containerd/containerd/security/advisories/GHSA-36xw-fx78-c5r4

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-11-30 20:10:30 +01:00
Sebastiaan van Stijn
1babdf81e7
update containerd binary to v1.3.8
full diff: https://github.com/containerd/containerd/compare/v1.3.7...v1.3.8

Release notes:

containerd 1.3.8
----------------------

Welcome to the v1.3.8 release of containerd!

The eighth patch release for containerd 1.3 includes several bug fixes and updates.

Notable Updates

- Fix metrics monitoring of v2 runtime tasks
- Fix nil pointer error when restoring checkpoint
- Fix devmapper device deletion on rollback
- Fix integer overflow on Windows
- Update seccomp default profile

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-11-30 20:04:31 +01:00
Brian Goff
35968c420d
Merge pull request #41685 from ameyag/19.03-bmp-libnetwork-nil-deference
[19.03] docker/libnetwork 55e924b8a84231a065879156c0de95aefc5f5435 (bump_19.03 branch)
2020-11-18 10:03:17 -08:00
Ameya Gawde
f80f6304e2
Bump libnetwork
Signed-off-by: Ameya Gawde <agawde@mirantis.com>
2020-11-17 16:21:39 -08:00
Sebastiaan van Stijn
837baebb74
Merge pull request #41635 from AkihiroSuda/rootlesskit-0.11.0-1903
[19.03 backport] bump up rootlesskit to v0.11.0
2020-11-09 20:50:00 +01:00
Akihiro Suda
4b181db52b
bump up rootlesskit to v0.11.0
Important fix: lock the state dir to prevent automatic clean-up by systemd-tmpfiles
(https://github.com/rootless-containers/rootlesskit/pull/188)

Full changes: https://github.com/rootless-containers/rootlesskit/compare/v0.10.0...v0.11.0

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit c6accc67f2)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-11-05 16:53:57 +09:00
Akihiro Suda
619f1b54c6
Merge pull request #41596 from thaJeztah/19.03_backport_swagger_fix
[19.03 backport] docs: fix builder-version swagger
2020-10-29 12:37:35 +09:00
Tonis Tiigi
7487dca8a5
docs: fix builder-version swagger
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 8cc0fd811e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-10-27 20:42:13 +01:00
Brian Goff
bb69504a4a
Merge pull request #41557 from AkihiroSuda/cherrypick-41156-1903
[19.03 backport] dockerd-rootless.sh: support new containerd shim socket path convention
2020-10-16 13:06:56 -07:00
Akihiro Suda
c7253a0e1a
dockerd-rootless.sh: support containerd v1.4 shim socket path convention
The new shim socket path convention hardcodes `/run/containerd`:
https://github.com/containerd/containerd/pull/4343

`dockerd-rootless.sh` is updated to hide the rootful `/run/containerd`
from the mount namespace of the rootless dockerd.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 794aa20983)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-10-16 13:33:56 +09:00
Brian Goff
b27122246a
Merge pull request #41542 from thaJeztah/19.03_backport_fix_41517
2020-10-09 16:14:30 -07:00
Tianon Gravi
88eec2e811
Also trim "~..." from AppArmor versions
Signed-off-by: Tianon Gravi <admwiggin@gmail.com>
(cherry picked from commit 654cad4d9d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-10-09 22:22:56 +02:00
Akihiro Suda
ecd3baca25
pkg/aaparser: support parsing version like "3.0.0-beta1"
Fix #41517

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit ee079e4692)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-10-09 22:22:53 +02:00
Brian Goff
233a6379e5
Merge pull request #41522 from thaJeztah/19.03_backport_gcp_leak
[19.03 backport] Fix gcplogs memory/connection leak
2020-10-06 14:27:10 -07:00
Patrick Haas
74c0c5b7f1
Fix gcplogs memory/connection leak
The cloud logging client should be closed when the log driver is closed. Otherwise dockerd will keep a gRPC connection to the logging endpoint open indefinitely.

This results in a slow leak of tcp sockets (1) and memory (~200Kb) any time a container using `--log-driver=gcplogs` terminates.

Signed-off-by: Patrick Haas <patrickhaas@google.com>
(cherry picked from commit ef553e14a4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-10-03 00:30:30 +02:00
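A minimal sketch of the fix's shape, assuming the cloud.google.com/go/logging client; the driver type and project ID are illustrative, not the actual gcplogs driver:

```
package main

import (
	"context"
	"log"

	"cloud.google.com/go/logging"
)

// gcpLogger sketches the leak described above: the driver must close its
// cloud logging client when the driver itself is closed, otherwise the
// underlying gRPC connection stays open for every stopped container.
type gcpLogger struct {
	client *logging.Client
	logger *logging.Logger
}

func newGCPLogger(ctx context.Context, projectID string) (*gcpLogger, error) {
	client, err := logging.NewClient(ctx, projectID)
	if err != nil {
		return nil, err
	}
	return &gcpLogger{client: client, logger: client.Logger("container")}, nil
}

// Close flushes pending entries and releases the gRPC connection.
func (l *gcpLogger) Close() error {
	return l.client.Close()
}

func main() {
	ctx := context.Background()
	l, err := newGCPLogger(ctx, "my-project")
	if err != nil {
		log.Fatal(err)
	}
	defer l.Close() // without this, sockets and memory leak per terminated container
	l.logger.Log(logging.Entry{Payload: "hello"})
}
```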
Tianon Gravi
88623e101c
Merge pull request #41293 from thaJeztah/19.03_backport_fix_getexecuser
[19.03 backport] oci: correctly use user.GetExecUser interface
2020-09-25 18:35:14 -07:00
Brian Goff
705762f23c
Merge pull request #41494 from thaJeztah/19.03_backport_aws_sdk_go
[19.03 backport] awslogs: Update aws-sdk-go to support IMDSv2
2020-09-25 12:24:39 -07:00
Samuel Karp
5f32bd9ced
awslogs: Update aws-sdk-go to support IMDSv2
AWS recently launched a new version of the EC2 Instance Metadata
Service, which is used to provide credentials to the awslogs driver when
running on Amazon EC2.  This new version of the IMDS adds
defense-in-depth mechanisms against open firewalls, reverse proxies, and
SSRF vulnerabilities and is generally an improvement over the previous
version.  An updated version of the AWS SDK is able to handle both
the previous version and the new version of the IMDS and functions when
either is enabled.

More information about IMDSv2 is available at the following links:

* https://aws.amazon.com/blogs/security/defense-in-depth-open-firewalls-reverse-proxies-ssrf-vulnerabilities-ec2-instance-metadata-service/
* https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html

Closes https://github.com/moby/moby/issues/40422

Signed-off-by: Samuel Karp <skarp@amazon.com>
(cherry picked from commit 44a8e10bfc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-09-25 16:14:50 +02:00
Tibor Vass
bd33bbf049
Merge pull request #41314 from thaJeztah/19.03_backport_fix_racey_logger_test
[19.03 backport] test-fixes for flaky test: TestCheckCapacityAndRotate
2020-09-16 07:28:27 -07:00
Tibor Vass
426396f438
Merge pull request #41451 from thaJeztah/19.03_update_buildkit
[19.03] vendor: buildkit v0.6.4-32-gdf89d4dc
2020-09-15 16:02:53 -07:00
Tibor Vass
406dba269c
Merge pull request #41446 from thaJeztah/19.03_backport_swagger_fixes
[19.03 backport] swagger: fix MemTotal units in SystemInfo endpoint
2020-09-15 16:00:28 -07:00
Tibor Vass
50b33bd3cd
Merge pull request #41312 from thaJeztah/19.03_backport_pass_network_error
[19.03 backport] Check for context error that is wrapped in url.Error
2020-09-15 15:56:29 -07:00
Tibor Vass
519462f3df
Merge pull request #41334 from thaJeztah/19.03_backport_bump_golang_1.13.15
[19.03 backport] Bump Golang 1.13.15
2020-09-15 15:55:08 -07:00
Tibor Vass
64fffefffa
Merge pull request #40408 from thaJeztah/19.03_backport_update_containerd_1.3
[19.03 backport] update containerd binary v1.3.7
2020-09-15 15:54:32 -07:00
Sebastiaan van Stijn
8cf9d50fc0
[19.03] vendor: buildkit v0.6.4-32-gdf89d4dc
full diff: https://github.com/moby/buildkit/compare/v0.6.4-28-gda1f4bf1...v0.6.4-32-gdf89d4dc

no local changes in the daemon code

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-09-15 11:19:58 +02:00
Nikolay Edigaryev
a4e96a486f
swagger: fix MemTotal units in SystemInfo endpoint
MemTotal represents bytes, not kilobytes. See Linux[1] and Windows[2]
implementations.

[1]: f50a40e889/pkg/system/meminfo_linux.go (L49)
[2]: f50a40e889/pkg/system/meminfo_windows.go (L40)

Signed-off-by: Nikolay Edigaryev <edigaryev@gmail.com>
(cherry picked from commit 13e0ba700a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-09-14 14:37:54 +02:00
Sebastiaan van Stijn
9fe291827a
Bump Golang 1.13.15
full diff: https://github.com/golang/go/compare/go1.13.14...go1.13.15

go1.13.15 (released 2020/08/06) includes security fixes to the encoding/binary
package. See the Go 1.13.15 milestone on the issue tracker for details.

https://github.com/golang/go/issues?q=milestone%3AGo1.13.15+label%3ACherryPickApproved

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2a6325e310)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-08-10 12:16:14 +02:00
Akihiro Suda
a15a770e1b
update containerd to v1.3.7
Release note: https://github.com/containerd/containerd/releases/tag/v1.3.7

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 43d13054c5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-08-05 22:40:36 +02:00
Jintao Zhang
9380ec7397
update containerd to v1.3.6
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit 85e3dddccd)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-08-05 22:40:17 +02:00
Jintao Zhang
80cef48453
update containerd to v1.3.5
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit 0e915e5413)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-08-05 22:40:15 +02:00
Jintao Zhang
fc8f88dc14
update containerd to v1.3.4
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit fbaaca6351)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-08-05 22:40:13 +02:00
Sebastiaan van Stijn
89a4208757
update containerd binary to v1.3.3
full diff: https://github.com/containerd/containerd/compare/v1.3.2...v1.3.3
release notes: https://github.com/containerd/containerd/releases/tag/v1.3.3

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 27649ee44f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-08-05 22:40:11 +02:00
Jintao Zhang
490c45b756
Update containerd to v1.3.2
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit 7f809e1080)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-08-05 22:40:09 +02:00
Jintao Zhang
56d897347d
Update containerd to v1.3.1
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit 517946eb47)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-08-05 22:40:07 +02:00
Derek McGowan
d4c63720e9
update containerd binary v1.3.0
full diff: https://github.com/containerd/containerd/compare/v1.2.8..v1.3.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
(cherry picked from commit 6c94a50f41)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-08-05 22:40:04 +02:00
Brian Goff
ec14dc44d1
Fix log file rotation test.
The test was looking for the wrong file name.
Since compression happens asynchronously, sometimes the test would
succeed and sometimes fail.

This change makes sure to wait for the compressed version of the file
since we can't know when the compression is going to occur.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit c6d860ace6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-08-05 12:48:27 +02:00
Brian Goff
a958fc3e65
Fix flakey test for log file rotate.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 5ea5c02c88)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-08-05 12:48:17 +02:00
Evgeniy Makhrov
89da709cb7
Check for context error that is wrapped in url.Error
Signed-off-by: Evgeniy Makhrov <e.makhrov@corp.badoo.com>
(cherry picked from commit 8ccb46a521)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-08-04 17:44:42 +02:00
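A sketch of the kind of check this implies, assuming Go 1.13-style error unwrapping (`*url.Error` implements `Unwrap`); the helper is illustrative, not necessarily how the daemon implements it:

```
package main

import (
	"context"
	"errors"
	"fmt"
	"net/url"
)

// isContextErr reports whether an HTTP client error (typically a *url.Error)
// wraps a context cancellation or deadline, so the caller can treat the
// request as cancelled rather than failed.
func isContextErr(err error) bool {
	return errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded)
}

func main() {
	err := &url.Error{Op: "Get", URL: "http://example.com", Err: context.Canceled}
	fmt.Println(isContextErr(err)) // true: errors.Is unwraps the url.Error
}
```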
Tibor Vass
88820a4793
Merge pull request #41287 from thaJeztah/19.03_backport_bump_netns
[19.03 backport] vendor: vishvananda/netns db3c7e526aae966c4ccfa6c8189b693d6ac5d202
2020-07-31 12:30:33 +02:00
Aleksa Sarai
83baeafc3c
oci: correctly use user.GetExecUser interface
A nil interface in Go is not the same as a nil pointer that satisfies
the interface. libcontainer/user has special handling for missing
/etc/{passwd,group} files but this is all based on nil interface checks,
which were broken by Docker's usage of the API.

When combined with some recent changes in runc that made read errors
actually be returned to the caller, this results in spurious -EINVAL
errors when we should detect the situation as "there is no passwd file".

Signed-off-by: Aleksa Sarai <asarai@suse.de>
(cherry picked from commit 3108ae6226)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-29 16:04:23 +02:00
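The underlying Go pitfall, sketched for illustration: a typed nil pointer stored in an interface is not a nil interface, which defeats nil checks such as the missing-/etc/passwd handling mentioned above.

```
package main

import (
	"fmt"
	"io"
	"os"
)

// open returns an io.Reader for a file, or a typed nil pointer on error.
func open(path string) io.Reader {
	f, err := os.Open(path)
	if err != nil {
		return f // BUG: f is a nil *os.File, but the returned interface is non-nil
	}
	return f
}

func main() {
	r := open("/does/not/exist")
	fmt.Println(r == nil) // false: interface holds (type=*os.File, value=nil)

	var r2 io.Reader
	fmt.Println(r2 == nil) // true: a genuinely nil interface
}
```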
Sebastiaan van Stijn
dae08c333e
vendor: vishvananda/netns db3c7e526aae966c4ccfa6c8189b693d6ac5d202
full diff: 0a2b9b5464...db3c7e526a

- Use golang.org/x/sys/unix instead of syscall
- Set O_CLOEXEC when opening a network namespace
    - Fixes "the container‘s netns fds leak, causing the container netns to not
      clean up successfully after the container stops"
- Allows to create and delete named network namespaces

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 818bad6ef2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-29 12:48:55 +02:00
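A sketch of the O_CLOEXEC point above, assuming golang.org/x/sys/unix; the path and helper are illustrative:

```
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// openNetNS opens a network namespace fd with O_CLOEXEC, so the fd does not
// leak into child processes and keep the namespace alive after the container
// stops.
func openNetNS(path string) (int, error) {
	return unix.Open(path, unix.O_RDONLY|unix.O_CLOEXEC, 0)
}

func main() {
	fd, err := openNetNS("/proc/self/ns/net")
	if err != nil {
		fmt.Println("open failed:", err)
		return
	}
	defer unix.Close(fd)
	fmt.Println("netns fd:", fd)
}
```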
Sebastiaan van Stijn
93cb737687
[19.03] vendor: vishvananda/netns 0a2b9b5464df8343199164a0321edf3313202f7e
Same update as was vendored in e26e1cc5c1 on
master.

full diff: 7109fa855b...0a2b9b5464

- Add support for Go modules

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-29 12:46:12 +02:00
Tibor Vass
7d597ee2c9
Merge pull request #41273 from thaJeztah/19.03_backport_swagger_fixes
[19.03 backport] Assorted swagger fixes
2020-07-28 14:30:31 +02:00
Tibor Vass
22c458b67c
Merge pull request #41274 from thaJeztah/19.03_backport_Double_RLock
[19.03 backport] plugin: fix a double RLock bug
2020-07-28 14:27:10 +02:00
Tibor Vass
8b97280f11
Merge pull request #41279 from thaJeztah/19.03_bump_buildkit
[19.03] vendor: moby/buildkit v0.6.4-28-gda1f4bf1
2020-07-28 14:25:15 +02:00
Sebastiaan van Stijn
eda52d433e
[19.03] vendor: moby/buildkit v0.6.4-28-gda1f4bf1
full diff: a1e4f48e71...da1f4bf179

- [v0.6 backport] cache: avoid nil dereference
    - fixes panic: interface conversion: interface {} is nil, not int64

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-28 13:16:50 +02:00
Sebastiaan van Stijn
168254fcfa
Merge pull request #41277 from AkihiroSuda/rootlesskit-0.10.0-1903
[19.03 backport] bump up rootlesskit to v0.10.0
2020-07-28 11:25:20 +02:00
Akihiro Suda
9dc455dffb
bump up rootlesskit to v0.10.0
Fix port forwarder resource leak (https://github.com/rootless-containers/rootlesskit/issues/153).

Changes: https://github.com/rootless-containers/rootlesskit/compare/v0.9.5...v0.10.0

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 5bc41368d9)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-07-28 17:01:10 +09:00
Akihiro Suda
c200868fa2
Merge pull request #41271 from thaJeztah/19.03_backport_remove_dockerproject_from_tests
[19.03 backport] Remove apt.dockerproject.org from test
2020-07-28 16:44:42 +09:00
Sebastiaan van Stijn
9eade7d03c
docs: API v1.39: move system version response to definitions
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f2cc755f66)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-28 09:36:34 +02:00
Sebastiaan van Stijn
4685e9ef72
docs: API v1.40: move system version response to definitions
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e221931ccd)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-28 09:36:25 +02:00
Sebastiaan van Stijn
d8f22d0307
swagger: move system version response to definitions
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d4c4323e54)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-28 09:36:16 +02:00
Ziheng Liu
32366de5f9
plugin: fix a double RLock bug
Signed-off-by: Ziheng Liu <lzhfromustc@gmail.com>
(cherry picked from commit 34837febc4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-28 09:28:25 +02:00
Sebastiaan van Stijn
ad0278f002
docs: API v1.39: fix type for BuildCache CreatedAt and LastUsedAt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9a6402d761)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-28 09:22:10 +02:00
Sebastiaan van Stijn
cb8b7a282d
docs: API v1.40: fix type for BuildCache CreatedAt and LastUsedAt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a305abb1d1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-28 09:22:04 +02:00
Sebastiaan van Stijn
e1ae07b7a0
swagger: fix type for BuildCache CreatedAt and LastUsedAt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 61b770a63d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-28 09:22:01 +02:00
Tibor Vass
d49278cc17
Merge pull request #41269 from thaJeztah/19.03_update_buildkit
[19.03] vendor: moby/buildkit v0.6.4-26-ga1e4f48e
2020-07-28 00:15:13 +02:00
Sebastiaan van Stijn
892c228219
Remove apt.dockerproject.org from test
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit aa225972df)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-28 00:14:46 +02:00
Brian Goff
a7e309944b
Merge pull request #41248 from thaJeztah/19.03_backport_swagger_updates
2020-07-27 12:02:16 -07:00
Sebastiaan van Stijn
765245d54b
[19.03] vendor: moby/buildkit v0.6.4-26-ga1e4f48e
full diff: 4cb720ef64...a1e4f48e71

Brings in the cherry-picks from moby/buildkit#1596 and moby/buildkit#1598 :

- Add --force flag in git fetch command
- Fix socket handling during copy (Treat unix sockets as regular files)
- Remotecache: Only visit each item once when walking results.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-27 20:17:05 +02:00
Akihiro Suda
2d4bfdc789
Merge pull request #41081 from thaJeztah/19.03_backport_fix_sandbox_cleanup
[19.03 backport] allocateNetwork: fix network sandbox not cleaned up on failure
2020-07-26 16:17:32 +09:00
Tibor Vass
b990b6c2b0
Merge pull request #41235 from thaJeztah/19.03_backport_bump_golang_1.13.14
[19.03 backport] Bump Golang 1.13.14
2020-07-23 15:43:41 +02:00
Sebastiaan van Stijn
4d9397c268
swagger: sync updates to v1.39
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a8b2272ab3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-23 13:55:09 +02:00
Sebastiaan van Stijn
51bd95dc95
swagger: sync updates to v1.40
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1e89ca40ba)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-23 13:55:00 +02:00
Sebastiaan van Stijn
d5ba93575c
docs: sync API v1.40 swagger formatting with current version
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 01244e85e7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-23 13:26:21 +02:00
Sebastiaan van Stijn
12b7746a84
docs: sync API v1.39 swagger formatting with current version
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 63382e5f3b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-23 13:25:33 +02:00
Velko Ivanov
0c6bdf5974
docs: add example calculations to container stats API
Signed-off-by: Velko Ivanov <vivanov@deeperplane.com>
(cherry picked from commit 441211986c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-23 13:22:02 +02:00
Sebastiaan van Stijn
630185b4ae
swagger: add DeviceRequests to container create, inspect example
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d4d62b658d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-23 13:20:38 +02:00
Sebastiaan van Stijn
d7423180e7
swagger: move NetworkingConfig to definitions
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 89876e8165)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-23 13:20:28 +02:00
Sebastiaan van Stijn
c30ff6885e
swagger: reformat, and wrap to ~80-chars
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3b261d7709)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-23 13:19:53 +02:00
Nikolay Edigaryev
7005841048
swagger: clarify the meaning of Image field in ContainerInspect endpoint
"Container's image" term is rather ambiguous: it can be both a name and an ID.

Looking at the sources[1], it's actually an image ID, so bring some clarity.

[1]: a6a47d1a49/daemon/inspect.go (L170)

Signed-off-by: Nikolay Edigaryev <edigaryev@gmail.com>
(cherry picked from commit c44fb42377)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-23 13:15:46 +02:00
Sebastiaan van Stijn
1608292c09
Bump Golang 1.13.14
full diff: https://github.com/golang/go/compare/go1.13.13...go1.13.14

go1.13.14 (released 2020/07/16) includes fixes to the compiler, vet, and the
database/sql, net/http, and reflect packages. See the Go 1.13.14 milestone on
the issue tracker for details:

https://github.com/golang/go/issues?q=milestone%3AGo1.13.14+label%3ACherryPickApproved

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9c66a2f4e1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-20 10:09:49 +02:00
Akihiro Suda
1763b4e88b
Bump Go 1.13.13
Includes security fixes to the `crypto/x509` and `net/http` packages.

https://github.com/golang/go/issues?q=milestone%3AGo1.13.13+label%3ACherryPickApproved

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit bc4f242e79)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-20 10:09:04 +02:00
Sebastiaan van Stijn
5e8ab898c7
Merge pull request #41222 from thaJeztah/19.03_bump_buildkit
[19.03] vendor: moby/buildkit v0.6.4-20-g4cb720ef
2020-07-17 08:52:23 +02:00
Sebastiaan van Stijn
23d47bd12e
[19.03] vendor: moby/buildkit v0.6.4-20-g4cb720ef
full diff: dc6afa0f75...4cb720ef64

- contenthash: ignore system and security xattrs in calculation
    - fixes moby/buildkit#1330 COPY cache not re-used depending on SELinux environment
    - fixes https://github.com/moby/moby/issues/39003#issuecomment-574615437
- contenthash: allow security.capability in cache checksum
- inline cache: fix handling of duplicate blobs
    - fixes moby/buildkit#1388 cache-from working unreliably

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-16 21:56:22 +02:00
Akihiro Suda
789bd1c67b
Merge pull request #41192 from ameyag/19.03-hcsshim-vndr
[19.03 backport]  vendor: hcsshim 9dcb42f100215f8d375b4a9265e5bba009217a85
2020-07-10 10:07:41 +09:00
Tõnis Tiigi
0eaa22b95d
Merge pull request #41185 from thaJeztah/19.03_bump_buildkit
[19.03] vendor: buildkit dc6afa0f755f6cbb7e85f0df4ff4b87ec280cb32 (v0.6.4-15-gdc6afa0f)
2020-07-09 12:21:35 -07:00
Kevin Parsons
9d6053eda2
Revendor hcsshim to fix image import bug
This change brings in a single new commit from Microsoft/hcsshim. The
commit fixes an issue when unpacking a Windows container layer which
could result in incorrect directory timestamps.

This manifested most significantly in an impact to startup times of
some Windows container images (such as anything based on servercore).

Signed-off-by: Kevin Parsons <kevpar@microsoft.com>
(cherry picked from commit 2865478487)
Signed-off-by: Ameya Gawde <agawde@mirantis.com>
2020-07-08 14:08:50 -07:00
Sebastiaan van Stijn
589b07262c
vendor: Microsoft/hcsshim v0.8.9
full diff: https://github.com/Microsoft/hcsshim/compare/v0.8.7...v0.8.9

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 08d3774304)
Signed-off-by: Ameya Gawde <agawde@mirantis.com>
2020-07-08 14:07:24 -07:00
Sebastiaan van Stijn
e7c2b106ec
[19.03] vendor: buildkit dc6afa0f755f6cbb7e85f0df4ff4b87ec280cb32 (v0.6.4-15-gdc6afa0f)
full diff: a7d7b7f1e6...dc6afa0f75

- solver: avoid recursive loop on cache-export
    - fixes moby/buildkit#1336 --export-cache option crashes buildkitd on custom frontend
    - fixes moby/buildkit#1313 Dockerd / buildkit in a infinite loop and burning cpu
    - fixes / addresses moby/moby#41044 19.03.9 goroutine stack exceeds 1000000000-byte limit
    - fixes / addresses moby/moby#40993 Multistage docker build fails with unexpected EOF

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-08 11:42:15 +02:00
Sebastiaan van Stijn
a40b877fbb
Merge pull request #41133 from roidelapluie/bsd2
[19.03] Enable build on Dragonfly/NetBSD
2020-07-06 17:08:19 +02:00
Julien Pivotto
7dd9fdcfbe Enable client on netbsd and dragonfly
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
(cherry picked from commit 87a7fc1ced)
2020-06-21 07:43:09 +02:00
Brian Goff
9dc6525e61
Merge pull request #41124 from thaJeztah/19.03_bump_libnetwork
[19.03] vendor: docker/libnetwork 026aabaa659832804b01754aaadd2c0f420c68b6 (bump_19.03 branch)
2020-06-18 11:13:00 -07:00
Tibor Vass
abb5beffff
Merge pull request #41088 from thaJeztah/19.03_backport_invalid_cpu_shares_fix
[19.03 backport] int-cli/TestRunInvalidCPUShares: fix for newer runc
2020-06-17 10:38:05 -07:00
Sebastiaan van Stijn
b4ca19a992
vendor: docker/libnetwork 026aabaa659832804b01754aaadd2c0f420c68b6 (bump_19.03 branch)
full diff: 153d0769a1...026aabaa65

- Fix 'failed to get network during CreateEndpoint'
- log error instead if disabling IPv6 router advertisement failed

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-06-17 15:59:05 +02:00
Akihiro Suda
d5a82971a4
Merge pull request #41082 from thaJeztah/19.03_backport_bump_golang_1.13.12
[19.03 backport] Bump Golang 1.13.12
2020-06-12 07:38:30 +09:00
Kir Kolyshkin
5fce12cf25
int-cli/TestRunInvalidCPUShares: fix for newer runc
A newer runc changed [1] a couple of error messages checked in this
test to be lowercase, which led to a mismatch in this test case.

The fix is to remove "The" (which was replaced with "the").

[1] https://github.com/opencontainers/runc/pull/2441

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 56de0489fc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-06-10 13:49:39 +02:00
Jintao Zhang
058ea43c5c
Bump Golang 1.13.12
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit 004fd7be92)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-06-08 12:13:07 +02:00
Sebastiaan van Stijn
ae158b371c
allocateNetwork: fix network sandbox not cleaned up on failure
The defer function was checking the local `err` variable, not the
error that was returned by the function. As a result, the sandbox
would never be cleaned up for containers that used "none" networking
when a failure occurred during setup.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b98b8df886)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-06-08 12:05:58 +02:00
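A sketch of the bug class, not the actual allocateNetwork code: the deferred cleanup must look at the error the function actually returns (for example via a named return value), not a local variable that stays nil on the failing path.

```
package main

import (
	"errors"
	"fmt"
)

func doWork() error { return errors.New("setup failed") }

// Broken: the deferred cleanup looks at a local err that is never assigned
// on the failing path, so cleanup never runs.
func setupBroken() error {
	var err error
	defer func() {
		if err != nil {
			fmt.Println("broken: cleanup ran")
		}
	}()
	if e := doWork(); e != nil {
		return e
	}
	return err
}

// Fixed: a named return value lets the deferred function see the error the
// function actually returns.
func setupFixed() (retErr error) {
	defer func() {
		if retErr != nil {
			fmt.Println("fixed: cleanup ran")
		}
	}()
	return doWork()
}

func main() {
	_ = setupBroken() // prints nothing
	_ = setupFixed()  // prints "fixed: cleanup ran"
}
```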
Tibor Vass
77e06fda0c vendor libnetwork to 153d0769a1181bf591a9637fd487a541ec7db1e6
Signed-off-by: Tibor Vass <tibor@docker.com>
2020-05-31 23:42:53 +00:00
Tibor Vass
b47e742558
Merge pull request #41027 from thaJeztah/19.03_bump_criu
[19.03 backport] Dockerfile: bump CRIU 3.14
2020-05-28 11:23:17 -07:00
Sebastiaan van Stijn
b85d75e29a
Merge pull request #41009 from tiborvass/19.03-fix-dns-fallback-regression
[19.03] Fix dns fallback regression
2020-05-28 18:41:06 +02:00
Tibor Vass
c104a50de4 integration: Add TestDaemonDNSFallback
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit e5ad8b14daf0a1ddb12c0b83d153531afffb908b)
Signed-off-by: Tibor Vass <tibor@docker.com>
2020-05-28 10:52:02 +00:00
Tibor Vass
9482566a5c vendor libnetwork to 71d4d82a5ce50453b1121d95544f0a2ae95bef9b
Signed-off-by: Tibor Vass <tibor@docker.com>
2020-05-28 10:52:02 +00:00
Tibor Vass
d4e12315cd hack: add more debugging to understand exit codepath
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit b280ea114f)
Signed-off-by: Tibor Vass <tibor@docker.com>
2020-05-28 03:32:24 +00:00
Sebastiaan van Stijn
4c24512241
Dockerfile: bump CRIU 3.14
full diff: https://github.com/checkpoint-restore/criu/compare/v3.13...v3.14

New features

- C/R of memfd memory mappings and file descriptors.
- Add time namespace support.
- Add the read pre-dump mode which uses process_vm_readv.
- Add --cgroup-yard option
- Add support of the cgroup v2 freezer.
- Add support of opened O_PATH fds.

Bugfixes

- Fix C/R ia32 processes on AMD
- Fix cross-compilation
- Many fixes here and there

Improvements

- Use clone3() with set_tid to restore processes
- Clean up compel headers.
- Use the new mount API

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a342010823)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-05-26 10:45:29 +02:00
Tibor Vass
ad0f0b3970
Merge pull request #40978 from thaJeztah/19.03_backport_bump_golang_1.13.11
[19.03 backport] Bump Golang 1.13.11
2020-05-20 14:35:26 -07:00
Sebastiaan van Stijn
29796375c9
Bump Golang 1.13.11
full diff: https://github.com/golang/go/compare/go1.13.10...go1.13.11

go1.13.11 (released 2020/05/14) includes fixes to the compiler. See the Go 1.13.11
milestone on the issue tracker for details:

https://github.com/golang/go/issues?q=milestone%3AGo1.13.11+label%3ACherryPickApproved

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 90758fb028)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-05-19 20:18:02 +02:00
Tibor Vass
c1cc6ec81a
Merge pull request #40988 from thaJeztah/19.03_backport_fix_gotestsum_install
[19.03 backport] Fix bug in gotestsum installer causing dependencies to not be downloaded
2020-05-19 10:41:05 -07:00
Sebastiaan van Stijn
8f1ab4e612
Fix bug in gotestsum installer causing dependencies to not be downloaded
Building gotestsum started to fail after the repository removed some
dependencies on master.

What happens is that first, we `go get` the package (with go modules disabled);

    GO111MODULE=off go get -d gotest.tools/gotestsum

Which gets the latest version from master, and fetches the dependencies used
on master. Then we checkout the version we want to install (for example `v0.3.5`)
and run go build.

However, `v0.3.5` depends on logrus, and given that we ran `go get` for `master`,
that dependency was not fetched, and the build fails.

This patch modifies the installer to use go modules (alternatively we could
probably run `go get .` after checking out the `v0.3.5` version).

We need to modify all installers, as it looks like this is a standard pattern
we use, but other dependencies were not failing (yet), so this patch only
addresses the immediate failure.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1d9da1b233)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-05-17 22:52:54 +02:00
Tibor Vass
811a247d06
Merge pull request #40970 from AkihiroSuda/archive-40939-1903
[19.03 backport] pkg/archive: escape ":" symbol in overlay lowerdir
2020-05-14 16:03:53 -07:00
Tibor Vass
4d1885fb94
Merge pull request #40964 from AkihiroSuda/rootless-requires-slirp4netns-040-1903
[19.03 backport] dockerd-rootless.sh: bump up slirp4netns requirement to v0.4.0
2020-05-14 15:37:08 -07:00
Akihiro Suda
0a3b2bda34 pkg/archive: escape ":" symbol in overlay lowerdir
lowerdir needs escaping:
https://github.com/torvalds/linux/blob/v5.4/fs/overlayfs/super.c#L835-L853

Fix #40939

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 6a5e3547fb)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-05-15 06:57:49 +09:00
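A sketch of the escaping idea, assuming backslash-escaping of the separator characters in the mount option string; the helper name is illustrative:

```
package main

import (
	"fmt"
	"strings"
)

// escapeLowerdir escapes characters that act as separators in the overlay
// mount option string: ':' separates lowerdir entries, ',' separates mount
// options, and '\' is the escape character itself.
func escapeLowerdir(path string) string {
	r := strings.NewReplacer(`\`, `\\`, ":", `\:`, ",", `\,`)
	return r.Replace(path)
}

func main() {
	lowers := []string{"/var/lib/docker/overlay2/a:b/diff", "/var/lib/docker/overlay2/c/diff"}
	escaped := make([]string, len(lowers))
	for i, l := range lowers {
		escaped[i] = escapeLowerdir(l)
	}
	fmt.Println("lowerdir=" + strings.Join(escaped, ":"))
}
```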
Akihiro Suda
9057ddf37c dockerd-rootless.sh: bump up slirp4netns requirement to v0.4.0
slirp4netns v0.3.X turned out not to work with RootlessKit >= v0.7.1:
https://github.com/rootless-containers/rootlesskit/issues/143

As slirp4netns v0.3.X reached EOL on Mar 31, 2020, RootlessKit is not
going to fix support for slirp4netns v0.3.X.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit c86abee1a4)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-05-13 12:22:27 +09:00
Akihiro Suda
ab567a4327
Merge pull request #40955 from tonistiigi/19.03-buildkit-update
[19.03] vendor: update buildkit to a7d7b7f1
2020-05-12 13:56:06 +09:00
Akihiro Suda
ee3f3ece72
Merge pull request #40951 from AkihiroSuda/rootlesskit-095-1903
[19.03 backport] bump up rootlesskit to v0.9.5
2020-05-12 13:39:56 +09:00
Tonis Tiigi
a76633684b vendor: update buildkit to a7d7b7f1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-05-11 16:28:55 -07:00
Akihiro Suda
0803200be9
Merge pull request #40946 from thaJeztah/19.03_backport_fix_selinux_enotsup
[19.03 backport] SELinux: fix ENOTSUP errors not being detected when relabeling
2020-05-12 00:33:11 +09:00
Akihiro Suda
706008a1da bump up rootlesskit to v0.9.5
Supports numeric IDs in /etc/subuid and /etc/subgid.
Fix #40926

Full changes: https://github.com/rootless-containers/rootlesskit/compare/v0.9.4...v0.9.5

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 17bb5f4b15)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-05-11 23:53:21 +09:00
Sebastiaan van Stijn
57f6c9a0ef
SELinux: fix ENOTSUP errors not being detected when relabeling
Commit 12c7541f1f updated the
opencontainers/selinux dependency to v1.3.1, which had a breaking
change in the errors that were returned.

Before v1.3.1, the "raw" `syscall.ENOTSUP` was returned if the
underlying filesystem did not support xattrs, but later versions
wrapped the error, which caused our detection to fail.

This patch uses `errors.Is()` to check for the underlying error.
This requires github.com/pkg/errors v0.9.1 or above (older versions
could use `errors.Cause()`, but are not compatible with "native"
wrapping of errors in Go 1.13 and up, and could potentially cause
these errors to not be detected again).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 49f8a4224c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-05-10 17:08:42 +02:00
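A sketch of the difference using only the standard library; the function name is illustrative:

```
package main

import (
	"errors"
	"fmt"
	"syscall"
)

// relabelSupported shows the fixed check: with wrapped errors, comparing
// err == syscall.ENOTSUP no longer matches, so errors.Is must be used to
// detect filesystems without xattr support.
func relabelSupported(err error) bool {
	return !errors.Is(err, syscall.ENOTSUP)
}

func main() {
	wrapped := fmt.Errorf("relabeling /data: %w", syscall.ENOTSUP)

	fmt.Println(wrapped == syscall.ENOTSUP)              // false: the old check misses it
	fmt.Println(errors.Is(wrapped, syscall.ENOTSUP))     // true: the fixed check
	fmt.Println("supported:", relabelSupported(wrapped)) // supported: false
}
```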
Sebastiaan van Stijn
c4c6cf6b6a
Merge pull request #40921 from cpuguy83/19.03_log_rotate_error_handling
19.03: logfile: Check if log is closed on close error during rotate
2020-05-08 01:13:30 +02:00
Brian Goff
7d4dd91a52 logfile: Check if log is closed on close error during rotate
This prevents getting into a situation where a container log cannot make
progress because we tried to rotate a file, got an error, and now the
file is closed. The next time we try to write a log entry it will try
and rotate again but error that the file is already closed.

I wonder if there is more we can do to beef up this rotation logic.
Found this issue while investigating missing logs with errors in the
docker daemon logs like:

```
Failed to log message for json-file: error closing file: close <file>:
file already closed
```

I'm not sure why the original rotation failed since the data was no
longer available.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 3989f91075)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2020-05-07 12:22:58 -07:00
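A sketch of the idea (assumed, not the actual logfile code): if closing the old file during rotation fails because it is already closed, treat that as non-fatal so the logger can keep making progress instead of failing every subsequent write.

```
package main

import (
	"errors"
	"fmt"
	"os"
)

// closeForRotate ignores "already closed" errors so a previously failed
// rotation does not block all further log writes.
func closeForRotate(f *os.File) error {
	if err := f.Close(); err != nil && !errors.Is(err, os.ErrClosed) {
		return err
	}
	return nil
}

func main() {
	f, err := os.CreateTemp("", "logfile")
	if err != nil {
		fmt.Println(err)
		return
	}
	f.Close() // simulate a file already closed by an earlier error path

	if err := closeForRotate(f); err != nil {
		fmt.Println("rotation blocked:", err)
		return
	}
	fmt.Println("rotation can continue")
}
```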
Sebastiaan van Stijn
edf2c49410 vendor: pkg/errors v0.9.1
full diff: https://github.com/pkg/errors/compare/v0.8.1...v0.9.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit dc089c22ce)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2020-05-07 12:22:58 -07:00
Tibor Vass
1adcc64f40
Merge pull request #40877 from thaJeztah/19.03_update_buildkit
[19.03] vendor: buildkit v0.6.4-5-g59e305aa
2020-05-01 15:51:47 -07:00
Tibor Vass
e7349349fd
Merge pull request #40850 from thaJeztah/19.03_backport_criu_3.13
[19.03 backport] Update CRIU to v3.13 "Silicon Willet"
2020-04-30 08:59:55 -07:00
Tibor Vass
3677003554
Merge pull request #40782 from thaJeztah/19.03_backport_switch_to_s390x_ubuntu_1804
[19.03 backport] Switch to s390x Ubuntu 18.04
2020-04-30 08:26:41 -07:00
Sebastiaan van Stijn
63841af153
[19.03] vendor: buildkit v0.6.4-5-g59e305aa
full diff: b26cff2413...59e305aa33

- moby/buildkit#1469 Avoid creation of irrelevant temporary files on Windows
    - backport of moby/buildkit#1462 for the docker-19.03/v0.6 branch

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-30 17:13:00 +02:00
Tianon Gravi
2fbb374ab7
Merge pull request #40863 from AkihiroSuda/rootlesskit-094-1903
[19.03 backport] bump up rootlesskit to v0.9.4
2020-04-28 23:23:49 -07:00
Akihiro Suda
946d0ff67e bump up rootlesskit to v0.9.4
Now `rootlesskit-docker-proxy` returns a detailed error message when
exposing privileged ports: https://github.com/rootless-containers/rootlesskit/pull/136

Full changes: https://github.com/rootless-containers/rootlesskit/compare/v0.7.1...v0.9.4

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit f6ac841633)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-04-28 05:18:11 +09:00
Sebastiaan van Stijn
70e7d6fe4a
Update CRIU to v3.13 "Silicon Willet"
full diff: https://github.com/checkpoint-restore/criu/compare/v3.12...v3.13

Here we have some bugfixes, a huge *.py patch for coding style,
and a nice set of new features like 32bit for ARM, TLS for the page
server, and a new mode for CGroups.

New features

- VDSO: arm32 support
- Add TLS support for page server communications
- "Ignore" mode for --manage-cgroups
- Restore SO_BROADCAST option for inet sockets

Bugfixes

- Auxiliary events were left in inotify queues
- Lazy-pages daemon didn't detect stack pages and surrounders properly and marked them as "lazy"
- Memory and resource leakage were detected by coverity, cppcheck and clang

Improvements

- Use gettimeofday() directly from vdso for restore timings
- Reformat all .py code into pep8 style

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f508db4833)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-22 13:55:32 +02:00
Akihiro Suda
f432f71595
Merge pull request #40563 from thaJeztah/19.03_backport_fix_windows_file_handles
[19.03 backport] Use FILE_SHARE_DELETE for log files on Windows.
2020-04-17 17:00:19 +09:00
Akihiro Suda
47a6d9b54f
Merge pull request #40565 from thaJeztah/19.03_backport_fix_bip_subnet_config
[19.03 backport] Set the bip network value as the subnet
2020-04-17 16:59:34 +09:00
Akihiro Suda
6a0995e0d8
Merge pull request #40831 from thaJeztah/19.03_bump_swarmkit
[19.03] vendor: swarmkit 0b8364e7d08aa0e972241eb59ae981a67a587a0e
2020-04-17 16:35:05 +09:00
Sebastiaan van Stijn
e4f239d68e
[19.03] vendor: swarmkit 0b8364e7d08aa0e972241eb59ae981a67a587a0e
full diff: 062b694b46...0b8364e7d0

- Fix leaking tasks.db

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-16 21:55:43 +02:00
Brian Goff
25b82fa9b8
Merge pull request #40801 from thaJeztah/19.03_backport_update_go_events
[19.03 backport] vendor: update go-events to fix alignment for 32bit systems
2020-04-15 14:38:17 -07:00
Sebastiaan van Stijn
e149ff62fe
vendor: update go-events to fix alignment for 32bit systems
- relates to moby/buildkit 1111
- relates to moby/buildkit 1079
- relates to docker/buildx 129

full diff: 9461782956...e31b211e4f

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e7183dbfe9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-15 21:17:11 +02:00
Brian Goff
90a31c4829
Merge pull request #40809 from thaJeztah/19.03_update_libnetwork
[19.03] update libnetwork b9bcf0c3fba9ef8897c9676c5b70ba0345b84b17
2020-04-15 06:19:44 -07:00
Brian Goff
aa98b4f5d6
Merge pull request #40803 from thaJeztah/19.03_backport_bump_golang_1.13.10
[19.03 backport] Bump Golang 1.13.10
2020-04-13 10:59:40 -07:00
Sebastiaan van Stijn
860e7e273d
Merge pull request #40800 from thaJeztah/19.03_backport_api_docs_fix_link
[19.03 backport] api docs: fix broken link on GitHub
2020-04-12 15:47:12 +02:00
Tianon Gravi
a58b52b037
Merge pull request #40799 from thaJeztah/19.03_backport_fix_test_filter
[19.03 backport] Fix TEST_FILTER to work for both "integration" and "integration-cli"
2020-04-10 12:35:59 -07:00
Sebastiaan van Stijn
a6beb24dc5
[19.03] update libnetwork b9bcf0c3fba9ef8897c9676c5b70ba0345b84b17
full diff: 0941c3f409...b9bcf0c3fb

- docker/libnetwork#2545 Fix NPE due to null value returned by ep.Iface()
    - backport of docker/libnetwork#2544
    - addresses docker/docker#37506

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-10 20:34:31 +02:00
Sebastiaan van Stijn
282567a58d
Bump Golang 1.13.10
go1.13.10 (released 2020/04/08) includes fixes to the go command, the runtime,
os/exec, and time packages. See the Go 1.13.10 milestone on the issue tracker
for details:

https://github.com/golang/go/issues?q=milestone%3AGo1.13.10+label%3ACherryPickApproved

full diff: https://github.com/golang/go/compare/go1.13.9...go1.13.10

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7cb13d4d85)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-09 22:00:49 +02:00
Sebastiaan van Stijn
b66813eb45
api docs: fix broken link on GitHub
The pages that were linked to have moved, so changing the
links to point to docs.docker.com instead.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e9348898d3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-09 21:36:10 +02:00
Brian Goff
edbb1d9e95
Merge pull request #40784 from thaJeztah/19.03_update_buildkit
[19.03] vendor buildkit b26cff2413cc6a466f8739262efa13bd126f8fc7
2020-04-09 12:04:55 -07:00
Sebastiaan van Stijn
9d8eccec8e
Fix TEST_FILTER to work for both "integration" and "integration-cli"
The TEST_FILTER variable allows running a single integration or integration-cli
test. However, it failed to work properly for integration-cli tests.

Before:
-----------

    # Filtering "integration" tests works:
    make TEST_FILTER=TestInspectCpusetInConfigPre120 test-integration
    ...
    DONE 1 tests in 18.331s

    # But running a single test in "integration-cli" did not:

    make TEST_FILTER=TestSwarmNetworkCreateIssue27866 test-integration
    ...
    DONE 0 tests in 17.314s

Trying to manually add the `/` prefix didn't work either, because that made the
"grep" fail to find which test-suites to run/skip:

    make TEST_FILTER=/TestSwarmNetworkCreateIssue27866 test-integration
    ---> Making bundle: test-integration (in bundles/test-integration)
    make: *** [test-integration] Error 1

After:
-----------

    make TEST_FILTER=TestInspectCpusetInConfigPre120 test-integration
    ...
    DONE 1 tests in 18.331s

    make TEST_FILTER=TestSwarmNetworkCreateIssue27866 test-integration
    ...
    DONE 12 tests in 26.527s

Note that the `12` tests figure is still a bit misleading, because every _suite_ is
started (which is counted as a test), but no tests are run. This is still
something that could be improved on.

This patch also makes a small modification to the code that's setting
`integration_api_dirs`, and no longer runs `go list` if not needed.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e7805653b8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-09 13:46:15 +02:00
Akihiro Suda
4275aec641
Merge pull request #40592 from thaJeztah/19.03_backport_bump_golang_1.13
[19.03 backport] Update Golang 1.13.9
2020-04-09 05:43:14 +09:00
Akihiro Suda
4b040147cf
Merge pull request #40417 from thaJeztah/19.03_backport_test_fixes
[19.03 backport] Testing changes
2020-04-07 09:50:27 +09:00
Sebastiaan van Stijn
08a2fe0d56
[19.03] vendor buildkit b26cff2413cc6a466f8739262efa13bd126f8fc7
full diff: https://github.com/moby/buildkit/compare/v0.6.4...b26cff2413cc6a466f8739262efa13bd126f8fc7

- solver: avoid looping over same keys in loadwithparents

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-06 19:06:25 +02:00
Brian Goff
1e1caccb13
Merge pull request #40780 from thaJeztah/19.03_backport_map_sync
[19.03 backport] builder: fix concurrent map write
2020-04-06 08:56:01 -07:00
Sebastiaan van Stijn
5ba2bf37a8
Bump Golang 1.13.9
go1.13.9 (released 2020/03/19) includes fixes to the go command, tools, the
runtime, the toolchain, and the crypto/cipher package. See the Go 1.13.9
milestone on the issue tracker for details:

https://github.com/golang/go/issues?q=milestone%3AGo1.13.9+label%3ACherryPickApproved

full diff: https://github.com/golang/go/compare/go1.13.8...go1.13.9

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6ee9a1ad29)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-06 15:38:14 +02:00
Sebastiaan van Stijn
f432a04243
Update Golang 1.13.8
full diff: https://github.com/golang/go/compare/go1.13.7...go1.13.8

go1.13.8 (released 2020/02/12) includes fixes to the runtime, the crypto/x509,
and net/http packages. See the Go 1.13.8 milestone on the issue tracker for details.

https://github.com/golang/go/issues?q=milestone%3AGo1.13.8+label%3ACherryPickApproved

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3f7503f98a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-06 15:36:28 +02:00
Sebastiaan van Stijn
81458b3144
Update Golang 1.13.7 (CVE-2020-0601, CVE-2020-7919)
full diff: https://github.com/golang/go/compare/go1.13.6...go1.13.7

go1.13.7 (released 2020/01/28) includes two security fixes. One mitigates
the CVE-2020-0601 certificate verification bypass on Windows. The other affects
only 32-bit architectures.

https://github.com/golang/go/issues?q=milestone%3AGo1.13.7+label%3ACherryPickApproved

- X.509 certificate validation bypass on Windows 10
  A Windows vulnerability allows attackers to spoof valid certificate chains when
  the system root store is in use. These releases include a mitigation for Go
  applications, but it’s strongly recommended that affected users install the
  Windows security update to protect their system.
  This issue is CVE-2020-0601 and Go issue golang.org/issue/36834.
- Panic in crypto/x509 certificate parsing and golang.org/x/crypto/cryptobyte
  On 32-bit architectures, a malformed input to crypto/x509 or the ASN.1 parsing
  functions of golang.org/x/crypto/cryptobyte can lead to a panic.
  The malformed certificate can be delivered via a crypto/tls connection to a
  client, or to a server that accepts client certificates. net/http clients can
  be made to crash by an HTTPS server, while net/http servers that accept client
  certificates will recover the panic and are unaffected.
  Thanks to Project Wycheproof for providing the test cases that led to the
  discovery of this issue. The issue is CVE-2020-7919 and Go issue golang.org/issue/36837.
  This is also fixed in version v0.0.0-20200124225646-8b5121be2f68 of golang.org/x/crypto/cryptobyte.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 878db479be)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-06 15:36:26 +02:00
Sebastiaan van Stijn
6e1d159680
Update Golang 1.13.6
full diff: https://github.com/golang/go/compare/go1.13.5...go1.13.6

go1.13.6 (released 2020/01/09) includes fixes to the runtime and the net/http
package. See the Go 1.13.6 milestone on the issue tracker for details.

https://github.com/golang/go/issues?q=milestone%3AGo1.13.6+label%3ACherryPickApproved

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d68385b861)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-06 15:36:24 +02:00
Sebastiaan van Stijn
4241093b63
Update Golang 1.13.5
go1.13.5 (released 2019/12/04) includes fixes to the go command, the runtime, the
linker, and the net/http package. See the Go 1.13.5 milestone on our issue tracker
for details:

https://github.com/golang/go/issues?q=milestone%3AGo1.13.5+label%3ACherryPickApproved

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a218e9b7b0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-06 15:36:22 +02:00
Jintao Zhang
162fd8b856
Bump Golang 1.13.4
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit cf86eeaf96)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-06 15:36:21 +02:00
Jintao Zhang
05a1ebd0fd
Bump Golang 1.13.3 (CVE-2019-17596)
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit 635584280b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-06 15:36:19 +02:00
Sebastiaan van Stijn
f8d4276a89
bump golang 1.13.1 (CVE-2019-16276)
full diff: https://github.com/golang/go/compare/go1.13...go1.13.1

```
Hi gophers,

We have just released Go 1.13.1 and Go 1.12.10 to address a recently reported security issue. We recommend that all affected users update to one of these releases (if you're not sure which, choose Go 1.13.1).

net/http (through net/textproto) used to accept and normalize invalid HTTP/1.1 headers with a space before the colon, in violation of RFC 7230. If a Go server is used behind an uncommon reverse proxy that accepts and forwards but doesn't normalize such invalid headers, the reverse proxy and the server can interpret the headers differently. This can lead to filter bypasses or request smuggling, the latter if requests from separate clients are multiplexed onto the same upstream connection by the proxy. Such invalid headers are now rejected by Go servers, and passed without normalization to Go client applications.

The issue is CVE-2019-16276 and Go issue golang.org/issue/34540.

Thanks to Andrew Stucki, Adam Scarr (99designs.com), and Jan Masarik (masarik.sh) for discovering and reporting this issue.

Downloads are available at https://golang.org/dl for all supported platforms.

Until next time,
Filippo on behalf of the Go team
```

From the patch: 6e6f4aaf70

```
net/textproto: don't normalize headers with spaces before the colon

RFC 7230 is clear about headers with a space before the colon, like

X-Answer : 42

being invalid, but we've been accepting and normalizing them for compatibility
purposes since CL 5690059 in 2012.

On the client side, this is harmless and indeed most browsers behave the same
to this day. On the server side, this becomes a security issue when the
behavior doesn't match that of a reverse proxy sitting in front of the server.

For example, if a WAF accepts them without normalizing them, it might be
possible to bypass its filters, because the Go server would interpret the
header differently. Worse, if the reverse proxy coalesces requests onto a
single HTTP/1.1 connection to a Go server, the understanding of the request
boundaries can get out of sync between them, allowing an attacker to tack an
arbitrary method and path onto a request by other clients, including
authentication headers unknown to the attacker.

This was recently presented at multiple security conferences:
https://portswigger.net/blog/http-desync-attacks-request-smuggling-reborn

net/http servers already reject header keys with invalid characters.
Simply stop normalizing extra spaces in net/textproto, let it return them
unchanged like it does for other invalid headers, and let net/http enforce
RFC 7230, which is HTTP specific. This loses us normalization on the client
side, but there's no right answer on the client side anyway, and hiding the
issue sounds worse than letting the application decide.
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8eb23cde95)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-06 15:36:17 +02:00
Sebastiaan van Stijn
7df2d881f3
Bump Golang version 1.13.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 38e4ae3bca)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-06 15:36:15 +02:00
Jintao Zhang
fed832e224
Update to using alpine 3.10
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit 330bf32971)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-06 15:36:12 +02:00
Stefan Scherer
4581499848
Switch to s390x Ubuntu 18.04
Signed-off-by: Stefan Scherer <stefan.scherer@docker.com>
(cherry picked from commit c239bbbcb2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-06 14:18:50 +02:00
Tonis Tiigi
f34a5b5af0
builder: fix concurrent map write
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 5ad981640f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-06 14:10:23 +02:00
Sebastiaan van Stijn
0df114a8f8
Merge pull request #40779 from thaJeztah/19.03_backport_unify_apis
[19.03 backport] docs: add API versions v1.30 - v1.37
2020-04-06 12:02:08 +02:00
Sebastiaan van Stijn
9f5a5da4cb
docs: add API versions v1.30 - v1.37
Adding separate documents for older API versions, so that these don't have to
be collected from each tag/release branch. For each version of the API, I picked
the highest release that uses the API (to make sure to include possible fixes
in the swagger);

    git mv api/swagger.yaml api/swagger-current.yaml

    git checkout v18.05.0-ce -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.37.yaml

    git checkout v18.02.0-ce -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.36.yaml

    git checkout v18.01.0-ce -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.35.yaml

    git checkout v17.11.0-ce -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.34.yaml

    git checkout v17.10.0-ce -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.33.yaml

    git checkout v17.09.1-ce -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.32.yaml

    git checkout v17.07.0-ce -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.31.yaml

    git checkout v17.06.2-ce -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.30.yaml

    git mv api/swagger-current.yaml api/swagger.yaml

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2b8ae08571)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-06 10:17:54 +02:00
Akihiro Suda
037d5a9e9a
Merge pull request #40769 from thaJeztah/19.03_backport_seccomp_time64
[19.03 backport] seccomp: add 64-bit time_t syscalls
2020-04-03 14:12:31 +09:00
Sebastiaan van Stijn
5ed8f9a203
Merge pull request #40681 from thaJeztah/19.03_backport_schema2v1_dep_notice_on_error_only
[19.03 backport] Move schema1 deprecation notice
2020-04-03 01:22:50 +02:00
Sebastiaan van Stijn
284bbde996
seccomp: add 64-bit time_t syscalls
Relates to https://patchwork.kernel.org/patch/10756415/

Added to whitelist:

- `clock_getres_time64` (equivalent of `clock_getres`, which was whitelisted)
- `clock_gettime64` (equivalent of `clock_gettime`, which was whitelisted)
- `clock_nanosleep_time64` (equivalent of `clock_nanosleep`, which was whitelisted)
- `futex_time64` (equivalent of `futex`, which was whitelisted)
- `io_pgetevents_time64` (equivalent of `io_pgetevents`, which was whitelisted)
- `mq_timedreceive_time64` (equivalent of `mq_timedreceive`, which was whitelisted)
- `mq_timedsend_time64` (equivalent of `mq_timedsend`, which was whitelisted)
- `ppoll_time64` (equivalent of `ppoll`, which was whitelisted)
- `pselect6_time64` (equivalent of `pselect6`, which was whitelisted)
- `recvmmsg_time64` (equivalent of `recvmmsg`, which was whitelisted)
- `rt_sigtimedwait_time64` (equivalent of `rt_sigtimedwait`, which was whitelisted)
- `sched_rr_get_interval_time64` (equivalent of `sched_rr_get_interval`, which was whitelisted)
- `semtimedop_time64` (equivalent of `semtimedop`, which was whitelisted)
- `timer_gettime64` (equivalent of `timer_gettime`, which was whitelisted)
- `timer_settime64` (equivalent of `timer_settime`, which was whitelisted)
- `timerfd_gettime64` (equivalent of `timerfd_gettime`, which was whitelisted)
- `timerfd_settime64` (equivalent of `timerfd_settime`, which was whitelisted)
- `utimensat_time64` (equivalent of `utimensat`, which was whitelisted)

Not added to whitelist:

- `clock_adjtime64` (equivalent of `clock_adjtime`, which was not whitelisted)
- `clock_settime64` (equivalent of `clock_settime`, which was not whitelisted)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 89fabf0f24)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-04-03 00:50:14 +02:00
Tibor Vass
43b0a73273
Merge pull request #40758 from thaJeztah/19.03_backport_arm_matching
[19.03] vendor: containerd 481103c8793316c118d9f795cde18060847c370e
2020-04-02 15:30:24 -07:00
Sebastiaan van Stijn
89f296a534 Merge pull request #40562 from thaJeztah/19.03_backport_39353_subgid_subuid
[19.03] backport Fix docker crash when creating namespaces with UID in /etc/subuid and /etc/subgid
2020-04-02 22:14:34 +02:00
Sebastiaan van Stijn
d12b6d24d1
Merge pull request #40628 from cpuguy83/19.03_backport_39360_swarm_log_fill_rate
[19.03] Fix rate limiting for logger, increase refill rate
2020-04-02 20:40:29 +02:00
Sebastiaan van Stijn
359edd8cbf
[19.03] vendor: containerd 481103c8793316c118d9f795cde18060847c370e
full diff: 7c1e88399e...481103c879

- Fix error handling for task deletion
- Fix fd leak of shim log
- Fix killall when use pidnamespace
- Improve ARM platform matching

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-30 19:36:07 +02:00
Akihiro Suda
1454987253
Merge pull request #40617 from SamWhited/19.03
[19.03 backport] Update DNS library
2020-03-28 02:02:45 +09:00
Sam Whited
021258661b Update libnetwork and DNS library
This makes sure that we don't become vulnerable to CVE-2018-17419 or
CVE-2019-19794 in the future. While we are not currently vulnerable to
either, there is a risk that a PR could be made which uses one of the
vulnerable methods in the future, so it's worth going ahead and updating
to ensure that a simple PR that would easily pass code review doesn't
lead to a vulnerability.

Signed-off-by: Sam Whited <sam@samwhited.com>
2020-03-27 09:53:11 -04:00
Akihiro Suda
1db5199ddc
Merge pull request #40564 from thaJeztah/19.03_backport_apparmor_fixes
[19.03 backport] AppArmor fixes
2020-03-18 16:31:00 +09:00
Akihiro Suda
6ed0f6ab78
Merge pull request #40652 from thaJeztah/19.03_backport_fix_backingfs
[19.03 backport] fix backingFs assignment
2020-03-13 04:42:15 +09:00
Brian Goff
100d240d86
Move schema1 deprecation notice
Currently we show this deprecation notice for any error returned by a
registry.
Registries can return an error for any number of reasons.
Instead let's show the deprecation notice only if the fallback was
successful.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 6859bc7eee)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 20:30:57 +01:00
Brian Goff
4a4b3ed37f
Merge pull request #40558 from thaJeztah/19.03_backport_buster_or_bust
[19.03 backport] various dockerfile changes and update to buster variant
2020-03-12 12:22:39 -07:00
Sebastiaan van Stijn
57d5105759
bump windows-container-utility aa1ba87e99b68e0113bd27ec26c60b88f9d4ccd9
full diff: e004a1415a...aa1ba87e99

changes:

- Use standard include paths instead of hard-coding

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5125f8b304)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:49:21 +01:00
Sebastiaan van Stijn
68db0c1739
Dockerfile: switch to iptables-legacy to match the host
CI runs on Ubuntu 16.04 machines, which use iptables (legacy), but
Debian buster uses nftables. Because of this, DNS resolution does not
work if the daemon configures iptables.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit bb0472bd23)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:49:19 +01:00
Sebastiaan van Stijn
4aaf3ead97
Dockerfile: switch golang image to "buster" variant, and update btrfs packages
The btrfs-tools was a transitional package, and no longer exists:

> Package btrfs-tools
> stretch (oldstable) (admin): transitional dummy package
> 4.7.3-1: amd64 arm64 armel armhf i386 mips mips64el mipsel ppc64el s390x

It must be replaced either by `btrfs-progs` or `libbtrfs-dev` (which has just the development headers)

> Package: libbtrfs-dev (4.20.1-2)
> Checksumming Copy on Write Filesystem utilities (development headers)

Note that the `libbtrfs-dev` package is not available on Debian stretch
(only in stretch-backports)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4e3ab9e9fb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:49:17 +01:00
Sebastiaan van Stijn
a070874828
hack/make: ignore failure to stop apparmor
```
 ---> Making bundle: .integration-daemon-stop (in bundles/test-integration)
 ++++ cat bundles/test-integration/docker.pid
 +++ kill 13137
 +++ /etc/init.d/apparmor stop
 Leaving: AppArmorNo profiles have been unloaded.

 Unloading profiles will leave already running processes permanently
 unconfined, which can lead to unexpected situations.

 To set a process to complain mode, use the command line tool
 'aa-complain'. To really tear down all profiles, run 'aa-teardown'."

script returned exit code 255
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5dbfae6949)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:49:15 +01:00
Sebastiaan van Stijn
237843a059
Dockerfile: align consecutive COPY lines
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 93edf327dc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:49:13 +01:00
Sebastiaan van Stijn
400b2850ff
Dockerfile: order COPY lines by change frequency
Ordering the COPY lines to optimize for layer sharing
when these dependencies are updated.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8edbe5dec2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:49:11 +01:00
Sebastiaan van Stijn
ddfeaf32ff
Dockerfile: sort packages alphabetically
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ee0ef6c535)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:49:09 +01:00
Sebastiaan van Stijn
cb813faebf
Dockerfile: use build-arg for vpnkit
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1cfcce5e21)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:49:07 +01:00
Sebastiaan van Stijn
0499db23d1
Dockerfile: use spaces for indentation
Indenting with tabs can cause the formatting to go wonky,
because the first line of any command is "indented" with spaces,
but following lines are not, therefore they can be mis-aligned with
the first line.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a42b4144bc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:49:05 +01:00
Sebastiaan van Stijn
c77e7cb3d0
[19.03] Dockerfile: move CRIU_VERSION lower
Match the position with where it's on master after the
Dockerfile buildkit refactor.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:49:03 +01:00
Sebastiaan van Stijn
c6511ee4db
bump vndr v0.1.1
full diff: https://github.com/LK4D4/vndr/compare/v0.1.0...v0.1.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 486161a63a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:49:01 +01:00
Sebastiaan van Stijn
0fa8a0c575
bump vndr v0.1.0 to support versioned import paths
With this change, go packages/modules that use versioned
import paths (github.com/foo/bar/v2), but don't use a directory
in the repository, can now be supported.

For example:

```
github.com/coreos/go-systemd/v22 v22.0.0
```

will vendor the github.com/coreos/go-systemd repository
into `vendor/github.com/coreos/go-systemd/v22`.

full diff: f5ab8fc5fb...v0.1.0

- LK4D4/vndr#83 migrate bitbucket to api 2.0
    - fixes LK4D4/vndr#82 https://api.bitbucket.org/1.0/repositories/ww/goautoneg: 410 Gone
- LK4D4/vndr#86 Replace sort.Sort with sort.Strings
- LK4D4/vndr#87 support `github.com/coreos/go-systemd/v22`

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d4f05c168d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:48:59 +01:00
Justen Martin
f3009e2f51
Use build args to override binary commits in dockerfile
Signed-off-by: Justen Martin <jmart@the-coder.com>
(cherry picked from commit 095ca77f48)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:48:57 +01:00
Sebastiaan van Stijn
92ca652fc9
Revert "dockerfile: update vndr to 85886e1a"
This reverts commit 0d4f412ecd.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:48:50 +01:00
Kir Kolyshkin
fdad16840c
go-swagger: fix panic
This is an attempt to fix go-swagger panic under Golang 1.13.

Details:
 * https://github.com/go-openapi/jsonpointer/pull/4
 * https://github.com/go-swagger/go-swagger/pull/2059

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 93f9b902af)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:48:44 +01:00
Sebastiaan van Stijn
075e057de5
Dockerfile: set GO111MODULE=off
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 961119db21)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-12 18:48:42 +01:00
Tonis Tiigi
aa6a9891b0 vendor: add local copy of archive/tar
This version avoids doing name lookups when creating a tarball; such
lookups should be avoided so that glibc shared libraries are not loaded.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-03-09 21:45:05 +00:00
Tonis Tiigi
0d4f412ecd dockerfile: update vndr to 85886e1a
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-03-09 21:45:05 +00:00
Jintao Zhang
fe2a25a785
fix backingFs assignment
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit 18c22f5bc1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-09 18:40:20 +01:00
Ethan Mosbaugh
e6c9e2736f Fix rate limiting for logger, increase refill rate
Signed-off-by: Ethan Mosbaugh <ethan@replicated.com>
(cherry picked from commit 50c6a5fb07)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2020-03-05 09:05:04 -08:00
Akihiro Suda
71373c6105
Merge pull request #40604 from thaJeztah/19.03_backport_mis_unlock
[19.03 backport] daemon/cluster: add a missing Unlock
2020-02-29 10:37:35 +09:00
Sebastiaan van Stijn
498fbecafd
Merge pull request #40476 from cpuguy83/19.03_fix_exec_id_client
[19.03] Exec inspect field should be "ID" not "ExecID"
2020-02-28 22:23:26 +01:00
Brian Goff
5101ce52ae
Merge pull request #40461 from AkihiroSuda/cherrypick-40243-1903
[19.03 backport] Use certs.d from XDG_CONFIG_HOME when in rootless mode (fixes #40236)
2020-02-28 11:17:39 -08:00
Ziheng Liu
1e3971d556
daemon/cluster: add a missing Unlock
Signed-off-by: Ziheng Liu <lzhfromustc@gmail.com>
(cherry picked from commit 83c0bedba9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-28 19:10:14 +01:00
Sebastiaan van Stijn
bb19f8cc90
Merge pull request #40566 from thaJeztah/19.03_backport_bump_grpc
[19.03 backport] bump google.golang.org/grpc v1.23.1
2020-02-28 18:17:14 +01:00
Sebastiaan van Stijn
a18dd2e48e
Merge pull request #40586 from thaJeztah/19.03_revert_jenkinsfile_pin_older_windows
[19.03] Revert "Jenkinsfile: temporarily pin windows image to 10.0.17763.973"
2020-02-26 17:45:58 +01:00
Sebastiaan van Stijn
eb7bd90a57
Revert "Jenkinsfile: temporarily pin windows image to 10.0.17763.973"
This reverts commit c694d60364.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-26 16:27:45 +01:00
Dmitry Sharshakov
a2d887b6f5 Use certs.d from XDG_CONFIG_HOME when in rootless mode

Signed-off-by: Dmitry Sharshakov <d3dx12.xx@gmail.com>
(cherry picked from commit f4fa98f583)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-02-26 16:50:59 +09:00
Sebastiaan van Stijn
0594484041
Merge pull request #40575 from thaJeztah/19.03_backport_unify_apis
[19.03 backport] docs: add API versions v1.25 - v1.29, v1.38 - v1.40
2020-02-25 12:40:36 +01:00
Sebastiaan van Stijn
cb5a2beaff
docs: add API versions v1.25 - v1.29, v1.38 - v1.40
Adding separate documents for older API versions, so that these don't have to
be collected from each tag/release branch:

- v1.40 - docker v19.03
- v1.39 - docker v18.09
- v1.38 - docker v18.06
- v1.29 - docker v17.05
- v1.28 - docker v17.04
- v1.27 - docker v17.03
- v1.26 - docker v1.13.1
- v1.25 - docker v1.13.0

Note that:

- API versions v1.30 - v1.37 are yet to be added after the tags and release-
  branches from the docker/docker-ce mono-repo have been extracted.
- docker v1.13.0 made the switch from using a markdown file to using swagger
  to document the API.

Approach taken:

    git mv api/swagger.yaml api/swagger-current.yaml

    git checkout upstream/19.03 -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.40.yaml

    git checkout v18.09.9 -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.39.yaml

    git checkout v18.06.3-ce -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.38.yaml

    git checkout v17.05.0-ce -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.29.yaml

    git checkout v17.04.0-ce -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.28.yaml

    git checkout v17.03.2-ce -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.27.yaml

    git checkout v1.13.1 -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.26.yaml

    git checkout v1.13.0 -- api/swagger.yaml
    git mv api/swagger.yaml docs/api/v1.25.yaml

    git mv api/swagger-current.yaml api/swagger.yaml

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6fdbc50084)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-25 10:52:59 +01:00
Akihiro Suda
ad2c5440b5
Merge pull request #40477 from cpuguy83/19.03_40446_check_other_mounts
[19.03] Check tmpfs mounts before create anon volume
2020-02-25 09:41:24 +09:00
Akihiro Suda
a515a320f2
Merge pull request #40547 from thaJeztah/19.03_backport_update_selinux_v1.3.1
[19.03 backport] vendor: update opencontainers/selinux v1.3.1
2020-02-25 09:40:40 +09:00
Akihiro Suda
56399cdacf
Merge pull request #40560 from thaJeztah/19.03_backport_33434_api_doc_base64url
[19.03 backport] Update API docs to specify using base64url
2020-02-25 09:40:17 +09:00
Sebastiaan van Stijn
5e6469c088
Merge pull request #40557 from thaJeztah/19.03_bump_buildkit_v0.6.4
[19.03] vendor: update buildkit v0.6.4
2020-02-24 18:00:53 +01:00
Brian Goff
679115602f
Merge pull request #40555 from fuweid/cp1903-40137
[19.03 backport] daemon: add grpc.WithBlock option
2020-02-22 07:26:04 -08:00
Sebastiaan van Stijn
ce1b8c8c93
bump google.golang.org/grpc v1.23.1
full diff: https://github.com/grpc/grpc-go/compare/v1.23.0...v1.23.1

- grpc/grpc-go#3018 server: set and advertise max frame size of 16KB
- grpc/grpc-go#3017 grpclb: fix deadlock in grpclb connection cache
    - Before the fix, if the timer to remove a SubConn fires at the
      same time NewSubConn cancels the timer, it caused a mutex leak
      and deadlock.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 39ad39d220)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-22 16:11:05 +01:00
Arko Dasgupta
911ecc3376
Set the bip network value as the subnet
Don't assign the --bip value directly to the subnet
for the default bridge. Instead, use the network value
from the ParseCIDR output.

Addresses: https://github.com/moby/moby/issues/40392

Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
(cherry picked from commit f800d5f786)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-22 16:04:43 +01:00
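For illustration, a minimal sketch of the ParseCIDR distinction mentioned above (the example --bip value is an assumption): the raw flag value is a host address, while the network returned by net.ParseCIDR is the canonical subnet.

```
package main

import (
	"fmt"
	"net"
)

func main() {
	// Example --bip value: a host address inside the desired bridge network.
	bip := "172.20.0.5/16"

	ip, ipnet, err := net.ParseCIDR(bip)
	if err != nil {
		panic(err)
	}

	// Using the raw --bip value as the subnet would record "172.20.0.5/16";
	// the network returned by ParseCIDR is the canonical "172.20.0.0/16".
	fmt.Println("bridge IP:", ip)             // 172.20.0.5
	fmt.Println("subnet:   ", ipnet.String()) // 172.20.0.0/16
}
```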
Sebastiaan van Stijn
08420b1c95
AppArmor: add missing rules for running in userns
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 404d87ec69)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-22 15:59:47 +01:00
Sebastiaan van Stijn
fbb08f525f
AppArmor: remove rules for linkgraph.db SQLite database
Commit 0f9f99500c removed the
use of SQLite for managing container links, and commit
f8119bb7a7 removed the migration
tool and the SQLite dependency.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e553a03627)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-22 15:59:35 +01:00
Brian Goff
1a830501b7
Use FILE_SHARE_DELETE for log files on Windows.
This fixes issues where one goroutine tries to delete or rename a file
while another goroutine has the file open (e.g. a log reader).

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit a5f237c2b5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-22 15:55:03 +01:00
Yong Tang
dcae74c44a
Fix docker crash when creating namespaces with UID in /etc/subuid and /etc/subgid
This fix addresses the issue raised in 39353, where docker crashes when
creating namespaces with a UID in /etc/subuid and /etc/subgid.

The issue was that docker's mapping against `/etc/sub[u,g]id` did not
allow numeric IDs.

The fix probes other combinations (uid:groupname, username:gid, uid:gid)
when the normal username:groupname lookup fails.

This fixes 39353.

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
(cherry picked from commit f09dc2f4fc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-22 15:46:55 +01:00
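For illustration, a minimal sketch of the probing order described above; the helper name and signature are hypothetical, not the daemon's actual code.

```
package main

import (
	"fmt"
	"strconv"
)

// candidateSubIDKeys lists the identity strings to try, in order, when looking
// for matching entries in /etc/subuid and /etc/subgid (hypothetical helper).
func candidateSubIDKeys(username, groupname string, uid, gid int) [][2]string {
	return [][2]string{
		{username, groupname},                  // username:groupname (normal case)
		{strconv.Itoa(uid), groupname},         // uid:groupname
		{username, strconv.Itoa(gid)},          // username:gid
		{strconv.Itoa(uid), strconv.Itoa(gid)}, // uid:gid
	}
}

func main() {
	for _, pair := range candidateSubIDKeys("alice", "alice", 1000, 1000) {
		fmt.Printf("try subuid entry %q, subgid entry %q\n", pair[0], pair[1])
	}
}
```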
Mike Bush
0349167554
Fixes #33434 - API docs to specify using base64url
Specify base64url rather than base64. Also correct other links to the base64url section of RFC4648

Signed-off-by: Mike Bush <mpbush@gmail.com>
(cherry picked from commit f282dde877)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-22 15:29:42 +01:00
Sebastiaan van Stijn
b47f177f20
vendor: update buildkit v0.6.4
full diff: 57e8ad5217...v0.6.4

- buildkit#1374 [v0.6] ops: fix deadlock on releasing shared mounts
    - backport of buildkit#1355 ops: fix deadlock on releasing shared mounts
    - fixes buildkit#1322 Deadlock on cache mounts

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-22 12:28:37 +01:00
Wei Fu
9ed0504592 daemon: add grpc.WithBlock option
WithBlock makes sure that the following containerd requests are reliable.

In one edge case with high load pressure, the kernel kills dockerd, containerd
and the containerd-shims because of OOM. Both dockerd and containerd then
restart, but containerd takes some time to recover all the existing containers.
Until containerd is serving again, dockerd's requests fail with a gRPC error.
Worse, the restore action ignores any non-NotFound errors and reports a running
state for containers that have already stopped, which is unexpected behavior,
and dockerd has to be restarted to get back into a consistent state.

That is painful. Adding WithBlock prevents this edge case, and in the common
case containerd is serving again shortly, so there is no harm in adding
WithBlock to the containerd connection.

Signed-off-by: Wei Fu <fuweid89@gmail.com>
(cherry picked from commit 9f73396dab)
Signed-off-by: Wei Fu <fuweid89@gmail.com>
2020-02-22 14:28:28 +08:00
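For illustration, a minimal sketch of the WithBlock pattern described above (not the daemon's actual wiring; the socket path, timeout, and dialer are assumptions): DialContext only returns once the connection is up or the context expires, instead of letting the first RPC fail while containerd is still recovering.

```
package main

import (
	"context"
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
)

func main() {
	// Assumed containerd socket path; the real value comes from configuration.
	const containerdSock = "/run/containerd/containerd.sock"

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	dialer := func(ctx context.Context, addr string) (net.Conn, error) {
		return (&net.Dialer{}).DialContext(ctx, "unix", addr)
	}

	// WithBlock makes DialContext wait until the connection is actually up
	// (or the context expires) instead of returning immediately and letting
	// the first request fail while containerd is still restoring containers.
	conn, err := grpc.DialContext(ctx, containerdSock,
		grpc.WithInsecure(),
		grpc.WithBlock(),
		grpc.WithContextDialer(dialer),
	)
	if err != nil {
		log.Fatalf("containerd not ready: %v", err)
	}
	defer conn.Close()
	log.Println("connected to containerd")
}
```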
Sebastiaan van Stijn
1a7d601a15
Merge pull request #40549 from cpuguy83/19.03_stats_use_cond_var
[19.03 backport] Use condition variable to wake stats collector.
2020-02-22 02:29:23 +01:00
Tibor Vass
eee88a2a23
Merge pull request #40551 from thaJeztah/19.03_backport_jenkinsfile_pin_older_windows
[19.03 backport] Jenkinsfile: temporarily pin windows image to 10.0.17763.973
2020-02-21 15:45:04 -08:00
Sebastiaan van Stijn
c694d60364
Jenkinsfile: temporarily pin windows image to 10.0.17763.973
The latest `ltsc2019` image (`10.0.17763.1039`) appears to be broken,
and even a `RUN Write-Host hello` hangs.

Temporarily switching back to an older version so that CI doesn't fail.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit fa2417984b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-21 09:25:09 +01:00
Brian Goff
0901d4ab31 Use condition variable to wake stats collector.
Before this change, the collection goroutine woke up every 1 second (as configured).
This sleep interval exists so that we don't end up in a tight loop when there
are no stats to collect.

Instead use a condition variable to signal that a collection is needed.
This prevents us from waking the goroutine needlessly when there is no
one looking for stats.

For now I've kept the sleep just moved it to the end of the loop, which
gives some space between collections.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit e75e6b0e31)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2020-02-20 11:38:16 -08:00
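For illustration, a minimal sketch of the condition-variable pattern described above (types and names are assumptions, not the real collector): the loop blocks on a sync.Cond while nobody wants stats, and only sleeps between collections once it has work.

```
package main

import (
	"fmt"
	"sync"
	"time"
)

type collector struct {
	mu      sync.Mutex
	cond    *sync.Cond
	waiting int // number of subscribers currently wanting stats
}

func newCollector() *collector {
	c := &collector{}
	c.cond = sync.NewCond(&c.mu)
	return c
}

// subscribe signals the collection loop that someone wants stats.
func (c *collector) subscribe() {
	c.mu.Lock()
	c.waiting++
	c.mu.Unlock()
	c.cond.Signal()
}

// run is the collection loop: it blocks on the condition variable while nobody
// is interested, instead of waking up on a fixed 1-second tick.
func (c *collector) run(interval time.Duration) {
	for {
		c.mu.Lock()
		for c.waiting == 0 {
			c.cond.Wait()
		}
		c.mu.Unlock()

		fmt.Println("collecting stats...")
		time.Sleep(interval) // keep some space between collections
	}
}

func main() {
	c := newCollector()
	go c.run(time.Second)
	c.subscribe()
	time.Sleep(2 * time.Second)
}
```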
Sebastiaan van Stijn
75fa1145da
Merge pull request #40490 from thaJeztah/19.03_backport_swagger_document_constraints
[19.03 backport] swagger: document "node.platform.(arch|os)" constraints
2020-02-20 20:31:51 +01:00
Brian Goff
d1cf6d1303
Merge pull request #40540 from thaJeztah/19.03_update_containerd_1.2.13
[19.03] update containerd runtime v1.2.13
2020-02-20 11:18:42 -08:00
Brian Goff
e145add0ef
Merge pull request #40533 from thaJeztah/19.03_update_golang_1.12.17
[19.03] Update Golang 1.12.17
2020-02-20 11:18:02 -08:00
Sebastiaan van Stijn
2b130c28ca
vendor: update opencontainers/selinux v1.3.1
full diff: 5215b1806f...v1.3.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 12c7541f1f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-20 15:22:08 +01:00
Sebastiaan van Stijn
c6afabf3b3
update containerd runtime v1.2.13
The thirteenth patch release for `containerd` 1.2 fixes a regression introduced
in v1.2.12 that caused container/shim to hang on single core machines, fixes an
issue with blkio, and updates the Golang runtime to 1.12.17.

* Fix container pid race condition
* Update containerd/cgroups dependency to address blkio issue
* Set octet-stream content-type on PUT request
* Pin to libseccomp 2.3.3 to preserve compatibility with hosts that do not have libseccomp 2.4 or higher installed
* Update Golang runtime to 1.12.17, which includes a fix to the runtime

full diff: https://github.com/containerd/containerd/compare/v1.2.12...v1.2.13

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-18 21:33:19 +01:00
Akihiro Suda
4ac62b478d
Merge pull request #40462 from AkihiroSuda/cherrypick-40210-1903
[19.03 backport] overlay[2]: rm extra checks in init
2020-02-18 18:04:05 +09:00
Sebastiaan van Stijn
55af290462
Update Golang 1.12.17
full diff: https://github.com/golang/go/compare/go1.12.16...go1.12.17

go1.12.17 (released 2020/02/12) includes a fix to the runtime. See the Go 1.12.17
milestone on the issue tracker for details:

https://github.com/golang/go/issues?q=milestone%3AGo1.12.17+label%3ACherryPickApproved

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-17 11:55:15 +01:00
Brian Goff
1b8e9a131c Exec inspect field should be "ID" not "ExecID"
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit cc993a9cbf)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2020-02-10 14:09:26 -08:00
Sebastiaan van Stijn
5e23653130
swagger: document "node.platform.(arch|os)" constraints
Support for these constraints was added in docker 1.13.0
(API v1.25), but never documented.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ed439e4a31)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-10 17:04:42 +01:00
Akihiro Suda
130ae89dab
Merge pull request #40460 from AkihiroSuda/cherrypick-40406-1903
[19.03 backport] dockerd-rootless.sh: remove confusing code comment
2020-02-09 04:23:57 +09:00
Brian Goff
1d8da80dbf Check tmpfs mounts before create anon volume
This makes sure that things like `--tmpfs` mounts over an anonymous
volume don't create volumes unnecessarily.
One method only checks mountpoints, the other checks both mountpoints
and tmpfs... the usage of these should likely be consolidated.

Ideally, processing for `--tmpfs` mounts would get merged in with the
rest of the mount parsing. I opted not to do that for this change so the
fix is minimal and can potentially be backported with fewer changes of
breaking things.
Merging the mount processing for tmpfs can be handled in a followup.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit f464c31668)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2020-02-07 14:11:17 -08:00
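For illustration, a minimal sketch of the check described above; the helper and its arguments are hypothetical, not the daemon's actual API.

```
package main

import "fmt"

// hasMountFor checks whether a user-supplied mount or --tmpfs already covers
// the given destination, so no anonymous volume needs to be created for it
// (hypothetical helper, not the daemon's code).
func hasMountFor(dest string, mountpoints map[string]struct{}, tmpfs map[string]string) bool {
	if _, ok := mountpoints[dest]; ok {
		return true
	}
	_, ok := tmpfs[dest]
	return ok
}

func main() {
	tmpfs := map[string]string{"/data": "size=64m"}
	fmt.Println(hasMountFor("/data", nil, tmpfs)) // true: skip the anonymous volume
}
```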
Kir Kolyshkin
5b6f2e1c59 overlay[2]: rm fs checks
Now that we do check if overlay is working by performing an actual
overlayfs mount, there's no need for extra checks for the kernel version
or the filesystem type. The actual mount check is sufficient.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit e226aea280)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-02-06 13:18:47 +09:00
Kir Kolyshkin
1b0edb155f Fix/improve overlay support check
Before this commit, the overlay check was performed by looking for
`overlay` in /proc/filesystems. This obviously might not work
for rootless Docker (the filesystem is there, but one can't use it as non-root).

This commit changes the check to perform the actual mount, by reusing
the code previously written to check for multiple lower dirs support.

The old check is removed from both drivers, as well as the additional
check for the multiple lower dirs support in overlay2 since it's now
a part of the main check.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 649e4c8889)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-02-06 13:18:41 +09:00
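For illustration, a minimal sketch of the "check by actually mounting" approach described above (a standalone example, not the driver code); it assumes Linux, golang.org/x/sys/unix, and enough privileges to mount.

```
//go:build linux

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"golang.org/x/sys/unix"
)

// supportsOverlay attempts a real overlay mount in a scratch directory instead
// of grepping /proc/filesystems, so the check also fails where overlay exists
// but cannot actually be used (e.g. rootless without privileges).
func supportsOverlay(testdir string) bool {
	for _, d := range []string{"lower", "upper", "work", "merged"} {
		if err := os.MkdirAll(filepath.Join(testdir, d), 0o700); err != nil {
			return false
		}
	}
	opts := fmt.Sprintf("lowerdir=%s,upperdir=%s,workdir=%s",
		filepath.Join(testdir, "lower"),
		filepath.Join(testdir, "upper"),
		filepath.Join(testdir, "work"))
	merged := filepath.Join(testdir, "merged")
	if err := unix.Mount("overlay", merged, "overlay", 0, opts); err != nil {
		return false
	}
	_ = unix.Unmount(merged, 0)
	return true
}

func main() {
	tmp, _ := os.MkdirTemp("", "overlay-check")
	defer os.RemoveAll(tmp)
	fmt.Println("overlay supported:", supportsOverlay(tmp))
}
```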
Kir Kolyshkin
5571ceb5ac overlay: move supportsMultipleLowerDir to utils
This moves supportsMultipleLowerDir() to overlayutils
so it can be used from both overlay and overlay2.

The only changes made were:
 * replace logger with logrus
 * don't use the workDirName and mergedDirName constants
 * add mnt var to improve readability a bit

This is a preparation for the next commit.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit d5687079ad)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-02-06 13:18:37 +09:00
Danny Milosavljevic
5e4574526d Use fewer modprobes
Signed-off-by: Danny Milosavljevic <dannym@scratchpost.org>
(cherry picked from commit 074eca1d79)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-02-06 13:18:23 +09:00
Akihiro Suda
9338d0a6b5 dockerd-rootless.sh: remove confusing code comment
`--userland-proxy-path` is automatically set by dockerd: e6c1820ef5/cmd/dockerd/config_unix.go (L46)

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 9bd1ae024a)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-02-06 13:08:08 +09:00
Brian Goff
5f6d6f3f22
Merge pull request #40453 from thaJeztah/19.03_bump_containerd
[19.03] update containerd 1.12.12, runc v1.0.0-rc10
2020-02-04 14:05:54 -08:00
Akihiro Suda
d3dab1f618
update runc library to v1.0.0-rc10 (CVE-2019-19921)
Notable changes:
* Fix CVE-2019-19921 (Volume mount race condition with shared mounts): https://github.com/opencontainers/runc/pull/2207
* Fix exec FIFO race: https://github.com/opencontainers/runc/pull/2185
* Basic support for cgroup v2.  Almost feature-complete, but still missing support for systemd mode in rootless.
  See also https://github.com/opencontainers/runc/issues/2209 for the known issues.

Full changes: https://github.com/opencontainers/runc/compare/v1.0.0-rc9...v1.0.0-rc10

Also updates go-selinux: 3a1f366feb...5215b1806f
(See https://github.com/containerd/cri/pull/1383#issuecomment-578227009)

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 6d68080907)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-04 18:41:02 +01:00
Akihiro Suda
3bd1759f80
update runc binary to v1.0.0-rc10 (CVE-2019-19921)
Notable changes:
* Fix CVE-2019-19921 (Volume mount race condition with shared mounts): https://github.com/opencontainers/runc/pull/2207
* Fix exec FIFO race: https://github.com/opencontainers/runc/pull/2185
* Basic support for cgroup v2.  Almost feature-complete, but still missing support for systemd mode in rootless.
  See also https://github.com/opencontainers/runc/issues/2209 for the known issues.

Full changes: https://github.com/opencontainers/runc/compare/v1.0.0-rc9...v1.0.0-rc10

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit cd43c1d1ac)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-04 18:41:00 +01:00
Sebastiaan van Stijn
f8cfa7947c
[19.03] Update containerd binary to v1.2.12
full diff: https://github.com/containerd/containerd/compare/v1.2.11...v1.2.12

Welcome to the v1.2.12 release of containerd!

The twelfth patch release for containerd 1.2 includes an updated runc with
a fix for CVE-2019-19921, an updated version of the opencontainers/selinux
dependency, which includes a fix for CVE-2019-16884, an updated version of the
gopkg.in/yaml.v2 dependency to address CVE-2019-11253, and a Golang update.

Notable Updates

- Update the runc vendor to v1.0.0-rc10 which includes a mitigation for CVE-2019-19921.
- Update the opencontainers/selinux which includes a mitigation for CVE-2019-16884.
- Update Golang runtime to 1.12.16, mitigating the CVE-2020-0601 certificate verification
  bypass on Windows, and CVE-2020-7919, which only affects 32-bit architectures.
- Update Golang runtime to 1.12.15, which includes a fix to the runtime (Go 1.12.14,
  Go 1.12.15) and the net/http package (Go 1.12.15)
- A fix to prevent SIGSEGV when starting containerd-shim containerd/containerd#3960
- Fixes to exec containerd/containerd#3755
    - Prevent docker exec hanging if an earlier docker exec left a zombie process
    - Prevent High system load/CPU utilization with liveness and readiness probes
    - Prevent Docker healthcheck causing high CPU utilization

CRI fixes:

- Update the gopkg.in/yaml.v2 vendor to v2.2.8 with a mitigation for CVE-2019-11253

API

- Fix API filters to properly handle and return parse errors containerd/containerd#3950

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-04 18:40:40 +01:00
Brian Goff
11665130f9 Merge pull request #40440 from tonistiigi/1903-update-buildkit
[19.03] vendor: update buildkit to ce88aa518
2020-02-04 17:15:20 +00:00
Brian Goff
3ba45cef16 Merge pull request #40432 from thaJeztah/19.03_bump_swarmkit
[19.03] vendor: bump swarmkit 062b694b46c0744d601eebef79f3f7433d808a04
2020-02-04 17:15:19 +00:00
Tonis Tiigi
a836daf6c5 vendor: update buildkit to 57e8ad5
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-02-04 17:15:19 +00:00
Brian Goff
e686f468f7 Merge pull request #40433 from thaJeztah/19.03_bump_golang_1.12.16
[19.03] Update Golang 1.12.16, golang.org/x/crypto (CVE-2020-0601, CVE-2020-7919)
2020-02-04 17:15:19 +00:00
Sebastiaan van Stijn
0dd0af939f [19.03] vendor: bump swarmkit 062b694b46c0744d601eebef79f3f7433d808a04
full diff: f35d9100f2...062b694b46

changes:

- docker/swarmkit#2927 [19.03 backport] Fix leaking subscription contexts
    - backport of docker/swarmkit#2926 Fix leaking log subscription contexts
    - addresses moby/moby#39916 Dockerd eats too much RAM

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-04 17:15:19 +00:00
Sebastiaan van Stijn
7b575f9813 vendor: update golang.org/x/crypto 69ecbb4d6d5dab05e49161c6e77ea40a030884e1
full diff: 88737f569e...69ecbb4d6d

Includes 69ecbb4d6d
(forward-port of 8b5121be2f),
which fixes CVE-2020-7919:

- Panic in crypto/x509 certificate parsing and golang.org/x/crypto/cryptobyte
  On 32-bit architectures, a malformed input to crypto/x509 or the ASN.1 parsing
  functions of golang.org/x/crypto/cryptobyte can lead to a panic.
  The malformed certificate can be delivered via a crypto/tls connection to a
  client, or to a server that accepts client certificates. net/http clients can
  be made to crash by an HTTPS server, while net/http servers that accept client
  certificates will recover the panic and are unaffected.
  Thanks to Project Wycheproof for providing the test cases that led to the
  discovery of this issue. The issue is CVE-2020-7919 and Go issue golang.org/issue/36837.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b606c8e440)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-04 17:15:19 +00:00
Sebastiaan van Stijn
acca30055a [19.03] Update Golang 1.12.16 (CVE-2020-0601, CVE-2020-7919)
full diff: https://github.com/golang/go/compare/go1.12.15...go1.12.16

go1.12.16 (released 2020/01/28) includes two security fixes. One mitigates the
CVE-2020-0601 certificate verification bypass on Windows. The other affects only
32-bit architectures.

https://github.com/golang/go/issues?q=milestone%3AGo1.12.16+label%3ACherryPickApproved

- X.509 certificate validation bypass on Windows 10
  A Windows vulnerability allows attackers to spoof valid certificate chains when
  the system root store is in use. These releases include a mitigation for Go
  applications, but it’s strongly recommended that affected users install the
  Windows security update to protect their system.
  This issue is CVE-2020-0601 and Go issue golang.org/issue/36834.
- Panic in crypto/x509 certificate parsing and golang.org/x/crypto/cryptobyte
  On 32-bit architectures, a malformed input to crypto/x509 or the ASN.1 parsing
  functions of golang.org/x/crypto/cryptobyte can lead to a panic.
  The malformed certificate can be delivered via a crypto/tls connection to a
  client, or to a server that accepts client certificates. net/http clients can
  be made to crash by an HTTPS server, while net/http servers that accept client
  certificates will recover the panic and are unaffected.
  Thanks to Project Wycheproof for providing the test cases that led to the
  discovery of this issue. The issue is CVE-2020-7919 and Go issue golang.org/issue/36837.
  This is also fixed in version v0.0.0-20200124225646-8b5121be2f68 of golang.org/x/crypto/cryptobyte.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-02-04 17:15:19 +00:00
Brian Goff
4076c57b50
Fix more signal handling issues in tests.
Found these by doing a `grep -R 'using the force'` on a full test run.
There's still a few more which are running against the main test daemon,
so it is difficult to find which test they belong to.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit fcd65ebf49)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-27 11:01:23 +01:00
HuanHuan Ye
68e1150357
DaemonCli: Move check into startMetricsServer
Fix TODO: move into startMetricsServer()
Fix errors.Wrap return nil when passed err is nil

Co-Authored-By: Sebastiaan van Stijn <thaJeztah@users.noreply.github.com>
Signed-off-by: HuanHuan Ye <logindaveye@gmail.com>
(cherry picked from commit 88c554f950)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-27 11:01:20 +01:00
Brian Goff
b813c398bb
Add FromClient to test env execution
While working on other tests I noticed that environment.Execution cannot
be used for anything but the pre-configured daemon; however, this can
come in handy for being able to share daemons across multiple tests that
currently spin up a new daemon.
The execution env also seems to be misused in some of these cases.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 1381956499)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-27 11:01:17 +01:00
Tõnis Tiigi
69098f05cf
Merge pull request #454 from thaJeztah/19.03_backport_lgetxattr_panic
[19.03 backport] Fix possible runtime panic in Lgetxattr
2020-01-23 15:03:16 -08:00
Sebastiaan van Stijn
6949793bb1
Merge pull request #429 from thaJeztah/19.03_backport_windows_1903_fixes
[19.03 backport] bump hcsshim to fix docker build failing on Windows 1903
2020-01-23 20:48:16 +01:00
Sebastiaan van Stijn
c030578fe4
Merge pull request #240 from thaJeztah/19.03_backport_lcowfromscratch
[19.03 backport] LCOW: Fix FROM scratch
2020-01-23 20:30:23 +01:00
Sebastiaan van Stijn
ef7b19365e
Merge pull request #443 from thaJeztah/19.03_backport_health_race
[19.03 backport] Avoid a data race in container/health.go
2020-01-23 20:24:16 +01:00
Sebastiaan van Stijn
c3936abb67
Merge pull request #441 from thaJeztah/19.03_backport_fix_double_host
[19.03 backport] daemon: don't listen on the same address multiple times
2020-01-23 20:23:52 +01:00
Sebastiaan van Stijn
78571e9049
Merge pull request #439 from arkodg/19.03
[19.03] Bump 19.03 libnetwork refpoint
2020-01-23 20:23:38 +01:00
Sebastiaan van Stijn
d2693998a6
Merge pull request #442 from thaJeztah/19.03_backport_errdefs_no_recurse
[19.03 backport] errdefs: remove unneeded recursive calls
2020-01-23 20:23:19 +01:00
Sebastiaan van Stijn
6def98ee7d
Merge pull request #444 from thaJeztah/19.03_backport_fix_unmount_ipc_ignore_enotexist
[19.03 backport] Fix "no such file or directory" warning when unmounting IPC mount
2020-01-23 20:23:02 +01:00
Sebastiaan van Stijn
60220a48b2
Merge pull request #446 from thaJeztah/19.03_backport_ctx_upload_cancel
[19.03 backport] builder-next: close build context upload on cancel
2020-01-23 20:22:41 +01:00
Sebastiaan van Stijn
efe241644b
Merge pull request #447 from thaJeztah/19.03_backport_fix_containerStart_unhandled_error
[19.03 backport] daemon:containerStart() fix unhandled error for saveApparmorConfig
2020-01-23 20:22:20 +01:00
Sebastiaan van Stijn
b645c8c70e
Merge pull request #449 from thaJeztah/19.03_backport_move_windows_gopath_out_of_goroot
[19.03 backport] Move GOPATH out from under the GO source tree
2020-01-23 20:21:56 +01:00
Sebastiaan van Stijn
dda9b3eced
Merge pull request #440 from thaJeztah/19.03_backport_remove_cocky
[19.03 backport] Remove cocky from names-generator
2020-01-23 20:21:27 +01:00
Sebastiaan van Stijn
8270df208b
Merge pull request #448 from thaJeztah/19.03_backport_gofmt_pkg_parsers
[19.03 backport] pkg/parsers/kernel: gofmt hex value (preparation for Go 1.13+)
2020-01-23 20:18:49 +01:00
Sebastiaan van Stijn
facfb9e1b0
Merge pull request #450 from thaJeztah/19.03_backport_bump_docker_py_4.1.0
[19.03 backport] bump docker-py to 4.1.0
2020-01-23 20:15:36 +01:00
Sebastiaan van Stijn
abfed203eb
Merge pull request #451 from thaJeztah/19.03_backport_swagger_fixes
[19.03 backport] assorted swagger / API docs fixes
2020-01-23 20:15:18 +01:00
Sebastiaan van Stijn
e6ba13d3b9
Merge pull request #452 from thaJeztah/19.03_bump_golang_1.12.15
[19.03] Bump Golang 1.12.15
2020-01-23 20:13:40 +01:00
Tibor Vass
bd9e7fca87
Merge pull request #453 from tonistiigi/1903-update-buildkit
[19.03] vendor: update buildkit to 926935b5
2020-01-22 15:04:35 -08:00
Tonis Tiigi
68b270b97c vendor: update buildkit to 926935b5
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-01-21 15:50:25 -08:00
Sebastiaan van Stijn
ec50d8f814
Merge pull request #434 from AkihiroSuda/bump-up-rootlesskit-1903
[19.03 backport] rootless: fix proxying UDP packets
2020-01-20 13:45:29 +01:00
Akihiro Suda
325e889ba3 rootless: fix proxying UDP packets
UDP reply packets were not proxied: https://github.com/rootless-containers/rootlesskit/issues/86

The issue was fixed in RootlessKit v0.7.1: https://github.com/rootless-containers/rootlesskit/pull/87

Full changes since v0.7.0: https://github.com/rootless-containers/rootlesskit/compare/v0.7.0...v0.7.1

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 658723badd)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2020-01-20 13:15:51 +09:00
Sebastiaan van Stijn
1984d8064b
Merge pull request #445 from thaJeztah/19.03_backport_only_add_btrfs_when_needed_please
[19.03 backport] Remove btrfs_noversion build tag, no longer needed
2020-01-17 18:29:30 +01:00
Sebastiaan van Stijn
2b05c146ef
[19.03] Bump Golang 1.12.15
full diff: https://github.com/golang/go/compare/go1.12.14...go1.12.15

go1.12.15 (released 2020/01/09) includes fixes to the runtime and the net/http
package. See the Go 1.12.15 milestone on the issue tracker for details:

https://github.com/golang/go/issues?q=milestone%3AGo1.12.15+label%3ACherryPickApproved

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 15:14:59 +01:00
Sascha Grunert
c7cd5d6726
Fix possible runtime panic in Lgetxattr
If `unix.Lgetxattr` returns an error, then `sz == -1`, which will cause a
runtime panic if `errno == unix.ERANGE`.

Signed-off-by: Sascha Grunert <sgrunert@suse.com>
(cherry picked from commit 4138cd22ab)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 11:31:12 +01:00
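For illustration, a minimal sketch of the safe retry pattern implied by the fix above (a standalone example using golang.org/x/sys/unix, not the actual pkg/system code): only grow the buffer on ERANGE, and never size it from the -1 returned on error.

```
//go:build linux

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// lgetxattr starts with a small buffer and, only when the kernel reports
// ERANGE (buffer too small), asks for the required size and retries. On error
// unix.Lgetxattr returns sz == -1, so sz must never be used to size a new
// buffer unconditionally.
func lgetxattr(path, attr string) ([]byte, error) {
	dest := make([]byte, 128)
	sz, err := unix.Lgetxattr(path, attr, dest)
	if err == unix.ERANGE {
		// Query the required size with an empty buffer, then retry once.
		sz, err = unix.Lgetxattr(path, attr, nil)
		if err != nil {
			return nil, err
		}
		dest = make([]byte, sz)
		sz, err = unix.Lgetxattr(path, attr, dest)
	}
	if err != nil {
		return nil, err
	}
	return dest[:sz], nil
}

func main() {
	val, err := lgetxattr("/tmp", "user.example")
	fmt.Println(val, err)
}
```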
Felipe Ruhland
1942d3a8b1
Fix Engine API version history typo
Signed-off-by: Felipe Ruhland <felipe.ruhland@gmail.com>
(cherry picked from commit 8107d44852)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 11:23:54 +01:00
Sebastiaan van Stijn
67ac9ab190
swagger: add missing container Health docs
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9ae7196775)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 11:22:47 +01:00
Sebastiaan van Stijn
2c571d3a45
swagger: move ContainerState to definitions
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7e0afd4934)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 11:22:38 +01:00
Odin Ugedal
9da49d0b99
Fix phrasing when referring to the freezer cgroup
Signed-off-by: Odin Ugedal <odin@ugedal.com>
(cherry picked from commit 9c94e8260a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 11:21:48 +01:00
Hannes Ljungberg
37851d8f5b
Update service networks documentation
The previous description stated that an array of names / ids could be passed, while the API in reality expects objects in the form of NetworkAttachmentConfig. This is fixed by updating the description and adding a definition for NetworkAttachmentConfig.

Signed-off-by: Hannes Ljungberg <hannes@5monkeys.se>
(cherry picked from commit 4d09fab232)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 11:18:39 +01:00
Sebastiaan van Stijn
e4e71dcf6b
swagger: restore bind options information
This information was added to an older version of the API
documentation (through 164ab2cfc9 and
5213a0a67e), but only added in the
"docs" branch.

This patch copies the information to the swagger file.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 79c877cfa7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 11:13:20 +01:00
Daniel Black
d2aa7e3b3f
/containers/{id}/json missing Platform
To match ContainerJSONBase api/types/types.go

Signed-off-by: Daniel Black <daniel@linux.ibm.com>
(cherry picked from commit 7b4b940470)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 11:11:20 +01:00
Jan Chren
ea5f540fb6
Document message parameter to /images/create
This parameter was introduced 4 years ago in b857dadb33
as part of https://github.com/moby/moby/pull/15711, but has never made it to the API docs.

Signed-off-by: Jan Chren (rindeal) <dev.rindeal@gmail.com>
(cherry picked from commit 9608dc5470)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 11:10:08 +01:00
Jérémy Leherpeur
834f0f19c5
Fix indentation in some description
Fix the indentation to allow jane-openapi generate to work

Signed-off-by: Jeremy Leherpeur <jeremy.leherpeur@yousign.fr>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit cf315bedc5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 11:09:11 +01:00
skanehira
062a45bfa4
fix swagger.yaml #39484
Signed-off-by: skanehira <sho19921005@gmail.com>
(cherry picked from commit 3afdc46314)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 11:07:50 +01:00
Sebastiaan van Stijn
cc0416d0eb
bump docker-py to 4.1.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5a703ccb46)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 10:53:40 +01:00
Vikram bir Singh
bbd22fb5d9
Move GOPATH out from under the GO source tree
Unlike Linux which uses a temp dir as GOPATH, Windows
uses c:\go. Among other things, this blocks go get.

Moving GOPATH to c:\gopath and updating references in
comments and documentation.

Currently the change is being scoped narrowly. In the
future GOPATH value could be passed as a parameter to
the ps1 scripts.

Signed-off-by: Vikram bir Singh <vikrambir.singh@docker.com>
(cherry picked from commit ecf91f0d7f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 10:47:18 +01:00
Sebastiaan van Stijn
71dec69ef5
pkg/parsers/kernel: gofmt hex value (preparation for Go 1.13+)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7663aebc12)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 10:43:36 +01:00
Sebastiaan van Stijn
e6ef2b0641
daemon:containerStart() fix unhandled error for saveApparmorConfig
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1250e42a43)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 10:36:10 +01:00
Tonis Tiigi
c0fd6556f2
builder-next: close build context upload on cancel
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 2c2cd9b86a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 10:27:47 +01:00
Eli Uriegas
b1798d895a
daemon: Remove btrfs_noversion build flag
btrfs_noversion was added in d7c37b5a28
for distributions that did not have the `btrfs/version.h` header file.

Seeing how all of the distributions we currently support do have the
`btrfs/version.h` file we should probably just remove this build flag
altogether.

Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
(cherry picked from commit e665263b10)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 10:22:07 +01:00
Sebastiaan van Stijn
d17c56f639
Fix "no such file or directory" warning when unmounting IPC mount
When cleaning up IPC mounts, the daemon could log a warning if the IPC mount was not found;

```
cleanup: failed to unmount IPC: umount /var/lib/docker/containers/90f408e26e205d30676655a08504dddc0d17f5713c1dd4654cf67ded7d3bbb63/mounts/shm, flags: 0x2: no such file or directory"
```

These warnings are safe to ignore, but can cause some confusion; `container.UnmountIpcMount()`
already attempted to suppress these warnings. However, `mount.Unmount()` returns a `mountError`,
which nests the original error, so the detection failed.

This patch uses `errors.Cause()` to get the _underlying_ error to detect if it's an "is not exist" error.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 060f387c0b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 10:14:52 +01:00
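For illustration, a minimal sketch of the errors.Cause pattern described above (the unmountShm stand-in and the path are assumptions, not the daemon's code): unwrap the nested error before checking for "not exist", so the warning is only logged for real failures.

```
package main

import (
	"fmt"
	"os"

	"github.com/pkg/errors"
)

// unmountShm is a stand-in for mount.Unmount: it wraps the underlying error
// the way a mountError would, so the caller only sees the wrapped value.
func unmountShm(path string) error {
	if _, err := os.Stat(path); err != nil {
		return errors.Wrapf(err, "umount %s", path)
	}
	return nil
}

func main() {
	err := unmountShm("/var/lib/docker/containers/example/mounts/shm")
	// Checking the wrapped error directly misses the "not exist" case;
	// errors.Cause unwraps to the original os error first.
	if err != nil && !os.IsNotExist(errors.Cause(err)) {
		fmt.Println("warning: failed to unmount IPC:", err)
		return
	}
	fmt.Println("nothing to unmount (or unmounted cleanly); no warning logged")
}
```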
Ziheng Liu
32206f17d0
Avoid a data race in container/health.go
Signed-off-by: Ziheng Liu <lzhfromustc@gmail.com>
(cherry picked from commit 53e0c50126)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 10:12:32 +01:00
Sebastiaan van Stijn
b2694b459f
errdefs: remove unneeded recursive calls
The `statusCodeFromGRPCError` and `statusCodeFromDistributionError`
helpers are used by `GetHTTPErrorStatusCode`, which already recurses
if the error implements the `Causer` interface.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 32f4fdfb5c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 10:10:03 +01:00
Sebastiaan van Stijn
10df1f55f1
daemon: don't listen on the same address multiple times
Before this change:

    dockerd -H unix:///run/docker.sock -H unix:///run/docker.sock -H unix:///run/docker.sock
    ...
    INFO[2019-07-13T00:02:36.195090937Z] Daemon has completed initialization
    INFO[2019-07-13T00:02:36.215940441Z] API listen on /run/docker.sock
    INFO[2019-07-13T00:02:36.215933172Z] API listen on /run/docker.sock
    INFO[2019-07-13T00:02:36.215990566Z] API listen on /run/docker.sock

After this change:

    dockerd -H unix:///run/docker.sock -H unix:///run/docker.sock -H unix:///run/docker.sock
    ...
    INFO[2019-07-13T00:01:37.533579874Z] Daemon has completed initialization
    INFO[2019-07-13T00:01:37.567045771Z] API listen on /run/docker.sock

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d470252e87)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 10:07:09 +01:00
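For illustration, a minimal sketch of the deduplication idea behind this change (the helper is an assumption, not dockerd's actual code): drop repeated -H addresses while preserving order so each address is only listened on once.

```
package main

import "fmt"

// dedupHosts removes duplicate -H/--host values while preserving their order,
// so the daemon only listens once per address (hypothetical helper).
func dedupHosts(hosts []string) []string {
	seen := make(map[string]struct{}, len(hosts))
	out := make([]string, 0, len(hosts))
	for _, h := range hosts {
		if _, ok := seen[h]; ok {
			continue
		}
		seen[h] = struct{}{}
		out = append(out, h)
	}
	return out
}

func main() {
	hosts := []string{
		"unix:///run/docker.sock",
		"unix:///run/docker.sock",
		"tcp://127.0.0.1:2375",
	}
	fmt.Println(dedupHosts(hosts))
	// Output: [unix:///run/docker.sock tcp://127.0.0.1:2375]
}
```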
Eli Uriegas
f13b265b56
Remove cocky from names-generator
Could be misinterpreted as something not too kosher

Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
(cherry picked from commit 8be39cd277)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-17 10:03:40 +01:00
Arko Dasgupta
89c5fbacfd Bump 19.03 libnetwork refpoint
[19.03 backport] bridge: Fix hwaddr set race between us and udev

Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
2020-01-16 16:54:52 -08:00
Sebastiaan van Stijn
9077436e6e
Merge pull request #424 from thaJeztah/19.03_backport_39608_short_libnetwork_id
[19.03 backport] daemon: Use short libnetwork ID in exec-root & update libnetwork
2020-01-16 22:15:04 +01:00
Sebastiaan van Stijn
1a451ca6e0
Merge pull request #423 from thaJeztah/19.03_backport_win_restore_no_parallelism
[19.03 backport] Windows: Use system specific parallelism value on containers restart
2020-01-16 21:13:10 +01:00
Sebastiaan van Stijn
cf14fa7a23
Merge pull request #427 from thaJeztah/19.03_backport_40232-comply_with_gelf_spec
[19.03 backport] logger/gelf: Skip empty lines to comply with spec
2020-01-16 21:09:12 +01:00
Sebastiaan van Stijn
f6398c1f07
Merge pull request #425 from cpuguy83/backport_40169_windows_version_quad
[19.03] Windows: Only set VERSION_QUAD if unset
2020-01-16 21:06:58 +01:00
Sebastiaan van Stijn
71e07f9130
Merge pull request #435 from thaJeztah/19.03_bump_golang_1.12.14
[19.03] Bump Golang 1.12.14
2020-01-16 20:58:52 +01:00
Sebastiaan van Stijn
c4dbf36951
Merge pull request #428 from thaJeztah/19.03_bump_containerd_1.2.11
[19.03] Update containerd to v1.2.11, runc v1.0.0-rc9
2020-01-16 20:58:28 +01:00
Sebastiaan van Stijn
077f093988
Merge pull request #437 from thaJeztah/19.03_backport_skip_broken_docker_py_test
[19.03 backport] docker-py: skip broken ImageCollectionTest::test_pull_multiple, and re-enable fixed tests
2020-01-16 20:56:16 +01:00
Sebastiaan van Stijn
96582ab4ba
Merge pull request #438 from ydcool/19.03_backport_fix_compiling_errors_on_mips
[19.03 backport] cast Dev and Rdev of Stat_t to uint64 for mips
2020-01-16 20:54:12 +01:00
Dominic
16f503c048
cast Dev and Rdev of Stat_t to uint64 for mips
Signed-off-by: Dominic <yindongchao@inspur.com>
Signed-off-by: Dominic Yin <yindongchao@inspur.com>
(cherry picked from commit 5f0231bca1)
Signed-off-by: Dominic Yin <yindongchao@inspur.com>
2020-01-13 09:25:13 +08:00
Sebastiaan van Stijn
8e57214487
docker-py: skip broken ImageCollectionTest::test_pull_multiple
The ImageCollectionTest.test_pull_multiple test performs a `docker pull` (without
a `:tag` specified) to pull all tags of the given repository (image).

After pulling the image, the image(s) pulled are checked to verify if the list
of images contains the `:latest` tag.

However, the test assumes that all tags of the image are tags for the same
version of the image (same digest), and thus a *single* image is returned, which
is not always the case.

Currently, the `hello-world:latest` and `hello-world:linux` tags point to a
different digest, therefore the `client.images.pull()` returns multiple images:
one image per digest, making the test fail:

    =================================== FAILURES ===================================
    ____________________ ImageCollectionTest.test_pull_multiple ____________________
    tests/integration/models_images_test.py:90: in test_pull_multiple
        assert len(images) == 1
    E   AssertionError: assert 2 == 1
    E    +  where 2 = len([<Image: 'hello-world:linux'>, <Image: 'hello-world:latest'>])

This patch temporarily skips the broken test until it is fixed upstream.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f2b25e498f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-06 13:43:37 +01:00
Sebastiaan van Stijn
b617355190
docker-py: re-enable tests that were fixed in v4.1.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6bc45b09e7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-06 13:43:28 +01:00
Sebastiaan van Stijn
8dbc7420ed
[19.03] Bump Golang 1.12.14
go1.12.14 (released 2019/12/04) includes a fix to the runtime. See the Go 1.12.14
milestone on our issue tracker for details:

https://github.com/golang/go/issues?q=milestone%3AGo1.12.14+label%3ACherryPickApproved

Update Golang 1.12.13
------------------------

go1.12.13 (released 2019/10/31) fixes an issue on macOS 10.15 Catalina where the
non-notarized installer and binaries were being rejected by Gatekeeper. Only macOS
users who hit this issue need to update.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-12-30 10:21:06 +01:00
Vikram bir Singh
e2f226b5b4
Bump hcsshim to b3f49c06ffaeef24d09c6c08ec8ec8425a
Among other things, this is required to pull in
microsoft/hcsshim#718

Also fixes microsoft/hcsshim#737
which was caught by checks while attempting to bump
up hcsshim version.

Signed-off-by: Vikram bir Singh <vikrambir.singh@docker.com>
(cherry picked from commit a7b6c3f0bf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-12-03 16:16:30 +01:00
vikrambirsingh
5302429fff
TestRunAttachFailedNoLeak: Compare lowercase
Fixed failures in TestRunAttachFailedNoLeak caused by case mismatch

Signed-off-by: vikrambirsingh <vikrambir.singh@docker.com>
(cherry picked from commit c530c9cbb0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-12-03 16:16:27 +01:00
Sebastiaan van Stijn
1f18c73c09
bump Microsoft/hcsshim 2226e083fc390003ae5aa8325c3c92789afa0e7a
Adds osversion.Build() utility

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a5341aaf32)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-12-03 16:16:13 +01:00
Sebastiaan van Stijn
3fca5878d6
integration-cli: remove unnecessary conversions (unconvert)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7c40c0a922)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-12-03 16:13:30 +01:00
Sebastiaan van Stijn
4d190af804
Rename "v1" to "statsV1"
follow-up to 27552ceb15, where this
was left as a review comment, but the PR was already merged.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9a7e96b5b7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-12-03 16:04:08 +01:00
Sebastiaan van Stijn
9ab162a73a
bump containerd/cgroups 5fbad35c2a7e855762d3c60f2e474ffcad0d470a
full diff: c4b9ac5c76...5fbad35c2a

- containerd/cgroups#82 Add go module support
- containerd/cgroups#96 Move metrics proto package to stats/v1
- containerd/cgroups#97 Allow overriding the default /proc folder in blkioController
- containerd/cgroups#98 Allows ignoring memory modules
- containerd/cgroups#99 Add Go 1.13 to Travis
- containerd/cgroups#100 stats/v1: export per-cgroup stats

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 27552ceb15)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-12-03 16:03:22 +01:00
Sebastiaan van Stijn
fe00613d06
bump containerd/cgroups c4b9ac5c7601384c965b9646fc515884e091ebb9
full diff:  github.com/containerd/cgroups 4994991857...c4b9ac5c76

changes included:

  - containerd/cgroups#81 Add network stats
    - addresses containerd/cgroups#80 Add network metrics
  - containerd/cgroups#85 Fix cgroup hugetlb size prefix for kB
    - addresses kubernetes/kubernetes#77169 Permission denied on hugetlb due to wrong filename
    - relates to opencontainers/runc#2065 Fix cgroup hugetlb size prefix for kB
  - containerd/cgroups#88 cgroups: fix MoveTo function fail problem
  - containerd/cgroups#92 fixed an issue with invalid soft memory limits
  - containerd/cgroups#93 avoid adding io_serviced and io_service_bytes duplicately
    - fixes containerd/containerd#3412 collected metric container_blkio_io_serviced_recursive_total: was collected before with the same name and label values

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0af1099a81)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-12-03 15:58:15 +01:00
Sebastiaan van Stijn
cfcf25bb54
[19.03] Update containerd binary to v1.2.11
full diff: https://github.com/containerd/containerd/compare/v1.2.10...v1.2.11

The eleventh patch release for containerd 1.2 includes an updated runc with
an additional fix for CVE-2019-16884 and a Golang update.

Notable Updates
-----------------------

- Update the runc vendor to v1.0.0-rc9 which includes an additional mitigation
  for CVE-2019-16884.
  More details on the runc CVE in opencontainers/runc#2128, and the additional
  mitigations in opencontainers/runc#2130.
- Add local-fs.target to service file to fix corrupt image after unexpected host
  reboot. Reported in containerd/containerd#3671, and fixed by containerd/containerd#3746.
- Update Golang runtime to 1.12.13, which includes security fixes to the crypto/dsa
  package made in Go 1.12.11 (CVE-2019-17596), and fixes to the go command, runtime,
  syscall and net packages (Go 1.12.12).

CRI fixes:
-----------------------

- Fix shim delete error code to avoid unnecessary retries in the CRI plugin. Discovered
  in containerd/cri#1309, and fixed by containerd/containerd#3732 and containerd/containerd#3739.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-11-28 10:56:28 +01:00
Sebastiaan van Stijn
efcd84e47c
[19.03] Update to runc v1.0.0-rc9
full diff: 3e425f80a8...v1.0.0-rc9

- opencontainers/runc#1951 Add SCMP_ACT_LOG as a valid Seccomp action
- opencontainers/runc#2130 *: verify operations on /proc/... are on procfs
  This is an additional mitigation for CVE-2019-16884. The primary problem
  is that Docker can be coerced into bind-mounting a file system on top of
  /proc (resulting in label-related writes to /proc no longer happening).

  While we are working on mitigations against permitting the mounts, this
  helps prevent our code from being tricked into writing to non-procfs
  files. This is not a perfect solution (after all, there might be a
  bind-mount of a different procfs file over the target) but in order to
  exploit that you would need to be able to tweak a config.json pretty
  specifically (which thankfully Docker doesn't allow).

  Specifically this stops AppArmor from not labeling a process silently
  due to /proc/self/attr/... being incorrectly set, and stops any
  accidental fd leaks because /proc/self/fd/... is not real.
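
As a rough, Linux-only sketch of the idea (not runc's actual code): before
writing to a path under /proc, check via statfs(2) that the path really is
backed by procfs by comparing the filesystem magic against PROC_SUPER_MAGIC.

```
// Hedged sketch, not the runc implementation: verify that a /proc path is
// really procfs before trusting writes to it.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func isProcfs(path string) (bool, error) {
	var fs unix.Statfs_t
	if err := unix.Statfs(path, &fs); err != nil {
		return false, err
	}
	// PROC_SUPER_MAGIC identifies procfs; a bind-mounted regular file or
	// another filesystem mounted over /proc would report a different magic.
	return fs.Type == unix.PROC_SUPER_MAGIC, nil
}

func main() {
	ok, err := isProcfs("/proc/self/fd")
	fmt.Println(ok, err) // expected: true <nil> on an unmodified Linux host
}
```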

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-11-28 10:56:06 +01:00
John Howard
ba28377919
LCOW: Fix FROM scratch
Signed-off-by: John Howard <jhoward@microsoft.com>
(cherry picked from commit 20b11792e8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-11-27 23:03:22 +01:00
Jonas Heinrich
449b60fcd0
logger/gelf: Skip empty lines to comply with spec
The [gelf payload specification](http://docs.graylog.org/en/2.4/pages/gelf.html#gelf-payload-specification)
demands that the field `short_message` *MUST* be set by the client library.
Since docker logging via the gelf driver sends messages line by line, it can happen that messages with an empty
`short_message` are passed on. This causes strict downstream processors (like graylog) to raise an exception.

The logger now skips messages with an empty line.
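
As a minimal illustration (the type and function names here are hypothetical,
not the gelf driver's actual code), the check amounts to dropping the message
before a GELF payload with an empty `short_message` is ever built:

```
// Hedged sketch: skip empty log lines so no GELF message is emitted with an
// empty short_message, which strict receivers such as Graylog reject.
package main

import "fmt"

type message struct {
	Line []byte
}

func logGELF(m message) error {
	if len(m.Line) == 0 {
		return nil // nothing to send; the spec requires a non-empty short_message
	}
	// ... build and send the GELF payload with short_message set from m.Line ...
	fmt.Printf("sending short_message=%q\n", m.Line)
	return nil
}

func main() {
	_ = logGELF(message{Line: []byte("")})      // skipped
	_ = logGELF(message{Line: []byte("hello")}) // sent
}
```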

Resolves: #40232
See also: #37572

Signed-off-by: Jonas Heinrich <Jonas@JonasHeinrich.com>
(cherry picked from commit 5c6b913ff1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-11-26 12:54:18 +01:00
Grant Millar
d3d724e45a
daemon: Use short libnetwork ID in exec-root & update libnetwork
also updates libnetwork to d9a6682a4dbb13b1f0d8216c425fe9ae010a0f23
full diff:

3eb39382bf...d9a6682a4d

- docker/libnetwork#2482 [19.03 backport] Shorten controller ID in exec-root to not hit UNIX_PATH_MAX
- docker/libnetwork#2483 [19.03 backport] Fix panic in drivers/overlay/encryption.go

Signed-off-by: Grant Millar <rid@cylo.io>
(cherry picked from commit df7b8f458a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-11-21 14:30:33 +01:00
Brian Goff
d699e3de12 Windows: Only set VERSION_QUAD if unset
When trying to build with some pretty typical version strings this was
causing failures trying to generate the windows resource file.

The resource file is already gated by an `ifdef` for this var, so
instead of blindly setting it based on "VERSION", which can contain some
characters that are incompatible (e.g. 1.2.3.rc.0 will fail due to the
".rc"), only set VERSION_QUAD when it is not already set.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit ce931f28ea)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2019-11-15 16:16:00 -08:00
Olli Janatuinen
e1cae011e2
Windows: Use system specific parallelism value on containers restart
Signed-off-by: Olli Janatuinen <olli.janatuinen@gmail.com>
(cherry picked from commit 447a840254)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-11-13 14:54:41 -08:00
Andrew Hsu
ea84732a77
Merge pull request #422 from tonistiigi/1903-update-buildkit
[19.03] vendor: update buildkit to 928f3b48
2019-11-12 20:22:39 -08:00
Tonis Tiigi
33b2719488 vendor: update buildkit to 928f3b48
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-11-12 18:17:50 -08:00
Andrew Hsu
075a0201b9
Merge pull request #374 from thaJeztah/19.03_backport_add_tc_dynamic_ingress_network
[19.03 backport] Add TC to check dynamic subnet for ingress network
2019-11-05 20:12:14 -08:00
Andrew Hsu
5d5083a57a
Merge pull request #420 from tonistiigi/1903-buildkit-update
[19.03] vendor: update buildkit to ff93519ee
2019-11-04 17:23:40 -08:00
Tonis Tiigi
25162d4a4e vendor: update buildkit to ff93519ee
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-11-04 16:06:35 -08:00
Andrew Hsu
35913e58c2
Merge pull request #419 from andrewhsu/xit
[19.03] Windows: disable flaky test TestStartReturnCorrectExitCode
2019-11-01 08:41:33 -07:00
Arko Dasgupta
12e7d99439
Add TC to check dynamic subnet for ingress network
Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
(cherry picked from commit e2b5ac75a393f6942c37efdd888fc3bc761de244)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-31 17:46:56 +01:00
Sebastiaan van Stijn
0c38d56a6d
Revert "Revert "[19.03] bump swarmkit to f35d9100f2c6ac810cc8d7de6e8f93dcc7a42d29""
This reverts commit ef4366ee89.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-31 17:46:54 +01:00
Andrew Hsu
031ef2dc8e Windows: disable flaky test TestStartReturnCorrectExitCode
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit 1be272ef76)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2019-10-31 16:05:07 +00:00
Andrew Hsu
ddb60aa6d1
Merge pull request #418 from kolyshkin/19.03-go1.12.12
[19.03] Bump golang 1.12.12
2019-10-30 09:47:17 -07:00
Kir Kolyshkin
92a8618ddc Bump golang 1.12.12
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-10-28 14:01:45 -07:00
Andrew Hsu
370def6b30
Merge pull request #412 from thaJeztah/19.03_backport_builder_entitilement_confg
[19.03 backport] builder entitlements configuration added.
2019-10-28 10:53:19 -07:00
Andrew Hsu
e2e3abec71
Merge pull request #410 from thaJeztah/19.03_backport_fix_buildkit_prunegc_filter_config
[19.03 backport] daemon/config: fix filter type in BuildKit GC config
2019-10-28 10:52:31 -07:00
Andrew Hsu
0e8949a003
Merge pull request #407 from thaJeztah/19.03_backport_better_container_error
[19.03 backport] Propagate GetContainer error from event processor
2019-10-28 10:50:46 -07:00
Andrew Hsu
967aa3a9ef
Merge pull request #405 from thaJeztah/19.03_backport_oci_regression
[19.03 backport] Use ocischema package instead of custom handler
2019-10-28 10:50:08 -07:00
Andrew Hsu
83bcde8f60
Merge pull request #408 from thaJeztah/19.03_backport_update_rootless_docs
[19.03 backport] docs/rootless.md: update
2019-10-28 10:46:27 -07:00
Andrew Hsu
d91a85a9b5
Merge pull request #397 from thaJeztah/19.03_backport_slirp4netns_sandbox
[19.03 backport] rootless: harden slirp4netns with mount namespace and seccomp
2019-10-28 10:45:18 -07:00
Sebastiaan van Stijn
e5a0bc6a50
Add GoDoc to fix linting validation
The validate step in CI was broken, due to a combination of
086b4541cf, fbdd437d29,
and 85733620eb being merged to master.

```
api/types/filters/parse.go:39:1: exported method `Args.Keys` should have comment or be unexported (golint)
func (args Args) Keys() []string {
^
daemon/config/builder.go:19:6: exported type `BuilderGCFilter` should have comment or be unexported (golint)
type BuilderGCFilter filters.Args
     ^
daemon/config/builder.go:21:1: exported method `BuilderGCFilter.MarshalJSON` should have comment or be unexported (golint)
func (x *BuilderGCFilter) MarshalJSON() ([]byte, error) {
^
daemon/config/builder.go:35:1: exported method `BuilderGCFilter.UnmarshalJSON` should have comment or be unexported (golint)
func (x *BuilderGCFilter) UnmarshalJSON(data []byte) error {
^
```
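
As an illustration of the kind of fix applied (simplified placeholder types,
not the actual `filters` package code), golint is satisfied once every
exported identifier carries a doc comment that starts with its name:

```
// Sketch only: add GoDoc comments to exported identifiers.
package filters

// Args holds a set of filters (simplified stand-in for the real type).
type Args struct {
	fields map[string][]string
}

// Keys returns the set of filter keys present in the Args.
func (args Args) Keys() []string {
	keys := make([]string, 0, len(args.fields))
	for k := range args.fields {
		keys = append(keys, k)
	}
	return keys
}
```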

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9d726f1c18)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-22 10:28:25 +02:00
Tibor Vass
dae4436d1c
daemon/config: add MarshalJSON for future proofing
If anything marshals the daemon config now or in the future
this commit ensures the correct canonical form for the builder
GC policies' filters.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 85733620eb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-22 10:28:11 +02:00
Tibor Vass
1e26b431c9
daemon/config: fix filter type in BuildKit GC config
For backwards compatibility, the old incorrect object format for
builder.GC.Rule.Filter still works, but is deprecated in favor of an array of
strings, akin to what is passed on the CLI.
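
A rough sketch of what "canonical form" means here (the type name and the
sorted ordering are illustrative, not the daemon's actual implementation):
marshal the filter as an array of `key=value` strings, matching the CLI
syntax.

```
// Hedged sketch: emit builder GC filters as a sorted array of "key=value"
// strings when marshaling, regardless of which input form was parsed.
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// BuilderGCFilter is a simplified stand-in for the daemon's filter type.
type BuilderGCFilter map[string][]string

// MarshalJSON writes the filter in canonical form: a sorted array of
// "key=value" strings, the same syntax accepted on the CLI.
func (f BuilderGCFilter) MarshalJSON() ([]byte, error) {
	var rules []string
	for key, values := range f {
		for _, v := range values {
			rules = append(rules, key+"="+v)
		}
	}
	sort.Strings(rules)
	return json.Marshal(rules)
}

func main() {
	f := BuilderGCFilter{"type": {"frontend"}, "unused-for": {"2200h"}}
	out, _ := json.Marshal(f)
	fmt.Println(string(out)) // ["type=frontend","unused-for=2200h"]
}
```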

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit fbdd437d29)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-22 10:28:09 +02:00
Kunal Kushwaha
ce74774c09
builder entitlements configuration added.
buildkit supports entitlements like network-host and security-insecure.
This patch aims to make them configurable through the daemon.json file.
By default network-host is enabled and security-insecure is disabled.
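
As a sketch only (the struct and JSON key names below are assumptions, not
the authoritative daemon config format), the configuration boils down to two
booleans under the builder section of daemon.json:

```
// Hedged sketch of the two BuildKit entitlements the daemon can grant to
// builds; field names and JSON keys are assumptions. Per the description
// above, network-host defaults to enabled and security-insecure to disabled.
package config

// BuilderEntitlements models the entitlements toggles in daemon.json.
type BuilderEntitlements struct {
	NetworkHost      *bool `json:"network-host,omitempty"`
	SecurityInsecure *bool `json:"security-insecure,omitempty"`
}
```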

Signed-off-by: Kunal Kushwaha <kunal.kushwaha@gmail.com>
(cherry picked from commit 8b7bbf180f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-22 10:06:46 +02:00
Tibor Vass
645f559352
Merge pull request #411 from thaJeztah/19.03_backport_fix_dco_branch
[19.03 backport] Jenkinsfile: set repo and branch for DCO check as well
2019-10-21 16:22:02 -07:00
Sebastiaan van Stijn
9c388fb119
Jenkinsfile: set repo and branch for DCO check as well
Commit 7019b60d0d added these
env-vars to other stages, but forgot to update the DCO stage,
which also does a diff to validate commits that are in a PR.

Also adding openssh-client, for situations where the upstream
needs to be accessed through an ssh connection.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7c5fd83c22)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-21 23:32:48 +02:00
Akihiro Suda
a8b454a934
docs/rootless.md: update
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit e76dea157e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-20 23:50:07 +02:00
Brian Goff
fd169c00bf
Propagate GetContainer error from event processor
Before this change we just accepted that any error is "not found", even though it
could be something else. But even if it is just a "not found" kind of
error, this should be dealt with by the container store and not the
event processor.
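
A minimal sketch of the idea (hypothetical interface, not the daemon's actual
code): the event processor hands the store's error back instead of swallowing
it.

```
// Hedged sketch: propagate the container store's error rather than assuming
// every failure means the container does not exist.
package events

// containerStore is a hypothetical stand-in for the daemon's container store.
type containerStore interface {
	Get(id string) (interface{}, error)
}

// processEvent returns the store's error unmodified; only the store knows
// which errors genuinely mean "not found".
func processEvent(s containerStore, id string) error {
	c, err := s.Get(id)
	if err != nil {
		return err // previously: treated as "not found" and ignored
	}
	_ = c // ... handle the event for container c ...
	return nil
}
```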

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 54e30a62d3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-17 02:49:24 +02:00
Brian Goff
e037bade8c
Use ocischema package instead of custom handler
Previously we were re-using schema2.DeserializedManifest to handle oci
manifests. The issue lies in the fact that distribution started
validating the media type string during json deserialization. This
change broke our usage of that type.

Instead distribution now provides direct support for oci schemas, so use
that instead of our custom handlers.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit e443512ce4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-14 23:06:05 +02:00
Andrew Hsu
adfac697dc
Merge pull request #404 from thaJeztah/19.03_revert_iptables_check2
[19.03 backport] revert controller: Check if IPTables is enabled for arrangeUserFilterRule ENGCORE-1114
2019-10-11 14:19:53 -07:00
Sebastiaan van Stijn
54a58760b6
[19.03 backport] revert controller: Check if IPTables is enabled for arrangeUserFilterRule
This change caused a regression, causing the DOCKER-USER chain
to not be created, despite iptables being enabled on the daemon.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-11 21:10:48 +02:00
Andrew Hsu
5787ef7e9c
Merge pull request #396 from thaJeztah/19.03_backport_update_moved_repositories
[19.03 backport] Update links/references to transferred repositories
2019-10-10 10:58:11 -07:00
Andrew Hsu
9a21cf7e55
Merge pull request #399 from thaJeztah/19.03_backport_do_the_right_diff_do_the_right_diff
[19.03 backport] Jenkinsfile: set repo and branch, to assist validate_diff()
2019-10-10 10:56:41 -07:00
Sebastiaan van Stijn
abbc956ac8
Jenkinsfile: set repo and branch, to assist validate_diff()
This is a continuation of 2a08f33166247da9d4c09d4c6c72cbb8119bf8df;

When running CI in other repositories (e.g. Docker's downstream
docker/engine repository), or other branches, the validation
scripts were calculating the list of changes based on the wrong
information.

This led to weird failures in CI in a branch where these values
were not updated :-) (CI on a pull request failed because it detected
that new tests were added to the deprecated `integration-cli` test-suite,
but the pull request did not actually make changes in that area).

This patch uses environment variables set by Jenkins to set the
correct target repository (and branch) to compare to.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7019b60d0d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-07 23:52:55 +02:00
Sebastiaan van Stijn
646e7a5239
Jenkinsfile: remove redundant -f Dockerfile
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 64b3d12686)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-07 23:52:53 +02:00
Sebastiaan van Stijn
3e077fc866
Merge pull request #398 from thaJeztah/19.03_rollback_libnetwork
[19.03] roll-back libnetwork iptables forward policy change [DESKTOP-1934]
2019-10-07 23:12:15 +02:00
Sebastiaan van Stijn
fb0fca8607
[19.03] roll-back libnetwork iptables forward policy change
The patch made in  docker/libnetwork#2450 caused a breaking change in the
networking behaviour, causing Kubernetes installations on Docker Desktop
(and possibly other setups) to fail.

Rolling back this change in the 19.03 branch while we investigate if there
are alternatives.

diff: 45c710223c...96bcc0dae8

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-07 18:11:13 +02:00
Akihiro Suda
5bd4233d7b
rootless: harden slirp4netns with mount namespace and seccomp
When slirp4netns v0.4.0+ is used, slirp4netns is now hardened using a
mount namespace ("sandbox") and seccomp to mitigate potential
vulnerabilities.

bump up rootlesskit: 2fcff6ceae...791ac8cb20

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit e20b7323fb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-05 10:54:26 +02:00
Andrew Hsu
2ae5cbcf05
Merge pull request #391 from thaJeztah/19.03_backport_session_endpoint_docs_updates
[19.03 backport] API: update docs that /session left experimental in V1.39
2019-10-03 10:49:04 -07:00
Sebastiaan van Stijn
3472e441c5
hack/ci/windows.ps1 update references to repositories that were moved
Also updated the related docs.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5175ed54e5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-03 15:37:56 +02:00
Sebastiaan van Stijn
a2a4576c61
Dockerfile.windows: update references to repositories that were moved
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 83fd212f2c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-03 15:37:54 +02:00
Sebastiaan van Stijn
ac62fa7a61
Jenkinsfile: update references to repositories that were moved
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b323c6e9ae)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-03 15:37:52 +02:00
Andrew Hsu
d9fba87f5a
Merge pull request #392 from andrewhsu/bump_docker_py
[19.03 backport] Temporarily switch docker-py to "master"
2019-10-02 15:56:00 -07:00
Sebastiaan van Stijn
ec0e20a9eb Temporarily switch docker-py to "master"
The docker-py tests were broken because the version of
py-test that was used relied on a dependency that had a new
major release with a breaking change.

Unfortunately, that dependency was not pinned to a specific version,
so when the new release came out, py-test broke;

```
22:16:47  Traceback (most recent call last):
22:16:47    File "/usr/local/bin/pytest", line 10, in <module>
22:16:47      sys.exit(main())
22:16:47    File "/usr/local/lib/python3.6/site-packages/_pytest/config/__init__.py", line 61, in main
22:16:47      config = _prepareconfig(args, plugins)
22:16:47    File "/usr/local/lib/python3.6/site-packages/_pytest/config/__init__.py", line 182, in _prepareconfig
22:16:47      config = get_config()
22:16:47    File "/usr/local/lib/python3.6/site-packages/_pytest/config/__init__.py", line 156, in get_config
22:16:47      pluginmanager.import_plugin(spec)
22:16:47    File "/usr/local/lib/python3.6/site-packages/_pytest/config/__init__.py", line 530, in import_plugin
22:16:47      __import__(importspec)
22:16:47    File "/usr/local/lib/python3.6/site-packages/_pytest/tmpdir.py", line 25, in <module>
22:16:47      class TempPathFactory(object):
22:16:47    File "/usr/local/lib/python3.6/site-packages/_pytest/tmpdir.py", line 35, in TempPathFactory
22:16:47      lambda p: Path(os.path.abspath(six.text_type(p)))
22:16:47  TypeError: attrib() got an unexpected keyword argument 'convert'
```

docker-py master has a fix for this (bumping the version of
`py-test`), but it's not in a release yet, and the docker cli that's used
in our CI is pinned to 17.06, which doesn't support building from a remote
git repository at a specific git commit.

To fix the immediate situation, this patch switches the docker-py
tests to run from the master branch.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 48353e16fe)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2019-10-02 17:42:41 +00:00
Sebastiaan van Stijn
923e849f28
API: update docs that /session left experimental in V1.39
The `/session` endpoint was left experimental in API V1.39 through
239047c2d3 and
01c9e7082e, but the API reference
was not updated accordingly.

This updates the API documentation to match the change.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6756f5f378)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-10-02 12:33:14 +02:00
Kirill Kolyshkin
060997ca6b
Merge pull request #389 from thaJeztah/19.03_backport_fix_dockernetworksuite
[19.03 backport] integration-cli: fix DockerNetworkSuite not being run
2019-09-30 11:03:49 -07:00
Sebastiaan van Stijn
adcd369285
integration-cli: fix DockerNetworkSuite not being run
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5c891ea9ca)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-30 19:59:26 +02:00
Andrew Hsu
b6a7124855
Merge pull request #383 from thaJeztah/19.03_backport_test_fixes_2
[19.03 backport] Testing and Jenkinsfile changes [step 2]
2019-09-27 16:58:30 -07:00
Andrew Hsu
7fe3abf887
Merge pull request #387 from thaJeztah/19.03_bump_golang_1.12.10
[19.03] bump golang 1.12.10 (CVE-2019-16276)
2019-09-27 11:52:28 -07:00
Andrew Hsu
3fec3d1f1c
Merge pull request #385 from thaJeztah/19.03_backport_bump_containerd_runc
[19.03 backport] update containerd 1.2.10, runc v1.0.0-rc8-92-g84373aaa (CVE-2019-16884)
2019-09-27 11:07:53 -07:00
Sebastiaan van Stijn
49e8f7451d
bump golang 1.12.10 (CVE-2019-16276)
full diff: https://github.com/golang/go/compare/go1.12.9...go1.12.10

```
Hi gophers,

We have just released Go 1.13.1 and Go 1.12.10 to address a recently reported security issue. We recommend that all affected users update to one of these releases (if you're not sure which, choose Go 1.13.1).

net/http (through net/textproto) used to accept and normalize invalid HTTP/1.1 headers with a space before the colon, in violation of RFC 7230. If a Go server is used behind an uncommon reverse proxy that accepts and forwards but doesn't normalize such invalid headers, the reverse proxy and the server can interpret the headers differently. This can lead to filter bypasses or request smuggling, the latter if requests from separate clients are multiplexed onto the same upstream connection by the proxy. Such invalid headers are now rejected by Go servers, and passed without normalization to Go client applications.

The issue is CVE-2019-16276 and Go issue golang.org/issue/34540.

Thanks to Andrew Stucki, Adam Scarr (99designs.com), and Jan Masarik (masarik.sh) for discovering and reporting this issue.

Downloads are available at https://golang.org/dl for all supported platforms.

Alla prossima,
Filippo on behalf of the Go team
```

From the patch: 6e6f4aaf70

```
net/textproto: don't normalize headers with spaces before the colon

RFC 7230 is clear about headers with a space before the colon, like

X-Answer : 42

being invalid, but we've been accepting and normalizing them for compatibility
purposes since CL 5690059 in 2012.

On the client side, this is harmless and indeed most browsers behave the same
to this day. On the server side, this becomes a security issue when the
behavior doesn't match that of a reverse proxy sitting in front of the server.

For example, if a WAF accepts them without normalizing them, it might be
possible to bypass its filters, because the Go server would interpret the
header differently. Worse, if the reverse proxy coalesces requests onto a
single HTTP/1.1 connection to a Go server, the understanding of the request
boundaries can get out of sync between them, allowing an attacker to tack an
arbitrary method and path onto a request by other clients, including
authentication headers unknown to the attacker.

This was recently presented at multiple security conferences:
https://portswigger.net/blog/http-desync-attacks-request-smuggling-reborn

net/http servers already reject header keys with invalid characters.
Simply stop normalizing extra spaces in net/textproto, let it return them
unchanged like it does for other invalid headers, and let net/http enforce
RFC 7230, which is HTTP specific. This loses us normalization on the client
side, but there's no right answer on the client side anyway, and hiding the
issue sounds worse than letting the application decide.
```
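
A small, hedged demo of the header in question (the exact parse result
depends on the Go version: per the notes above, older releases silently
normalized the extra space away, while patched releases pass the key through
unchanged and net/http servers reject such requests):

```
// Demo only: parse a header with a space before the colon and inspect what
// net/textproto hands back on the current Go version.
package main

import (
	"bufio"
	"fmt"
	"net/textproto"
	"strings"
)

func main() {
	raw := "X-Answer : 42\r\n\r\n"
	r := textproto.NewReader(bufio.NewReader(strings.NewReader(raw)))
	hdr, err := r.ReadMIMEHeader()
	fmt.Printf("headers=%v err=%v\n", hdr, err)
}
```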

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-27 16:59:28 +02:00
Sebastiaan van Stijn
3136dea250
Re-group vendor.conf deps to reflect reality
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 05a0621fd0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-27 16:25:56 +02:00
Sebastiaan van Stijn
8ddb4c4e95
bump runc vendor v1.0.0-rc8-92-g84373aaa
full diff: https://github.com/opencontainers/runc/compare/v1.0.0-rc8...3e425f80a8c931f88e6d94a8c831b9d5aa481657

  - opencontainers/runc#2010 criu image path permission error when checkpoint rootless container
  - opencontainers/runc#2028 Update to Go 1.12 and drop obsolete versions
  - opencontainers/runc#2029 Update dependencies
  - opencontainers/runc#2034 Support for logging from children processes
  - opencontainers/runc#2035 specconv: always set "type: bind" in case of MS_BIND
  - opencontainers/runc#2038 `r.destroy` can defer exec in `runner.run` method
  - opencontainers/runc#2041 Change the permissions of the notify listener socket to rwx for everyone
  - opencontainers/runc#2042 libcontainer: intelrdt: add missing destroy handler in defer func
  - opencontainers/runc#2047 Move systemd.Manager initialization into a function in that module
  - opencontainers/runc#2057 main: not reopen /dev/stderr
      - closes opencontainers/runc#2056 Runc + podman|cri-o + systemd issue with stderr
      - closes kubernetes/kubernetes#77615 kubelet fails starting CRI-O containers (Ubuntu 18.04 + systemd cgroups driver)
      - closes cri-o/cri-o#2368 Joining worker node not starting flannel or kube-proxy / CRI-O error "open /dev/stderr: no such device or address"
  - opencontainers/runc#2061 libcontainer: fix TestGetContainerState to check configs.NEWCGROUP
  - opencontainers/runc#2065 Fix cgroup hugetlb size prefix for kB
  - opencontainers/runc#2067 libcontainer: change seccomp test for clone syscall
  - opencontainers/runc#2074 Update dependency libseccomp-golang
  - opencontainers/runc#2081 Bump CRIU to 3.12
  - opencontainers/runc#2089 doc: First process in container needs `Init: true`
  - opencontainers/runc#2094 Skip searching /dev/.udev for device nodes
      - closes opencontainers/runc#2093 HostDevices() race with older udevd versions
  - opencontainers/runc#2098 man: fix man-pages
  - opencontainers/runc#2103 cgroups/fs: check nil pointers in cgroup manager
  - opencontainers/runc#2107 Make get devices function public
  - opencontainers/runc#2113 libcontainer: initial support for cgroups v2
  - opencontainers/runc#2116 Avoid the dependency on cgo through go-systemd/util package
      - removes github.com/coreos/pkg as dependency
  - opencontainers/runc#2117 Remove libcontainer detection for systemd features
      - fixes opencontainers/runc#2117 Cache the systemd detection results
  - opencontainers/runc#2119 libcontainer: update masked paths of /proc
      - relates to moby/moby#36368 Add /proc/keys to masked paths
      - relates to moby/moby#38299 Masked /proc/asound
      - relates to moby/moby#37404 Add /proc/acpi to masked paths (CVE-2018-10892)
  - opencontainers/runc#2122 nsenter: minor fixes
  - opencontainers/runc#2123 Bump x/sys and update syscall for initial Risc-V support
  - opencontainers/runc#2125 cgroup: support mount of cgroup2
  - opencontainers/runc#2126 libcontainer/nsenter: Don't import C in non-cgo file
  - opencontainers/runc#2129 Only allow proc mount if it is procfs
      - addresses opencontainers/runc#2129 AppArmor can be bypassed by a malicious image that specifies a volume at /proc (CVE-2019-16884)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ac0ab114a2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-27 16:25:48 +02:00
Sebastiaan van Stijn
b4c03dd633
update runc to v1.0.0-rc8-92-g84373aaa (CVE-2019-16884)
full diff: https://github.com/opencontainers/runc/compare/v1.0.0-rc8...3e425f80a8c931f88e6d94a8c831b9d5aa481657

  - opencontainers/runc#2010 criu image path permission error when checkpoint rootless container
  - opencontainers/runc#2028 Update to Go 1.12 and drop obsolete versions
  - opencontainers/runc#2029 Update dependencies
  - opencontainers/runc#2034 Support for logging from children processes
  - opencontainers/runc#2035 specconv: always set "type: bind" in case of MS_BIND
  - opencontainers/runc#2038 `r.destroy` can defer exec in `runner.run` method
  - opencontainers/runc#2041 Change the permissions of the notify listener socket to rwx for everyone
  - opencontainers/runc#2042 libcontainer: intelrdt: add missing destroy handler in defer func
  - opencontainers/runc#2047 Move systemd.Manager initialization into a function in that module
  - opencontainers/runc#2057 main: not reopen /dev/stderr
      - closes opencontainers/runc#2056 Runc + podman|cri-o + systemd issue with stderr
      - closes kubernetes/kubernetes#77615 kubelet fails starting CRI-O containers (Ubuntu 18.04 + systemd cgroups driver)
      - closes cri-o/cri-o#2368 Joining worker node not starting flannel or kube-proxy / CRI-O error "open /dev/stderr: no such device or address"
  - opencontainers/runc#2061 libcontainer: fix TestGetContainerState to check configs.NEWCGROUP
  - opencontainers/runc#2065 Fix cgroup hugetlb size prefix for kB
  - opencontainers/runc#2067 libcontainer: change seccomp test for clone syscall
  - opencontainers/runc#2074 Update dependency libseccomp-golang
  - opencontainers/runc#2081 Bump CRIU to 3.12
  - opencontainers/runc#2089 doc: First process in container needs `Init: true`
  - opencontainers/runc#2094 Skip searching /dev/.udev for device nodes
      - closes opencontainers/runc#2093 HostDevices() race with older udevd versions
  - opencontainers/runc#2098 man: fix man-pages
  - opencontainers/runc#2103 cgroups/fs: check nil pointers in cgroup manager
  - opencontainers/runc#2107 Make get devices function public
  - opencontainers/runc#2113 libcontainer: initial support for cgroups v2
  - opencontainers/runc#2116 Avoid the dependency on cgo through go-systemd/util package
      - removes github.com/coreos/pkg as dependency
  - opencontainers/runc#2117 Remove libcontainer detection for systemd features
      - fixes opencontainers/runc#2117 Cache the systemd detection results
  - opencontainers/runc#2119 libcontainer: update masked paths of /proc
      - relates to moby/moby#36368 Add /proc/keys to masked paths
      - relates to moby/moby#38299 Masked /proc/asound
      - relates to moby/moby#37404 Add /proc/acpi to masked paths (CVE-2018-10892)
  - opencontainers/runc#2122 nsenter: minor fixes
  - opencontainers/runc#2123 Bump x/sys and update syscall for initial Risc-V support
  - opencontainers/runc#2125 cgroup: support mount of cgroup2
  - opencontainers/runc#2126 libcontainer/nsenter: Don't import C in non-cgo file
  - opencontainers/runc#2129 Only allow proc mount if it is procfs
      - addresses opencontainers/runc#2129 AppArmor can be bypassed by a malicious image that specifies a volume at /proc (CVE-2019-16884)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit bc9a7ec898)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-27 16:25:39 +02:00
Jintao Zhang
65a6d9d9eb
Update containerd to v1.2.10
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit c4ec02b0af)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-27 16:25:20 +02:00
Sebastiaan van Stijn
06f11abf43
integration-cli: fix golint issues
```
docker/integration-cli/checker/checker.go
Line 12: warning: exported type Compare should have comment or be unexported (golint)
Line 14: warning: exported function False should have comment or be unexported (golint)
Line 20: warning: exported function True should have comment or be unexported (golint)
Line 26: warning: exported function Equals should have comment or be unexported (golint)
Line 32: warning: exported function Contains should have comment or be unexported (golint)
Line 38: warning: exported function Not should have comment or be unexported (golint)
Line 52: warning: exported function DeepEquals should have comment or be unexported (golint)
Line 58: warning: exported function HasLen should have comment or be unexported (golint)
Line 64: warning: exported function IsNil should have comment or be unexported (golint)
Line 70: warning: exported function GreaterThan should have comment or be unexported (golint)
Line 76: warning: exported function NotNil should have comment or be unexported (golint)
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6397dd4d31)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:54 +02:00
Tibor Vass
da8cd68e4f
integration-cli: run goimports
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 5b7347c312)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:53 +02:00
Vikram bir Singh
9464d3cd68
Disable TestPsListContainersFilterExited (Windows)
On account of being flaky on both RS1 and RS5.

Co-Authored-By: Sebastiaan van Stijn <thaJeztah@users.noreply.github.com>
Signed-off-by: Vikram bir Singh <vikrambir.singh@docker.com>
(cherry picked from commit 7de4e13089)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:52 +02:00
Sebastiaan van Stijn
50cee7c48d
hack/test/unit: fix custom TESTFLAGS not working
The `-test.timeout=5m` was glued directly after the current `TESTFLAGS`,
causing them to be non-functional;

Before:

    make TESTDEBUG=1 TESTDIRS='github.com/docker/docker/pkg/filenotify' TESTFLAGS='-test.run TestPollerEvent' test-unit
    + mkdir -p bundles
    + gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- -tags 'netgo seccomp libdm_no_deferred_remove' -cover -coverprofile=bundles/profile.out -covermode=atomic -test.run TestPollerEvent-test.timeout=5m github.com/docker/docker/pkg/filenotify
    testing: warning: no tests to run
    ok  	github.com/docker/docker/pkg/filenotify	0.003s	coverage: 0.0% of statements [no tests to run]

    DONE 0 tests in 0.298s

After:

    make TESTDEBUG=1 TESTDIRS='github.com/docker/docker/pkg/filenotify' TESTFLAGS='-test.run TestPollerEvent' test-unit
    + mkdir -p bundles
    + gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report.json --junitfile=bundles/junit-report.xml -- -tags 'netgo seccomp libdm_no_deferred_remove' -cover -coverprofile=bundles/profile.out -covermode=atomic -test.run TestPollerEvent -test.timeout=5m github.com/docker/docker/pkg/filenotify
    ok  	github.com/docker/docker/pkg/filenotify	0.608s	coverage: 44.7% of statements

    DONE 1 tests in 0.922s

This was introduced in 42f0a0db75

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0620990307)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:52 +02:00
Tibor Vass
682a46189b
integration-cli: move each test suite to its own TestX testing function
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit f1c1cd436a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:51 +02:00
Tibor Vass
e1c5cdf14d
hack: have integration-cli use gotestsum codepath
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 84928be605)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:50 +02:00
Sebastiaan van Stijn
4cf69b995e
integration-cli: remove unneeded fmt.Sprintf() in asserts
Replaced using a bit of grep-ing;

```
find . -name "*_test.go" -exec sed -E -i 's#assert.Assert\((.*), fmt.Sprintf\((.*)\)\)$#assert.Assert\(\1, \2\)#g' '{}' \;
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0fabf3e41e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:50 +02:00
Pavel Tikhomirov
419995682f
integration-cli/requirements: Skip windows specific isolation requirements on non-windows
After the commit faaffd5d6d ("Windows:Disable 2 restart test when
Hyper-V") some tests started being skipped on Linux:

SKIP: docker_cli_restart_test.go:167: DockerSuite.TestRestartContainerSuccess (unmatched requirement IsolationIsProcess)
SKIP: docker_cli_restart_test.go:240: DockerSuite.TestRestartPolicyAfterRestart (unmatched requirement IsolationIsProcess)

But AFAIU it is highly unlikely that we actually meant to skip them on Linux.

https://github.com/moby/moby/issues/39625
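
A rough sketch of the intended behaviour (the helper name is taken from the
skip messages above, but the body is an assumption, not the actual
integration-cli code): the Windows-specific isolation requirement should
simply never cause a skip when the suite runs on another OS.

```
// Hedged sketch: the requirement is only meaningful on Windows, so it is
// treated as satisfied everywhere else and Linux runs are not skipped.
package requirement

import "runtime"

// daemonIsolation stands in for the isolation mode reported by the daemon
// (only meaningful on Windows).
var daemonIsolation = "process"

// IsolationIsProcess reports whether the requirement holds.
func IsolationIsProcess() bool {
	if runtime.GOOS != "windows" {
		return true
	}
	return daemonIsolation == "process"
}
```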

Signed-off-by: Pavel Tikhomirov <ptikhomirov@virtuozzo.com>
(cherry picked from commit b469933b06)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:49 +02:00
Tibor Vass
7ae6aa420d
integration-cli: remove TestingT
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 231ed42cab)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:48 +02:00
Tibor Vass
4c3e2dc441
suite: put suite setup inside test run
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit d32e6bbde8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:47 +02:00
Tibor Vass
d98c74d38d
integration-cli: fix formatting
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit cc01289792)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:47 +02:00
Tibor Vass
cf50c5bba8
integration-cli: fix pollCheck
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 8eb9f3f90e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:46 +02:00
Tibor Vass
05933ab2d4
integration-cli: have helper functions use testing.Helper()
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit bad6f3bf73)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:45 +02:00
Tibor Vass
15aa73ea4c
remove per-test -timeout logic because it does not work
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 8bffe9524d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:45 +02:00
Tibor Vass
df569fd54c
hack: update scripts
- remove -check.* flags
- use (per-test) -timeout flag
- allow user to override TEST_SKIP_* regardless of TESTFLAGS
- remove test-imports validation

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 7cd028f2d0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:44 +02:00
Tibor Vass
0fa81e50e3
Update Jenkinsfile
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 7491db3e92)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:43 +02:00
Tibor Vass
a5282fa128
cleanup
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 925e407c7b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:43 +02:00
Tibor Vass
da96e5c27b
Setup tests
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 8b40da168b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:42 +02:00
Tibor Vass
fce03f9921
internal/test/suite
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit fd0ed80ff2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:41 +02:00
Tibor Vass
c3d8cb99a0
vendor: remove vdemeester/shakers and go-check/check
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 3aa4ff64aa)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:41 +02:00
Tibor Vass
5f7621b01e
remove rm-gocheck.go and templates
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 9843c2f12c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:40 +02:00
Tibor Vass
37555cdeff
remove waitAndAssert and type casts
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 649201dc44)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:39 +02:00
Tibor Vass
be42af89f8
fix remaining issues with checker.Not
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 40f1950e8e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:38 +02:00
Tibor Vass
9266ff7893
waitAndAssert -> poll.WaitOn
go get -d golang.org/x/tools/cmd/eg && \
dir=$(go env GOPATH)/src/golang.org/x/tools && \
git -C "$dir" fetch https://github.com/tiborvass/tools handle-variadic && \
git -C "$dir" checkout 61a94b82347c29b3289e83190aa3dda74d47abbb && \
go install golang.org/x/tools/cmd/eg

eg -w -t template.waitAndAssert.go ./integration-cli 2>&1 \
| awk '{print $2}' | while read file; do
	# removing vendor/ in import paths
	# not sure why eg adds them
	sed -E -i 's#^([\t]+").*/vendor/([^"]+)#\1\2#g' "$file"
	sed -E -i 's#\.\(eg_compareFunc\)##g' "$file"
	goimports -w "$file"
done

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit ac2f24e72a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:38 +02:00
Tibor Vass
d766dac3bf
prepare for eg on waitAndAssert
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 42599f1cad)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:37 +02:00
Tibor Vass
c30d52b829
fix remaining compile issues
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 318b1612e1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:36 +02:00
Tibor Vass
be66788e3c
rm-gocheck: fix compile errors from converting check.CommentInterface to string
while :; do \
	out=$(go test -c ./integration-cli 2>&1 | grep 'cannot use nil as type string in return argument') || break
	echo "$out" | while read line; do
		file=$(echo "$line" | cut -d: -f1)
		n=$(echo "$line" | cut -d: -f2)
		sed -E -i "${n}"'s#\b(return .*, )nil#\1""#g' "$file"
	done
done \
&& \
while :; do \
	out=$(go test -c ./integration-cli/daemon 2>&1 | grep 'cannot use nil as type string in return argument') || break
	echo "$out" | while read line; do
		file=$(echo "$line" | cut -d: -f1)
		n=$(echo "$line" | cut -d: -f2)
		sed -E -i "${n}"'s#\b(return .*, )nil#\1""#g' "$file"
	done
done \
&& \
while :; do \
	out=$(go test -c ./pkg/discovery 2>&1 | grep 'cannot use nil as type string in return argument') || break
	echo "$out" | while read line; do
		file=$(echo "$line" | cut -d: -f1)
		n=$(echo "$line" | cut -d: -f2)
		sed -E -i "${n}"'s#\b(return .*, )nil#\1""#g' "$file"
	done
done \
&& \
while :; do \
	out=$(go test -c ./pkg/discovery/file 2>&1 | grep 'cannot use nil as type string in return argument') || break
	echo "$out" | while read line; do
		file=$(echo "$line" | cut -d: -f1)
		n=$(echo "$line" | cut -d: -f2)
		sed -E -i "${n}"'s#\b(return .*, )nil#\1""#g' "$file"
	done
done \
&& \
while :; do \
	out=$(go test -c ./pkg/discovery/kv 2>&1 | grep 'cannot use nil as type string in return argument') || break
	echo "$out" | while read line; do
		file=$(echo "$line" | cut -d: -f1)
		n=$(echo "$line" | cut -d: -f2)
		sed -E -i "${n}"'s#\b(return .*, )nil#\1""#g' "$file"
	done
done \
&& \
while :; do \
	out=$(go test -c ./pkg/discovery/memory 2>&1 | grep 'cannot use nil as type string in return argument') || break
	echo "$out" | while read line; do
		file=$(echo "$line" | cut -d: -f1)
		n=$(echo "$line" | cut -d: -f2)
		sed -E -i "${n}"'s#\b(return .*, )nil#\1""#g' "$file"
	done
done \
&& \
while :; do \
	out=$(go test -c ./pkg/discovery/nodes 2>&1 | grep 'cannot use nil as type string in return argument') || break
	echo "$out" | while read line; do
		file=$(echo "$line" | cut -d: -f1)
		n=$(echo "$line" | cut -d: -f2)
		sed -E -i "${n}"'s#\b(return .*, )nil#\1""#g' "$file"
	done
done

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 64de5e8228)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:36 +02:00
Tibor Vass
dc044f26ea
rm-gocheck: goimports
goimports -w \
-- "./pkg/discovery/file" "./pkg/discovery/kv" "./pkg/discovery/memory" "./pkg/discovery/nodes" "./integration-cli" "./integration-cli/daemon" "./pkg/discovery" \
&& \
 gofmt -w -s \
-- "./pkg/discovery/file" "./pkg/discovery/kv" "./pkg/discovery/memory" "./pkg/discovery/nodes" "./integration-cli" "./integration-cli/daemon" "./pkg/discovery"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 7813dfe9d7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:35 +02:00
Tibor Vass
1b1fe4cc64
rm-gocheck: check.CommentInterface -> string
sed -E -i 's#(\*testing\.T\b.*)check\.CommentInterface\b#\1string#g' \
-- "integration-cli/daemon/daemon.go" "integration-cli/daemon/daemon_swarm.go" "integration-cli/docker_api_exec_test.go" "integration-cli/docker_api_swarm_service_test.go" "integration-cli/docker_api_swarm_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_prune_unix_test.go" "integration-cli/docker_cli_restart_test.go" "integration-cli/docker_cli_service_create_test.go" "integration-cli/docker_cli_service_health_test.go" "integration-cli/docker_cli_service_logs_test.go" "integration-cli/docker_cli_swarm_test.go" "integration-cli/docker_utils_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 3a24472c8e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:34 +02:00
Tibor Vass
be28c05949
rm-gocheck: convert check.Commentf to string - other
sed -E -i 's#\bcheck.Commentf\(([^\)]+)\)#\1#g' \
-- "integration-cli/docker_cli_build_unix_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_plugins_test.go" "integration-cli/docker_cli_run_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 6e5cf532af)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:33 +02:00
Tibor Vass
2f069fa3a5
rm-gocheck: convert check.Commentf to string - with just one string
sed -E -i 's#\bcheck.Commentf\(("[^"]+")\)#\1#g' \
-- "integration-cli/daemon/daemon_swarm.go" "integration-cli/docker_api_containers_test.go" "integration-cli/docker_api_swarm_test.go" "integration-cli/docker_cli_build_unix_test.go" "integration-cli/docker_cli_by_digest_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_external_volume_driver_unix_test.go" "integration-cli/docker_cli_history_test.go" "integration-cli/docker_cli_import_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_plugins_test.go" "integration-cli/docker_cli_port_test.go" "integration-cli/docker_cli_ps_test.go" "integration-cli/docker_cli_pull_local_test.go" "integration-cli/docker_cli_run_test.go" "integration-cli/docker_cli_run_unix_test.go" "integration-cli/docker_cli_save_load_test.go" "integration-cli/docker_cli_service_logs_test.go" "integration-cli/docker_cli_swarm_test.go" "integration-cli/docker_cli_userns_test.go" "integration-cli/docker_cli_volume_test.go" "integration-cli/docker_utils_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 6135eec30a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:33 +02:00
Tibor Vass
673cf751ca
rm-gocheck: convert check.Commentf to string - with multiple args
sed -E -i 's#\bcheck.Commentf\(([^,]+),(.*)\)#fmt.Sprintf(\1,\2)#g' \
-- "integration-cli/daemon/daemon.go" "integration-cli/daemon/daemon_swarm.go" "integration-cli/docker_api_containers_test.go" "integration-cli/docker_api_exec_test.go" "integration-cli/docker_api_swarm_node_test.go" "integration-cli/docker_api_swarm_test.go" "integration-cli/docker_cli_attach_unix_test.go" "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_by_digest_test.go" "integration-cli/docker_cli_commit_test.go" "integration-cli/docker_cli_cp_from_container_test.go" "integration-cli/docker_cli_cp_to_container_test.go" "integration-cli/docker_cli_create_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_external_volume_driver_unix_test.go" "integration-cli/docker_cli_history_test.go" "integration-cli/docker_cli_images_test.go" "integration-cli/docker_cli_info_test.go" "integration-cli/docker_cli_inspect_test.go" "integration-cli/docker_cli_links_test.go" "integration-cli/docker_cli_netmode_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_plugins_test.go" "integration-cli/docker_cli_port_test.go" "integration-cli/docker_cli_ps_test.go" "integration-cli/docker_cli_pull_local_test.go" "integration-cli/docker_cli_rmi_test.go" "integration-cli/docker_cli_run_test.go" "integration-cli/docker_cli_run_unix_test.go" "integration-cli/docker_cli_save_load_test.go" "integration-cli/docker_cli_service_create_test.go" "integration-cli/docker_cli_service_logs_test.go" "integration-cli/docker_cli_start_test.go" "integration-cli/docker_cli_swarm_test.go" "integration-cli/docker_cli_userns_test.go" "integration-cli/docker_cli_volume_test.go" "integration-cli/docker_hub_pull_suite_test.go" "integration-cli/docker_utils_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit a2024a5470)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:32 +02:00
Tibor Vass
ed9449a424
rm-gocheck: Contains -> strings.Contains
sed -E -i 's#\bassert\.Assert\(c, (.*), checker\.Contains, (.*)\)$#assert.Assert(c, eg_contains(\1, \2))#g' \
-- "integration-cli/docker_api_containers_test.go" "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_by_digest_test.go" "integration-cli/docker_cli_commit_test.go" "integration-cli/docker_cli_create_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_external_volume_driver_unix_test.go" "integration-cli/docker_cli_history_test.go" "integration-cli/docker_cli_images_test.go" "integration-cli/docker_cli_info_test.go" "integration-cli/docker_cli_info_unix_test.go" "integration-cli/docker_cli_inspect_test.go" "integration-cli/docker_cli_links_test.go" "integration-cli/docker_cli_netmode_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_plugins_test.go" "integration-cli/docker_cli_port_test.go" "integration-cli/docker_cli_prune_unix_test.go" "integration-cli/docker_cli_ps_test.go" "integration-cli/docker_cli_pull_local_test.go" "integration-cli/docker_cli_rmi_test.go" "integration-cli/docker_cli_run_test.go" "integration-cli/docker_cli_run_unix_test.go" "integration-cli/docker_cli_save_load_test.go" "integration-cli/docker_cli_service_create_test.go" "integration-cli/docker_cli_start_test.go" "integration-cli/docker_cli_swarm_test.go" "integration-cli/docker_cli_volume_test.go" \
&& \
go get -d golang.org/x/tools/cmd/eg && dir=$(go env GOPATH)/src/golang.org/x/tools && git -C "$dir" fetch https://github.com/tiborvass/tools handle-variadic && git -C "$dir" checkout 61a94b82347c29b3289e83190aa3dda74d47abbb && go install golang.org/x/tools/cmd/eg \
&& \
/bin/echo -e 'package main\nvar eg_contains func(arg1, arg2 string, extra ...interface{}) bool' > ./integration-cli/eg_helper.go \
&& \
goimports -w ./integration-cli \
&& \
eg -w -t template.contains.go -- ./integration-cli \
&& \
rm -f ./integration-cli/eg_helper.go \
&& \
go run rm-gocheck.go redress '\bassert\.Assert\b.*(\(|,)\s*$' \
 "integration-cli/docker_api_containers_test.go" "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_by_digest_test.go" "integration-cli/docker_cli_commit_test.go" "integration-cli/docker_cli_create_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_external_volume_driver_unix_test.go" "integration-cli/docker_cli_history_test.go" "integration-cli/docker_cli_images_test.go" "integration-cli/docker_cli_info_test.go" "integration-cli/docker_cli_info_unix_test.go" "integration-cli/docker_cli_inspect_test.go" "integration-cli/docker_cli_links_test.go" "integration-cli/docker_cli_netmode_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_plugins_test.go" "integration-cli/docker_cli_port_test.go" "integration-cli/docker_cli_prune_unix_test.go" "integration-cli/docker_cli_ps_test.go" "integration-cli/docker_cli_pull_local_test.go" "integration-cli/docker_cli_rmi_test.go" "integration-cli/docker_cli_run_test.go" "integration-cli/docker_cli_run_unix_test.go" "integration-cli/docker_cli_save_load_test.go" "integration-cli/docker_cli_service_create_test.go" "integration-cli/docker_cli_start_test.go" "integration-cli/docker_cli_swarm_test.go" "integration-cli/docker_cli_volume_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 98f2638fe5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:31 +02:00
Tibor Vass
07b243656c
rm-gocheck: Not(Contains) -> !strings.Contains
sed -E -i 's#\bassert\.Assert\(c, (.*), checker\.Not\(checker\.Contains\), (.*)\)$#assert.Assert(c, !eg_contains(\1, \2))#g' \
-- "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_by_digest_test.go" "integration-cli/docker_cli_create_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_images_test.go" "integration-cli/docker_cli_inspect_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_plugins_test.go" "integration-cli/docker_cli_prune_unix_test.go" "integration-cli/docker_cli_ps_test.go" "integration-cli/docker_cli_pull_local_test.go" "integration-cli/docker_cli_rmi_test.go" "integration-cli/docker_cli_run_test.go" "integration-cli/docker_cli_run_unix_test.go" "integration-cli/docker_cli_save_load_test.go" "integration-cli/docker_cli_start_test.go" "integration-cli/docker_cli_swarm_test.go" "integration-cli/docker_cli_volume_test.go" \
&& \
go get -d golang.org/x/tools/cmd/eg && dir=$(go env GOPATH)/src/golang.org/x/tools && git -C "$dir" fetch https://github.com/tiborvass/tools handle-variadic && git -C "$dir" checkout 61a94b82347c29b3289e83190aa3dda74d47abbb && go install golang.org/x/tools/cmd/eg \
&& \
/bin/echo -e 'package main\nvar eg_contains func(arg1, arg2 string, extra ...interface{}) bool' > ./integration-cli/eg_helper.go \
&& \
goimports -w ./integration-cli \
&& \
eg -w -t template.not_contains.go -- ./integration-cli \
&& \
rm -f ./integration-cli/eg_helper.go \
&& \
go run rm-gocheck.go redress '\bassert\.Assert\b.*(\(|,)\s*$' \
 "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_by_digest_test.go" "integration-cli/docker_cli_create_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_images_test.go" "integration-cli/docker_cli_inspect_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_plugins_test.go" "integration-cli/docker_cli_prune_unix_test.go" "integration-cli/docker_cli_ps_test.go" "integration-cli/docker_cli_pull_local_test.go" "integration-cli/docker_cli_rmi_test.go" "integration-cli/docker_cli_run_test.go" "integration-cli/docker_cli_run_unix_test.go" "integration-cli/docker_cli_save_load_test.go" "integration-cli/docker_cli_start_test.go" "integration-cli/docker_cli_swarm_test.go" "integration-cli/docker_cli_volume_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 4e2e486b23)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:31 +02:00
Tibor Vass
99c1b63197
rm-gocheck: Matches -> cmp.Regexp
sed -E -i '0,/^import "github\.com/ s/^(import "github\.com.*)/\1\nimport "gotest.tools\/assert\/cmp")/' \
-- "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_history_test.go" "integration-cli/docker_cli_links_test.go" \
&& \
sed -E -i '0,/^\t+"github\.com/ s/(^\t+"github\.com.*)/\1\n"gotest.tools\/assert\/cmp"/' \
-- "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_history_test.go" "integration-cli/docker_cli_links_test.go" \
&& \
sed -E -i 's#\bassert\.Assert\(c, (.*), checker\.Matches, (.*)\)$#assert.Assert(c, eg_matches(is.Regexp, \1, \2))#g' \
-- "integration-cli/docker_cli_images_test.go" "integration-cli/docker_api_containers_test.go" \
&& \
sed -E -i 's#\bassert\.Assert\(c, (.*), checker\.Matches, (.*)\)$#assert.Assert(c, eg_matches(cmp.Regexp, \1, \2))#g' \
-- "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_history_test.go" "integration-cli/docker_cli_links_test.go" \
&& \
go get -d golang.org/x/tools/cmd/eg && dir=$(go env GOPATH)/src/golang.org/x/tools && git -C "$dir" fetch https://github.com/tiborvass/tools handle-variadic && git -C "$dir" checkout 61a94b82347c29b3289e83190aa3dda74d47abbb && go install golang.org/x/tools/cmd/eg \
&& \
/bin/echo -e 'package main\nvar eg_matches func(func(cmp.RegexOrPattern, string) cmp.Comparison, interface{}, string, ...interface{}) bool' > ./integration-cli/eg_helper.go \
&& \
goimports -w ./integration-cli \
&& \
eg -w -t template.matches.go -- ./integration-cli \
&& \
rm -f ./integration-cli/eg_helper.go \
&& \
go run rm-gocheck.go redress '\bassert\.Assert\b.*(\(|,)\s*$' \
 "integration-cli/docker_api_containers_test.go" "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_history_test.go" "integration-cli/docker_cli_images_test.go" "integration-cli/docker_cli_links_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit f2c9e391fc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:30 +02:00
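As a rough before/after of the rewrite above (hypothetical test values; the real conversion goes through eg_matches and template.matches.go and may anchor the pattern differently):

```
// Hypothetical example of the shape this rewrite produces; not taken from the
// repository's test files.
package example

import (
	"testing"

	"gotest.tools/assert"
	"gotest.tools/assert/cmp"
)

func TestImageIDFormat(t *testing.T) {
	id := "sha256:4bcdffb2"
	// Before the rewrite (gocheck style): c.Assert(id, checker.Matches, "sha256:[0-9a-f]+")
	assert.Assert(t, cmp.Regexp("sha256:[0-9a-f]+", id))
}
```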
Tibor Vass
e25352a42a
rm-gocheck: run goimports to compile successfully
goimports -w \
-- "./integration-cli/daemon" "./pkg/discovery" "./pkg/discovery/file" "./pkg/discovery/kv" "./pkg/discovery/memory" "./pkg/discovery/nodes" "./integration-cli" \
&& \
 gofmt -w -s \
-- "./integration-cli/daemon" "./pkg/discovery" "./pkg/discovery/file" "./pkg/discovery/kv" "./pkg/discovery/memory" "./pkg/discovery/nodes" "./integration-cli"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 59e55dcdd0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:28 +02:00
Tibor Vass
d523748a4f
rm-gocheck: comment out check.TestingT
sed -E -i 's#([^*])(check\.TestingT\([^\)]+\))#\1/*\2*/#g' \
-- "integration-cli/check_test.go" "pkg/discovery/discovery_test.go" "pkg/discovery/file/file_test.go" "pkg/discovery/kv/kv_test.go" "pkg/discovery/memory/memory_test.go" "pkg/discovery/nodes/nodes_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit eb67bb9fb5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:27 +02:00
Tibor Vass
bbcad73a27
rm-gocheck: comment out check.Suite calls
sed -E -i 's#^([^*])+?((var .*)?check\.Suite\(.*\))#\1/*\2*/#g' \
-- "integration-cli/check_test.go" "integration-cli/docker_cli_external_volume_driver_unix_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_hub_pull_suite_test.go" "pkg/discovery/discovery_test.go" "pkg/discovery/file/file_test.go" "pkg/discovery/kv/kv_test.go" "pkg/discovery/memory/memory_test.go" "pkg/discovery/nodes/nodes_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 81d2a0c389)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:26 +02:00
Tibor Vass
1da0c05e48
rm-gocheck: redress check.Suite calls
go run rm-gocheck.go redress '[^/]\bcheck\.Suite\(.*\{\s*$' \
 "integration-cli/check_test.go" "integration-cli/docker_cli_external_volume_driver_unix_test.go" "integration-cli/docker_cli_network_unix_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 6a8a9738ec)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:26 +02:00
Tibor Vass
7696045c1d
rm-gocheck: True
sed -E -i 's#\bassert\.Assert\(c, (.*), checker\.True#assert.Assert(c, \1#g' \
-- "integration-cli/docker_api_containers_test.go" "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_by_digest_test.go" "integration-cli/docker_cli_cp_from_container_test.go" "integration-cli/docker_cli_cp_to_container_test.go" "integration-cli/docker_cli_create_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_external_volume_driver_unix_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_plugins_test.go" "integration-cli/docker_cli_service_create_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit d0fc8d082d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:25 +02:00
Tibor Vass
cfd0c55d99
rm-gocheck: False
sed -E -i 's#\bassert\.Assert\(c, (.*), checker\.False\b#assert.Assert(c, !\1#g' \
-- "integration-cli/docker_cli_by_digest_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit b17bb1e74a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:24 +02:00
Tibor Vass
e49237dc7d
rm-gocheck: NotNil
sed -E -i 's#\bassert\.Assert\(c, (.*), checker\.NotNil\b#assert.Assert(c, \1 != nil#g' \
-- "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_by_digest_test.go" "integration-cli/docker_cli_create_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_external_volume_driver_unix_test.go" "integration-cli/docker_cli_history_test.go" "integration-cli/docker_cli_import_test.go" "integration-cli/docker_cli_inspect_test.go" "integration-cli/docker_cli_links_test.go" "integration-cli/docker_cli_netmode_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_port_test.go" "integration-cli/docker_cli_ps_test.go" "integration-cli/docker_cli_run_test.go" "integration-cli/docker_cli_service_create_test.go" "integration-cli/docker_cli_start_test.go" "integration-cli/docker_cli_swarm_test.go" "integration-cli/docker_cli_volume_test.go" "pkg/discovery/discovery_test.go" "pkg/discovery/file/file_test.go" "pkg/discovery/kv/kv_test.go" "pkg/discovery/nodes/nodes_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 64a161aa3e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:24 +02:00
Tibor Vass
ef4c63acf6
rm-gocheck: IsNil
sed -E -i 's#\bassert\.Assert\(c, (.*), checker\.IsNil\b#assert.Assert(c, \1 == nil#g' \
-- "integration-cli/docker_api_containers_test.go" "integration-cli/docker_cli_attach_test.go" "integration-cli/docker_cli_attach_unix_test.go" "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_build_unix_test.go" "integration-cli/docker_cli_by_digest_test.go" "integration-cli/docker_cli_cp_from_container_test.go" "integration-cli/docker_cli_cp_to_container_test.go" "integration-cli/docker_cli_create_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_external_volume_driver_unix_test.go" "integration-cli/docker_cli_health_test.go" "integration-cli/docker_cli_history_test.go" "integration-cli/docker_cli_import_test.go" "integration-cli/docker_cli_inspect_test.go" "integration-cli/docker_cli_links_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_plugins_test.go" "integration-cli/docker_cli_port_test.go" "integration-cli/docker_cli_ps_test.go" "integration-cli/docker_cli_pull_local_test.go" "integration-cli/docker_cli_run_test.go" "integration-cli/docker_cli_run_unix_test.go" "integration-cli/docker_cli_save_load_test.go" "integration-cli/docker_cli_service_create_test.go" "integration-cli/docker_cli_swarm_test.go" "integration-cli/docker_cli_userns_test.go" "integration-cli/docker_cli_volume_test.go" "integration-cli/docker_hub_pull_suite_test.go" "integration-cli/docker_utils_test.go" "pkg/discovery/discovery_test.go" "pkg/discovery/file/file_test.go" "pkg/discovery/kv/kv_test.go" "pkg/discovery/memory/memory_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 2743e2d8bc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:23 +02:00
Tibor Vass
7b91af803d
rm-gocheck: HasLen -> assert.Equal + len()
sed -E -i 's#\bassert\.Assert\(c, (.*), checker\.HasLen, (.*)#assert.Equal(c, len(\1), \2#g' \
-- "integration-cli/docker_api_containers_test.go" "integration-cli/docker_cli_by_digest_test.go" "integration-cli/docker_cli_create_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_external_volume_driver_unix_test.go" "integration-cli/docker_cli_import_test.go" "integration-cli/docker_cli_inspect_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_ps_test.go" "integration-cli/docker_cli_pull_local_test.go" "integration-cli/docker_cli_service_create_test.go" "integration-cli/docker_cli_swarm_test.go" "integration-cli/docker_cli_userns_test.go" "pkg/discovery/discovery_test.go" "pkg/discovery/file/file_test.go" "pkg/discovery/kv/kv_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 491ef7b901)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:22 +02:00
Tibor Vass
17e04aa6c2
rm-gocheck: DeepEquals -> assert.DeepEqual
sed -E -i 's#\bassert\.Assert\(c, (.*), checker\.DeepEquals, (.*)#assert.DeepEqual(c, \1, \2#g' \
-- "integration-cli/docker_cli_daemon_test.go" "pkg/discovery/discovery_test.go" "pkg/discovery/file/file_test.go" "pkg/discovery/kv/kv_test.go" "pkg/discovery/memory/memory_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit dd9d28669f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:21 +02:00
Tibor Vass
6dc7846d26
rm-gocheck: Equals -> assert.Equal
sed -E -i 's#\bassert\.Assert\(c, (.*), checker\.Equals, (.*)#assert.Equal(c, \1, \2#g' \
-- "integration-cli/docker_api_containers_test.go" "integration-cli/docker_api_swarm_node_test.go" "integration-cli/docker_cli_attach_test.go" "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_build_unix_test.go" "integration-cli/docker_cli_by_digest_test.go" "integration-cli/docker_cli_commit_test.go" "integration-cli/docker_cli_create_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_external_volume_driver_unix_test.go" "integration-cli/docker_cli_health_test.go" "integration-cli/docker_cli_images_test.go" "integration-cli/docker_cli_import_test.go" "integration-cli/docker_cli_info_test.go" "integration-cli/docker_cli_inspect_test.go" "integration-cli/docker_cli_links_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_plugins_test.go" "integration-cli/docker_cli_port_test.go" "integration-cli/docker_cli_ps_test.go" "integration-cli/docker_cli_rmi_test.go" "integration-cli/docker_cli_run_test.go" "integration-cli/docker_cli_run_unix_test.go" "integration-cli/docker_cli_service_create_test.go" "integration-cli/docker_cli_service_health_test.go" "integration-cli/docker_cli_start_test.go" "integration-cli/docker_cli_swarm_test.go" "integration-cli/docker_cli_userns_test.go" "integration-cli/docker_cli_volume_test.go" "pkg/discovery/discovery_test.go" "pkg/discovery/file/file_test.go" "pkg/discovery/generator_test.go" "pkg/discovery/kv/kv_test.go" "pkg/discovery/nodes/nodes_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 6dc7f4c167)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:21 +02:00
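The True/False/NotNil/IsNil/HasLen/DeepEquals/Equals commits above are all one-line mechanical substitutions of the same kind; taking Equals as representative, a hypothetical assertion changes shape like this:

```
// Hypothetical example of the mechanical checker.Equals rewrite; real call
// sites in integration-cli follow the same pattern.
package example

import (
	"testing"

	"gotest.tools/assert"
)

func TestContainerState(t *testing.T) {
	state := "running"
	// Before: assert.Assert(c, state, checker.Equals, "running")
	assert.Equal(t, state, "running")
}
```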
Tibor Vass
b3d02e7f3c
rm-gocheck: Not(Matches) -> !cmp.Regexp
sed -E -i 's#\bassert\.Assert\(c, (.*), checker\.Not\(checker\.Matches\), (.*)\)#assert.Assert(c, !is.Regexp("^"+\2+"$", \1)().Success())#g' \
-- "integration-cli/docker_cli_images_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 10208e4d60)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:20 +02:00
Tibor Vass
819baeb430
rm-gocheck: Not(Equals) -> a != b
sed -E -i 's#\bassert\.Assert\(c, (.*), checker\.Not\(checker\.Equals\), (.*)#assert.Assert(c, \1 != \2#g' \
-- "integration-cli/docker_api_containers_test.go" "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_build_unix_test.go" "integration-cli/docker_cli_by_digest_test.go" "integration-cli/docker_cli_create_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_inspect_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_prune_unix_test.go" "integration-cli/docker_cli_ps_test.go" "integration-cli/docker_cli_run_test.go" "integration-cli/docker_cli_save_load_test.go" "integration-cli/docker_cli_service_create_test.go" "integration-cli/docker_cli_swarm_test.go" "integration-cli/docker_cli_volume_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 0fa116fa8f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:19 +02:00
Tibor Vass
fa8d7029a7
rm-gocheck: Not(IsNil) -> != nil
sed -E -i 's#\bassert\.Assert\(c, (.*), checker\.Not\(checker\.IsNil\)#assert.Assert(c, \1 != nil#g' \
-- "integration-cli/docker_api_containers_test.go" "integration-cli/docker_cli_inspect_test.go" "integration-cli/docker_cli_service_create_test.go" "integration-cli/docker_cli_volume_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 74747b35e1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:19 +02:00
Tibor Vass
c8da7fbd25
rm-gocheck: normalize to use checker
sed -E -i 's#\bcheck\.(Equals|DeepEquals|HasLen|IsNil|Matches|Not|NotNil)\b#checker.\1#g' \
-- "integration-cli/docker_api_containers_test.go" "integration-cli/docker_cli_attach_test.go" "integration-cli/docker_cli_attach_unix_test.go" "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_build_unix_test.go" "integration-cli/docker_cli_by_digest_test.go" "integration-cli/docker_cli_create_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_external_volume_driver_unix_test.go" "integration-cli/docker_cli_health_test.go" "integration-cli/docker_cli_images_test.go" "integration-cli/docker_cli_inspect_test.go" "integration-cli/docker_cli_netmode_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_port_test.go" "integration-cli/docker_cli_run_test.go" "integration-cli/docker_cli_run_unix_test.go" "integration-cli/docker_cli_save_load_test.go" "integration-cli/docker_cli_service_health_test.go" "integration-cli/docker_cli_swarm_test.go" "integration-cli/docker_cli_volume_test.go" "integration-cli/docker_utils_test.go" "pkg/discovery/discovery_test.go" "pkg/discovery/file/file_test.go" "pkg/discovery/generator_test.go" "pkg/discovery/kv/kv_test.go" "pkg/discovery/memory/memory_test.go" "pkg/discovery/nodes/nodes_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 230f7bcc02)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:18 +02:00
Tibor Vass
99deded542
rm-gocheck: ErrorMatches -> assert.ErrorContains
sed -E -i 's#\bassert\.Assert\(c, (.*), check\.ErrorMatches,#assert.ErrorContains(c, \1,#g' \
-- "pkg/discovery/kv/kv_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit a7d144fb34)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:17 +02:00
Tibor Vass
64a928a3d4
rm-gocheck: check.C -> testing.T
sed -E -i 's#\bcheck\.C\b#testing.T#g' \
-- "integration-cli/check_test.go" "integration-cli/daemon/daemon.go" "integration-cli/daemon/daemon_swarm.go" "integration-cli/daemon_swarm_hack_test.go" "integration-cli/docker_api_attach_test.go" "integration-cli/docker_api_build_test.go" "integration-cli/docker_api_build_windows_test.go" "integration-cli/docker_api_containers_test.go" "integration-cli/docker_api_containers_windows_test.go" "integration-cli/docker_api_exec_resize_test.go" "integration-cli/docker_api_exec_test.go" "integration-cli/docker_api_images_test.go" "integration-cli/docker_api_inspect_test.go" "integration-cli/docker_api_logs_test.go" "integration-cli/docker_api_network_test.go" "integration-cli/docker_api_stats_test.go" "integration-cli/docker_api_swarm_node_test.go" "integration-cli/docker_api_swarm_service_test.go" "integration-cli/docker_api_swarm_test.go" "integration-cli/docker_api_test.go" "integration-cli/docker_cli_attach_test.go" "integration-cli/docker_cli_attach_unix_test.go" "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_build_unix_test.go" "integration-cli/docker_cli_by_digest_test.go" "integration-cli/docker_cli_commit_test.go" "integration-cli/docker_cli_cp_from_container_test.go" "integration-cli/docker_cli_cp_test.go" "integration-cli/docker_cli_cp_to_container_test.go" "integration-cli/docker_cli_cp_to_container_unix_test.go" "integration-cli/docker_cli_cp_utils_test.go" "integration-cli/docker_cli_create_test.go" "integration-cli/docker_cli_daemon_plugins_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_events_test.go" "integration-cli/docker_cli_events_unix_test.go" "integration-cli/docker_cli_exec_test.go" "integration-cli/docker_cli_exec_unix_test.go" "integration-cli/docker_cli_external_volume_driver_unix_test.go" "integration-cli/docker_cli_health_test.go" "integration-cli/docker_cli_history_test.go" "integration-cli/docker_cli_images_test.go" "integration-cli/docker_cli_import_test.go" "integration-cli/docker_cli_info_test.go" "integration-cli/docker_cli_info_unix_test.go" "integration-cli/docker_cli_inspect_test.go" "integration-cli/docker_cli_links_test.go" "integration-cli/docker_cli_login_test.go" "integration-cli/docker_cli_logout_test.go" "integration-cli/docker_cli_logs_test.go" "integration-cli/docker_cli_netmode_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_plugins_logdriver_test.go" "integration-cli/docker_cli_plugins_test.go" "integration-cli/docker_cli_port_test.go" "integration-cli/docker_cli_proxy_test.go" "integration-cli/docker_cli_prune_unix_test.go" "integration-cli/docker_cli_ps_test.go" "integration-cli/docker_cli_pull_local_test.go" "integration-cli/docker_cli_pull_test.go" "integration-cli/docker_cli_push_test.go" "integration-cli/docker_cli_registry_user_agent_test.go" "integration-cli/docker_cli_restart_test.go" "integration-cli/docker_cli_rmi_test.go" "integration-cli/docker_cli_run_test.go" "integration-cli/docker_cli_run_unix_test.go" "integration-cli/docker_cli_save_load_test.go" "integration-cli/docker_cli_save_load_unix_test.go" "integration-cli/docker_cli_search_test.go" "integration-cli/docker_cli_service_create_test.go" "integration-cli/docker_cli_service_health_test.go" "integration-cli/docker_cli_service_logs_test.go" "integration-cli/docker_cli_service_scale_test.go" "integration-cli/docker_cli_sni_test.go" "integration-cli/docker_cli_start_test.go" "integration-cli/docker_cli_stats_test.go" "integration-cli/docker_cli_swarm_test.go" 
"integration-cli/docker_cli_swarm_unix_test.go" "integration-cli/docker_cli_top_test.go" "integration-cli/docker_cli_update_unix_test.go" "integration-cli/docker_cli_userns_test.go" "integration-cli/docker_cli_v2_only_test.go" "integration-cli/docker_cli_volume_test.go" "integration-cli/docker_deprecated_api_v124_test.go" "integration-cli/docker_deprecated_api_v124_unix_test.go" "integration-cli/docker_hub_pull_suite_test.go" "integration-cli/docker_utils_test.go" "integration-cli/events_utils_test.go" "integration-cli/fixtures_linux_daemon_test.go" "integration-cli/utils_test.go" "pkg/discovery/discovery_test.go" "pkg/discovery/file/file_test.go" "pkg/discovery/generator_test.go" "pkg/discovery/kv/kv_test.go" "pkg/discovery/memory/memory_test.go" "pkg/discovery/nodes/nodes_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 1d92789b4f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:16 +02:00
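The parameter-type swap above is what lets the suite methods build without gocheck; a hypothetical suite method before and after:

```
// Hypothetical suite method showing the effect of the check.C -> testing.T swap.
// Before: func (s *DockerSuite) TestVersion(c *check.C) { ... }
package example

import "testing"

type DockerSuite struct{}

func (s *DockerSuite) TestVersion(c *testing.T) {
	// *testing.T provides Log/Skip/Fatal, which the converted assertions rely on.
	c.Log("running TestVersion")
}
```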
Tibor Vass
4a358d0763
rm-gocheck: check.C -> testing.B for BenchmarkXXX
sed -E -i 's#( Benchmark[^\(]+\([^ ]+ \*)check\.C\b#\1testing.B#g' \
-- "integration-cli/benchmark_test.go" "integration-cli/docker_cli_logs_bench_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 6ecff64d03)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:16 +02:00
Tibor Vass
a11079a449
rm-gocheck: c.Assert(...) -> assert.Assert(c, ...)
sed -E -i 's#\bc\.Assert\(#assert.Assert(c, #g' \
-- "integration-cli/docker_api_containers_test.go" "integration-cli/docker_api_swarm_node_test.go" "integration-cli/docker_cli_attach_test.go" "integration-cli/docker_cli_attach_unix_test.go" "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_build_unix_test.go" "integration-cli/docker_cli_by_digest_test.go" "integration-cli/docker_cli_commit_test.go" "integration-cli/docker_cli_cp_from_container_test.go" "integration-cli/docker_cli_cp_to_container_test.go" "integration-cli/docker_cli_create_test.go" "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_external_volume_driver_unix_test.go" "integration-cli/docker_cli_health_test.go" "integration-cli/docker_cli_history_test.go" "integration-cli/docker_cli_images_test.go" "integration-cli/docker_cli_import_test.go" "integration-cli/docker_cli_info_test.go" "integration-cli/docker_cli_info_unix_test.go" "integration-cli/docker_cli_inspect_test.go" "integration-cli/docker_cli_links_test.go" "integration-cli/docker_cli_netmode_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_plugins_test.go" "integration-cli/docker_cli_port_test.go" "integration-cli/docker_cli_prune_unix_test.go" "integration-cli/docker_cli_ps_test.go" "integration-cli/docker_cli_pull_local_test.go" "integration-cli/docker_cli_rmi_test.go" "integration-cli/docker_cli_run_test.go" "integration-cli/docker_cli_run_unix_test.go" "integration-cli/docker_cli_save_load_test.go" "integration-cli/docker_cli_service_create_test.go" "integration-cli/docker_cli_service_health_test.go" "integration-cli/docker_cli_start_test.go" "integration-cli/docker_cli_swarm_test.go" "integration-cli/docker_cli_userns_test.go" "integration-cli/docker_cli_volume_test.go" "integration-cli/docker_hub_pull_suite_test.go" "integration-cli/docker_utils_test.go" "pkg/discovery/discovery_test.go" "pkg/discovery/file/file_test.go" "pkg/discovery/generator_test.go" "pkg/discovery/kv/kv_test.go" "pkg/discovery/memory/memory_test.go" "pkg/discovery/nodes/nodes_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 1f69c62540)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:15 +02:00
Tibor Vass
e44c6dc109
rm-gocheck: redress multiline c.Assert calls
go run rm-gocheck.go redress '\bc\.Assert\b.*(,|\()\s*$' \
 "integration-cli/docker_cli_daemon_test.go" "integration-cli/docker_cli_network_unix_test.go" "integration-cli/docker_cli_port_test.go" "integration-cli/docker_cli_run_test.go" "integration-cli/docker_cli_swarm_test.go" "integration-cli/docker_cli_volume_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 36e7001b99)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:14 +02:00
Tibor Vass
59a9eda8b6
rm-gocheck: normalize c.Check to c.Assert
sed -E -i 's#\bc\.Check\(#c.Assert(#g' \
-- "integration-cli/docker_cli_build_test.go" "integration-cli/docker_cli_health_test.go" "integration-cli/docker_cli_run_test.go"

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 5879446de9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:14 +02:00
Tibor Vass
6abf32fd52
add rm-gocheck.go script and eg templates
The following "rm-gocheck:"-prefixed commits were generated by
go run rm-gocheck.go --commit

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 8f64611c83)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:13 +02:00
Tibor Vass
02545bf320
prepare for rm-gocheck script
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 931edfe5e9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:12 +02:00
Tibor Vass
e71e7d8246
integration-cli: fix tests that are silently succeeding when they should not compile
Tests fixed in this patch used to compile and pass successfully,
despite checking if non-nullable types are not nil.

These would have become compile errors once go-check is removed.

About TestContainerAPIPsOmitFields:
Basically what happened is that this test got refactored to start using the API types
and API client library instead of custom types and stdlib's http functions.
This test used to cover an API regression, and could arguably have been a unit test.
However, because PublicPort and IP are not nullable types, this test became useless.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit e07a3f2917)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:11 +02:00
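The failure mode described above is easy to reproduce: gocheck's NotNil receives the value through an interface{}, so the check compiles and always passes for non-nullable types, while the equivalent plain Go comparison does not even compile. A minimal sketch, with hypothetical types named after the fields the commit mentions:

```
// Minimal sketch of the "silently succeeding" check described above; the type
// and field names are hypothetical.
package main

import "fmt"

type port struct {
	PublicPort uint16 // value type: can never be nil
	IP         string // value type: can never be nil
}

func main() {
	p := port{}

	// Through an interface{}-based checker (how gocheck's NotNil sees it), the
	// comparison compiles and is always true, so the assertion never fails:
	var v interface{} = p.PublicPort
	fmt.Println(v != nil) // true

	// The plain Go equivalent is rejected by the compiler:
	// fmt.Println(p.PublicPort != nil) // invalid operation: mismatched types uint16 and nil
}
```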
Kir Kolyshkin
99799a9ab5
TestContainersAPICreateMountsCreate: minor optimization
Don't use two-stage mount in TestContainersAPICreateMountsCreate();
apparently it was written before mount.Mount() could accept propagation
flags.

While at it, remove rw as this is the default.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 1cfdb2ffb8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:11 +02:00
Kir Kolyshkin
9f03b73dbd
pkg/mount/Make*: optimize
The only option we supply is either BIND or a mount propagation flag,
so it makes sense to specify the flag value directly, rather than using
parseOptions() every time.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit ec248fe61d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:10 +02:00
Kir Kolyshkin
32685e9c2b
daemon/mountVolumes(): eliminate MakeRPrivate call
It is sufficient to add "rprivate" to mount flags.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit a6773f69f2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:09 +02:00
Kir Kolyshkin
d2b470142c
daemon/mountVolumes: no need to specify fstype
For bind mounts, fstype argument to mount(2) is ignored.
Usual convention is either empty string or "none".

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 4e65b17ac4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:09 +02:00
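The convention above maps directly onto the syscall: with MS_BIND the fstype argument to mount(2) is ignored by the kernel, so callers pass "" or "none". A minimal Linux-only sketch using golang.org/x/sys/unix, with hypothetical paths:

```
// Minimal sketch, Linux only, hypothetical paths: for a bind mount the fstype
// argument is ignored, so the empty string (or "none") is passed by convention.
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	src, dst := "/srv/data", "/mnt/data"
	if err := unix.Mount(src, dst, "", unix.MS_BIND, ""); err != nil {
		log.Fatalf("bind mount %s -> %s: %v", src, dst, err)
	}
}
```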
Kir Kolyshkin
8b328aa9b4
pkg/mount: Mount: minor optimization
Eliminate double call to parseOptions() from Mount()

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 80fce834ad)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:52:08 +02:00
Kir Kolyshkin
dc4884a9fb
pkg/mount: MakeMount: minor optimization
Current code in MakeMount parses /proc/self/mountinfo twice:
first in call to Mounted(), then in call to Mount(). Use
ForceMount() to eliminate such double parsing.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit aa60541877)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 23:51:59 +02:00
Kirill Kolyshkin
e64c635c31
Merge pull request #381 from thaJeztah/19.03_backport_hn3000_fix_39981
[19.03 backport] Remove minsky and stallman
2019-09-26 10:47:55 -07:00
Kirill Kolyshkin
9eec36e483
Merge pull request #382 from thaJeztah/19.03_backport_test_fixes
[19.03 backport] Testing and Jenkinsfile changes [step 1]
2019-09-26 10:43:26 -07:00
Tibor Vass
dfadf729d3
Jenkinsfile: move integration step cleanup to amd64 where it was intended to be
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit f3d8b8ae74)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-26 18:16:49 +02:00
Samuel Karp
183cac25f9
awslogs: fix flaky TestLogBlocking unit test
TestLogBlocking is intended to test that the Log method blocks by
default.  It does this by mocking out the internals of the
awslogs.logStream and replacing one of its internal channels with one
that is controlled by the test.  The call to Log occurs inside a
goroutine.  Go may or may not schedule the goroutine immediately and the
blocking may or may not be observed outside the goroutine immediately
due to decisions made by the Go runtime.  This change adds a small
timeout for test failure so that the Go runtime has the opportunity to
run the goroutine before the test fails.

Signed-off-by: Samuel Karp <skarp@amazon.com>
(cherry picked from commit fd94bae0b8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:43:08 +02:00
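The general shape of that change is waiting a short, bounded time for the goroutine to be scheduled instead of failing on the very first check. A hedged sketch of the pattern (not the actual awslogs test code, whose channels differ):

```
// Sketch of bounding a test's wait for a goroutine with a small timeout,
// rather than failing immediately; not the actual awslogs test code.
package example

import (
	"testing"
	"time"
)

func TestGoroutineGetsScheduled(t *testing.T) {
	reached := make(chan struct{})
	go func() {
		// ... the code under test would run here before signalling ...
		close(reached)
	}()

	select {
	case <-reached:
		// the Go runtime scheduled the goroutine in time
	case <-time.After(30 * time.Millisecond):
		t.Fatal("goroutine was not scheduled within the timeout")
	}
}
```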
Stefan Scherer
168e23a2f5
Zap a fixed folder, add build number to folder inside
Signed-off-by: Stefan Scherer <stefan.scherer@docker.com>
(cherry picked from commit 4866207543)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:13 +02:00
Sebastiaan van Stijn
35e9ee82a6
integration-cli: Skip TestAPIImagesSaveAndLoad on RS3 and older
I've seen this test fail a number of times recently on RS1

Looking at failures, the test is taking a long time to run (491.77s, which is
more than 8 minutes), so perhaps it's just too slow on RS1, which may be
because we switched to a different base image, or because we're now running
on different machines.

Compared to RS5 (still slow, but a lot faster);

```
--- PASS: Test/DockerSuite/TestAPIImagesSaveAndLoad (146.25s)
```

```
 --- FAIL: Test/DockerSuite/TestAPIImagesSaveAndLoad (491.77s)
     cli.go:45: assertion failed:
         Command:  d:\CI-5\CI-93d2cf881\binary\docker.exe inspect --format {{.Id}} sha256:69e7c1ff23be5648c494294a3808c0ea3f78616fad67bfe3b10d3a7e2be5ff02
         ExitCode: 1
         Error:    exit status 1
         Stdout:

         Stderr:   Error: No such object: sha256:69e7c1ff23be5648c494294a3808c0ea3f78616fad67bfe3b10d3a7e2be5ff02

         Failures:
         ExitCode was 1 expected 0
         Expected no error
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5adaf52953)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:12 +02:00
Sebastiaan van Stijn
06cca53fa0
Dockerfile: remove GOMETALINTER_OPTS
This `ENV` was added to the Dockerfile in b96093fa56,
when the repository used per-architecture Dockerfiles, and some architectures needed
a different configuration.

Now that we use a multi-arch Dockerfile, and CI uses a Jenkinsfile, we can remove
this `ENV` from the Dockerfile, and set it in CI instead if needed.

Also updated the wording and fixed linting issues in hack/validate/gometalinter

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a464a3d51f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:12 +02:00
Sebastiaan van Stijn
de3a04a65d
Windows: skip flaky TestLogBlocking
This test frequently fails on Windows RS1 (mainly), so skipping it
for now on Windows;

```
ok  	github.com/docker/docker/daemon/logger	0.525s	coverage: 43.0% of statements
time="2019-09-09T20:37:35Z" level=info msg="Trying to get region from EC2 Metadata"
time="2019-09-09T20:37:36Z" level=info msg="Log stream already exists" errorCode=ResourceAlreadyExistsException logGroupName= logStreamName= message= origError="<nil>"
--- FAIL: TestLogBlocking (0.02s)
    cloudwatchlogs_test.go:313: Expected to be able to read from stream.messages but was unable to
time="2019-09-09T20:37:36Z" level=error msg=Error
time="2019-09-09T20:37:36Z" level=error msg="Failed to put log events" errorCode=InvalidSequenceTokenException logGroupName=groupName logStreamName=streamName message="use token token" origError="<nil>"
time="2019-09-09T20:37:36Z" level=error msg="Failed to put log events" errorCode=DataAlreadyAcceptedException logGroupName=groupName logStreamName=streamName message="use token token" origError="<nil>"
time="2019-09-09T20:37:36Z" level=info msg="Data already accepted, ignoring error" errorCode=DataAlreadyAcceptedException logGroupName=groupName logStreamName=streamName message="use token token"
FAIL
coverage: 78.2% of statements
FAIL	github.com/docker/docker/daemon/logger/awslogs	0.630s
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6c75c86240)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:11 +02:00
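Skipping the test on one platform, as described above, is typically a one-line guard at the top of the test. A sketch of the shape (the exact message and helper used in the commit may differ):

```
// Sketch of a platform-conditional skip; message and placement are illustrative.
package example

import (
	"runtime"
	"testing"
)

func TestLogBlocking(t *testing.T) {
	if runtime.GOOS == "windows" {
		t.Skip("flaky on Windows CI (mainly RS1); skipped until the root cause is addressed")
	}
	// ... the rest of the test runs on other platforms ...
}
```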
Sebastiaan van Stijn
c2b84fd0e8
integration-cli: add daemon.StartNodeWithBusybox function
Starting the daemon should not load the busybox image again
in most cases, so add a new `StartNodeWithBusybox` function
to be clear that this one loads the busybox image, and use
`StartNode()` for cases where loading the busybox image is
not needed.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ead3f4e7c8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:11 +02:00
Sebastiaan van Stijn
55aadb3a8f
integration-cli: swarm.RestartNode(); don't load busybox again
The daemon was already created and started with the busybox
image loaded, so there's no need to load the image again.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8fc23588f1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:10 +02:00
Brian Goff
eba485a3c6
Fix Service TTY test so signal handlers work
Noticed this test container not exiting correctly while debugging
another issue. Before this change, signals were being eaten by bash; now
they are handled by top. This cuts the test time in half since it
doesn't have to wait for docker to SIGKILL it.

Old:
PASS: docker_cli_swarm_test.go:840: DockerSwarmSuite.TestSwarmServiceTTY	18.997s

New:
PASS: docker_cli_swarm_test.go:840: DockerSwarmSuite.TestSwarmServiceTTY	6.293s

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit e6c5563ae9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:09 +02:00
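The underlying issue above is the classic shell-as-PID-1 problem: if the shell does not exec the long-running process, SIGTERM is delivered to the shell and effectively ignored, and the daemon ends up waiting for the SIGKILL timeout. A purely illustrative comparison, not the exact command the test uses:

```
// Illustration only: two hypothetical container commands. In the first, the
// shell stays as PID 1 and swallows SIGTERM; in the second, exec replaces the
// shell with top, so the signal reaches top and the container exits promptly.
package example

func commandWhereShellEatsSignals() []string {
	return []string{"/bin/sh", "-c", "sleep 1; top"}
}

func commandWhereTopHandlesSignals() []string {
	return []string{"/bin/sh", "-c", "sleep 1; exec top"}
}
```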
Sebastiaan van Stijn
0459d8c7a6
integration: TestInspect(): use swarm.RunningTasksCount
Instead of using the locally crafted `serviceContainerCount()` utility

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f874f8b6fd)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:09 +02:00
Sebastiaan van Stijn
6fdd837110
hack/ci/windows.ps1: fix Go version check (due to trailing .0)
The Windows Dockerfile downloads the Go binaries, which (unlike
the Golang images) do not have a trailing `.0` in their version.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 61450a651b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:08 +02:00
Sebastiaan van Stijn
884551acd1
Dockerfile.windows: trim .0 from Go versions
This was an oversight when changing the Dockerfile to use a build-arg;
the Windows Dockerfile downloads the Go binaries, which never have a
trailing `.0`.

This patch makes sure that the trailing zero (if any) is removed.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c5bd6e3dc7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:07 +02:00
Sebastiaan van Stijn
d53f67be35
hack/ci/windows.ps1: stop tailing logs after stopping the daemon
There's already a step in "Nuke Everything", but let's stop it
after stopping the daemon as well.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e1636ad5fa)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:07 +02:00
Sebastiaan van Stijn
b9f2e88286
hack/ci/windows.ps1: add support for DOCKER_STORAGE_OPTS
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b6f596c411)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:06 +02:00
Sebastiaan van Stijn
a1638563f7
integration-cli: update TestCreateWithWorkdir for Hyper-V isolation
Hyper-V isolated containers do not allow file-operations on a
running container. This test currently uses `docker cp` to verify
that the WORKDIR was automatically created, which cannot be done
while the container is running.

```
FAIL: docker_cli_create_test.go:302: DockerSuite.TestCreateWithWorkdir

assertion failed:
Command:  d:\CI-7\CI-f3768a669\binary\docker.exe cp foo:c:\home\foo\bar c:\tmp
ExitCode: 1
Error:    exit status 1
Stdout:
Stderr:   Error response from daemon: filesystem operations against a running Hyper-V container are not supported

Failures:
ExitCode was 1 expected 0
Expected no error
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ac9ef840ef)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:05 +02:00
Peter Salvatore
5d74bd7ef9
Jenkinsfile hack for auto-cancellation.
This change will cause Jenkins to only build the
latest HEAD of a PR branch, cancelling any
previous builds that may already be in progress.
This will decrease feedback time and help mitigate
resource contention.

Signed-off-by: Peter Salvatore <peter@psftw.com>
(cherry picked from commit 85bcc524ea)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:04 +02:00
Sebastiaan van Stijn
e101935ae8
Jenkinsfile: Windows: enabled debug-mode for daemon under test
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1fbadd76b7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:04 +02:00
Sebastiaan van Stijn
a365f0745d
Jenkinsfile: create bundles for Windows stages
CI already stores the logs of the test daemon, so we might as well
store them as artifacts

```
[2019-09-03T12:49:39.835Z] INFO: Tidying up at end of run
[2019-09-03T12:49:39.835Z] INFO: Saving daemon under test log (d:\CI-2\CI-3593e7622\dut.out) to C:\windows\TEMP\CIDUT.out
[2019-09-03T12:49:39.835Z] INFO: Saving daemon under test log (d:\CI-2\CI-3593e7622\dut.err) to C:\windows\TEMP\CIDUT.err
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6ee61f5493)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:03 +02:00
Sebastiaan van Stijn
ff26a23314
hack/ci/windows.ps1 print all environment variables to check how Jenkins runs this script
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7eb522a235)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:02 +02:00
Sebastiaan van Stijn
4329550a74
hack/ci/windows.ps1: explicitly set exit code to result of tests
Trying to see if this helps with the cleanup step exiting in CI, but
Jenkins continuing to wait for the script to end afterwards.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8e8c52c4ab)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:02 +02:00
Jintao Zhang
d58829550e
TestCase: use icmd.RunCmd instead of icmd.StartCmd
Use `cli.Docker` instead of `dockerCmdWithResult`.

Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit e6fce00ec8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:01 +02:00
Sebastiaan van Stijn
b116452a03
docker-py: skip flaky AttachContainerTest::test_attach_no_stream
Seen failing a couple of times:

```
[2019-09-02T08:40:15.796Z] =================================== FAILURES ===================================
[2019-09-02T08:40:15.796Z] __________________ AttachContainerTest.test_attach_no_stream ___________________
[2019-09-02T08:40:15.796Z] tests/integration/api_container_test.py:1250: in test_attach_no_stream
[2019-09-02T08:40:15.796Z]     assert output == 'hello\n'.encode(encoding='ascii')
[2019-09-02T08:40:15.796Z] E   AssertionError: assert b'' == b'hello\n'
[2019-09-02T08:40:15.796Z] E     Right contains more items, first extra item: 104
[2019-09-02T08:40:15.796Z] E     Use -v to get the full diff
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ce77a804b8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:00 +02:00
Sebastiaan van Stijn
24181cd265
TestBuildSquashParent: fix non-standard comparison
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 32f1c65162)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:36:00 +02:00
Sebastiaan van Stijn
ae7858ff2c
integration-cli: fix some bashisms in Dockerfiles
`TestBuildBuildTimeArgEnv` and `TestBuildBuildTimeArgEmptyValVariants` were
using non-standard comparisons.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit dbde4786e4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:59 +02:00
Sebastiaan van Stijn
69da36f39e
hack/make/binary-daemon: fix some linting issues
- Add quotes to prevent word splitting in `cp` statement (SC2046)
- Replace legacy backticks with `$()`
- Replace `which` with `command -v` (SC2230)
- Fix incorrect (`==`) comparison

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 70d3677825)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:58 +02:00
Stefan Scherer
93b38b8008
Fix docker inspect for dutimgVersion
Signed-off-by: Stefan Scherer <stefan.scherer@docker.com>
(cherry picked from commit 52a53e2587)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:58 +02:00
Sebastiaan van Stijn
e8e2666705
integration-cli: getContainerCount() fix trimming prefix
caught by staticcheck:

```
integration-cli/docker_utils_test.go:66:29: SA1024: cutset contains duplicate characters (staticcheck)
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 02c9b0674f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:57 +02:00
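SA1024 flags exactly the confusion above: strings.Trim takes a set of characters, not a literal prefix, so a path-like cutset with repeated characters is almost always a mistake. A small sketch of the difference, with hypothetical values:

```
// Hypothetical values illustrating strings.Trim (character set) versus
// strings.TrimPrefix (literal prefix), the distinction behind staticcheck SA1024.
package main

import (
	"fmt"
	"strings"
)

func main() {
	s := "/containers/nginx"

	// Trim strips any leading/trailing characters found in the cutset, so it
	// also eats the leading "n" of "nginx":
	fmt.Println(strings.Trim(s, "/containers/"))       // "ginx"
	fmt.Println(strings.TrimPrefix(s, "/containers/")) // "nginx"
}
```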
Sebastiaan van Stijn
45f49fe5c3
TestDispatch: refactor to use subtests again, and fix linting (structcheck)
Instead of using an `initDispatchTestCases()` function, declare the test-table
inside `TestDispatch` itself, and run the tests as subtests.

```
[2019-08-27T15:14:51.072Z] builder/dockerfile/evaluator_test.go:18:2: `name` is unused (structcheck)
[2019-08-27T15:14:51.072Z] 	name, expectedError string
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a3f9cb5b63)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:56 +02:00
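The refactor above is the standard table-driven subtest layout: the cases live in a slice inside the test function and each one runs under t.Run, so failures are reported per case and no state leaks between them. A minimal sketch with hypothetical cases (not the real TestDispatch table):

```
// Minimal table-driven subtest sketch in the spirit of the refactor above;
// the case names, inputs, and dispatch stand-in are hypothetical.
package example

import (
	"errors"
	"strings"
	"testing"
)

// dispatch is a hypothetical stand-in for the code under test.
func dispatch(line string) error {
	if line == "ENV" {
		return errors.New("ENV requires at least two arguments")
	}
	return nil
}

func TestDispatchSketch(t *testing.T) {
	testCases := []struct {
		name          string
		input         string
		expectedError string
	}{
		{name: "env needs arguments", input: "ENV", expectedError: "ENV requires"},
		{name: "maintainer ok", input: "MAINTAINER moby", expectedError: ""},
	}

	for _, tc := range testCases {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			err := dispatch(tc.input)
			if tc.expectedError == "" {
				if err != nil {
					t.Fatalf("unexpected error: %v", err)
				}
				return
			}
			if err == nil || !strings.Contains(err.Error(), tc.expectedError) {
				t.Fatalf("expected error containing %q, got %v", tc.expectedError, err)
			}
		})
	}
}
```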
Sebastiaan van Stijn
1d91898ca6
integration: windows.ps1: turn defender error into a warning
Some integration tests are known to fail if Windows Defender is
enabled. On the machines that run our CI, defender is disabled
for that reason.

Contributors likely will have defender enabled, and because of
that are currently not able to run the integration tests.

This patch changes the ERROR into a WARNING, so that contributors
can still run (a limited set of) the integration tests, but get
informed that some may fail.

We should make this requirement more specific, and only skip
tests that are known to require defender to be disabled, but
while that's not yet in place, let's print a warning instead.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 31885181fc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:56 +02:00
Sebastiaan van Stijn
79e5950b2f
pkg/term: refactor TestEscapeProxyRead
- use subtests to make it clearer what the individual test-cases
  are, and to prevent tests from depending on values set by the
  previous test(s).
- remove redundant messages in assert (gotest.tools already prints
  a useful message if assertions fail).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 556d26c07d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:55 +02:00
Vitaly Ostrosablin
6d7d877c73
Fix testcase name
TestBuildMulitStageResetScratch testcase was actually meant to be
TestBuildMultiStageResetScratch

Signed-off-by: Vitaly Ostrosablin <tmp6154@yandex.ru>
(cherry picked from commit c266d8fe56)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:54 +02:00
Sebastiaan van Stijn
b8b8bcb8bf
Dockerfile: update CRIU to v3.12
New features

- build CRIU with Android NDK
- C/R of
  - IP RAW sockets
  - lsm: dump and restore any SELinux process label
  - support restoring ghost files on readonly mounts

Bugfixes

 - Do not lock network if running in the host network namespace
- Fix RPC configuration file handling
- util: don't leak file descriptors to third-party tools
- small fixes here and there

Improvements

- travis: switch to the Ubuntu Xenial
- travis-ci: Enable ia32 tests
- Many improvements and bug fixes in the libcriu
  - Changes in the API and ABI (SONAME increased from 1 to 2)

full diff: https://github.com/checkpoint-restore/criu/compare/v3.11...v3.12

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 00ad0222ce)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:53 +02:00
Sebastiaan van Stijn
08573e2920
Jenkinsfile: add TESTDEBUG
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d723643dc3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:53 +02:00
Sebastiaan van Stijn
5d4f5db76c
integration: improve package- and filename for junit.xml
Generate more unique names, based on architecture and test-suite name.

Clean up the path to this integration test to create a useful package name.
"$dir" can be either absolute (/go/src/github.com/docker/docker/integration/foo)
or relative (./integration/foo). To account for both, first we strip the
absolute path, then any leading periods and slashes.

For the package-name, we use periods as separator instead of slashes, to be more
in-line with Java package names (which is what junit.xml was originally designed
for).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f007b0150a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:52 +02:00
Sebastiaan van Stijn
48e8f18495
integration: test2json: enable timestamps to fix zero-time test durations
Without these options set, test2json does not include a `Time`
field in the generated JSON;

    {"Action":"run","Test":"TestCgroupNamespacesBuild"}
    {"Action":"output","Test":"TestCgroupNamespacesBuild","Output":"=== RUN   TestCgroupNamespacesBuild\n"}
    {"Action":"output","Test":"TestCgroupNamespacesBuild","Output":"--- PASS: TestCgroupNamespacesBuild (1.70s)\n"}
    ...
    {"Action":"pass","Test":"TestCgroupNamespacesBuild"}

As a result, `gotestsum` was not able to calculate test-duration, and
reported `time="0.000000"` for all tests;

    <testcase classname="amd64.integration.build" name="TestCgroupNamespacesBuild" time="0.000000"></testcase>

With this patch applied:

    {"Time":"2019-08-23T22:42:41.644361357Z","Action":"run","Package":"amd64.integration.build","Test":"TestCgroupNamespacesBuild"}
    {"Time":"2019-08-23T22:42:41.644367647Z","Action":"output","Package":"amd64.integration.build","Test":"TestCgroupNamespacesBuild","Output":"=== RUN   TestCgroupNamespacesBuild\n"}
    {"Time":"2019-08-23T22:42:44.926933252Z","Action":"output","Package":"amd64.integration.build","Test":"TestCgroupNamespacesBuild","Output":"--- PASS: TestCgroupNamespacesBuild (3.28s)\n"}
    ...
    {"Time":"2019-08-23T22:42:44.927003836Z","Action":"pass","Package":"amd64.integration.build","Test":"TestCgroupNamespacesBuild","Elapsed":3.28}

Which now correctly reports the test's duration:

    <testcase classname="amd64.integration.build" name="TestCgroupNamespacesBuild" time="3.280000"></testcase>

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d2e00d62e2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:52 +02:00
Sebastiaan van Stijn
517ebe626c
integration: use gotestsum to generate junit.xml and go-test-report.json
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f3be6b346f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:51 +02:00
Sebastiaan van Stijn
14d561eb1c
integration: simplify parallel run destination
'Namespace' parallel runs by bind-mounting a different directory
in the container, instead of making the tests running inside
the container aware of the namespaced location.

This makes it transparent to the tests, and slightly reduces
complexity.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3262a69be6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:50 +02:00
SataQiu
1da2e90b56
fix some spelling mistakes
Signed-off-by: SataQiu <qiushida@beyondcent.com>
(cherry picked from commit f6226a2a56)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:49 +02:00
Eli Uriegas
316390891c
hack: Remove inContainer check, it wasn't useful
The inContainer check isn't really useful anymore.

Even though it was said that we shouldn't rely on its existence back in
2016, we're now in 2019 and this thing still exists, so we should just
rely on it now to check whether or not we're in a container.

Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
(cherry picked from commit f5cd8fdd44)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:48 +02:00
Brian Goff
24395d55fc
Better logging for swarm tests
Call helper for starting swarm agents and add some logging with daemon
id's when joining the swarm.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit b0fe0dff7a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 21:35:29 +02:00
Andrew Hsu
63f2e107b3
Merge pull request #372 from thaJeztah/19.03_backport_bump_libnetwork
[19.03 backport] bump libnetwork to 96bcc0dae898308ed659c5095526788a602f4726
2019-09-25 09:38:51 -07:00
Sebastiaan van Stijn
a768bf8673
integration-cli: remove redundant "testrequires"
The `DockerDaemonSuite.SetUpTest` already checks for Linux and a local daemon;

```
func (s *DockerDaemonSuite) SetUpTest(c *check.C) {
	testRequires(c, DaemonIsLinux, testEnv.IsLocalDaemon)
	s.d = daemon.New(c, dockerBinary, dockerdBinary, testdaemon.WithEnvironment(testEnv.Execution))
}
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7f37d99ef5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:33:51 +02:00
Drew Erny
2ebfdfd66c
Retry service updates on out of sequence errors
Code retrying service update operations when receiving "update out of
sequence" errors was removed because of a misunderstanding, which has
made tests flaky. This re-adds the "CmdRetryOutOfSequence" method, and
uses it in TestSwarmPublishAdd to avoid flaky behavior.

Signed-off-by: Drew Erny <drew.erny@docker.com>
(cherry picked from commit 1de914695b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:33:17 +02:00
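The re-added retry boils down to a bounded loop that re-runs the update only when the daemon reports the optimistic-locking conflict. A hedged sketch of the shape (hypothetical helper; not the actual CmdRetryOutOfSequence implementation):

```
// Hypothetical sketch of retrying a docker CLI command when the daemon reports
// "update out of sequence"; not the actual CmdRetryOutOfSequence helper.
package example

import (
	"fmt"
	"os/exec"
	"strings"
)

func runWithRetry(args ...string) (string, error) {
	const maxAttempts = 5
	var out string
	var err error
	for i := 0; i < maxAttempts; i++ {
		b, e := exec.Command("docker", args...).CombinedOutput()
		out, err = string(b), e
		// Only retry the specific optimistic-locking conflict; any other
		// outcome (success or a different error) is returned as-is.
		if err == nil || !strings.Contains(out, "update out of sequence") {
			return out, err
		}
	}
	return out, fmt.Errorf("still out of sequence after %d attempts: %w", maxAttempts, err)
}
```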
Tonis Tiigi
1204f3a77c
integration-cli: increase healthcheck timeout
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 8c9362857f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:32:45 +02:00
Sebastiaan van Stijn
1d795b53d3
integration: run build session tests on non-experimental
The session endpoint is no longer experimental since
01c9e7082e, so we don't
need to start an experimental daemon.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit becd29c665)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:32:13 +02:00
Deep Debroy
7e76438537
Be more conservative for Windows in TestFrequency for Splunk
Signed-off-by: Deep Debroy <ddebroy@docker.com>
(cherry picked from commit a5c420ac54)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:31:30 +02:00
Sebastiaan van Stijn
bf212c5b33
DockerSwarmSuite lock portIndex to work around race
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c096225e8e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:29:48 +02:00
Brian Goff
eeeb2e941d
Fix Microsecond -> Millisecond.
A bit too quick on the trigger on some text completion I think...

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 5d818213ff)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:29:20 +02:00
Brian Goff
05c096a1ac
Don't log initial test daemon ping failures
This is just noise due to timing. I picked `> 2` based on
logs from tests; I've seen there's always 1 or 2.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 15675e28f1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:28:27 +02:00
Sebastiaan van Stijn
ad8327f2ce
integration: fix cleanup of raft data
The directory used for storage was either changed or new directories
were added.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6a64a4deec)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:27:49 +02:00
Brian Goff
34418110ec
Add (hidden) flags to set containerd namespaces
This allows our tests, which all share a containerd instance, to be a
bit more isolated by setting the containerd namespaces to the generated
daemon ID's rather than the default namespaces.

This came about because I found in some cases we had test daemons
failing to start (really very slow to start) because it was (seemingly)
processing events from other tests.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 24ad2f486d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:26:26 +02:00
Sebastiaan van Stijn
e286096089
integration-cli: remove unused requirements utils
Removes some test functions that were unused:

- bridgeNfIP6tables
- ambientCapabilities (added to support #26979, which was reverted in #27737)
- overlay2Supported

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c887b09abc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:25:53 +02:00
Sebastiaan van Stijn
a63a02fefd
integration-cli: remove defaultSleepImage constant
Both Linux and Windows now use busybox, so no need to keep a
constant for this.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 27f432ca57)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:25:20 +02:00
Sebastiaan van Stijn
f76cb3e6d5
integration-cli: remove ExecSupport check
All current versions of Docker support exec, so no need
to check for this.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7204341950)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:24:48 +02:00
Sebastiaan van Stijn
edeff03134
Integration: MACVlan add missing import comment and build-tag
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 316e16618f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:23:14 +02:00
Sebastiaan van Stijn
8c8de170d2
Integration: remove redundant kernel version check for MACVlan
The daemon requires kernel 3.10 or up to start, so there's no need
to check if the kernel is 3.8 or up.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 691eb14256)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:23:08 +02:00
Sebastiaan van Stijn
1710bba5c3
Integration: exclude IPVlan test-suite on Windows
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4060a7026c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:22:16 +02:00
Sebastiaan van Stijn
0378afaf5f
Integration: IPVlan add missing import comment
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 93b28677bf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:22:11 +02:00
Sebastiaan van Stijn
0c8bc0b57a
Integration: remove "experimental" option for IPVLAN test-daemons
IPVLAN is no longer experimental since 3ab093d567,
so there's no need to set this option.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit dae9bac675)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:22:05 +02:00
Sebastiaan van Stijn
c95330420c
Integration: remove unneeded platform check for IPVLAN tests
These tests require a local daemon, and are not built on Windows

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1e4bd2623a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 17:21:57 +02:00
hn3000
589f437b06
Remove minsky and stallman
Their inclusion is no longer defensible.
closes #39981

Signed-off-by: Harald Niesche <harald@niesche.de>
(cherry picked from commit 77d3c68f97)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-25 01:57:40 +02:00
Sebastiaan van Stijn
559be42fc2
bump libnetwork to 96bcc0dae898308ed659c5095526788a602f4726
full diff: 92d1fbe1eb...96bcc0dae8

changes included:

- docker/libnetwork#2429 Updating IPAM config with results from HNS create network call
  - addresses moby/moby#38358
- docker/libnetwork#2450 Always configure iptables forward policy
  - related to moby/moby#14041 and docker/libnetwork#1526

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 75477f0b3c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-24 20:45:32 +02:00
Andrew Hsu
dda2b4454f
Merge pull request #293 from thaJeztah/19.03_backport_test_restart
[19.03 backport] Improve select for daemon restart tests
2019-09-24 11:40:18 -07:00
Andrew Hsu
0ff52c285d
Merge pull request #289 from thaJeztah/19.03_backport_bump_gorilla_mux
[19.03 backport] bump gorilla/mux v1.7.2
2019-09-24 11:36:24 -07:00
Andrew Hsu
09f8810272
Merge pull request #356 from thaJeztah/19.03_backport_fix_windows_errortype
[19.03 backport] Windows: fix error-type for starting a running container
2019-09-24 11:34:21 -07:00
Andrew Hsu
f4f8feafe7
Merge pull request #361 from thaJeztah/19.03_backport_libcontainerd_events_wait
[19.03 backport] Sleep before restarting event processing
2019-09-24 11:32:35 -07:00
Andrew Hsu
29db7cb98b
Merge pull request #376 from thaJeztah/19.03_backport_fix_flaky_addr_pool_init_test
[19.03 backport] Fix flaky TestServiceWithDefaultAddressPoolInit
2019-09-24 11:27:30 -07:00
Sebastiaan van Stijn
cf298d3073
Merge pull request #378 from kolyshkin/19.03-backport-log-max-file-1-follow
[19.03 backport] logger: fix follow logs for max-file=1
2019-09-24 20:25:22 +02:00
Andrew Hsu
53b9d440b8
Merge pull request #373 from tonistiigi/19.03-buildkit
[19.03] vendor: update buildkit for 19.03
2019-09-23 15:43:25 -07:00
Kir Kolyshkin
1e13f66fbb logger: fix follow logs for max-file=1
In case jsonlogfile is used with max-file=1 and max-size set,
log rotation is not performed; instead, the log file is closed
and re-opened with O_TRUNC.

This situation is not handled by the log reader in follow mode,
leaving the log reader stuck forever.

This situation (file close/reopen) could be handled in waitRead(),
but the fsnotify library chose not to listen to or deliver this event
(IN_CLOSE_WRITE in inotify lingo).

So, we have to handle this by checking the file size upon receiving
io.EOF from the log reader and comparing it with the size received
earlier. If the new size is less than the old one, the file was
truncated and we need to seek to its beginning.

Fixes #39235.
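
As an illustration, a minimal sketch of this size check using only the standard library; the `follower` type and its fields are hypothetical, not the daemon's actual reader state:

```go
package follow

import (
	"io"
	"os"
)

// follower is a hypothetical stand-in for the json-file log reader state.
type follower struct {
	f        *os.File
	lastSize int64
}

// handleEOF is called when the reader hits io.EOF. If the file shrank since
// the last check, it was truncated (closed and re-opened with O_TRUNC), so we
// seek back to the beginning instead of waiting for data that never comes.
func (w *follower) handleEOF() error {
	fi, err := w.f.Stat()
	if err != nil {
		return err
	}
	if fi.Size() < w.lastSize {
		if _, err := w.f.Seek(0, io.SeekStart); err != nil {
			return err
		}
	}
	w.lastSize = fi.Size()
	return nil
}
```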

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 9cd24ba605)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-09-23 11:29:52 -07:00
Andrew Hsu
9a9ff44418
Merge pull request #296 from thaJeztah/19.03_backport_exec_hang
[19.03 backport] Handle blocked I/O of exec'd processes
2019-09-23 10:05:26 -07:00
Arko Dasgupta
218af8c7bd
Move defer method to the top right after New is called
Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
(cherry picked from commit a65dee30fc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-23 18:40:51 +02:00
Arko Dasgupta
a24fddc2ad
Fix flaky TestServiceWithDefaultAddressPoolInit
1. This commit replaces serviceRunningCount with
swarm.RunningTasksCount to accurately check that the
service is running with the expected number of instances.
serviceRunningCount was only checking the ServiceList
and was not verifying that the tasks were running
(see the polling sketch after this list).

This adds a safe barrier to execute docker network inspect
commands for overlay networks which get created
asynchronously via Swarm

2. Make sure client connections are closed

3. Make sure every service and network name is unique

4. Make sure services and networks are cleaned up
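
A rough sketch of the polling this relies on, using a hypothetical `runningTasks` callback rather than the real `swarm.RunningTasksCount` helper:

```go
package swarmtest

import (
	"fmt"
	"time"
)

// waitForRunningTasks polls until the service reports the expected number of
// running tasks, or gives up after the timeout. runningTasks is a hypothetical
// callback that queries the Swarm API for the service's running task count.
func waitForRunningTasks(runningTasks func() (int, error), want int, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		got, err := runningTasks()
		if err == nil && got == want {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %d running tasks (last=%d, err=%v)", want, got, err)
		}
		time.Sleep(100 * time.Millisecond)
	}
}
```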

Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
(cherry picked from commit f3a3ea0d3c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-23 18:40:48 +02:00
selansen
6d9666c8a0
TestServiceWithDefaultAddressPoolInit
Looks like TestServiceWithDefaultAddressPoolInit is failing
randomly in CI. I am not able to reproduce the issue locally,
but this has been reported a few times. So I tried to modify the
code to see if I can fix the random failure.

Signed-off-by: selansen <elango.siva@docker.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 88578aa9e9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-23 18:40:46 +02:00
Andrew Hsu
c5754a7329
Merge pull request #281 from thaJeztah/19.03_backport_fix_golint_again
[19.03 backport] Integration: change signatures to fix golint warnings
2019-09-23 09:37:43 -07:00
Andrew Hsu
c27f11fa2e
Merge pull request #340 from thaJeztah/19.03_backport_bump_grpc
[19.03 backport] bump google.golang.org/grpc v1.23.0 (CVE-2019-9512, CVE-2019-9514, CVE-2019-9515)
2019-09-23 09:32:43 -07:00
Andrew Hsu
09b72e0be4
Merge pull request #335 from thaJeztah/19.03_backport_dev190815
[19.03 backport] fix docker rmi getting stuck
2019-09-23 09:30:56 -07:00
Tonis Tiigi
b71e1008a5 vendor: update buildkit for 19.03
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-09-23 09:23:35 -07:00
Sebastiaan van Stijn
0d6d5b392a
Revert "Fixing integration test"
This reverts commit 8fca769bd5.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-23 18:22:42 +02:00
Tibor Vass
51f390dd79
integration: get tests to compile again
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit a281289515)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-23 18:20:56 +02:00
Sebastiaan van Stijn
168132b632
integration: change testGraphDriver signature to fix linting
Line 441: warning: context.Context should be the first parameter of a function (golint)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit dac5710b68)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-23 18:20:54 +02:00
Sebastiaan van Stijn
3f4338cf04
integration: change createAmbiguousNetworks signature to fix linting
Line 30: warning: context.Context should be the first parameter of a function (golint)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 123e29f44a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-23 18:20:52 +02:00
Sebastiaan van Stijn
3ade7ca12b
integration: change container.Run signature to fix linting
Line 59: warning: context.Context should be the first parameter of a function (golint)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9f9b4290b9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-23 18:20:50 +02:00
Sebastiaan van Stijn
9c49308cce
integration: change container.Create signature to fix linting
```
Line 25: warning: context.Context should be the first parameter of a function (golint)
Line 44: warning: context.Context should be the first parameter of a function (golint)
Line 52: warning: context.Context should be the first parameter of a function (golint)
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b4c46b0dac)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-23 18:20:48 +02:00
Sebastiaan van Stijn
91757722a9
integration: change network.CreateNoError signature to fix linting
Line 30: warning: context.Context should be the first parameter of a function (golint)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit caec45a37f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-23 18:20:46 +02:00
Sebastiaan van Stijn
235fa0eee8
Revert "integration: have container.Create call compile"
This reverts commit 8f4b96f19e.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-23 18:20:43 +02:00
Andrew Hsu
08af35b250
Merge pull request #375 from thaJeztah/19.03_revert_bump_swarmkit
[19.03] Revert "bump swarmkit to f35d9100f2c6ac810cc8d7de6e8f93dcc7a42d29"
2019-09-23 09:18:46 -07:00
Sebastiaan van Stijn
ef4366ee89
Revert "[19.03] bump swarmkit to f35d9100f2c6ac810cc8d7de6e8f93dcc7a42d29"
This reverts commit 02465c9f9d.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-21 10:32:58 +02:00
Brian Goff
3833f2a60b
Sleep before restarting event processing
This prevents restarting event processing in a tight loop.
You can see this with the following steps:

```terminal
$ containerd &
$ dockerd --containerd=/run/containerd/containerd.sock &
$ pkill -9 containerd
```

At this point you will be spammed with logs such as:

```
ERRO[2019-07-12T22:29:37.318761400Z] failed to get event                           error="rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\"" module=libcontainerd namespace=plugins.moby
```

Without this change you can quickly end up with gigabytes of log data.
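
A minimal sketch of the idea, with a hypothetical `processEventStream` function and a fixed one-second pause standing in for the daemon's actual delay:

```go
package events

import (
	"context"
	"log"
	"time"
)

// processLoop restarts event processing whenever it fails, but sleeps between
// attempts so a dead containerd does not cause a tight error loop that floods
// the logs. processEventStream is a hypothetical blocking call that returns
// when the event stream breaks.
func processLoop(ctx context.Context, processEventStream func(context.Context) error) {
	for {
		if err := processEventStream(ctx); err != nil {
			log.Printf("failed to get event: %v", err)
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(time.Second): // back off before reconnecting
		}
	}
}
```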

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 1acaf2aabe)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-20 19:12:19 +02:00
Michael Crosby
78f4d6b84f
Improve select for daemon restart tests
This improves the select logic for the restart tests, and for starting the
daemon in general. With the way the ticker and select were set up, it was
possible for only the timeout to be displayed and not the wait errors.
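
A sketch of the improved pattern with hypothetical names: remember the last wait error seen on each tick and report it alongside the timeout:

```go
package daemontest

import (
	"fmt"
	"time"
)

// waitStarted polls check() on every tick and, if the timeout fires first,
// reports the last error seen instead of only "timed out". check is a
// hypothetical probe for whether the daemon is up yet.
func waitStarted(check func() error, timeout time.Duration) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	deadline := time.After(timeout)

	var lastErr error
	for {
		select {
		case <-ticker.C:
			if lastErr = check(); lastErr == nil {
				return nil
			}
		case <-deadline:
			return fmt.Errorf("timed out waiting for daemon to start (last error: %v)", lastErr)
		}
	}
}
```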

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 402433a5e4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-20 19:11:40 +02:00
Kir Kolyshkin
e489130717
daemon/ProcessEvent: make sure to cancel the contexts
Reported by govet linter:

> daemon/monitor.go:57:9: lostcancel: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak (govet)
> 			ctx, _ := context.WithTimeout(context.Background(), 2*time.Second)
> 			     ^
> daemon/monitor.go:128:9: lostcancel: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak (govet)
> 			ctx, _ := context.WithTimeout(context.Background(), 2*time.Second)
> 			     ^

Fixes: b5f288 ("Handle blocked I/O of exec'd processes")
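
A minimal illustration of the govet-clean pattern:

```go
package main

import (
	"context"
	"time"
)

func main() {
	// Discarding the CancelFunc (ctx, _ := context.WithTimeout(...)) leaks the
	// context's resources; keeping it and deferring the call does not.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	_ = ctx // use ctx for the containerd call in the real code
}
```
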
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 53cbf1797b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-20 19:10:55 +02:00
Michael Crosby
34b31d0ee0
Handle blocked I/O of exec'd processes
This is the second part to
https://github.com/containerd/containerd/pull/3361 and will help process
delete not block forever when the process exists but the I/O was
inherited by a subprocess that lives on.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit b5f28865ef)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-20 19:10:52 +02:00
Sebastiaan van Stijn
486953e2ff
bump gorilla/mux v1.7.2
full diff: https://github.com/gorilla/mux/compare/v1.7.0...v1.7.2

includes:

 - gorilla/mux#457 adding Router.Name to create new Route
 - gorilla/mux#447 host:port matching does not require a :port to be specified

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 25b451e01b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-20 19:07:11 +02:00
Haichao Yang
8306f1e31e
fix docker rmi getting stuck
Signed-off-by: Haichao Yang <yang.haichao@zte.com.cn>
(cherry picked from commit d3f64846a2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-20 19:06:32 +02:00
Sebastiaan van Stijn
879fba29d5
Merge pull request #358 from thaJeztah/19.03_backport_exec_failure_event
[19.03 backport] Send exec exit event on failures
2019-09-20 19:05:06 +02:00
Andrew Hsu
421a3aa737
Merge pull request #368 from tiborvass/19.03-remove-warning-on-v2schema1-pull
[19.03 backport] distribution: modify warning logic when pulling v2 schema1 manifests
2019-09-19 18:35:13 -07:00
Andrew Hsu
23c7134bad
Merge pull request #371 from andrewhsu/clean
[19.03] Jenkinsfile: ensure all containers are cleaned up
2019-09-19 17:56:39 -07:00
Andrew Hsu
2399b7a91b
Merge pull request #369 from thaJeztah/19.03_bump_swarmkit
[19.03] bump swarmkit to f35d9100f2c6ac810cc8d7de6e8f93dcc7a42d29
2019-09-19 17:48:36 -07:00
Andrew Hsu
7cb08ca538
Merge pull request #334 from thaJeztah/19.03_backport_switch_creack_pty
[19.03 backport] switch kr/pty to creack/pty v1.1.7
2019-09-19 17:46:32 -07:00
Andrew Hsu
2aa5322638
Merge pull request #352 from thaJeztah/19.03_backport_detect_invalid_linked_container
[19.03 backport] Return "invalid parameter" when linking to non-existing container
2019-09-19 17:45:09 -07:00
Andrew Hsu
3f7e68e894
Merge pull request #370 from andrewhsu/skip-rs1
[19.03] skip win-RS1 on PRs
2019-09-19 17:37:00 -07:00
Tibor Vass
4b5c535be9 Jenkinsfile: ensure all containers are cleaned up
By convention, containers spawned by jenkins jobs have the name:
docker-pr${BUILD_NUMBER}

That works fine for jobs with a single container. This commit cleans up
when multiple containers are spawned with the convention that their names
share the same "docker-pr${BUILD_NUMBER}-" prefix.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit f470698c2c)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2019-09-19 23:42:20 +00:00
Andrew Hsu
80727e5a92
Merge pull request #365 from thaJeztah/19.03_backport_pull_platform_regression
[19.03 backport] Fix error handling of incorrect --platform values
2019-09-19 16:31:27 -07:00
Andrew Hsu
ff3dc8a7c4
Merge pull request #354 from thaJeztah/19.03_backport_revert_remove_TestSearchCmdOptions
[19.03 backport] Revert "Remove TestSearchCmdOptions test"
2019-09-19 16:24:58 -07:00
Andrew Hsu
b2823e4609
Merge pull request #366 from thaJeztah/19.03_backport_update_to_go_1.12.9
[19.03 backport] Bump Golang 1.12.9
2019-09-19 16:22:21 -07:00
Andrew Hsu
a18eea2702 run integration-cli stages on s390x and ppc64le if not a PR check
Essentially, run on merge to the target branch, which may or may not be
the master branch. It could be the 19.03 branch, for example.

See: https://jenkins.io/doc/book/pipeline/syntax/

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit e653943e8b)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2019-09-19 21:41:36 +00:00
Andrew Hsu
36fc8f5809 skip win-RS1 on PRs unless the checkbox is checked
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 039eb05ac8)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2019-09-19 21:41:10 +00:00
Sebastiaan van Stijn
ca22ec44ba Jenkinsfile: shorten stage names for consistency and to fit Jenkins UI
The Blue Ocean UI truncates names, which makes it hard to distinguish
which Windows stage is RS1 or RS5. This patch shortens those names so that they
fit in the Blue Ocean UI.

Other stages and parameters were renamed as well to better reflect what they're running:

Before             | After
-------------------|--------------------------------
janky              | amd64
power              | ppc64le
power-master       | ppc64le integration-cli
windowsRS1         | win-RS1
windowsRS5-process | win-RS5
z                  | s390x
z-master           | s390x integration-cli

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

(cherry picked from commit c18f793f40)

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2019-09-19 21:41:05 +00:00
Sebastiaan van Stijn
02465c9f9d
[19.03] bump swarmkit to f35d9100f2c6ac810cc8d7de6e8f93dcc7a42d29
full diff: bbe341867e...f35d9100f2

changes included:

- docker/swarmkit#2891 [19.03 backport] Remove hardcoded IPAM config subnet value for ingress network
  - backport of docker/swarmkit#2890 Remove hardcoded IPAM config subnet value for ingress network
  - fixes [ENGORC-2651] Specifying --default-addr-pool for docker swarm init is not picked up by ingress network

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-19 09:21:46 +02:00
Tibor Vass
f9232e3f11 distribution: modify warning logic when pulling v2 schema1 manifests
The warning on pull was incorrectly asking users to contact registry admins.
It is kept on push, however.

Pulling manifest lists with v2 schema1 manifests will not be supported, so
there is a warning for those, but the wording was changed to suggest that the
repository author upgrade.

Finally, a milder warning on regular pull is kept ONLY for DockerHub users,
in order to encourage moving away from schema1.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 647dfe99a5)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-09-19 00:46:14 +00:00
Sebastiaan van Stijn
6de2bd28df
Merge pull request #360 from thaJeztah/19.03_backport_fix_missing_dir_cleanup_file
[19.03 backport] Ensure parent dir exists for mount cleanup file
2019-09-18 13:55:38 +02:00
Kirill Kolyshkin
2b10608f16
Merge pull request #363 from thaJeztah/19.03_backport_64align
[19.03 backport] atomic: patch 64bit alignment on 32bit systems
2019-09-18 11:30:26 +03:00
Andrew Hsu
c416072ced
Merge pull request #330 from thaJeztah/19.03_backport_bump_libnetwork
[19.03 backport] bump libnetwork to 92d1fbe1eb0883cf11d283cea8e658275146411d
2019-09-16 16:12:22 -07:00
Sebastiaan van Stijn
5196dc65e7
bump hashicorp/go-sockaddr v1.0.2
full diff: 6d291a969b...v1.0.2

Relevant changes:
  - hashicorp/go-sockaddr#25 Add android os
  - hashicorp/go-sockaddr#28 Add go.mod

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 492945c2d5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 18:34:24 +02:00
Sebastiaan van Stijn
8abb005598
bump hashicorp/go-multierror v1.0.0, add errwrap v1.0.0
full diff: fcdddc395d...v1.0.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 720b66ee1f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 18:34:22 +02:00
Sebastiaan van Stijn
44ca36c7cf
bump libnetwork to 92d1fbe1eb0883cf11d283cea8e658275146411d
full diff: 09cdcc8c0e...92d1fbe1eb

relevant changes included (omitting some changes that were added _and_ reverted in this bump):

- docker/libnetwork#2433 Fix parseIP error when parseIP before get AddressFamily
  - fixes docker/libnetwork#2431 parseIP Error ip=[172 17 0 2 0 0 0 0 0 0 0 0 0 0 0 0]
  - https://github.com/docker/libnetwork/issues/2289
  - this was a regression introduced in docker/libnetwork#2416 Fix hardcoded AF_INET for IPv6 address handling
- docker/libnetwork#2440 Bump hashicorp go-sockaddr v1.0.2, go-multierror v1.0.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit bab58c1924)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 18:34:20 +02:00
Sebastiaan van Stijn
b6190c2713
bump libnetwork to 09cdcc8c0eab3946c2d70e8f6225b05baf1e90d1
full diff: 83d30db536...09cdcc8c0e

changes included:

- docker/libnetwork#2416 Fix hardcoded AF_INET for IPv6 address handling
- docker/libnetwork#2411 Macvlan network handles netlabel.Internal wrong
  - fixes docker/libnetwork#2410 Macvlan network handles netlabel.Internal wrong
- docker/libnetwork#2414 Allow network with --config-from to be --internal
  - fixes docker/libnetwork#2413 Network with --config-from does not honor --internal
- docker/libnetwork#2351 Use fewer modprobes
  - relates to moby/moby#38930 Use fewer modprobes
- docker/libnetwork#2415 Support dockerd and system restarts for ipvlan and macvlan networks
  - carry of docker/libnetwork#2295 phantom ip/mac vlan network after a powercycle
  - fixes docker/libnetwork#1743 Phantom docker network

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6f234db9fe)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 18:34:18 +02:00
CarlosEDP
ca89db221f
Update modules to support riscv64
Signed-off-by: CarlosEDP <me@carlosedp.com>
(cherry picked from commit 9eaab0425b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 18:34:16 +02:00
Sebastiaan van Stijn
f3e1aff81d
bump libnetwork. vishvananda/netlink 1.0, vishvananda/netns
full diffs:

- fc5a7d91d5...62a13ae87c
- b2de5d10e3...v1.0.0
- 604eaf189e...13995c7128ccc8e51e9a6bd2b551020a27180abd

notable changes in libnetwork:

- docker/libnetwork#2366 Bump vishvananda/netlink to 1.0.0
- docker/libnetwork#2339 controller: Check if IPTables is enabled for arrangeUserFilterRule
  - addresses docker/libnetwork#2158 dockerd when run with --iptables=false modifies iptables by adding DOCKER-USER
  - addresses moby/moby#35777 With iptables=false dockerd still creates DOCKER-USER chain and rules
  - addresses docker/for-linux#136 dockerd --iptables=false adds DOCKER-USER chain and modify FORWARD chain anyway
- docker/libnetwork#2394 Make DNS records and queries case-insensitive
  - addresses moby/moby#28689 Embedded DNS is case-sensitive
  - addresses moby/moby#21169 hostnames with new networking are case-sensitive

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 344b093258)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 18:34:13 +02:00
Jintao Zhang
ad1e6bae4f
Bump Golang 1.12.9
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit 01d6a56699)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 16:20:10 +02:00
Sebastiaan van Stijn
b262d40daf
Merge pull request #345 from thaJeztah/19.03_backport_swarm_flaky
[19.03 backport] integration-cli: fix swarm tests flakiness
2019-09-16 16:00:57 +02:00
Sebastiaan van Stijn
8ab5e2a004
Add regression tests for invalid platform status codes
Before we handled containerd errors, using an invalid platform produced a 500 status:

```bash
curl -v \
  -X POST \
  --unix-socket /var/run/docker.sock \
  "http://localhost:2375/v1.40/images/create?fromImage=hello-world&platform=foobar&tag=latest" \
  -H "Content-Type: application/json"
```

```
* Connected to localhost (docker.sock) port 80 (#0)
> POST /v1.40/images/create?fromImage=hello-world&platform=foobar&tag=latest HTTP/1.1
> Host: localhost:2375
> User-Agent: curl/7.54.0
> Accept: */*
> Content-Type: application/json
>
< HTTP/1.1 500 Internal Server Error
< Api-Version: 1.40
< Content-Length: 85
< Content-Type: application/json
< Date: Mon, 15 Jul 2019 15:25:44 GMT
< Docker-Experimental: true
< Ostype: linux
< Server: Docker/19.03.0-rc2 (linux)
<
{"message":"\"foobar\": unknown operating system or architecture: invalid argument"}
```

That problem is now fixed, and the API correctly returns a 4xx status:

```bash
curl -v \
  -X POST \
  --unix-socket /var/run/docker.sock \
  "http://localhost:2375/v1.40/images/create?fromImage=hello-world&platform=foobar&tag=latest" \
  -H "Content-Type: application/json"
```

```
* Connected to localhost (/var/run/docker.sock) port 80 (#0)
> POST /v1.40/images/create?fromImage=hello-world&platform=foobar&tag=latest HTTP/1.1
> Host: localhost:2375
> User-Agent: curl/7.52.1
> Accept: */*
> Content-Type: application/json
>
< HTTP/1.1 400 Bad Request
< Api-Version: 1.41
< Content-Type: application/json
< Docker-Experimental: true
< Ostype: linux
< Server: Docker/dev (linux)
< Date: Mon, 15 Jul 2019 15:13:42 GMT
< Content-Length: 85
<
{"message":"\"foobar\": unknown operating system or architecture: invalid argument"}
* Curl_http_done: called premature == 0
```

This patch adds tests to validate the behaviour
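
A minimal sketch of such a check against the daemon's unix socket, using only the standard library rather than the actual integration-test helpers:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Talk HTTP over the local daemon socket; the host in the URL is ignored.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}

	url := "http://localhost/v1.40/images/create?fromImage=hello-world&platform=foobar&tag=latest"
	resp, err := client.Post(url, "application/json", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// An invalid platform should be rejected as a client error (4xx), not a 500.
	fmt.Println("status:", resp.StatusCode, "is 4xx:", resp.StatusCode >= 400 && resp.StatusCode < 500)
}
```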

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9d1b4f5fc3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 15:37:47 +02:00
Sebastiaan van Stijn
7b2d5556d5
errdefs: convert containerd errors to the correct status code
In situations where the containerd error is consumed directly
and not received over gRPC, errors were not translated.

This patch converts containerd errors to the correct HTTP
status code.
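
A sketch of the kind of mapping involved, using containerd's errdefs predicates to pick an HTTP status (the daemon's real code routes through its own errdefs package rather than returning raw status codes):

```go
package errconv

import (
	"net/http"

	cerrdefs "github.com/containerd/containerd/errdefs"
)

// statusFromContainerdError maps a containerd error onto an HTTP status code,
// so callers that consume containerd errors directly (not over gRPC) still
// produce 4xx responses for client mistakes instead of a blanket 500.
func statusFromContainerdError(err error) int {
	switch {
	case err == nil:
		return http.StatusOK
	case cerrdefs.IsInvalidArgument(err):
		return http.StatusBadRequest
	case cerrdefs.IsNotFound(err):
		return http.StatusNotFound
	case cerrdefs.IsAlreadyExists(err):
		return http.StatusConflict
	default:
		return http.StatusInternalServerError
	}
}
```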

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4a516215e2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 15:37:37 +02:00
Tonis Tiigi
776c2bd113
atomic: patch 64bit alignment on 32bit systems
causes panic on armv7
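
For context, a minimal sketch of the constraint: on 32-bit platforms such as armv7, 64-bit values used with sync/atomic must be 64-bit aligned, which is only guaranteed for the first word of an allocated struct, so field order matters:

```go
package counters

import "sync/atomic"

// progress keeps the atomically-updated uint64 first so it is 64-bit aligned
// even on 32-bit platforms; placing it after the bool could misalign it and
// panic on armv7 (see the alignment note in the sync/atomic documentation).
type progress struct {
	current uint64 // must stay first for atomic access on 32-bit ARM
	done    bool
}

func (p *progress) add(n uint64) uint64 {
	return atomic.AddUint64(&p.current, n)
}
```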

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit af2e82d054)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 15:32:14 +02:00
Brian Goff
c67edc5d61
Ensure parent dir exists for mount cleanup file
While investigating a test failure, I found this in the logs:

```
time="2019-07-04T15:06:32.622506760Z" level=warning msg="Error while setting daemon root propagation, this is not generally critical but may cause some functionality to not work or fallback to less desirable behavior" dir=/go/src/github.com/docker/docker/bundles/test-integration/d1285b8250308/root error="error writing file to signal mount cleanup on shutdown: open /tmp/dxr/d1285b8250308/unmount-on-shutdown: no such file or directory"
```

This path is generated from the daemon's exec-root, which appears to not
exist yet. This change just makes sure it exists before we try to write
a file.
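
A minimal sketch of the fix with a hypothetical path; the point is simply to create the parent directory before writing the marker file:

```go
package main

import (
	"io/ioutil"
	"os"
	"path/filepath"
)

func main() {
	cleanupFile := "/tmp/example-exec-root/unmount-on-shutdown" // hypothetical path

	// Ensure the parent directory (the daemon's exec-root) exists before
	// writing the cleanup marker, otherwise the write fails with ENOENT.
	if err := os.MkdirAll(filepath.Dir(cleanupFile), 0700); err != nil {
		panic(err)
	}
	if err := ioutil.WriteFile(cleanupFile, nil, 0600); err != nil {
		panic(err)
	}
}
```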

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 7725b88edc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 15:22:31 +02:00
Michael Crosby
1920db0267
Send exec exit event on failures
Fixes #39427

This always sends the exec exit events even when the exec fails to find
the binary.  A standard 127 exit status is sent in this situation.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit c08d4da6e5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 14:56:59 +02:00
Sebastiaan van Stijn
a6b8e81332
Windows: fix error-type for starting a running container
Trying to start a container that is already running is not an
error condition, so a `304 Not Modified` should be returned instead
of a `409 Conflict`.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c030885e7a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 14:47:21 +02:00
Sebastiaan van Stijn
bc5df68698
integration-cli: also run Docker Hub search tests on Windows
The API does not filter images on platform, so searching on
Windows should work as well.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3d1850d10d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 14:39:49 +02:00
Sebastiaan van Stijn
3c63d7fd9b
TestSearchWithLimit: slight refactor and improve boundary checks
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2ac55d5c9a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 14:39:40 +02:00
Sebastiaan van Stijn
96af5bfbb5
TestSearchStarsOptionWithWrongParameter: remove checks for deprecated flags
The `--stars` flag was deprecated, and was replaced by `--filter stars=xx`

Integration tests run with a fixed version of the CLI, and the new
(`--filter`) option is already tested in this test, so there's no need
to verify the old flags.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 85d6fb888c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 14:39:31 +02:00
Sebastiaan van Stijn
665c5d0c5f
TestSearchCmdOptions: remove checks for deprecated flags
The `--stars` and `--automated` flags have been deprecated, and were
replaced by `--filter stars=xx` and `--filter is-automated=true`.

Integration tests run with a fixed version of the CLI, and the new
(`--filter`) option is already tested in this test, so there's no need
to verify the old flags.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b38c71bfe0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 14:39:23 +02:00
Sebastiaan van Stijn
9557c7be8e
TestSearchCmdOptions: remove cli-only checks
Both `--help` and `--no-trunc` are implemented in the CLI. There's
no need to verify them here because the integration tests use a
fixed version of the CLI.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a78b9a3726)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 14:39:12 +02:00
Sebastiaan van Stijn
b13e995c78
Revert "Remove TestSearchCmdOptions test"
This reverts commit 21e662c774.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1be7065e99)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 14:39:01 +02:00
Drew Erny
c93da8ded9
Fix TestSwarmClusterRotateUnlockKey
TestSwarmClusterRotateUnlockKey had been identified as a flaky test. It
turns out that the test code was wrong: where we should have been
checking the string output of a command, we were instead checking the
value of the error. This means that the error case we were expecting was
not being matched, and the test was failing when it should have just
retried.
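
A simplified sketch of the distinction, with a hypothetical command callback and an illustrative (not verbatim) message: retry based on what the command printed, not on the Go error value:

```go
package swarmtest

import "strings"

// retryUnlock runs cmd until its printed output (not its Go error value)
// stops matching the transient "locked" condition, or attempts run out.
// The message matched here is illustrative, not the daemon's exact wording.
func retryUnlock(cmd func() (out string, err error), attempts int) (string, error) {
	var out string
	var err error
	for i := 0; i < attempts; i++ {
		out, err = cmd()
		if !strings.Contains(out, "locked") { // check the output, not err
			break
		}
	}
	return out, err
}
```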

Signed-off-by: Drew Erny <drew.erny@docker.com>
(cherry picked from commit b79adac339)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 13:45:19 +02:00
Tonis Tiigi
cf05755e9d
integration-cli: allow temporary no-leader error
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 52e0dfef90)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 13:45:17 +02:00
Tonis Tiigi
c502db4955
integration-cli: allow temporary errors on leader switch
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 3df1095bbd)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-16 13:45:14 +02:00
Kirill Kolyshkin
6ffb8e2b67
Merge pull request #342 from thaJeztah/19.03_backport_sigprocmask
[19.03 backport] Add sigprocmask to default seccomp profile (ENGCORE-981)
2019-09-12 20:57:34 +03:00
Andrew Hsu
48282bea40
Merge pull request #353 from thaJeztah/19.03_bump_swarmkit
[19.03 backport] bump swarmkit to bbe341867eae1615faf8a702ec05bfe986e73e06 (bump_v19.03 branch)
2019-09-12 08:22:18 -07:00
Kirill Kolyshkin
1176d6fa66
Merge pull request #349 from thaJeztah/19.03_backport_bump_containerd_1.2.9
[19.03 backport] Update containerd to v1.2.9
2019-09-12 18:20:34 +03:00
Sebastiaan van Stijn
1242a39e8e
Merge pull request #332 from thaJeztah/19.03_backport_fix_overlay_mount_busy
[19.03 backport] Fix overlay2 busy error on mount
2019-09-12 17:08:00 +02:00
Jintao Zhang
3d678eb14a
Update containerd to v1.2.9
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit 9ef9a337f6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-12 12:13:40 +02:00
Sebastiaan van Stijn
43e842cfd8
Merge pull request #282 from thaJeztah/19.03_backport_bump_containerd_1.2.7
[19.03 backport] Update containerd to v1.2.8
2019-09-12 12:13:09 +02:00
Sebastiaan van Stijn
525e8ed3fe
bump containerd/ttrpc 92c8520ef9f86600c650dd540266a007bf03670f
full diff: 699c4e40d1...92c8520ef9

changes:

- containerd/ttrpc#37 Handle EOF to prevent file descriptor leak
- containerd/ttrpc#38 Improve connection error handling
- containerd/ttrpc#40 Support headers
- containerd/ttrpc#41 Add client and server unary interceptors
- containerd/ttrpc#43 metadata as KeyValue type
- containerd/ttrpc#42 Refactor close handling for ttrpc clients
- containerd/ttrpc#44 Fix method full name generation
- containerd/ttrpc#46 Client.Call(): do not return error if no Status is set (gRPC v1.23 and up)
- containerd/ttrpc#49 Handle ok status

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8769255d1b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-12 12:09:50 +02:00
Sebastiaan van Stijn
5772636dc6
bump google.golang.org/grpc v1.23.0
full diff: https://github.com/grpc/grpc-go/compare/v1.20.1...v1.23.0

This update contains security fixes:

- transport: block reading frames when too many transport control frames are queued (grpc/grpc-go#2970)
  - Addresses CVE-2019-9512 (Ping Flood), CVE-2019-9514 (Reset Flood), and CVE-2019-9515 (Settings Flood).

Other changes can be found in the release notes:
https://github.com/grpc/grpc-go/releases/tag/v1.23.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f1cd79976a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-12 12:09:48 +02:00
Sebastiaan van Stijn
0b12b76c28
Merge pull request #339 from thaJeztah/19.03_backport_fix_containerd_optional_docker_content_digest
[19.03 backport] vendor: containerd to 7c1e88399
2019-09-12 12:09:19 +02:00
Michael Crosby
40e3647f2f
Add sigprocmask to default seccomp profile
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit e4605cc2a5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-12 10:25:44 +02:00
Sebastiaan van Stijn
892dbfb87e
switch kr/pty to creack/pty v1.1.7
kr/pty was moved to creack/pty and the old location was
archived.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0595c01718)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-12 10:24:55 +02:00
Jintao Zhang
c4d20760d4
Update containerd to v1.2.8
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit 1264a85303)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-12 10:15:50 +02:00
Sebastiaan van Stijn
768923199f
Update containerd to v1.2.7
From the release notes: https://github.com/containerd/containerd/releases/tag/v1.2.7

> Welcome to the v1.2.7 release of containerd!
>
> The seventh patch release for containerd 1.2 introduces OCI image
> descriptor annotation support and contains fixes for containerd shim logs,
> container stop/deletion, cri plugin and selinux.
>
> It also contains several important bug fixes for goroutine and file
> descriptor leakage in containerd and containerd shims.
>
> Notable Updates
>
> - Support annotations in the OCI image descriptor, and filtering image by annotations. containerd/containerd#3254
> - Support context timeout in ttrpc which can help avoid containerd hangs when a shim is unresponsive. containerd/ttrpc#31
> - Fix a bug that containerd shim leaks goroutine and file descriptor after containerd restarts. containerd/ttrpc#37
> - Fix a bug that a container can't be deleted if first deletion attempt is canceled or timeout. containerd/containerd#3264
> - Fix a bug that containerd leaks file descriptor when using v2 containerd shims, e.g. containerd-shim-runc-v1. containerd/containerd#3273
> - Fix a bug that a container with lingering processes can't terminate when it shares pid namespace with another container. moby/moby#38978
> - Fix a bug that containerd can't read shim logs after restart. containerd/containerd#3282
> - Fix a bug that shim_debug option is not honored for existing containerd shims after containerd restarts. containerd/containerd#3283
> - cri: Fix a bug that a container can't be stopped when the exit event is not successfully published by the containerd shim. containerd/containerd#3125, containerd/containerd#3177
> - cri: Fix a bug that exec process is not cleaned up if grpc context is canceled or timeout. containerd/cri#1159
> - Fix a selinux keyring labeling issue by updating runc to v1.0.0-rc.8 and selinux library to v1.2.2. opencontainers/selinux#50
> - Update ttrpc to f82148331ad2181edea8f3f649a1f7add6c3f9c2. containerd/containerd#3316
> - Update cri to 49ca74043390bc2eeea7a45a46005fbec58a3f88. containerd/containerd#3330

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d5669ec1c6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-12 10:15:48 +02:00
Tibor Vass
f1a639bf53
vendor: containerd to 7c1e88399
Fixes https://github.com/moby/buildkit/issues/1062
when DOCKER_BUILDKIT=1

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 14bd416d0e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-12 10:12:05 +02:00
Sebastiaan van Stijn
f7dbee3eea
bump swarmkit to bbe341867eae1615faf8a702ec05bfe986e73e06 (bump_v19.03 branch)
full diff: 4fb9e961ab...bbe341867e

changes included:

- docker/swarmkit#2889 [19.03 backport] Fix update out of sequence and increase max recv gRPC message size for nodes and secrets

Which relates to

- moby/moby#39531 integration-cli: fix swarm tests flakiness
- docker/engine#345 [19.03 backport] integration-cli: fix swarm tests flakiness

And includes backports of

- docker/swarmkit#2808 Fix flaky tests
- docker/swarmkit#2866 Swap gometalinter for golangci-lint
- docker/swarmkit#2869 Increase max recv gRPC message size to initialize connection broker
 - related / similar to moby/moby#38103 / docker/engine#102 cluster: set bigger grpc limit for array requests
 - related / similar to moby/moby#39306 Increase max recv gRPC message size for nodes and secrets
 - fixes https://github.com/docker/swarmkit/issues/2733 Error generated when messages size is too big
- docker/swarmkit#2870 Fix update out of sequence

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-12 10:07:13 +02:00
Sebastiaan van Stijn
1e0234ddc6
Return "invalid parameter" when linking to non-existing container
Trying to link to a non-existing container is not valid, and should return an
"invalid parameter" (400) error. Returning a "not found" error in this situation
would make the client report the container's image could not be found.
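
A sketch of the shape of the fix, using the daemon's errdefs helper to classify the error; the lookup callback here is hypothetical:

```go
package links

import (
	"fmt"

	"github.com/docker/docker/errdefs"
)

// validateLinkTarget wraps a "no such container" lookup failure as an invalid
// parameter (HTTP 400) instead of letting it surface as "not found" (404),
// which clients would misreport as the image not being found.
func validateLinkTarget(name string, lookup func(string) error) error {
	if err := lookup(name); err != nil {
		return errdefs.InvalidParameter(fmt.Errorf("could not get container for %s: %v", name, err))
	}
	return nil
}
```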

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 422067ba7b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-10 23:57:45 +02:00
Sebastiaan van Stijn
9cc467c3b3
Merge pull request #344 from thaJeztah/19.03_backport_DESKTOP_1286_win_admin_error_readability
[19.03 backport] Improve readability of Windows connect error
2019-09-10 00:54:27 +02:00
Sebastiaan van Stijn
04995b667b
Merge pull request #350 from thaJeztah/19.03_backport_swarm_flake
[19.03 backport] Skip TestServiceRemoveKeepsIngressNetwork
2019-09-10 00:53:53 +02:00
Sebastiaan van Stijn
81fcfc67cd
Merge pull request #347 from kolyshkin/19.03-loopback-idx
[19.03 backport] Use correct `LOOP_CTL_GET_FREE` API in `pkg/loopback`
2019-09-07 01:20:09 +02:00
Michael Crosby
78198da34a
Skip TestServiceRemoveKeepsIngressNetwork
Ref: #39426

This is a common flaky test that I have seen on multiple PRs.  It is not
consistent and should be skipped until it is fixed to be robust.  A
simple fix for the swarm tests is not easy as they all poll and have 1
billion timeouts in all the tests so a skip is valid here.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit b94218560e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-07 01:09:38 +02:00
Derek McGowan
16342ac1b1
Fix overlay2 busy error on mount
When mounting overlays which have children, enforce that
the mount is always performed as read only. Newer versions
of the kernel return a device busy error when a lower directory
is in use as an upper directory in another overlay mount.

Adds a committed file to indicate when an overlay is being used
as a parent, ensuring it will no longer be mounted with an
upper directory.

Signed-off-by: Derek McGowan <derek@mcgstyle.net>
(cherry picked from commit 477bf1e413)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-06 23:11:07 +02:00
Nick Adcock
239ac23799
Improve readability of Windows connect error
Improve the readability of the connection error displayed to the user on
Windows when running docker commands fails, by checking whether the client is
privileged. If so, display the actual error wrapped in a generic
error: "This error may indicate that the docker daemon is not running."

If not, display the actual error wrapped in a more specific error:
"In the default daemon configuration on Windows, the docker client must
be run with elevated privileges to connect."

Signed-off-by: Nick Adcock <nick.adcock@docker.com>
(cherry picked from commit 1a5dafb31e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-09-06 23:09:33 +02:00
Sebastiaan van Stijn
c8ef549bf6
Merge pull request #337 from thaJeztah/19.03_backport_jenkinsfile
[19.03 backport] Jenkinsfile and CI updates
2019-09-06 23:06:33 +02:00
Daniel Sweet
0485e53675 Use correct LOOP_CTL_GET_FREE API in pkg/loopback
The `ioctl` interface for the `LOOP_CTL_GET_FREE` request on
`/dev/loop-control` is a little different from what `unix.IoctlGetInt`
expects: the free index is returned as the syscall status in `r1`, not
written to an `int` pointer passed as the first parameter.

Unfortunately we have to go a little lower level to get the appropriate
loop device index out, using `unix.Syscall` directly to read from
`r1`. Internally, the index is returned as a signed integer to match the
internal `ioctl` expectations of interpreting a negative signed integer
as an error at the userspace ABI boundary, so the direct interface of
`ioctlLoopCtlGetFree` can remain as-is.

[@kolyshkin: it still worked before this fix because of
/dev scan fallback in ioctlLoopCtlGetFree()]
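
A sketch of the lower-level call, defining the ioctl request number locally rather than relying on a named constant:

```go
package loopback

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

const loopCtlGetFree = 0x4C82 // LOOP_CTL_GET_FREE ioctl request number

// getFreeLoopIndex asks /dev/loop-control for a free loop device index. The
// index comes back in the syscall's first return value (r1), not via a
// pointer argument, which is why unix.IoctlGetInt cannot be used here.
func getFreeLoopIndex() (int, error) {
	ctl, err := os.OpenFile("/dev/loop-control", os.O_RDONLY, 0644)
	if err != nil {
		return -1, err
	}
	defer ctl.Close()

	r1, _, errno := unix.Syscall(unix.SYS_IOCTL, ctl.Fd(), loopCtlGetFree, 0)
	if errno != 0 {
		return -1, fmt.Errorf("LOOP_CTL_GET_FREE failed: %v", errno)
	}
	return int(r1), nil
}
```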

Signed-off-by: Daniel Sweet <danieljsweet@icloud.com>
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit db2bc43017)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-09-02 18:38:22 +03:00
Sebastiaan van Stijn
70ca64d736
windows.ps1: fix leaked NdisAdapters not being cleaned up on RS1
Windows RS1 has problems with leaking NdisAdapters during the integration
tests; the windows.ps1 script has a cleanup step to remove those
leaked adapters.

For internal testing at Microsoft on internal builds, this cleanup step
was skipped, and only ran on the CI machines in our Jenkins.

Due to the move to our new Jenkins, the names of Windows machines changed,
and because of that, the cleanup step was never executed, resulting in the
leaked adapters not being cleaned up:

```
20:32:23  WARNING: There are 608 NdisAdapters leaked under Psched\Parameters
20:32:23  WARNING: Not cleaning as not a production RS1 server
20:32:24  WARNING: There are 608 NdisAdapters leaked under WFPLWFS\Parameters
20:32:24  WARNING: Not cleaning as not a production RS1 server
```

```
22:01:31  WARNING: There are 1209 NdisAdapters leaked under Psched\Parameters
22:01:31  WARNING: Not cleaning as not a production RS1 server
22:01:31  WARNING: There are 1209 NdisAdapters leaked under WFPLWFS\Parameters
22:01:31  WARNING: Not cleaning as not a production RS1 server
```

This patch removes the check for non-production builds, and unconditionally
cleans up leaked adapters if they are found.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 156ad54fb7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:35 +02:00
Brian Goff
cb9414bbb7
Improve integration test detector
The "new test" detector in test-integration-flaky was a bit flaky since
it would detect function signatures that are not new tests.

In addition, the test calls `return` outside of a function which is not
allowed.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit e2b24490e4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:33 +02:00
Sebastiaan van Stijn
c5bfb0290e
Jenkinsfile: fix invalid expression in bundles script
This was introduced in a76ff632a4:

    + find bundles -path */root/*overlay2 -prune -o -type f ( -o -name *.log -o -name *.prof ) -print
    find: invalid expression; you have used a binary operator '-o' with nothing before it.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ca1e7a3b4a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:31 +02:00
Sebastiaan van Stijn
2d3475bac8
Jenkinsfile: don't mark build failed when failing to create bundles
Failing to archive the bundles should not mark the build as failed.
This can happen if a build is terminated early, or if (to be implemented)
an optional build-stage is skipped / failed;

```
2019-08-24T10:53:09.354Z] + bundleName=janky
[2019-08-24T10:53:09.354Z] + echo Creating janky-bundles.tar.gz
[2019-08-24T10:53:09.354Z] Creating janky-bundles.tar.gz
[2019-08-24T10:53:09.354Z] + xargs tar -czf janky-bundles.tar.gz
[2019-08-24T10:53:09.354Z] + find bundles -path */root/*overlay2 -prune -o -type f ( -name *-report.json -o -name *.log -o -name *.prof -o -name *-report.xml ) -print
[2019-08-24T10:53:09.354Z] find: bundles: No such file or directory
[2019-08-24T10:53:09.354Z] tar: Cowardly refusing to create an empty archive
[2019-08-24T10:53:09.354Z] Try 'tar --help' or 'tar --usage' for more information.
Error when executing always post condition:
hudson.AbortException: script returned exit code 123
	at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.handleExit(DurableTaskStep.java:569)
	at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.check(DurableTaskStep.java:515)
	at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.run(DurableTaskStep.java:461)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a76ff632a4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:29 +02:00
Sebastiaan van Stijn
22f6cfd4df
Jenkinsfile: use wildcards for artifacts, and don't fail on missing ones
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8b65e058be)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:27 +02:00
Sebastiaan van Stijn
88301d8f6c
hack/make: fix some linting issues reported by shellcheck
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 917b0dcd3d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:26 +02:00
Sebastiaan van Stijn
5143f3a62c
Replace libprotobuf-c0-dev with libprotobuf-c-dev
The `libprotobuf-c0-dev` virtual package is no longer available
in Debian Buster, but is provided by `libprotobuf-c-dev`, which
is available.

https://packages.debian.org/stretch/libprotobuf-c0-dev

> Virtual Package: libprotobuf-c0-dev
>
> This is a virtual package. See the Debian policy for a definition of virtual packages.
>
> Packages providing libprotobuf-c0-dev
> libprotobuf-c-dev
> Protocol Buffers C static library and headers (protobuf-c)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d185ca78ec)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:24 +02:00
Sebastiaan van Stijn
bb53ea71cb
hack/make.sh remove "latest" symlink
This symlink was added in d42753485b,
to allow finding the path to the latest built binary, because at the time,
those paths were prefixed with the version or commit (e.g. `bundles/1.5.0-dev`).

Commit bac2447964 removed the version-prefix in
paths, but kept the old symlink for backward compatibility. However, many
things were moved since then (e.g. paths were renamed to `binary-daemon`,
and various other changes). With the symlink pointing to the symlink's parent
directory, following the symlink may result in infinite recursion,
which can happen if scripts use wildcards / globbing to find files.

With this symlink no longer serving a real purpose, we can probably safely
remove this symlink now.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit dde1fd78c7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:22 +02:00
Stefan Scherer
339261224f
Use new windows labels
Signed-off-by: Stefan Scherer <stefan.scherer@docker.com>
(cherry picked from commit ca3e230b77)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:20 +02:00
Sebastiaan van Stijn
7f2d2e3cc3
Dockerfile: add python3-wheel back again (for yamllint)
Although the Dockerfile builds without it, adding wheel back
should save some time

```
00:45:28  #14 10.70 Building wheels for collected packages: pathspec, pyyaml
00:45:28  #14 10.70   Running setup.py bdist_wheel for pathspec: started
00:45:28  #14 10.88   Running setup.py bdist_wheel for pathspec: finished with status 'error'
00:45:28  #14 10.88   Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-mbotnxes/pathspec/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmpg9pl4u6kpip-wheel- --python-tag cp35:
00:45:28  #14 10.88   usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
00:45:28  #14 10.88      or: -c --help [cmd1 cmd2 ...]
00:45:28  #14 10.88      or: -c --help-commands
00:45:28  #14 10.88      or: -c cmd --help
00:45:28  #14 10.88
00:45:28  #14 10.88   error: invalid command 'bdist_wheel'
00:45:28  #14 10.88
00:45:28  #14 10.88   ----------------------------------------
00:45:28  #14 10.88   Failed building wheel for pathspec
00:45:28  #14 10.88   Running setup.py clean for pathspec
00:45:28  #14 11.05   Running setup.py bdist_wheel for pyyaml: started
00:45:28  #14 11.25   Running setup.py bdist_wheel for pyyaml: finished with status 'error'
00:45:28  #14 11.25   Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-mbotnxes/pyyaml/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmpyci_xi0bpip-wheel- --python-tag cp35:
00:45:28  #14 11.25   usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
00:45:28  #14 11.25      or: -c --help [cmd1 cmd2 ...]
00:45:28  #14 11.25      or: -c --help-commands
00:45:28  #14 11.25      or: -c cmd --help
00:45:28  #14 11.25
00:45:28  #14 11.25   error: invalid command 'bdist_wheel'
00:45:28  #14 11.25
00:45:28  #14 11.25   ----------------------------------------
00:45:28  #14 11.25   Failed building wheel for pyyaml
00:45:28  #14 11.25   Running setup.py clean for pyyaml
00:45:28  #14 11.44 Failed to build pathspec pyyaml
00:45:28  #14 11.45 Installing collected packages: pathspec, pyyaml, yamllint
00:45:28  #14 11.45   Running setup.py install for pathspec: started
00:45:29  #14 11.73     Running setup.py install for pathspec: finished with status 'done'
00:45:29  #14 11.73   Running setup.py install for pyyaml: started
00:45:29  #14 12.05     Running setup.py install for pyyaml: finished with status 'done'
00:45:29  #14 12.12 Successfully installed pathspec-0.5.9 pyyaml-5.1.2 yamllint-1.16.0
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ad70bf6866)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:18 +02:00
Sebastiaan van Stijn
9b836acdd0
Jenkinsfile: run DCO check before everything else
This will run the DCO check in a lightweight alpine container, before
running other stages, and before building the development image/container
(which can take a long time).

A Jenkins parameter was added to optionally skip the DCO check (skip_dco)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d6f7909c76)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:16 +02:00
Sebastiaan van Stijn
f7a3fb8f5a
Dockerfile: use DEBIAN_FRONTEND=noninteractive
Using a build-arg so that we don't have to specify it for each
`apt-get install`, and to prevent the `DEBIAN_FRONTEND` setting from being
preserved in the image itself (which would change the default behavior,
and could be surprising if the image is run interactively).

With this patch, some (harmless, but possibly confusing) errors
are no longer printed during build, for example:

```patch
 Unpacking libgcc1:armhf (1:6.3.0-18+deb9u1) ...
 Selecting previously unselected package libc6:armhf.
 Preparing to unpack .../04-libc6_2.24-11+deb9u4_armhf.deb ...
-debconf: unable to initialize frontend: Dialog
-debconf: (TERM is not set, so the dialog frontend is not usable.)
-debconf: falling back to frontend: Readline
 Unpacking libc6:armhf (2.24-11+deb9u4) ...
 Selecting previously unselected package libgcc1:arm64.
 Preparing to unpack .../05-libgcc1_1%3a6.3.0-18+deb9u1_arm64.deb ...
 Unpacking libgcc1:arm64 (1:6.3.0-18+deb9u1) ...
 Selecting previously unselected package libc6:arm64.
 Preparing to unpack .../06-libc6_2.24-11+deb9u4_arm64.deb ...
-debconf: unable to initialize frontend: Dialog
-debconf: (TERM is not set, so the dialog frontend is not usable.)
-debconf: falling back to frontend: Readline

```

Looks like some output is now also printed on stdout instead of stderr

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2ff9ac4de5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:14 +02:00
Sebastiaan van Stijn
d6ba2b6a68
Dockerfile: use --no-install-recommends for all stages
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b0835dd088)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:13 +02:00
Brian Goff
78abff3e39
Add support for setting a test filter
This is basically taking some stuff that I had a custom shell function
for.
This takes a test filter, builds the appropriate TESTFLAGS, and sets the
integration API test dirs that match the given filter, to avoid building
all test dirs.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 13064b155e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:11 +02:00
Andrew Hsu
43919c2455
added hack/ci/master as entry point for master codeline checks
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit aac6e62209)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:09 +02:00
Sebastiaan van Stijn
c5c73c2e1f
Fix "Removing bundles/" not actually removing bundles
Before:

Running `ls -la bundles/` before, and after removing:

    ls -la bundles/
    total 16
    drwxr-xr-x  7 root root  224 Jul 12 12:25 .
    drwxr-xr-x  1 root root 4096 Jul 12 12:30 ..
    drwxr-xr-x  2 root root   64 Jul 12 10:00 dynbinary
    drwxr-xr-x  6 root root  192 Jul 12 12:25 dynbinary-daemon
    lrwxrwxrwx  1 root root    1 Jul 12 12:25 latest -> .
    drwxr-xr-x 92 root root 2944 Jul 12 12:29 test-integration

    Removing bundles/

    ls -la bundles/
    total 16
    drwxr-xr-x  7 root root  224 Jul 12 12:25 .
    drwxr-xr-x  1 root root 4096 Jul 12 12:30 ..
    drwxr-xr-x  2 root root   64 Jul 12 10:00 dynbinary
    drwxr-xr-x  6 root root  192 Jul 12 12:25 dynbinary-daemon
    lrwxrwxrwx  1 root root    1 Jul 12 12:25 latest -> .
    drwxr-xr-x 92 root root 2944 Jul 12 12:29 test-integration

After:

Running `ls -la bundles/` before, and after removing:

    ls -la bundles/
    total 16
    drwxr-xr-x  7 root root  224 Jul 12 12:25 .
    drwxr-xr-x  1 root root 4096 Jul 12 12:30 ..
    drwxr-xr-x  2 root root   64 Jul 12 10:00 dynbinary
    drwxr-xr-x  6 root root  192 Jul 12 12:25 dynbinary-daemon
    lrwxrwxrwx  1 root root    1 Jul 12 12:25 latest -> .
    drwxr-xr-x 92 root root 2944 Jul 12 12:29 test-integration

    Removing bundles/

    ls -la bundles/
    total 4
    drwxr-xr-x 2 root root   64 Jul 12 12:25 .
    drwxr-xr-x 1 root root 4096 Jul 12 12:30 ..

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f75f34249b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-31 13:27:06 +02:00
Brian Goff
7639c4bdeb
Set DOCKER_BINDDIR mount options from env
Adds `DOCKER_BINDDIR_MOUNT_OPTS` to easily tweak the BINDDIR mount
options... primarily added so I can control the caching mode for
osxfs, because compiling takes > 1min for me with the default and < 30s
with both `cached` and `delegated`.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit b1e6536ceb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-28 23:37:02 +02:00
Andrew Hsu
ed20165a37
Merge pull request #331 from tiborvass/19.03-buildkit-fixes
[19.03 backport] Cherry-picking build fixes
2019-08-22 13:57:25 -07:00
Tonis Tiigi
52cef4bbee builder-next: update mount signature
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit d495eeb365)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-08-22 18:10:43 +00:00
Tonis Tiigi
278cb7aed5 vendor: update buildkit to 588c73e1e4
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 52ed97c5c1)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-08-22 18:10:16 +00:00
Tonis Tiigi
613a32482f builder-next: close progress on layer export error
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 27f1f2b5be)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-08-22 18:08:51 +00:00
Sebastiaan van Stijn
9ae801cfd1
Merge pull request #328 from thaJeztah/19.03_backport_jenkinsfile
[19.03 backport] Jenkinsfile and related test-changes
2019-08-20 23:34:45 +02:00
Sebastiaan van Stijn
df1d66e6ba
Set locale to fix yamllint
Attempting to fix;

```
21:16:00 Traceback (most recent call last):
21:16:00 File "/usr/local/bin/yamllint", line 11, in <module>
21:16:00 sys.exit(run())
21:16:00 File "/usr/local/lib/python3.5/dist-packages/yamllint/cli.py", line 170, in run
21:16:00 problems = linter.run(f, conf, filepath)
21:16:00 File "/usr/local/lib/python3.5/dist-packages/yamllint/linter.py", line 233, in run
21:16:00 content = input.read()
21:16:00 File "/usr/lib/python3.5/encodings/ascii.py", line 26, in decode
21:16:00 return codecs.ascii_decode(input, self.errors)[0]
21:16:00 UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 123522: ordinal not in range(128)
21:16:00 Build step 'Execute shell' marked build as failure
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b5e5cac0f5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 19:53:53 +02:00
Sebastiaan van Stijn
0873c3b57d
hack: fix mixed tabs/spaces for indentation
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2cffe9be3d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 19:53:47 +02:00
Sebastiaan van Stijn
83e7de55aa
Jenkinsfile: save docker-py artifacts
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8b6da9d82f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:46 +02:00
Sebastiaan van Stijn
fb471aab26
Jenkinsfile: build dynamic binary for docker-py, to match makefile
This also makes sure that we can test all functionality of the
daemon, because some features are not available on static binaries.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4ddb40ee8a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:44 +02:00
Sebastiaan van Stijn
70303ded8e
docker-py: output junit.xml for test-results
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5969bbee79)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:42 +02:00
Sebastiaan van Stijn
16639f549e
docker-py: use --mount for bind-mounting docker.sock
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 535e29da05)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:40 +02:00
Sebastiaan van Stijn
34fa29e8d4
docker-py: run without tty to disable color output
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b04cbf1072)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:38 +02:00
Sebastiaan van Stijn
16d0807e7e
docker-py: fix linting issues reported by shellcheck
- SC2006: use $(...) notation instead of legacy backticked `...`
- SC2086: double quote to prevent globbing and word splitting

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0b3d201892)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:35 +02:00
Sebastiaan van Stijn
75d217961e
Jenkinsfile: collect junit.xml for all architectures
Jenkins groups them per stage, so collecting them for all architectures
is possible (without them conflicting or becoming ambiguous)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e2f5b78e78)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:28 +02:00
Sebastiaan van Stijn
c243ffaa06
Jenkinsfile: send junit.xml in the stage that produced it
This will send the results directly after the tests complete,
and make the stage more atomic.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7f9328ad2e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:26 +02:00
Andrew Hsu
e6b49956d8
fix bundles filenames in Jenkinsfile
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit eb30f0ad84)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:18 +02:00
Andrew Hsu
4593b400f9
rename powerpc bundles in Jenkinsfile
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit ad29f9e471)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:16 +02:00
Andrew Hsu
b81a2581ff
rename z bundles in Jenkinsfile
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit a049ea1a93)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:14 +02:00
Andrew Hsu
9ccba2faf1
be more lenient on junit report gathering in Jenkinsfile
In case a job fails before even generating a report file.

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit 0cfc1ec2bd)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:12 +02:00
Andrew Hsu
55b938cb98
use environment for z jobs in Jenkinsfile
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit 4e2f39cf14)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:10 +02:00
Andrew Hsu
46fa5a43ff
use environment for power jobs in Jenkinsfile
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit 3564b03fbc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:08 +02:00
Andrew Hsu
75c408cb6c
set timeouts in Jenkinsfile to 2 hrs
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit bf70a5975d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:06 +02:00
Andrew Hsu
23b7bdf785
add z-master stage to Jenkinsfile
The z-master stage will just run the integration-cli tests. The
existing z stage will run the unit tests and the integration
tests. In this way, PR check jobs will be shorter, but all
integration tests will run after PR is merged to master.

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit bdc1c1a02a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:05 +02:00
Andrew Hsu
10dd4a25ba
add powerpc-master stage to Jenkinsfile
The powerpc-master stage will just run the integration-cli tests. The
existing powerpc stage will run the unit tests and the integration
tests. In this way, PR check jobs will be shorter, but all integration
tests will run after PR is merged to master.

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit c2f9d58375)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:03 +02:00
Tibor Vass
eee3f67571
Jenkinsfile: reduce time of integration tests by dividing tests into 3 parallel runs
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit e554fb23c8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:58:00 +02:00
Tibor Vass
78207d5380
hack: unmount leftover daemon root folders
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 13df617d4c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:57:58 +02:00
Tibor Vass
34a8dcae17
Jenkinsfile: move static and cross compilation to unit-validate stage
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 251c8dca28)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:57:56 +02:00
Kir Kolyshkin
5c2e0c6f9b
Jenkinsfile: avoid errors from find
There are many errors like this one:

> 01:39:28.750 find: ‘bundles/test-integration/dbc77018d39a5/root/overlay2/f49953a883daceee60a481dd8e1e37b0f806d309258197d6ba0f6871236d3d47/work/work’: Permission denied

(probably caused by bad permissions)

These directories are not to be looked at when we search for logs, so
let's exclude them. It's not super easy to do in find; here is a rough
explanation of the find arguments

```
PATTERN ACTION OR PATTERN                           ACTION
-path X -prune -o -type f [AND] (-name A -o name B) -print
```

(here -o means OR, while AND is implicit)
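
Purely as an illustration of the same prune-then-match idea (the change itself only edits the `find` invocation), a sketch of how it could look in Go; the log-name patterns here are made up:

```go
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	// Prune the daemons' overlay2 storage dirs, then only print regular
	// files matching the (assumed) log-name patterns, mirroring find's
	// `-path X -prune -o -type f ( -name A -o -name B ) -print`.
	_ = filepath.WalkDir("bundles", func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return nil // ignore permission errors instead of failing the walk
		}
		if d.IsDir() && strings.Contains(path, "/root/overlay2") {
			return filepath.SkipDir // the "prune" part
		}
		if !d.IsDir() && (strings.HasSuffix(path, ".log") || filepath.Base(path) == "docker.log") {
			fmt.Println(path)
		}
		return nil
	})
}
```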

While at it,
 - let find know we're only looking for files, not directories
 - remove a subshell and || true
 - remove `-name integration.test` (there are no such files)

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit b283dff3ff)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:57:48 +02:00
Sebastiaan van Stijn
61e218a502
Dockerfile: add back yamllint
This was inadvertently removed in 7bfe48cc00,
because it was documented as a dependency for docker-py, but
is actually used to validate the swagger file.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b1723b3721)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:57:42 +02:00
Sebastiaan van Stijn
3dd11dd0b5
docker-py: skip PullImageTest::test_pull_invalid_platform
and remove `PullImageTest::test_build_invalid_platform` from the list,
which was a copy/paste error in f8cde0b32d

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6f5c377ddc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:57:35 +02:00
Sebastiaan van Stijn
0d0b1e77c0
Jenkinsfile: remove "experimental" stage
All tests that require experimental either spin up a separate daemon,
or use the main daemon if experimental is enabled.

This patch

- allows enabling "experimental" for stages through an environment variable
- enables experimental by default on all stages, so that some of these tests
  don't have to start a new daemon.
- removes the separate "experimental" stage, because it was running exactly
  the same tests as the "janky" stage.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e856b46cfb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:57:26 +02:00
Sebastiaan van Stijn
94d56428a6
Consistently use DOCKER_EXPERIMENTAL=1 instead or =y
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a43123cab1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:57:24 +02:00
Sebastiaan van Stijn
52ec660936
docker-py: deselect broken experimental tests
These tests are fixed upstream, but those fixes are not yet in a
released version.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f8cde0b32d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:57:22 +02:00
Sebastiaan van Stijn
30549cdb4b
Jenkinsfile: move docker-py to separate stage
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ad28fec1c9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:57:12 +02:00
Sebastiaan van Stijn
62af652a5a
Jenkinsfile: inline experimental, power, z steps, and split Unit test
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1e8ede514e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:57:10 +02:00
Sebastiaan van Stijn
63966dec02
Jenkinsfile: inline janky steps, and move validate to separate stage
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f411be2072)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:57:08 +02:00
Sebastiaan van Stijn
997037964e
Jenkinsfile: remove .git mount in stages that don't use it
The .git mount is only needed for the DCO check, and for building
the binaries if `DOCKER_GITCOMMIT` is not set.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 47ac8a97de)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:57:06 +02:00
Sebastiaan van Stijn
7a82829520
Jenkinsfile: consistent indentation and order of env-vars
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f814e04652)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:57:04 +02:00
Sebastiaan van Stijn
678121ef05
Jenkinsfile: remove unused GIT_SHA1 env-var
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0634816c0c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:57:01 +02:00
Sebastiaan van Stijn
970fb3c1df
Jenkinsfile: move building e2e image to "unit-vendor" stage
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit efacee1cdd)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:59 +02:00
Sebastiaan van Stijn
04ca2e6b92
Jenkinsfile: extract DOCKER_GRAPHDRIVER as environment variable
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 781e79d1fa)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:57 +02:00
Sebastiaan van Stijn
8dbb761fad
Jenkinsfile: use overlay2 for Power and s390x as well
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c75d7e0e22)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:55 +02:00
Sebastiaan van Stijn
0a5ea9d310
Jenkinsfile: run check-config.sh to print system configuration
Having this information can help debugging issues in CI (which could
be caused by missing/incorrect configuration of the machines).

We pin to a fixed version of the script, because this script is run
directly on the host, and we don't want pull-requests modifying this
script to have direct access to the machines.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a2ad56dfad)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:53 +02:00
Sebastiaan van Stijn
c94dca16fa
Jenkinsfile: remove ip_vs modprobe for unit/vendor stage
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6523ced950)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:50 +02:00
Sebastiaan van Stijn
cbbff33086
Jenkinsfile: standardize container names and fix s390x cleanup
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f2e09afff4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:48 +02:00
Sebastiaan van Stijn
949708a745
Jenkinsfile: combine "vendor" and "unit tests"
Both of these tests are fairly short, and shouldn't interfere with
each other, so we can combine them and re-use the same dev-image
(so that it'll only be built once).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f51c139792)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:46 +02:00
Sebastiaan van Stijn
b81c762493
Jenkinsfile: use GIT_COMMIT from Git plugin instead of manually
This patch removes the manual steps to resolve the Git commit, and
instead, uses the `GIT_COMMIT` that's set by Jenkins's Git plugin.

Behavior changes slightly, because `GIT_COMMIT` contains the full
commit-sha, not the short one.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit be0e6e9d34)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:44 +02:00
Sebastiaan van Stijn
002bf3806e
Jenkinsfile: disable buildkit on power and s390x
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 355bcf6d48)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:35 +02:00
Sebastiaan van Stijn
337f0fc80d
Jenkinsfile: Add "info" step to all stages
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3897796548)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:32 +02:00
Sebastiaan van Stijn
c6a4351edd
Jenkinsfile: split some shell steps
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b04c769d65)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:30 +02:00
Sebastiaan van Stijn
b40359b551
Jenkinsfile: busybox is multi-arch
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9f0e10fe24)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:28 +02:00
Sebastiaan van Stijn
1d0821eee2
Jenkinsfile: remove arch-specific suffixes from names
Container and image names are already unique because they have
the git-sha or build-number, and a single machine won't be running
tests for multiple architectures.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 337d03a5f0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:26 +02:00
Sebastiaan van Stijn
37536cdfa4
Jenkinsfile: run "make clean" in cleanup step
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a0bf935f9c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:24 +02:00
Sebastiaan van Stijn
3040f3fdbb
Jenkinsfile: use sub-stages to describe steps
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 79713d8d07)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:22 +02:00
Sebastiaan van Stijn
ea508a8574
Jenkinsfile: set DOCKER_BUILDKIT globally
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f648964875)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:20 +02:00
Sebastiaan van Stijn
d974674126
Jenkinsfile: set APT_MIRROR globally
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a28f2a2338)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:18 +02:00
Sebastiaan van Stijn
31cb280682
Jenkinsfile: remove check for arch-specific Dockerfiles
The main Dockerfile is multi-arch now.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 61fd8b7384)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:16 +02:00
Sebastiaan van Stijn
de941c990e
Jenkinsfile: remove build --rm, because it's the default
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 722d582c92)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:14 +02:00
Sebastiaan van Stijn
5da534d8db
Jenkinsfile: consistently indent with 4 spaces
From the code style guidelines;
https://wiki.jenkins.io/display/JENKINS/Code+Style+Guidelines

> 1. Use spaces. Tabs are banned.
> 2. Java blocks are 4 spaces. JavaScript blocks as for Java. XML nesting is 2 spaces

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a95f16ca28)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:11 +02:00
Andrew Hsu
24144dbdc9
run unit tests and generate junit report
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit 42f0a0db75)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:56:04 +02:00
Brian Goff
da9289fb54
Improvements to the test runners
1. Use `go list` to get the list of integration dirs to build (see the
   sketch after this list). This means we do not need to have a valid
   `.go` file in every subdirectory, and it also filters out other dirs
   like "bundles" which may have been created.
2. Add option to specify custom flags for integration and
   integration-cli. This is needed so both suites can be run AND set
   custom flags... since the cli suite does not support standard go
   flags.
3. Add options to skip an entire integration suite.
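
A rough sketch of point 1, assuming the suites live under `./integration/...` (the exact package path and flags used by the Makefile may differ):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Ask the Go tool itself which directories contain buildable packages;
	// anything that is not a Go package (e.g. a stray "bundles" dir) is
	// simply not listed.
	out, err := exec.Command("go", "list", "-f", "{{.Dir}}", "./integration/...").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, dir := range strings.Fields(string(out)) {
		fmt.Println(dir)
	}
}
```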

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit abece9b562)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:55:57 +02:00
Michael Zhao
0b274cf18f
Set TIMEOUT according to os/arch.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
(cherry picked from commit 790da6c223)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:55:52 +02:00
Andrew Hsu
55fc016efc
allow running of single integration test
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit c222c5ac6f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:55:45 +02:00
zelahi
5bed812503
ADDED changes to integrate with our new Jenkins ci
Signed-off-by: zelahi <elahi.zuhayr@gmail.com>
(cherry picked from commit 0ecd6ab30f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:55:39 +02:00
Andrew Hsu
3e266baca4
use overlay2 for janky and experimental checks
instead of vfs

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit ccfaf1ed92)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:55:33 +02:00
Andrew Hsu
191b03834a
remove DOCKER_EXECDRIVER from Jenkinsfile
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit 9d98458fb7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:55:27 +02:00
Sebastiaan van Stijn
63e03b155f
Remove Codecov
Codecov has proven to be flaky and to calculate the wrong diff. In
addition, it doesn't show coverage for integration tests, which
makes the coverage report not useful.

Removing it for now, while we look at alternatives.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit bd5c5373f1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:55:21 +02:00
Sebastiaan van Stijn
130c0746c1
Cleanup "address" when connecting over a UNIX socket
When connecting with the daemon using a UNIX socket, the HTTP hostname was set, based
on the socket location, which was generating some noise in the test-logs.

Given that the actual hostname is not important (the URL just has to be well-formed),
the hostname/address can be cleaned up to reduce the noise.

This patch strips the path from the `addr`, and keeps `<random-id>.sock` as address.
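
A minimal sketch of that cleanup (variable names are illustrative, not the actual test-helper code):

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	addr := "/tmp/docker-integration/d15d31ba75501.sock"
	// Keep only the socket file name as the HTTP "host"; the URL only has
	// to be well-formed, the host part is never resolved for unix sockets.
	host := filepath.Base(addr)
	fmt.Println(host) // d15d31ba75501.sock
}
```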

Before:

    daemon.go:329: [d15d31ba75501] error pinging daemon on start: Get http://%2Ftmp%2Fdocker-integration%2Fd15d31ba75501.sock/_ping: dial unix /tmp/docker-integration/d15d31ba75501.sock: connect: no such file or directory

After:

    daemon.go:329: [d15d31ba75501] error pinging daemon on start: Get http://d15d31ba75501.sock/_ping: dial unix /tmp/docker-integration/d15d31ba75501.sock: connect: no such file or directory

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 92e6e7dd5f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:55:14 +02:00
Sebastiaan van Stijn
624ff4fef4
integration: organize bundle directory per test
The test-integration/test-integration-cli directory contains
a directory for each daemon that was created during the integration
tests, which makes it a long list to browse through. In addition,
some tests spin up multiple daemons, and when debugging test-failures,
the daemon-logs often have to be looked at together.

This patch organizes the bundle directory to group daemon storage
locations per test, making it easier to find information about
all the daemons that were used in a specific test.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9b5e78888d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:55:06 +02:00
Sebastiaan van Stijn
19fa3ab213
WIP Move docker-py tests first again
See if networking works if we run it first

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6aafe0fd9e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:54:55 +02:00
Sebastiaan van Stijn
ab68b5dd9a
docker-py: skip flaky tests
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 980f2813b4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:54:53 +02:00
Sebastiaan van Stijn
9ad75d26fc
docker-py: use host-network for nested build of docker-py
When building this image with docker-in-docker, the DNS in the environment
may not be usable for the build-container, causing resolution to fail:

```
02:35:31 W: Failed to fetch http://deb.debian.org/debian/dists/jessie/Release.gpg  Temporary failure resolving 'deb.debian.org'
```

This patch detects if we're building from within a container, and if
so, skips creating a networking namespace for the build by using
`--network=host`.
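
One common heuristic for such a check, shown only as an assumption of how the detection can look (the actual script may inspect something else, e.g. cgroup contents):

```go
package main

import (
	"fmt"
	"os"
)

// runningInContainer guesses whether we are inside a container by checking
// for the /.dockerenv marker file present in container root filesystems.
func runningInContainer() bool {
	_, err := os.Stat("/.dockerenv")
	return err == nil
}

func main() {
	if runningInContainer() {
		fmt.Println("building with --network=host")
	}
}
```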

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3c15cea650)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:54:51 +02:00
Sebastiaan van Stijn
c3a7556f73
docker-py: don't build --quiet if TESTDEBUG is set
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ba8f4c7994)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:54:49 +02:00
Sebastiaan van Stijn
89e812a1e6
Makefile: Allow passing DOCKER_TEST_HOST and TESTDEBUG to container
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 968345bc5c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:54:47 +02:00
Sebastiaan van Stijn
e092ff3f74
Bump docker-py to 4.0.2, and run tests from upstream repository
This removes all the installation steps for docker-py from the
Dockerfile, and instead builds the upstream Dockerfile, and runs
docker-py tests in a container.

To test;

```
make test-docker-py

...

Removing bundles/

---> Making bundle: dynbinary (in bundles/dynbinary)
Building: bundles/dynbinary-daemon/dockerd-dev
Created binary: bundles/dynbinary-daemon/dockerd-dev

---> Making bundle: test-docker-py (in bundles/test-docker-py)
---> Making bundle: .integration-daemon-start (in bundles/test-docker-py)
Using test binary docker
Starting dockerd
INFO: Waiting for daemon to start...
.
INFO: Building docker-sdk-python3:3.7.0...
sha256:686428ae28479e9b5c8fdad1cadc9b7a39b462e66bd13a7e35bd79c6a152a402
INFO: Starting docker-py tests...
============================= test session starts ==============================
platform linux -- Python 3.6.8, pytest-4.1.0, py-1.8.0, pluggy-0.9.0
rootdir: /src, inifile: pytest.ini
plugins: timeout-1.3.3, cov-2.6.1
collected 359 items

tests/integration/api_build_test.py .......s....
....
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7bfe48cc00)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:54:45 +02:00
Akihiro Suda
2b33fe3512
hack: remove integration-cli-on-swarm
integration-on-swarm had unnecessary complexity and was too hard to
maintain. Also, it didn't support the new non-CLI integration test suite.

I'm now doing some experiments out of the repo using Kubernetes:
https://github.com/AkihiroSuda/kube-moby-integration

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit e7fbe8e457)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-19 18:54:38 +02:00
Sebastiaan van Stijn
93c18c73a3
Merge pull request #327 from tonistiigi/double-unmount-1903
[19.03] builder-next: avoid double unmounting mountable
2019-08-16 20:30:09 +02:00
Tonis Tiigi
cad2cd71b7 builder-next: avoid double unmounting mountable
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 9ea2cf320a)
2019-08-15 22:45:08 -07:00
Sebastiaan van Stijn
96e086dc33
Merge pull request #326 from tonistiigi/update-buildkit-1903
[19.03] vendor: update buildkit to v0.6.1
2019-08-15 19:34:11 +02:00
Sebastiaan van Stijn
cddce2dfa7
Merge pull request #321 from dani-docker/19.03-bk-3945401
[19.03 backport] do not stop health check before sending signal
2019-08-15 16:16:58 +02:00
Tonis Tiigi
65f964aa6b vendor: update buildkit to v0.6.1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit e59b26087f)
2019-08-14 18:56:38 -07:00
Dani Louca
8fca769bd5 Fixing integration test
Signed-off-by: Dani Louca <dani.louca@docker.com>
(cherry picked from commit 614daf1171)
Signed-off-by: Dani Louca <dani.louca@docker.com>
2019-08-14 17:07:40 -04:00
Sebastiaan van Stijn
ef5dd6e46d Skip TestHealthKillContainer on Windows
This test is failing on Windows currently:

```
11:59:47 --- FAIL: TestHealthKillContainer (8.12s)
11:59:47     health_test.go:57: assertion failed: error is not nil: Error response from daemon: Invalid signal: SIGUSR1
```

That test was added recently in https://github.com/moby/moby/pull/39454, but
rewritten in a commit in the same PR:
f8aef6a92f

In that rewrite, there were some changes:

- originally it was skipped on Windows, but the rewritten test doesn't have that skip:

    ```go
    testRequires(c, DaemonIsLinux) // busybox doesn't work on Windows
    ```

- the original test used `SIGINT`, but the new one uses `SIGUSR1`

Analysis:

- The Error bubbles up from: 8e610b2b55/pkg/signal/signal.go (L29-L44)
- Interestingly; `ContainerKill` should validate if a signal is valid for the given platform, but somehow we don't hit that part; f1b5612f20/daemon/kill.go (L40-L48)
- Windows only looks to support 2 signals currently 8e610b2b55/pkg/signal/signal_windows.go (L17-L26)
- Upstream Golang looks to define `SIGINT` as well; 77f9b2728e/src/runtime/defs_windows.go (L44)
- This looks like the current list of Signals upstream in Go; 3b58ed4ad3/windows/types_windows.go (L52-L67)

```go
const (
	// More invented values for signals
	SIGHUP  = Signal(0x1)
	SIGINT  = Signal(0x2)
	SIGQUIT = Signal(0x3)
	SIGILL  = Signal(0x4)
	SIGTRAP = Signal(0x5)
	SIGABRT = Signal(0x6)
	SIGBUS  = Signal(0x7)
	SIGFPE  = Signal(0x8)
	SIGKILL = Signal(0x9)
	SIGSEGV = Signal(0xb)
	SIGPIPE = Signal(0xd)
	SIGALRM = Signal(0xe)
	SIGTERM = Signal(0xf)
)
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit eeaa0b30d4)
Signed-off-by: Dani Louca <dani.louca@docker.com>
2019-08-14 17:07:39 -04:00
Brian Goff
8533594ad6 Move kill health test to integration
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit f8aef6a92f)
Signed-off-by: Dani Louca <dani.louca@docker.com>
2019-08-14 17:07:39 -04:00
Ruilin Li
32802bc7d9 do not stop health check before sending signal
The Docker daemon currently always stops the health check before sending
a signal to a container. However, when we use "docker kill" to send
signals other than SIGTERM or SIGKILL to a container, such as SIGINT,
the daemon still stops the container's health check even though the
container process handles the signal normally and continues to work.
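
The intended behaviour boils down to a predicate like this rough sketch (the function name and placement are illustrative, not the daemon's actual code):

```go
package main

import (
	"fmt"
	"syscall"
)

// shouldStopHealthcheck reports whether the health-check monitor should be
// stopped for a given signal: only when the signal is expected to terminate
// the container (SIGTERM/SIGKILL), not for e.g. SIGINT or SIGUSR1.
func shouldStopHealthcheck(sig syscall.Signal) bool {
	return sig == syscall.SIGTERM || sig == syscall.SIGKILL
}

func main() {
	fmt.Println(shouldStopHealthcheck(syscall.SIGINT))  // false
	fmt.Println(shouldStopHealthcheck(syscall.SIGKILL)) // true
}
```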

Signed-off-by: Ruilin Li <liruilin4@huawei.com>
(cherry picked from commit da574f9343)
Signed-off-by: Dani Louca <dani.louca@docker.com>
2019-08-14 17:07:39 -04:00
Andrew Hsu
4bed01298c
Merge pull request #322 from thaJeztah/19.03_backport_bump_golang_1.12.8
[19.03 backport] Bump golang 1.12.8 (CVE-2019-9512, CVE-2019-9514)
2019-08-14 11:55:08 -07:00
Andrew Hsu
56ca630f27
Merge pull request #325 from thaJeztah/19.03_backport_harden_TestClientWithRequestTimeout
[19.03 backport] Harden TestClientWithRequestTimeout
2019-08-14 08:45:49 -07:00
Sebastiaan van Stijn
a02539b3e8
Harden TestClientWithRequestTimeout
DeadlineExceeded now implements a Timeout() function,
since dc4427f372

Check for this interface, to prevent possibly incorrect failures;

```
00:16:41 --- FAIL: TestClientWithRequestTimeout (0.00s)
00:16:41     client_test.go:259: assertion failed:
00:16:41         --- context.DeadlineExceeded
00:16:41         +++ err
00:16:41         :
00:16:41         	-: context.deadlineExceededError{}
00:16:41         	+: &net.OpError{Op: "dial", Net: "tcp", Addr: s"127.0.0.1:49294", Err: &poll.TimeoutError{}}
00:16:41
```
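
A minimal sketch of checking for that interface instead of comparing error values directly:

```go
package main

import (
	"context"
	"fmt"
)

// isTimeoutErr reports whether the error exposes a Timeout() bool method
// returning true, which both context.DeadlineExceeded and net-level
// timeout errors do.
func isTimeoutErr(err error) bool {
	t, ok := err.(interface{ Timeout() bool })
	return ok && t.Timeout()
}

func main() {
	fmt.Println(isTimeoutErr(context.DeadlineExceeded)) // true
}
```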

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c7816c5323)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-14 15:21:34 +02:00
Sebastiaan van Stijn
0dc7bdc325
Adjust tests for changes in Go 1.12.8 / 1.11.13
```
00:38:11 === Failed
00:38:11 === FAIL: opts TestParseDockerDaemonHost (0.00s)
00:38:11     hosts_test.go:87: tcp tcp:a.b.c.d address expected error "Invalid bind address format: tcp:a.b.c.d" return, got "parse tcp://tcp:a.b.c.d: invalid port \":a.b.c.d\" after host" and addr
00:38:11     hosts_test.go:87: tcp tcp:a.b.c.d/path address expected error "Invalid bind address format: tcp:a.b.c.d/path" return, got "parse tcp://tcp:a.b.c.d/path: invalid port \":a.b.c.d\" after host" and addr
00:38:11
00:38:11 === FAIL: opts TestParseTCP (0.00s)
00:38:11     hosts_test.go:129: tcp tcp:a.b.c.d address expected error Invalid bind address format: tcp:a.b.c.d return, got parse tcp://tcp:a.b.c.d: invalid port ":a.b.c.d" after host and addr
00:38:11     hosts_test.go:129: tcp tcp:a.b.c.d/path address expected error Invalid bind address format: tcp:a.b.c.d/path return, got parse tcp://tcp:a.b.c.d/path: invalid port ":a.b.c.d" after host and addr
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 683766613a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-14 02:52:42 +02:00
Sebastiaan van Stijn
b61ee6e4af
Bump golang 1.12.8 (CVE-2019-9512, CVE-2019-9514)
go1.12.8 (released 2019/08/13) includes security fixes to the net/http and net/url packages.
See the Go 1.12.8 milestone on our issue tracker for details:

https://github.com/golang/go/issues?q=milestone%3AGo1.12.8

- net/http: Denial of Service vulnerabilities in the HTTP/2 implementation
  net/http and golang.org/x/net/http2 servers that accept direct connections from untrusted
  clients could be remotely made to allocate an unlimited amount of memory, until the program
  crashes. Servers will now close connections if the send queue accumulates too many control
  messages.
  The issues are CVE-2019-9512 and CVE-2019-9514, and Go issue golang.org/issue/33606.
  Thanks to Jonathan Looney from Netflix for discovering and reporting these issues.
  This is also fixed in version v0.0.0-20190813141303-74dc4d7220e7 of golang.org/x/net/http2.
  net/url: parsing validation issue
- url.Parse would accept URLs with malformed hosts, such that the Host field could have arbitrary
  suffixes that would appear in neither Hostname() nor Port(), allowing authorization bypasses
  in certain applications. Note that URLs with invalid, non-numeric ports will now return an error
  from url.Parse.
  The issue is CVE-2019-14809 and Go issue golang.org/issue/29098.
  Thanks to Julian Hector and Nikolai Krein from Cure53, and Adi Cohen (adico.me) for discovering
  and reporting this issue.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 73b0e4c589)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-14 02:37:04 +02:00
Andrew Hsu
56784591bf
Merge pull request #319 from kolyshkin/19.03-journald
[19.03] backport journald reading fixes (ENGCORE-941)
2019-08-13 11:48:05 -07:00
Sebastiaan van Stijn
6eeb9ec3d6
Merge pull request #291 from thaJeztah/19.03_backport_fix_more_grpc_sizes
[19.03 backport] Fix more grpc list message sizes
2019-08-10 20:31:25 +02:00
Kir Kolyshkin
dd7ef76474 journald/read: fix/unify errors
1. Use "in-place" variables for if statements to limit their scope to
   the respectful `if` block.

2. Report the error returned from sd_journal_* by using CErr().

3. Use errors.New() instead of fmt.Errorf().

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
 (cherry picked from commit 20a0e58a79)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-09 16:50:39 -07:00
Kir Kolyshkin
0375566412 journald: fix for --tail 0
From the first glance, `docker logs --tail 0` does not make sense,
as it is supposed to produce no output, but `tail -n 0` from GNU
coreutils is working like that, plus there is even a test case
(`TestLogsTail` in integration-cli/docker_cli_logs_test.go).

Now, something like `docker logs --follow --tail 0` makes total
sense, so let's make it work.

(NOTE if --tail is not used, config.Tail is set to -1)

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit dd4bfe30a8)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-09 16:47:43 -07:00
Kir Kolyshkin
3678438dd8 journald/read: avoid piling up open files
If we take a long time to process log messages, and during that time
journal file rotation occurs, the journald client library will keep
those rotated files open until sd_journal_process() is called.

By periodically calling sd_journal_process() during the processing
loop we shrink the window of time a client instance has open file
descriptors for rotated (deleted) journal files.

This code is modelled after that of journalctl [1]; the above explanation
as well as the value of 1024 is taken from there.

[v2: fix CErr() argument]

[1] https://github.com/systemd/systemd/blob/dc16327c48d/src/journal/journalctl.c#L2676
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit b73fb8fd5d)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-09 16:47:38 -07:00
Kir Kolyshkin
1cc7b3881d journald/read: simplify/fix followJournal()
TL;DR: simplify the code, fix --follow hanging indefinitely

Do the following to simplify the followJournal() code:

1. Use Go-native select instead of C-native polling.

2. Use Watch{Producer,Consumer}Gone(), eliminating the need
to have journald.closed variable, and an extra goroutine.

3. Use sd_journal_wait(). In the words of its own man page:

> A synchronous alternative for using sd_journal_get_fd(),
> sd_journal_get_events(), sd_journal_get_timeout() and
> sd_journal_process() is sd_journal_wait().

Unfortunately, the logic is still not as simple as it
could be; the reason being, once the container has exited,
journald might still be writing some logs from its internal
buffers onto journal file(s), and there is no way to
figure out whether it's done, so that we are guaranteed to
read all of it back. This bug can be reproduced with
something like

> $ ID=$(docker run -d busybox seq 1 150000); docker logs --follow $ID
> ...
> 128123
> $

(The last expected output line should be `150000`).

To avoid exiting from followJournal() early, add the
following logic: once the container is gone, keep trying
to drain the journal until there's no new data for at
least `waitTimeout` time period.
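
A rough sketch of that drain-until-quiet loop; the read callback is a made-up stand-in for the real journald calls:

```go
package main

import (
	"fmt"
	"time"
)

// drainUntilQuiet keeps calling read until it has returned no new entries
// for at least waitTimeout, mirroring the logic described above.
func drainUntilQuiet(read func() int, waitTimeout time.Duration) {
	deadline := time.Now().Add(waitTimeout)
	for time.Now().Before(deadline) {
		if n := read(); n > 0 {
			deadline = time.Now().Add(waitTimeout) // new data: reset the timer
		} else {
			time.Sleep(10 * time.Millisecond) // nothing new: back off briefly
		}
	}
}

func main() {
	remaining := 3
	drainUntilQuiet(func() int { // fake reader that returns data three times
		if remaining > 0 {
			remaining--
			return 1
		}
		return 0
	}, 50*time.Millisecond)
	fmt.Println("drained")
}
```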

Should fix https://github.com/docker/for-linux/issues/575

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit f091febc94)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-09 16:47:34 -07:00
Kir Kolyshkin
03b1b078f9 Call sd_journal_get_fd() earlier, only if needed
1. The journald client library initializes inotify watch(es)
during the first call to sd_journal_get_fd(), and it makes sense
to open it earlier in order to not lose any journal file rotation
events.

2. It only makes sense to call this if we're going to use it
later on -- so add a check for config.Follow.

3. Remove the redundant call to sd_journal_get_fd().

NOTE that any subsequent calls to sd_journal_get_fd() return
the same file descriptor, so there's no real need to save it
for later use in wait_for_data_cancelable().

Based on earlier patch by Nalin Dahyabhai <nalin@redhat.com>.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 981c01665b)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-09 16:47:34 -07:00
Kir Kolyshkin
5067389c36 journald/read: avoid being blocked on send
In case the LogConsumer is gone, the code that sends the message can get
stuck forever. Wrap the code in a select case, as all other loggers do.
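
A minimal sketch of the select-wrapped send (channel names are illustrative):

```go
package main

import "fmt"

// send delivers msg to the consumer unless the consumer has already gone
// away (signalled by closing done), so the producer can never block forever.
func send(msgCh chan<- string, done <-chan struct{}, msg string) bool {
	select {
	case msgCh <- msg:
		return true
	case <-done:
		return false
	}
}

func main() {
	done := make(chan struct{})
	close(done) // consumer is gone
	fmt.Println(send(make(chan string), done, "hello")) // false, but no deadlock
}
```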

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 79039720c8)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-09 16:47:29 -07:00
Kir Kolyshkin
6d98ef8c69 journald/read: simplify walking backwards
In case Tail=N parameter is requested, we need to show N lines.
It does not make sense to walk backwards one by one if we can
do it at once. Now, if Since=T is also provided, make sure we
haven't jumped too far (before T), and if we did, move forward.

The primary motivation for this was to make the code simpler.

This also fixes a tiny bug in the "since" implementation.

Before this commit:
> $ docker logs -t --tail=6000 --since="2019-03-10T03:54:25.00" $ID | head
> 2019-03-10T03:54:24.999821000Z 95981

After:
> $ docker logs -t --tail=6000 --since="2019-03-10T03:54:25.00" $ID | head
> 2019-03-10T03:54:25.000013000Z 95982

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit ff3cd167ea)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-09 16:47:23 -07:00
Kir Kolyshkin
d5088c1488 journald/read: simplify code
Minor code simplification.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit e8f6166791)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-09 16:47:23 -07:00
Nalin Dahyabhai
df3689f8d0 Small journal cleanup
Clean up a deferred function call in the journal reading logic.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
(cherry picked from commit 1ada3e85bf)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-09 16:47:23 -07:00
Drew Erny
3fd0be03f0
Fix more grpc list message sizes
There are a few more places, apparently, that List operations against
Swarm exist, besides just in the List methods. This increases the max
received message size in those places.

Signed-off-by: Drew Erny <drew.erny@docker.com>
(cherry picked from commit a84a78e976)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-09 11:23:25 +02:00
Andrew Hsu
37d9901e0f
Merge pull request #306 from thaJeztah/19.03_bump_swarmkit
[19.03] bump swarmkit to 4fb9e961aba635f0240a140e89ece6d6c2082585 (bump_v19.03)
2019-08-08 22:52:52 -07:00
Andrew Hsu
29fe4e58c6
Merge pull request #317 from thaJeztah/19.03_backport_fix_39623
[19.03 backport] Fix regression in handling of NotFound err during startup ENGCORE-929
2019-08-08 20:39:02 -07:00
Deep Debroy
685565ad18
Fix regression in handling of NotFound err during startup
Signed-off-by: Deep Debroy <ddebroy@docker.com>
(cherry picked from commit 4d5b6260bc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-09 02:09:13 +02:00
Andrew Hsu
305b2416ea
Merge pull request #316 from thaJeztah/19.03_backport_buildkit_userns_remap_take2
[19.03 backport] builder-next: userns remap support and honor daemon's DNS config
2019-08-08 13:35:34 -07:00
Andrew Hsu
f5b64c3ffe
Merge pull request #294 from thaJeztah/19.03_backport_chroot_unsupported
[19.03 backport] Add realChroot for non linux/windows
2019-08-08 13:02:45 -07:00
Andrew Hsu
9f9dab03c1
Merge pull request #314 from thaJeztah/19.03_backport_revendor_go_winio
[19.03 backport] Update Microsoft/go-winio v0.4.14
2019-08-08 11:03:17 -07:00
Andrew Hsu
c7139be62b
Merge pull request #315 from thaJeztah/19.03_backport_fix_copy_on_windows
[19.03 backport] Builder: fix "COPY --from" to non-existing directory on Windows [ENGCORE-935]
2019-08-08 10:27:44 -07:00
Tonis Tiigi
b0ef7422b0
vendor: update buildkit to f5a55a95
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit c60e53a274)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 19:01:44 +02:00
Sebastiaan van Stijn
1fbed3ffc9
bump vndr to f5ab8fc5f, and revendor
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0a3c9b935c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 18:49:07 +02:00
Tibor Vass
dd85af0e12
build: buildkit now honors daemon's DNS config
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit a1cdd4bfcc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 18:43:00 +02:00
Tonis Tiigi
3bbf7b0d4d
builder-next: reset identitymapping if empty
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 0bdcc60c4c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 18:42:53 +02:00
Tonis Tiigi
bc9183ba0e
vendor: update buildkit to c2427506
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 5c484890e0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 18:42:44 +02:00
Tonis Tiigi
47517880ec
builder-next: userns remap support
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 07b3aac902)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 18:42:42 +02:00
Sebastiaan van Stijn
7b0cf8b16d
Revert "vendor: update buildkit to f5a55a95"
This reverts commit eaa83640fa.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 18:32:27 +02:00
Brian Goff
47a7f762d3
Add realChroot for non linux/windows
3029e765e2 broke compilation on
non-Linux/Windows systems.
This change fixes that.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 34d5b8867f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 18:15:26 +02:00
Justin Terry (VM)
8ba31dccd1
Update Microsoft/go-winio v0.4.14
Signed-off-by: Justin Terry (VM) <juterry@microsoft.com>
(cherry picked from commit 35fe16b7eb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 17:10:55 +02:00
Kevin Parsons
80376f9e13
Revendor go-winio
This is needed to provide fixes for ETW on ARM. The updated ETW package will
no-op on ARM, rather than crashing. Further changes are needed to Go itself to
allow ETW on ARM to work properly.

Signed-off-by: Kevin Parsons <kevpar@microsoft.com>
(cherry picked from commit e1f0f77bf4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 17:10:52 +02:00
Sebastiaan van Stijn
ee64eae903
bump swarmkit to 4fb9e961aba635f0240a140e89ece6d6c2082585 (bump_v19.03 branch)
full diff: 961ec3a56b...4fb9e961ab

included:

- docker/swarmkit#2873 [19.03 backport] Only update non-terminal tasks on node removal
  - backport of docker/swarmkit#2867 Only update non-terminal tasks on node removal

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 17:08:57 +02:00
Sebastiaan van Stijn
ff0a0e364b
Builder: fix "COPY --from" to non-existing directory on Windows
This fixes a regression introduced in 6d87f19142,
causing `COPY --from` to fail if the target directory does not exist:

```
FROM mcr.microsoft.com/windows/servercore:ltsc2019 as s1
RUN echo "Hello World" > /hello

FROM mcr.microsoft.com/windows/servercore:ltsc2019
COPY --from=s1 /hello /hello/another/world
```

Would produce an error:

```
Step 4/4 : COPY --from=s1 /hello /hello/another/world
failed to copy files: mkdir \\?: The filename, directory name, or volume label syntax is incorrect.
```

The cause for this was that Go's `os.MkdirAll()` does not support/detect volume GUID paths
(`\\?\Volume{dae8d3ac-b9a1-11e9-88eb-e8554b2ba1db}\hello\another}`), and as a result
attempted to create the volume as a directory (`\\?`), causing it to fail.

This patch replaces `os.MkdirAll()` with our own `system.MkdirAll()` function, which
is capable of detecting GUID volumes.
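
Purely for illustration (not the actual system.MkdirAll code), a volume-GUID path like the one in the error can be recognised by its prefix:

```go
package main

import (
	"fmt"
	"strings"
)

// isVolumeGUIDPath reports whether p starts with a Windows volume-GUID
// prefix, which plain os.MkdirAll would otherwise try to create component
// by component, failing on the `\\?` part.
func isVolumeGUIDPath(p string) bool {
	return strings.HasPrefix(p, `\\?\Volume{`)
}

func main() {
	fmt.Println(isVolumeGUIDPath(`\\?\Volume{dae8d3ac-b9a1-11e9-88eb-e8554b2ba1db}\hello`)) // true
}
```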

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5858a99267)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 17:05:11 +02:00
Sebastiaan van Stijn
791aa3c338
make.ps1: Run-IntegrationTests(): set working directory for test suite
This function changed to the correct working directory before starting the tests
(which is the same as on Linux); however, the process started via
`ProcessStartInfo` does not inherit this working directory, which caused
Windows tests to run with a different working directory than on Linux
(causing files used in tests to not be found).

From the documentation; https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.processstartinfo.workingdirectory?view=netframework-4.8

> When `UseShellExecute` is `true`, the fully qualified name of the directory that contains
> the process to be started. When the `UseShellExecute` property is `false`, the working
> directory for the process to be started. The default is an empty string (`""`).

This patch sets the `ProcessStartInfo.WorkingDirectory` to the correct working
directory before starting the process.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6ae46aeabf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 17:05:08 +02:00
Sebastiaan van Stijn
6e9aba883c
Merge pull request #313 from thaJeztah/19.03_backport_its_a_stretch_but_it_was_busted
[19.03 backport] use GO_VERSION and pin to debian stretch
2019-08-08 17:04:32 +02:00
Sebastiaan van Stijn
2f1984c6df
Pin Dockerfile to -stretch variant
The Golang base images switched to buster, which causes some breakage
in networking and in packages that are no longer available (`btrfs-tools`
is now an empty package, and `libprotobuf-c0-dev` is gone).

Some of our tests also start failing on stretch, and will have to be
investigated further;

```
15:13:06 --- FAIL: TestRenameAnonymousContainer (3.37s)
15:13:06     rename_test.go:168: assertion failed: 0 (int) != 1 (inspect.State.ExitCode int): container a7fe866d588d65f353f42ffc5ea5288e52700384e1d90850e9c3d4dce8657666 exited with the wrong exitcode:

15:13:38 --- FAIL: TestHostnameDnsResolution (2.23s)
15:13:38     run_linux_test.go:128: assertion failed:
15:13:38         --- ←
15:13:38         +++ →
15:13:38         @@ -1 +1,2 @@
15:13:38         +ping: bad address 'foobar'
15:13:38
15:13:38
15:13:38     run_linux_test.go:129: assertion failed: 0 (int) != 1 (res.ExitCode int)
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ed672bb523)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 03:21:20 +02:00
Sebastiaan van Stijn
640193b2bb
Windows: fix Golang version checks for GO_VERSION build-arg
This check was used to make sure we don't bump Go versions independently
(Linux/Windows). The Dockerfile switched to using a build-arg to allow
overriding the Go version, which rendered this check non-functional.

It also fails if Linux versions use a specific variant of the image;

08:41:31 ERROR: Failed 'ERROR: Mismatched GO versions between Dockerfile and Dockerfile.windows. Update your PR to ensure that both files are updated and in sync. ${GO_VERSION}-stretch ${GO_VERSION}' at 07/20/2019 08:41:31
08:41:31 At C:\gopath\src\github.com\docker\docker\hack\ci\windows.ps1:448 char:9
08:41:31 +         Throw "ERROR: Mismatched GO versions between Dockerfile and D ...
08:41:31 +         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This patch fixes the check by looking for the value of `GO_VERSION` instead
of looking at the `FROM` line (which is harder to parse).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4fa57a8191)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 03:21:12 +02:00
Kir Kolyshkin
97ca6434e0
TESTING.md: document GO_VERSION
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit a557538770)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 03:20:58 +02:00
Sebastiaan van Stijn
c364e5d1ba
Dockerfile: use GO_VERSION build-arg for overriding Go version
This allows overriding the version of Go without making modifications in the
source code, which can be useful to test against multiple versions.

For example:

    make GO_VERSION=1.13beta1 shell

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c6281bc438)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 03:20:49 +02:00
Sebastiaan van Stijn
3bf3a1ae65
Dockerfile: Use APT_MIRROR for security.debian.org as well
The fastly cdn mirror we're using also mirrors the debian security
repository;

```
Welcome to deb.debian.org (fastly instance)!

This is deb.debian.org. This service provides mirrors for the following Debian archive repositories:

/debian/
/debian-debug/
/debian-ports/
/debian-security/
The server deb.debian.org does not have packages itself, but the name has SRV records in DNS that let apt in stretch and later find places.
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c8f43b5f6f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-08-08 03:20:36 +02:00
Andrew Hsu
439ed140ee
Merge pull request #300 from AkihiroSuda/rootlesskit-060-1903
[19.03 backport] rootless: allow exposing dockerd TCP socket easily
2019-08-07 17:57:29 -07:00
Andrew Hsu
a50d77700e
Merge pull request #285 from thaJeztah/19.03_backport_bump_golang_1.12.6
[19.03 backport] Bump Golang 1.12.7
2019-08-07 17:48:08 -07:00
Andrew Hsu
6b7330dcd4
Merge pull request #310 from kolyshkin/19.03-quota-map
[19.03] backport projectquota: protect concurrent map access (ENGCORE-920)
2019-08-07 16:58:28 -07:00
Andrew Hsu
8ecf5409e9
Merge pull request #312 from tonistiigi/1903-buildkit-bump
[19.03] vendor: update buildkit to f5a55a95
2019-08-07 16:53:08 -07:00
Tonis Tiigi
6efcd74c6b builder-next: ensure timestamps set for metadata commands
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 1a2bd3cf7d)
2019-08-07 14:03:11 -07:00
Tonis Tiigi
eaa83640fa vendor: update buildkit to f5a55a95
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit c60e53a274)
2019-08-07 14:02:45 -07:00
Kirill Kolyshkin
cbdf487768
Merge pull request #309 from thaJeztah/19.03_backport_prevent_network_attach_panic
[19.03 backport] Prevent panic on network attach
2019-08-06 12:21:35 -07:00
Kir Kolyshkin
b0f01be33f projectquota: protect concurrent map access
Protect access to q.quotas map, and lock around changing nextProjectID.

Technically, the lock in findNextProjectID() is not needed as it is
only called during initialization, but one can never be too careful.
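
A minimal sketch of the kind of locking described here; field and method names are illustrative, not the actual projectquota code:

```go
package main

import (
	"fmt"
	"sync"
)

type control struct {
	mu            sync.Mutex // guards quotas and nextProjectID
	quotas        map[string]uint32
	nextProjectID uint32
}

// setQuota assigns the next project ID to target with the lock held, so
// concurrent callers cannot race on the map or the counter.
func (c *control) setQuota(target string) uint32 {
	c.mu.Lock()
	defer c.mu.Unlock()
	id := c.nextProjectID
	c.nextProjectID++
	c.quotas[target] = id
	return id
}

func main() {
	c := &control{quotas: make(map[string]uint32), nextProjectID: 1}
	fmt.Println(c.setQuota("/var/lib/docker/overlay2/abc")) // 1
}
```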

Fixes: 52897d1c09 ("projectquota: utility class for project quota controls")
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 1ac0a66a64)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-06 12:06:08 -07:00
Tonis Tiigi
80e2871d21 stats: avoid cgo in collector
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit cf104d85c3)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-01 14:38:02 -07:00
Tonis Tiigi
4ef8f6d323 copy: allow non-cgo build
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 230a55d337)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-01 14:38:02 -07:00
Tonis Tiigi
56ff8ccc91 quota: add noncgo build tag
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 186cd7cf4a)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-08-01 14:38:02 -07:00
Sebastiaan van Stijn
e01625bc70
Prevent panic on network attach
In situations where `container.NetworkSettings` was not nil, but
`container.NetworkSettings.Networks` was, a panic could occur:

```
2019-06-10 15:26:50.548309 I | http: panic serving @: assignment to entry in nil map
goroutine 1376 [running]:
net/http.(*conn).serve.func1(0xc4211068c0)
	/usr/local/go/src/net/http/server.go:1726 +0xd2
panic(0x558939d7e1e0, 0x55893a0c4410)
	/usr/local/go/src/runtime/panic.go:502 +0x22d
github.com/docker/docker/daemon.(*Daemon).updateNetworkSettings(0xc42090c5a0, 0xc420fb6fc0, 0x55893a101140, 0xc4210e0540, 0xc42112aa80, 0xc4217d77a0, 0x0)
	/go/src/github.com/docker/docker/daemon/container_operations.go:275 +0x40e
github.com/docker/docker/daemon.(*Daemon).updateNetworkConfig(0xc42090c5a0, 0xc420fb6fc0, 0x55893a101140, 0xc4210e0540, 0xc42112aa80, 0x55893a101101, 0xc4210e0540, 0x0)
	/go/src/github.com/docker/docker/daemon/container_operations.go:683 +0x219
github.com/docker/docker/daemon.(*Daemon).connectToNetwork(0xc42090c5a0, 0xc420fb6fc0, 0xc420e8290f, 0x40, 0xc42112aa80, 0x558937eabd01, 0x0, 0x0)
	/go/src/github.com/docker/docker/daemon/container_operations.go:728 +0x1cb
github.com/docker/docker/daemon.(*Daemon).ConnectToNetwork(0xc42090c5a0, 0xc420fb6fc0, 0xc420e8290f, 0x40, 0xc42112aa80, 0x0, 0x0)
	/go/src/github.com/docker/docker/daemon/container_operations.go:1046 +0x2b3
github.com/docker/docker/daemon.(*Daemon).ConnectContainerToNetwork(0xc42090c5a0, 0xc4214ca580, 0x40, 0xc420e8290f, 0x40, 0xc42112aa80, 0x2, 0xe600000000000001)
	/go/src/github.com/docker/docker/daemon/network.go:450 +0xa1
github.com/docker/docker/api/server/router/network.(*networkRouter).postNetworkConnect(0xc42121bbc0, 0x55893a0edee0, 0xc420de7cb0, 0x55893a0ec2e0, 0xc4207f0e00, 0xc420173600, 0xc420de7980, 0x5589394707cc, 0x5)
	/go/src/github.com/docker/docker/api/server/router/network/network_routes.go:278 +0x330
github.com/docker/docker/api/server/router/network.(*networkRouter).(github.com/docker/docker/api/server/router/network.postNetworkConnect)-fm(0x55893a0edee0, 0xc420de7cb0, 0x55893a0ec2e0, 0xc4207f0e00, 0xc420173600, 0xc420de7980, 0x558937fd89dc, 0x558939f2cec0)
	/go/src/github.com/docker/docker/api/server/router/network/network.go:37 +0x6b
github.com/docker/docker/api/server/middleware.ExperimentalMiddleware.WrapHandler.func1(0x55893a0edee0, 0xc420de7cb0, 0x55893a0ec2e0, 0xc4207f0e00, 0xc420173600, 0xc420de7980, 0x55893a0edee0, 0xc420de7cb0)
	/go/src/github.com/docker/docker/api/server/middleware/experimental.go:26 +0xda
github.com/docker/docker/api/server/middleware.VersionMiddleware.WrapHandler.func1(0x55893a0edee0, 0xc420de7a70, 0x55893a0ec2e0, 0xc4207f0e00, 0xc420173600, 0xc420de7980, 0x0, 0x0)
	/go/src/github.com/docker/docker/api/server/middleware/version.go:62 +0x401
github.com/docker/docker/pkg/authorization.(*Middleware).WrapHandler.func1(0x55893a0edee0, 0xc420de7a70, 0x55893a0ec2e0, 0xc4207f0e00, 0xc420173600, 0xc420de7980, 0x0, 0x558939640868)
	/go/src/github.com/docker/docker/pkg/authorization/middleware.go:59 +0x7ab
github.com/docker/docker/api/server/middleware.DebugRequestMiddleware.func1(0x55893a0edee0, 0xc420de7a70, 0x55893a0ec2e0, 0xc4207f0e00, 0xc420173600, 0xc420de7980, 0x55893a0edee0, 0xc420de7a70)
	/go/src/github.com/docker/docker/api/server/middleware/debug.go:53 +0x4b8
github.com/docker/docker/api/server.(*Server).makeHTTPHandler.func1(0x55893a0ec2e0, 0xc4207f0e00, 0xc420173600)
	/go/src/github.com/docker/docker/api/server/server.go:141 +0x19a
net/http.HandlerFunc.ServeHTTP(0xc420e0c0e0, 0x55893a0ec2e0, 0xc4207f0e00, 0xc420173600)
	/usr/local/go/src/net/http/server.go:1947 +0x46
github.com/docker/docker/vendor/github.com/gorilla/mux.(*Router).ServeHTTP(0xc420ce5950, 0x55893a0ec2e0, 0xc4207f0e00, 0xc420173600)
	/go/src/github.com/docker/docker/vendor/github.com/gorilla/mux/mux.go:103 +0x228
github.com/docker/docker/api/server.(*routerSwapper).ServeHTTP(0xc421078330, 0x55893a0ec2e0, 0xc4207f0e00, 0xc420173600)
	/go/src/github.com/docker/docker/api/server/router_swapper.go:29 +0x72
net/http.serverHandler.ServeHTTP(0xc420902f70, 0x55893a0ec2e0, 0xc4207f0e00, 0xc420173600)
	/usr/local/go/src/net/http/server.go:2697 +0xbe
net/http.(*conn).serve(0xc4211068c0, 0x55893a0ede20, 0xc420d81440)
	/usr/local/go/src/net/http/server.go:1830 +0x653
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2798 +0x27d
```

I have not been able to reproduce the situation, but preventing a panic should
not hurt.
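
A minimal sketch of the guard, with the types trimmed down for illustration (assignment to a nil map panics in Go, so the map has to exist before writing to it):

```go
package main

import "fmt"

type endpointSettings struct{ NetworkID string }

type networkSettings struct {
	Networks map[string]*endpointSettings
}

// attach initialises the Networks map if needed before writing to it,
// which is the nil-map situation the panic trace above shows.
func attach(s *networkSettings, name string, ep *endpointSettings) {
	if s.Networks == nil {
		s.Networks = make(map[string]*endpointSettings)
	}
	s.Networks[name] = ep
}

func main() {
	s := &networkSettings{} // Networks is nil here
	attach(s, "mynet", &endpointSettings{NetworkID: "abc123"})
	fmt.Println(len(s.Networks)) // 1
}
```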

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 651e694508)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-07-30 13:32:12 -07:00
Justin Cormack
fa8dd90ceb Initialize nss libraries in Glibc so that the dynamic libraries are loaded in the host
environment, not in the chroot from untrusted files.

See also OpenVZ a3f732ef75/src/enter.c (L227-L234)

Signed-off-by: Justin Cormack <justin.cormack@docker.com>
(cherry picked from commit cea6dca993c2b4cfa99b1e7a19ca134c8ebc236b)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-07-25 16:39:05 +00:00
Akihiro Suda
509a793378 rootless: allow exposing dockerd TCP socket easily
e.g.

  $ DOCKERD_ROOTLESS_ROOTLESSKIT_FLAGS="-p 0.0.0.0:2376:2376/tcp" \
   dockerd-rootless.sh --experimental \
   -H tcp://0.0.0.0:2376 \
   --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem

This commit bumps up RootlessKit from v0.4.1 to v0.6.0:
27a0c7a248...2fcff6ceae

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 34f4729bc0)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2019-07-18 13:03:33 +09:00
Andrew Hsu
705d9623b7
Merge pull request #299 from thaJeztah/19.03_backport_scrub
[19.03 backport] DebugRequestMiddleware: unconditionally scrub data field
2019-07-17 09:10:51 -07:00
Sebastiaan van Stijn
c687381870
DebugRequestMiddleware: Remove path handling
Path-specific rules were removed, so this is no longer used.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 530e63c1a61b105a6f7fc143c5acb9b5cd87f958)
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit f8a0f26843)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-07-17 17:33:29 +02:00
Sebastiaan van Stijn
1eadbf1bd0
DebugRequestMiddleware: unconditionally scrub data field
Commit 77b8465d7e added a secret update
endpoint to allow updating labels on existing secrets. However, when
implementing the endpoint, the DebugRequestMiddleware was not updated
to scrub the Data field (as is being done when creating a secret).

When updating a secret (to set labels), the Data field should be either
`nil` (not set), or contain the same value as the existing secret. In
situations where the Data field is set, and the `dockerd` daemon is
running with debugging enabled / log-level debug, the base64-encoded
value of the secret is printed to the daemon logs.

The docker cli does not have a `docker secret update` command, but
when using `docker stack deploy`, the docker cli sends the secret
data both when _creating_ a stack, and when _updating_ a stack, thus
leaking the secret data if the daemon runs with debug enabled:

1. Start the daemon in debug-mode

        dockerd --debug

2. Initialize swarm

        docker swarm init

3. Create a file containing a secret

        echo secret > my_secret.txt

4. Create a docker-compose file using that secret

        cat > docker-compose.yml <<'EOF'
        version: "3.3"
        services:
          web:
            image: nginx:alpine
            secrets:
              - my_secret
        secrets:
          my_secret:
            file: ./my_secret.txt
        EOF

5. Deploy the stack

        docker stack deploy -c docker-compose.yml test

6. Verify that the secret is scrubbed in the daemon logs

        DEBU[2019-07-01T22:36:08.170617400Z] Calling POST /v1.30/secrets/create
        DEBU[2019-07-01T22:36:08.171364900Z] form data: {"Data":"*****","Labels":{"com.docker.stack.namespace":"test"},"Name":"test_my_secret"}

7. Re-deploy the stack to trigger an "update"

        docker stack deploy -c docker-compose.yml test

8. Notice that this time, the Data field is not scrubbed, and the base64-encoded secret is logged

        DEBU[2019-07-01T22:37:35.828819400Z] Calling POST /v1.30/secrets/w3hgvwpzl8yooq5ctnyp71v52/update?version=34
        DEBU[2019-07-01T22:37:35.829993700Z] form data: {"Data":"c2VjcmV0Cg==","Labels":{"com.docker.stack.namespace":"test"},"Name":"test_my_secret"}

This patch modifies `maskSecretKeys` to unconditionally scrub `Data` fields.
Currently, only the `secrets` and `configs` endpoints use a field with this
name, and no other POST API endpoints use a data field, so scrubbing this
field unconditionally will only scrub requests for those endpoints.

If a new endpoint is added in future where this field should not be scrubbed,
we can re-introduce more fine-grained (path-specific) handling.
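As a rough illustration of the unconditional scrubbing (a minimal sketch, not the daemon's actual implementation; the helper name and key set mirror this message), the masking step over a decoded request body could look like:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// maskSecretKeys is a hypothetical sketch: it recursively replaces the
// values of sensitive keys (now including "data") with "*****".
func maskSecretKeys(in map[string]interface{}) {
	sensitive := map[string]bool{
		"data": true, "jointoken": true, "password": true,
		"secret": true, "signingcakey": true, "unlockkey": true,
	}
	for k, v := range in {
		if sensitive[strings.ToLower(k)] {
			in[k] = "*****"
			continue
		}
		// Recurse into nested objects so maps such as Labels are covered too.
		if nested, ok := v.(map[string]interface{}); ok {
			maskSecretKeys(nested)
		}
	}
}

func main() {
	body := []byte(`{"Data":"c2VjcmV0Cg==","Labels":{"com.docker.stack.namespace":"test"},"Name":"test_my_secret"}`)
	var m map[string]interface{}
	if err := json.Unmarshal(body, &m); err != nil {
		panic(err)
	}
	maskSecretKeys(m)
	out, _ := json.Marshal(m)
	fmt.Println(string(out)) // Data is now "*****" regardless of endpoint
}
```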

This patch introduces some change in behavior:

- In addition to secrets, requests to create or update _configs_ will
  now have their `Data` field scrubbed. Generally, the actual data should
  not be interesting for debugging, so likely will not be problematic.
  In addition, scrubbing this data for configs may actually be desirable,
  because (even though they are not explicitly designed for this purpose)
  configs may contain sensitive data (credentials inside a configuration
  file, e.g.).
- Requests that send key/value pairs as a "map" and that contain a
  key named "data", will see the value of that field scrubbed. This
  means that (e.g.) setting a `label` named `data` on a config, will
  scrub/mask the value of that label.
- Note that this is already the case for any label named `jointoken`,
  `password`, `secret`, `signingcakey`, or `unlockkey`.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c7ce4be93ae8edd2da62a588e01c67313a4aba0c)
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 73db8c77bf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-07-17 17:33:27 +02:00
Sebastiaan van Stijn
685f13f3fd
TestMaskSecretKeys: use subtests
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 32d70c7e21631224674cd60021d3ec908c2d888c)
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit ebb542b3f8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-07-17 17:33:25 +02:00
Sebastiaan van Stijn
638cf86cbe
TestMaskSecretKeys: add more test-cases
Add tests for

- case-insensitive matching of fields
- recursive masking

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit db5f811216e70bcb4a10e477c1558d6c68f618c5)
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 18dac2cf32)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-07-17 17:33:22 +02:00
Jintao Zhang
d27a919cd2
Bump Golang 1.12.7
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit aafdb63f21)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-07-17 16:50:19 +02:00
Andrew Hsu
a69cd8239f
Merge pull request #287 from thaJeztah/19.03_backport_testForIpvlan
[19.03 backport] For ipvlan tests check that the ipvlan module is enabled
2019-06-25 11:48:57 -07:00
Kir Kolyshkin
8a2f96096a
For ipvlan tests check that the ipvlan module is enabled (instead of just ensuring the kernel version is greater than 4.2)
Co-Authored-By: Jim Ehrismann <jim-docker@users.noreply.github.com>
Co-Authored-By: Sebastiaan van Stijn <thaJeztah@users.noreply.github.com>
Signed-off-by: Jim Ehrismann <jim.ehrismann@docker.com>
(cherry picked from commit a77e147d32)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-25 00:40:32 +02:00
Andrew Hsu
b07f53d0a4
Merge pull request #284 from tiborvass/19.03-revert-remove-legacy-registry
[19.03] Keep but deprecate registry v2 schema1 logic and revert to libtrust-key-based engine ID
2019-06-18 14:30:11 -07:00
Tibor Vass
e61e107040 validate: temporarily disable deprecate-integration-cli as part of a revert
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 3f1cdd5364)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-18 18:55:04 +00:00
Tibor Vass
023166b530 Add deprecation message for schema1
This will add a warning log in the daemon, and will send the message
to be displayed by the CLI.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit d35f8f4329)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-18 18:55:02 +00:00
Tibor Vass
884c9e268f Add test for keeping same daemon ID on upgrade
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit f923321aae)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-18 18:55:00 +00:00
Tibor Vass
99678a93ed Remove v1 manifest code
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 53dad9f027)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-18 18:54:59 +00:00
Tibor Vass
99cd23cefd Revert "Remove the rest of v1 manifest support"
This reverts commit 98fc09128b in order to
keep registry v2 schema1 handling and libtrust-key-based engine ID.

Because registry v2 schema1 was not officially deprecated and
registries are still relying on it, this patch puts its logic back.

However, registry v1 relics are not added back since v1 logic has been
removed a while ago.

This also fixes an engine upgrade issue in a swarm cluster. It was relying
on the Engine ID staying the same upon upgrade, but the mentioned commit
modified the logic to use a UUID, read from a different file.

Since the libtrust key is always needed to support v2 schema1 pushes, the
old engine ID is based on the libtrust key, and the engine ID needs to be
preserved across upgrades, adding UUID-based engine ID logic adds
more complexity than it solves.

Hence reverting the engine ID changes as well.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit f695e98cb7)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-18 18:54:57 +00:00
Tibor Vass
4d3dfd24ec use gotest.tools assertions in docker_cli_push_test.go
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 0811297608)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-18 18:54:55 +00:00
Tibor Vass
21ae66c664 Revert "Remove Schema1 integration test suite"
This reverts commit 13b7d11be1.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit f23a51a860)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-18 18:54:45 +00:00
Andrew Hsu
da6dddcd04
Merge pull request #279 from thaJeztah/19.03_backport_attach_to_existing_network_error
[19.03 backport] Handle the error case when a container reattaches to the same network
2019-06-18 10:30:28 -07:00
Jintao Zhang
d1b0475d89
Bump Golang 1.12.6
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit 6f446d041b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-18 09:40:13 +01:00
Andrew Hsu
42757e8794
Merge pull request #277 from thaJeztah/19.03_backport_enable_new_integration_tests_for_win
[19.03 backport] Enable integrations API tests for Windows CI
2019-06-17 12:05:26 -07:00
Andrew Hsu
3452f743ab
Merge pull request #280 from tiborvass/19.03-chroot-tar-untar-and-cp-slash-fix
[19.03] Add chroot back to Tar/Untar without the previously introduced regression
2019-06-17 12:04:34 -07:00
Andrew Hsu
b9cd7b59b6
Merge pull request #261 from kolyshkin/19.03-aufs-lock
[19.03 backport ENGCORE-831] aufs optimizations #39107
2019-06-17 12:02:48 -07:00
Tibor Vass
8f4b96f19e integration: have container.Create call compile
For reference on why this is needed:
https://github.com/docker/engine/pull/280#issuecomment-502056661

Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-14 18:26:12 +00:00
Tibor Vass
186afe3ce3 pkg/archive: keep walkRoot clean if source is /
Previously, getWalkRoot("/", "foo") would return "//foo"
Now it returns "/foo"
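A minimal sketch of the fixed join logic, assuming a helper of this shape (the real pkg/archive function may differ):

```go
package main

import "fmt"

// getWalkRoot is a hypothetical sketch of the corrected behaviour described
// above: avoid producing a double slash when the source path is "/".
func getWalkRoot(srcPath, include string) string {
	if srcPath == "/" {
		return "/" + include
	}
	return srcPath + "/" + include
}

func main() {
	fmt.Println(getWalkRoot("/", "foo"))     // "/foo" (previously "//foo")
	fmt.Println(getWalkRoot("/data", "foo")) // "/data/foo" (unchanged)
}
```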

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 7410f1a859)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-14 04:02:08 +00:00
Tibor Vass
a0063c534a daemon: fix docker cp when container source is /
Before 7a7357da, archive.TarResourceRebase was being used to copy files
and folders from the container. That function splits the source path
into a dirname + basename pair to support copying a file:
if you wanted to tar `dir/file` it would tar from `dir` the file `file`
(as part of the IncludedFiles option).

However, that path splitting logic was kept for folders as well, which
resulted in weird inputs to archive.TarWithOptions:
if you wanted to tar `dir1/dir2` it would tar from `dir1` the directory
`dir2` (as part of IncludedFiles option).

Although it was weird, it worked fine until we started chrooting into
the container rootfs when doing a `docker cp` with container source set
to `/` (cf 3029e765).

The fix is to only do the path splitting logic if the source is a file.

Unfortunately, 7a7357da added support for LCOW by duplicating some of
this subtle logic. Ideally we would need to do more refactoring of the
archive codebase to properly encapsulate these behaviors behind well-
documented APIs.

This fix does not do that. Instead, it fixes the issue inline.
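To illustrate the inline fix, here is a minimal, hypothetical Go sketch (the helper name and signature are assumptions, not the actual archive code): the (dir, base) split only happens for files, while directories are tarred as-is.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// splitTarSource is a hypothetical sketch of the fix described above: only
// split the source into (dir, base) for IncludedFiles when it is a file;
// for a directory, tar the directory itself.
func splitTarSource(sourcePath string, isDir bool) (srcDir, include string) {
	if isDir {
		return sourcePath, "."
	}
	return filepath.Dir(sourcePath), filepath.Base(sourcePath)
}

func main() {
	fmt.Println(splitTarSource("/dir1/dir2", true)) // tar /dir1/dir2 itself
	fmt.Println(splitTarSource("/dir/file", false)) // tar "file" from /dir
}
```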

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 171538c190)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-14 01:37:57 +00:00
Tibor Vass
9b97965f22 add more tests
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 02f1eb89a4)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-14 01:37:57 +00:00
Brian Goff
e3f83e7aa7 Add test for copying entire container rootfs
CID=$(docker create alpine)
docker cp $CID:/ out

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 6db9f1c3d6)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-14 01:37:57 +00:00
Tibor Vass
44023afb7d Revert "Revert "Add chroot for tar packing operations""
This reverts commit 96df6d4d0b.

Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-14 01:37:32 +00:00
Tibor Vass
29ff2800c3 Revert "Revert "Pass root to chroot to for chroot Untar""
This reverts commit 60013ba69b.

Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-14 01:37:30 +00:00
Arko Dasgupta
d44a48835f
Change Forbidden Error (403) to Conflict (409)
Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
(cherry picked from commit 31e8fcc678)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-12 20:00:59 +02:00
Arko Dasgupta
275bf7ec03
Gracefully handle the error case when a container retries
to attach to a network it is already connected to

Fixes - https://github.com/docker/for-linux/issues/632

Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
(cherry picked from commit 871acb1c86)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-12 20:00:51 +02:00
Olli Janatuinen
de45ce73eb
Enable integrations API tests for Windows CI
Signed-off-by: Olli Janatuinen <olli.janatuinen@gmail.com>
(cherry picked from commit 2f22247cad)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-12 10:16:04 +02:00
Andrew Hsu
ceb773e1ff
Merge pull request #275 from tiborvass/19.03-revert-chroot-tar-untar
[19.03] Revert Pass root to chroot to for chroot Tar/Untar (CVE-2018-15664)
2019-06-11 23:42:09 -07:00
Tibor Vass
60013ba69b Revert "Pass root to chroot to for chroot Untar"
This reverts commit 9781cceb09.

Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-12 04:06:21 +00:00
Tibor Vass
96df6d4d0b Revert "Add chroot for tar packing operations"
This reverts commit 3e057d527d.

Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-12 04:06:14 +00:00
Andrew Hsu
a33a82b42f
Merge pull request #267 from thaJeztah/19.03_backport_test_fixes
[19.03 backport] test-fixes and improvements
2019-06-11 15:15:40 -07:00
Andrew Hsu
367870a4d5
Merge pull request #274 from thaJeztah/19.03_backport_entropy_cannot_be_saved
[19.03 backport] Entropy cannot be saved
2019-06-11 13:53:11 -07:00
Andrew Hsu
175013d0cb
Merge pull request #270 from thaJeztah/19.03_backport_fix_swagger_copy
[19.03 backport] fix: fix lack of copyUIDGID in swagger.yaml
2019-06-11 10:55:29 -07:00
Andrew Hsu
a6905fa2e5
Merge pull request #272 from thaJeztah/19.03_backport_align_libnetwork
[19.03 backport] Re-align proxy commit with libnetwork vendor
2019-06-11 10:52:38 -07:00
Justin Cormack
510e79ebe9
Entropy cannot be saved
Remove non-cryptographic randomness.

Signed-off-by: Justin Cormack <justin.cormack@docker.com>
(cherry picked from commit 2df693e533)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-11 17:40:09 +02:00
Sebastiaan van Stijn
a8d1b4a1ab
Merge pull request #266 from thaJeztah/19.03_backport_do_not_order_uid_gid_mappings
[19.03 backport] Stop sorting uid and gid ranges in id maps
2019-06-11 09:43:14 +02:00
Sebastiaan van Stijn
88374fa982
Re-align proxy commit with libnetwork vendor
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 35069de3fd)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-09 11:18:39 +02:00
zhangyue
049a1090c3
fix: fix lack of copyUIDGID in swagger.yaml
Signed-off-by: Zhang Yue <zy675793960@yeah.net>
Signed-off-by: zhangyue <zy675793960@yeah.net>
(cherry picked from commit a4f828cb89)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-07 14:40:21 +02:00
Sebastiaan van Stijn
020bb75219
Harden TestPsListContainersFilterExited
This test runs on a daemon also used by other tests
so make sure we don't get failures if another test
doesn't cleanup or is running in parallel.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 915acffdb4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-07 14:13:23 +02:00
Brian Goff
a24b9087ce
Add log entries for daemon startup/shutdown
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 595987fd08)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-07 14:10:45 +02:00
Brian Goff
48786ba842
Optimize test daemon startup
This adds some logs, handles timers better, and sets a request timeout
for the ping request.

I'm not sure the ticker in that loop is what we really want since the
ticker keeps ticking while we are (attempting) to make a request... but
I opted to not change that for now.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 20ea8942b8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-07 14:10:37 +02:00
Sebastiaan van Stijn
dde48c6715
Merge pull request #264 from thaJeztah/19.03_backport_39290alternate
[19.03 backport] Windows: Don't attempt detach VHD for R/O layers
2019-06-06 14:19:47 +02:00
Jonas Dohse
e7c02a0508
Stop sorting uid and gid ranges in id maps
Moby currently sorts uid and gid ranges in id maps. This causes subuid
and subgid files to be interpreted wrongly.

The subuid file

```
> cat /etc/subuid
jonas:100000:1000
jonas:1000:1
```

configures that the container uids 0-999 are mapped to the host uids
100000-100999 and uid 1000 in the container is mapped to uid 1000 on the
host. The expected uid_map is:

```
> docker run ubuntu cat /proc/self/uid_map
         0     100000       1000
      1000       1000          1
```

Moby currently sorts the ranges by the first id in the range. Therefore
with the subuid file above the uid 0 in the container is mapped to uid
1000 on the host and the uids 1-1000 in the container are mapped to the
uids 100000-100999 on the host. The resulting uid_map is:

```
> docker run ubuntu cat /proc/self/uid_map
         0       1000          1
         1     100000       1000
```

The ordering was implemented to work around a limitation in Linux 3.8.
This is fixed since Linux 3.9 as stated on the user namespaces manpage
[1]:

> In the initial implementation (Linux 3.8), this requirement was
> satisfied by a simplistic implementation that imposed the further
> requirement that the values in both field 1 and field 2 of successive
> lines must be in ascending numerical order, which prevented some
> otherwise valid maps from being created.  Linux 3.9 and later fix this
> limitation, allowing any valid set of nonoverlapping maps.

This fix changes the interpretation of subuid and subgid files whose ids
are not in numerical order for each individual user. This breaks users
that rely on the current behaviour.

The desired mapping above - mapping low user ids in the container to high
user ids on the host and some higher user ids in the container to lower
user ids on the host - can unfortunately not be achieved with the current
behaviour.

[1] http://man7.org/linux/man-pages/man7/user_namespaces.7.html

Signed-off-by: Jonas Dohse <jonas@dohse.ch>
(cherry picked from commit c4628d79d2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-05 14:46:55 +02:00
John Howard
31722d3f5a
Windows: Don't attempt detach VHD for R/O layers
Signed-off-by: John Howard <jhoward@microsoft.com>
(cherry picked from commit 293c74ba79)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-05 14:45:56 +02:00
Kir Kolyshkin
a81278befe aufs: retry auplink flush
Running a bundled aufs benchmark sometimes results in this warning:

> WARN[0001] Couldn't run auplink before unmount /tmp/aufs-tests/aufs/mnt/XXXXX  error="exit status 22" storage-driver=aufs

If we take a look at what the auplink utility produces on stderr, we'll see:

> auplink:proc_mnt.c:96: /tmp/aufs-tests/aufs/mnt/XXXXX: Invalid argument

and auplink exits with exit code of 22 (EINVAL).

Looking into auplink source code, what happens is it tries to find a
record in /proc/self/mounts corresponding to the mount point (by using
setmntent()/getmntent_r() glibc functions), and it fails.

Some manual testing, as well as runtime testing with lots of printf
added on mount/unmount and calls to check the superblock fs magic on
the mount point (as in graphdriver.Mounted(graphdriver.FsMagicAufs, target)),
confirmed that this record is in fact there, but sometimes auplink
can't find it. I was also able to reproduce the same error (inability
to find a mount in /proc/self/mounts that should definitely be there)
using a small C program, mocking what `auplink` does:

```c
 #include <stdio.h>
 #include <err.h>
 #include <mntent.h>
 #include <string.h>
 #include <stdlib.h>

int main(int argc, char **argv)
{
	FILE *fp;
	struct mntent m, *p;
	char a[4096];
	char buf[4096 + 1024];
	int found =0, lines = 0;

	if (argc != 2) {
		fprintf(stderr, "Usage: %s <mountpoint>\n", argv[0]);
		exit(1);
	}

	fp = setmntent("/proc/self/mounts", "r");
	if (!fp) {
		err(1, "setmntent");
	}
	setvbuf(fp, a, _IOLBF, sizeof(a));
	while ((p = getmntent_r(fp, &m, buf, sizeof(buf)))) {
		lines++;
		if (!strcmp(p->mnt_dir, argv[1])) {
			found++;
		}
	}
	printf("found %d entries for %s (%d lines seen)\n", found, argv[1], lines);
	return !found;
}
```

I have also written a few other small C programs -- one that reads
/proc/self/mounts directly, and one that reads /proc/self/mountinfo instead.
They are also prone to the same occasional error.

It is not perfectly clear why this happens, but so far my best theory
is that when a lot of mounts/unmounts happen in parallel with reading
the contents of /proc/self/mounts, the kernel sometimes fails to provide
continuity (i.e. it skips some part of the file or mixes it up in some
other way). In other words, this is a kernel bug (which is probably
hard to fix unless some other interface to get a mount entry is added).

Now, there is no real fix, and the workaround I was able to come up
with is to retry when we get EINVAL. It usually works on the second
attempt, although I once saw it take two attempts to go through.
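A minimal sketch of that retry-on-EINVAL workaround (hypothetical helper, not the driver's actual code; the retry count and the small sleep are illustrative assumptions):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
)

// auplinkFlush is a hypothetical sketch of the workaround described above:
// retry "auplink <mnt> flush" when it exits with EINVAL (exit status 22),
// since a later attempt usually finds the mount entry.
func auplinkFlush(mountpoint string) error {
	var err error
	for i := 0; i < 3; i++ {
		err = exec.Command("auplink", mountpoint, "flush").Run()
		if err == nil {
			return nil
		}
		ee, ok := err.(*exec.ExitError)
		if !ok {
			break
		}
		status, ok := ee.Sys().(syscall.WaitStatus)
		if !ok || status.ExitStatus() != int(syscall.EINVAL) {
			break // only the EINVAL case described above is retried
		}
	}
	return fmt.Errorf("auplink flush %s: %v", mountpoint, err)
}

func main() {
	if err := auplinkFlush("/tmp/aufs-tests/aufs/mnt/XXXXX"); err != nil {
		fmt.Println("warning:", err)
	}
}
```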

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit ae431b10a9)
2019-06-04 15:07:53 -07:00
Kir Kolyshkin
cad766f6c7 aufs.Cleanup: optimize
Do not use filepath.Walk() as there's no requirement to recursively
go into every directory under mnt -- a (non-recursive) list of
directories in mnt is sufficient.

With filepath.Walk(), in case some container will fail to unmount,
it'll go through the whole container filesystem which is both
excessive and useless.

This is similar to commit f1a4592297 ("devmapper.shutdown:
optimize")

While at it, raise the priority of the "unmount error" message from debug
to a warning. Note we don't have to explicitly add `m`, as the unmount error
(from pkg/mount) will already include it.
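As an illustration of the non-recursive cleanup described above, a minimal sketch (with a hypothetical helper name and a callback standing in for the driver's unmount) could look like:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

// cleanupMounts is a hypothetical sketch of the optimization described
// above: list only the immediate children of the mnt directory instead of
// walking the whole tree with filepath.Walk.
func cleanupMounts(root string, unmount func(target string) error) error {
	entries, err := ioutil.ReadDir(filepath.Join(root, "mnt"))
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		target := filepath.Join(root, "mnt", e.Name())
		if err := unmount(target); err != nil {
			// the real change raises this from debug to a warning
			fmt.Printf("warning: unmounting %s failed: %v\n", target, err)
		}
	}
	return nil
}

func main() {
	_ = cleanupMounts("/var/lib/docker/aufs", func(string) error { return nil })
}
```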

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 8fda12c607)
2019-06-04 15:07:53 -07:00
Kir Kolyshkin
f0f7020b5d aufs: optimize lots of layers case
In case there is a large number of layers, so that the mount data won't fit
into a single memory page (4096 bytes on most platforms, which is good
enough for about 40 layers, depending on how long the graphdriver root path
is), we supply the additional layers via MS_REMOUNT, as described in the
aufs documentation.

Problem is, the current implementation does that one layer at a time
(i.e. there is one mount syscall per each additional layer).

Optimize the code to supply as many layers as we can fit in one page
(basically reusing the same code as for the original mount).

Note, per aufs docs, "[a]t remount-time, the options are interpreted
in the given order, e.g. left to right" so we should be good.

Tested on an image with ~100 layers.

Before (35 syscalls):
> [pid 22756] 1556919088.686955 mount("none", "/mnt/volume_sfo2_09/docker-aufs/aufs/mnt/a86f8c9dd0ec2486293119c20b0ec026e19bbc4d51332c554f7cf05d777c9866", "aufs", 0, "br:/mnt/volume_sfo2_09/docker-au"...) = 0 <0.000504>
> [pid 22756] 1556919088.687643 mount("none", "/mnt/volume_sfo2_09/docker-aufs/aufs/mnt/a86f8c9dd0ec2486293119c20b0ec026e19bbc4d51332c554f7cf05d777c9866", 0xc000c451b0, MS_REMOUNT, "append:/mnt/volume_sfo2_09/docke"...) = 0 <0.000105>
> [pid 22756] 1556919088.687851 mount("none", "/mnt/volume_sfo2_09/docker-aufs/aufs/mnt/a86f8c9dd0ec2486293119c20b0ec026e19bbc4d51332c554f7cf05d777c9866", 0xc000c451ba, MS_REMOUNT, "append:/mnt/volume_sfo2_09/docke"...) = 0 <0.000098>
> ..... (~30 lines skipped for clarity)
> [pid 22756] 1556919088.696182 mount("none", "/mnt/volume_sfo2_09/docker-aufs/aufs/mnt/a86f8c9dd0ec2486293119c20b0ec026e19bbc4d51332c554f7cf05d777c9866", 0xc000c45310, MS_REMOUNT, "append:/mnt/volume_sfo2_09/docke"...) = 0 <0.000266>

After (2 syscalls):
> [pid 24352] 1556919361.799889 mount("none", "/mnt/volume_sfo2_09/docker-aufs/aufs/mnt/8e7ba189e347a834e99eea4ed568f95b86cec809c227516afdc7c70286ff9a20", "aufs", 0, "br:/mnt/volume_sfo2_09/docker-au"...) = 0 <0.001717>
> [pid 24352] 1556919361.801761 mount("none", "/mnt/volume_sfo2_09/docker-aufs/aufs/mnt/8e7ba189e347a834e99eea4ed568f95b86cec809c227516afdc7c70286ff9a20", 0xc000dbecb0, MS_REMOUNT, "append:/mnt/volume_sfo2_09/docke"...) = 0 <0.001358>
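A minimal, hypothetical sketch of the batching idea (helper name and option format are illustrative, not the driver's actual code): pack as many append options as fit in one page-sized data string per remount call.

```go
package main

import "fmt"

// remountInBatches is a hypothetical sketch of the optimization described
// above: instead of one MS_REMOUNT mount(2) call per extra layer, pack as
// many "append:" options as fit into one page-sized data string.
func remountInBatches(target string, layers []string, pageSize int, remount func(target, data string) error) error {
	data := ""
	flush := func() error {
		if data == "" {
			return nil
		}
		err := remount(target, data)
		data = ""
		return err
	}
	for _, l := range layers {
		opt := "append:" + l + "=ro"
		if data != "" && len(data)+1+len(opt) > pageSize {
			if err := flush(); err != nil {
				return err
			}
		}
		if data != "" {
			data += ","
		}
		data += opt
	}
	return flush()
}

func main() {
	layers := []string{"/docker-aufs/aufs/diff/a", "/docker-aufs/aufs/diff/b"}
	_ = remountInBatches("/docker-aufs/aufs/mnt/id", layers, 4096, func(target, data string) error {
		fmt.Printf("mount -o remount,%s %s\n", data, target)
		return nil
	})
}
```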

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit d58c434bff)
2019-06-04 15:07:53 -07:00
Kir Kolyshkin
65ba452bb0 aufs: add lock around mount
Apparently there is some kind of race in the aufs kernel module code,
which leads to errors like:

[98221.158606] aufs au_xino_create2:186:dockerd[25801]: aufs.xino create err -17
[98221.162128] aufs au_xino_set:1229:dockerd[25801]: I/O Error, failed creating xino(-17).
[98362.239085] aufs au_xino_create2:186:dockerd[6348]: aufs.xino create err -17
[98362.243860] aufs au_xino_set:1229:dockerd[6348]: I/O Error, failed creating xino(-17).
[98373.775380] aufs au_xino_create:767:dockerd[27435]: open /dev/shm/aufs.xino(-17)
[98389.015640] aufs au_xino_create2:186:dockerd[26753]: aufs.xino create err -17
[98389.018776] aufs au_xino_set:1229:dockerd[26753]: I/O Error, failed creating xino(-17).
[98424.117584] aufs au_xino_create:767:dockerd[27105]: open /dev/shm/aufs.xino(-17)

So, we have to have a lock around mount syscall.

While at it, don't call the whole Unmount() on an error path, as
it leads to a bogus error from auplink flush.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 5cd62852fa)
2019-06-04 15:07:53 -07:00
Kir Kolyshkin
76d936ae76 aufs: aufsMount: better errors for unix.Mount()
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 5873768dbe)
2019-06-04 15:07:53 -07:00
Kir Kolyshkin
d1eae89590 aufs: use mount.Unmount
1. Use mount.Unmount() which ignores EINVAL ("not mounted") error,
and provides better error diagnostics (so we don't have to explicitly
add target to error messages).

2. Since we're ignoring "not mounted" error, we can call
multiple unmounts without any locking -- but since "auplink flush"
is still involved and can produce an error in logs, let's keep
the check for fs being mounted (it's just a statfs so should be fast).

3. While at it, improve the "can't unmount" error message in Put().

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 4beee98026)
2019-06-04 15:07:53 -07:00
Kir Kolyshkin
7d1414ec3e aufs: remove extra locking
Both mount and unmount calls are already protected by fine-grained
(per id) locks in Get()/Put() introduced in commit fc1cf1911b
("Add more locking to storage drivers"), so there's no point in
having a global lock in mount/unmount.

The only place from which unmount is called without any locking
is Cleanup() -- this is to be addressed in the next patch.

This reverts commit 824c24e680.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit f93750b2c4)
2019-06-04 15:07:53 -07:00
Andrew Hsu
5fbc0a16e2
Merge pull request #260 from thaJeztah/19.03_backport_buildkit_systemd_resolvconf
[19.03 backport] build: buildkit now also uses systemd's resolv.conf
2019-06-04 11:44:18 -07:00
Andrew Hsu
0678d71038
Merge pull request #252 from thaJeztah/19.03_bump_swarmkit
[19.03 backport] Revert docker/swarmkit#2804
2019-06-04 10:33:22 -07:00
Sebastiaan van Stijn
746dce1994
Merge pull request #256 from thaJeztah/19.03_backport_increase_swarmkit_grpc
[19.03 backport] Increase max recv gRPC message size for nodes and secrets
2019-06-04 19:01:27 +02:00
Sebastiaan van Stijn
36f0fe6524
Merge pull request #247 from thaJeztah/19.03_aufs_backports
[19.03 backport] backport layer store optimizations
2019-06-04 18:43:43 +02:00
Sebastiaan van Stijn
737d57bad6
Merge pull request #251 from thaJeztah/19.03_backport_fix_fix_win_tmp
[19.03 backport] Windows CI - Corrected LOCALAPPDATA location
2019-06-04 18:42:37 +02:00
Sebastiaan van Stijn
287240a965
Merge pull request #255 from thaJeztah/19.03_backport_ro_none_cgroupdriver
[19.03 backport] info: report cgroup driver as "none" when running rootless
2019-06-04 18:41:58 +02:00
Sebastiaan van Stijn
ca602fa7c6
Merge pull request #249 from thaJeztah/19.03_backport_fix_api_operation_PutContainerArchive
[19.03 backport] API: Set format of body parameter in operation PutContainerArchive to "binary"
2019-06-04 18:41:19 +02:00
Andrew Hsu
36324c3bbd
Merge pull request #254 from thaJeztah/19.03_backport_root_dir_on_copy
[19.03 backport] Pass root to chroot to for chroot Tar/Untar (CVE-2018-15664)
2019-06-04 09:33:40 -07:00
Andrew Hsu
21c33eb7e3
Merge pull request #259 from thaJeztah/19.03_backport_fix_build_panic
[19.03 backport] build: fix panic when exporting to tar
2019-06-04 09:16:37 -07:00
Tibor Vass
feb373a216
build: buildkit now also uses systemd's resolv.conf
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 8ff4ec98cf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-04 18:11:14 +02:00
Andrew Hsu
d7080a7a2e
Merge pull request #258 from thaJeztah/19.03_backport_update_buildkit
[19.03 backport] vendor: update buildkit to 37d53758
2019-06-04 09:08:14 -07:00
Tibor Vass
b915ec1e7b
build: fix panic when exporting to tar
Fixes a panic on `docker build -t foo -o - . >/dev/null`

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 6104eb1ae2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-04 11:21:18 +02:00
Tonis Tiigi
2de4afdee5
vendor: update buildkit to 37d53758
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 85bbbd4495)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-04 11:17:10 +02:00
Drew Erny
26a35ddcd1
Increase max recv gRPC message size for nodes and secrets
Increases the max received gRPC message size for Node and Secret list
operations. This has already been done for the other swarm types, but
was not done for these.

Signed-off-by: Drew Erny <drew.erny@docker.com>
(cherry picked from commit a0903e1fa3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 23:00:07 +02:00
Akihiro Suda
d575af39ac
rootless: update docker info docs
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit ca5aab19b4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 22:54:22 +02:00
Akihiro Suda
57b59f876e
info: report cgroup driver as "none" when running rootless
Previously `docker info` had reported "cgroupfs" as the cgroup driver
but the driver wasn't actually used at all.

This PR reports "none" as the cgroup driver so as to avoid confusion.
e.g. kubeadm/kubelet will detect cgroupless-ness by checking this docker
info field. https://github.com/rootless-containers/usernetes/pull/97

Note that user still cannot specify `native.cgroupdriver=none` manually.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 153466ba0a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 22:48:36 +02:00
Brian Goff
3e057d527d
Add chroot for tar packing operations
Previously only unpack operations were supported with chroot.
This adds chroot support for packing operations.
This prevents potential breakouts when copying data from a container.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 3029e765e2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 18:55:45 +02:00
Brian Goff
9781cceb09
Pass root to chroot to for chroot Untar
This is useful for preventing CVE-2018-15664 where a malicious container
process can take advantage of a race on symlink resolution/sanitization.

Before this change chrootarchive would chroot to the destination
directory which is attacker controlled. With this patch we always chroot
to the container's root which is not attacker controlled.
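A highly simplified sketch of that idea (hypothetical helper; the callbacks stand in for the real chroot and unpack steps, which live in pkg/chrootarchive):

```go
package main

import "path/filepath"

// untarWithChroot is a hypothetical sketch of the approach described above:
// chroot to the container's root (which is not attacker controlled) and
// resolve the destination relative to that root before unpacking, so
// symlinks in the destination cannot escape the container rootfs.
func untarWithChroot(containerRoot, dest string, chroot func(dir string) error, unpack func(dest string) error) error {
	if err := chroot(containerRoot); err != nil {
		return err
	}
	rel, err := filepath.Rel(containerRoot, dest)
	if err != nil {
		return err
	}
	return unpack(filepath.Join("/", rel))
}

func main() {
	// no-op stand-ins; a real implementation would call syscall.Chroot and
	// the archive unpacker here
	_ = untarWithChroot("/var/lib/docker/overlay2/<id>/merged", "/var/lib/docker/overlay2/<id>/merged/tmp",
		func(string) error { return nil },
		func(string) error { return nil })
}
```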

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit d089b63937)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 18:55:37 +02:00
Drew Erny
d0f4f42bd4
Revert docker/swarmkit#2804
Reverts the change to swarmkit that made all updates set UpdateStatus to
Completed

Signed-off-by: Drew Erny <drew.erny@docker.com>
(cherry picked from commit c7d9599e3d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 15:52:54 +02:00
Olli Janatuinen
d59fb97c5b
Windows CI - Corrected LOCALAPPDATA location
Signed-off-by: Olli Janatuinen <olli.janatuinen@gmail.com>
(cherry picked from commit 61815f6763)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 11:25:45 +02:00
Dominic Tubach
e1e47d090d
API: Set format of body parameter in operation PutContainerArchive to "binary"
Signed-off-by: Dominic Tubach <dominic.tubach@to.com>
(cherry picked from commit fa6f63e79b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 11:08:05 +02:00
Sebastiaan van Stijn
a62d9b9c21
Merge pull request #232 from thaJeztah/19.03_backport_lb_stale_force_leave
[19.03 backport] Network not deleted after stack is removed
2019-05-29 22:47:45 +03:00
Sebastiaan van Stijn
a004854097
Merge pull request #229 from thaJeztah/19.03_backport_windows_tag
[19.03 backport] Consider WINDOWS_BASE_IMAGE_TAG override when setting Windows base image for tests
2019-05-27 21:07:10 +03:00
Sebastiaan van Stijn
5925508b31
Merge pull request #222 from thaJeztah/19.03_backport_swarmnanocpu
[19.03 backport] Switch swarmmode services to NanoCpu
2019-05-27 21:04:31 +03:00
Sebastiaan van Stijn
5051fe047c
Merge pull request #231 from AkihiroSuda/bk-ramdisk-1903
[19.03 backport] builder-next: support DOCKER_RAMDISK
2019-05-27 21:01:01 +03:00
Sebastiaan van Stijn
57a9697161
Merge pull request #241 from thaJeztah/19.03_swagger_fixes
[19.03 backport] swagger fixes
2019-05-27 20:54:35 +03:00
Kir Kolyshkin
936432326a
layer: protect from same-name races
As pointed out by Tonis, there's a race between ReleaseRWLayer()
and GetRWLayer():

```
----- goroutine 1 -----               ----- goroutine 2 -----
ReleaseRWLayer()
  m := ls.mounts[l.Name()]
  ...
  m.deleteReference(l)
  m.hasReferences()
  ...                                 GetRWLayer()
  ...                                   mount := ls.mounts[id]
  ls.driver.Remove(m.mountID)
  ls.store.RemoveMount(m.name)          return mount.getReference()
  delete(ls.mounts, m.Name())
-----------------------               -----------------------
```

When something like this happens, GetRWLayer will return
an RWLayer without a storage. Oops.

There might be more races like this, and it seems the best
solution is to lock by layer id/name by using pkg/locker.

With this in place, name collisions can no longer happen, so remove
the part of the previous commit that protected against them in
CreateRWLayer (temporary nil assignment and associated rollback).

So, now we have
* layerStore.mountL sync.Mutex to protect layerStore.mount map[]
  (against concurrent access);
* mountedLayer's embedded `sync.Mutex` to protect its references map[];
* layerStore.layerL (which I haven't touched);
* per-id locker, to avoid name conflicts and concurrent operations
  on the same rw layer.

The whole rig seems to look more readable now (mutexes use is
straightforward, no nested locks).
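As an illustration of the per-name locking, here is a simplified, self-contained sketch in the spirit of pkg/locker (not the actual package, which also reference-counts and frees idle entries):

```go
package main

import (
	"fmt"
	"sync"
)

// nameLocker serializes callers that use the same layer name while leaving
// different names free of contention.
type nameLocker struct {
	mu    sync.Mutex
	locks map[string]*sync.Mutex
}

func newNameLocker() *nameLocker {
	return &nameLocker{locks: map[string]*sync.Mutex{}}
}

func (l *nameLocker) Lock(name string) {
	l.mu.Lock()
	m, ok := l.locks[name]
	if !ok {
		m = &sync.Mutex{}
		l.locks[name] = m
	}
	l.mu.Unlock()
	m.Lock()
}

func (l *nameLocker) Unlock(name string) {
	l.mu.Lock()
	m := l.locks[name]
	l.mu.Unlock()
	m.Unlock()
}

func main() {
	locker := newNameLocker()
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// ReleaseRWLayer and GetRWLayer for the same name can no
			// longer interleave once both take this per-name lock.
			locker.Lock("layer-name")
			defer locker.Unlock("layer-name")
			fmt.Println("goroutine", i, "holds the lock")
		}(i)
	}
	wg.Wait()
}
```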

Reported-by: Tonis Tiigi <tonistiigi@gmail.com>
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit af433dd200)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 22:33:45 +02:00
Kir Kolyshkin
9eeb2b5ef0
layer/CreateRWLayerByGraphID: remove
This is an addition to commit 1fea38856a ("Remove v1.10 migrator")
aka PR #38265. Since that one, CreateRWLayerByGraphID() is not
used anywhere, so let's drop it.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit b4e9b50765)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 22:33:37 +02:00
Xinfeng Liu
eaa3e69d14
layer: optimize layerStore mountL
Goroutine stack analysis showed some lock contention
while doing massively parallel image removal
(100 instances of `docker rm`), with many goroutines waiting
for the mountL mutex. Optimize it.

With this commit, the above operation is about 3x
faster, with no noticeable change to container
creation times (tested on aufs and overlay2).

kolyshkin@:
- squashed commits
- added description
- protected CreateRWLayer against name collisions by
temporarily assigning nil to ls.mounts[name], and treating
nil as "non-existent" in all the other functions.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 05250a4f00)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 22:32:51 +02:00
Kir Kolyshkin
80a35e0bd4
layer: protect mountedLayer.references
Add a mutex to protect concurrent access to mountedLayer.references map.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit f73b5cb4e8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 22:32:43 +02:00
Adam Dobrawy
cdeef06801
Update docs to remove restriction of tty resize
Signed-off-by: Adam Dobrawy <naczelnik@jawnosc.tk>
(cherry picked from commit 4898f493d8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 21:58:53 +02:00
Dominic Tubach
181a64a5aa
API: Move "x-nullable: true" from type PortBinding to type PortMap
Currently the API spec would allow `"443/tcp": [null]`, but what should
be allowed is `"443/tcp": null`.
Signed-off-by: Dominic Tubach <dominic.tubach@to.com>
(cherry picked from commit 32b5d296ea)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 21:58:40 +02:00
Dominic Tubach
63eecadf82
API: Change type of RemoteAddrs to array of strings in operation SwarmJoin
Signed-off-by: Dominic Tubach <dominic.tubach@to.com>
(cherry picked from commit d5f6bdb027)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 21:58:28 +02:00
Arko Dasgupta
2b216674da
Network not deleted after stack is removed
Make sure adapter.removeNetworks executes during task Remove.
adapter.removeNetworks was being skipped for cases when
isUnknownContainer(err) was true after adapter.remove was executed.

This fix eliminates the nil return case, forcing the function
to continue executing unless there is a true error.

Fixes https://github.com/moby/moby/issues/39225

Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
(cherry picked from commit 70fa7b6a3f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 21:53:13 +02:00
Andrew Hsu
868d87b08e
Merge pull request #224 from thaJeztah/19.03_backport_devno
[19.03 backport] bugfix: fetch the right device number which is greater than 255
2019-05-24 09:46:38 -07:00
Andrew Hsu
a7e03f69be
Merge pull request #216 from AkihiroSuda/rootless-fix-kill-1903
[19.03 backport] rootless: fix killing daemon
2019-05-24 09:46:01 -07:00
Sebastiaan van Stijn
96daf37c83
Merge pull request #238 from thaJeztah/19.03_backport_remove_TestSearchCmdOptions
[19.03 backport] Remove TestSearchCmdOptions test
2019-05-23 22:52:11 +02:00
Sebastiaan van Stijn
3dec835d84
Merge pull request #217 from thaJeztah/19.03_backport_EDGE374_TestDaemonNoSpaceLeftOnDeviceError
[19.03 backport] explicitly set filesystem type for mount to avoid 'invalid argument' error on arm
2019-05-23 22:51:01 +02:00
Sebastiaan van Stijn
4da607559f
Merge pull request #211 from thaJeztah/19.03_backport_api_fixes
[19.03 backport] backport small API fixes
2019-05-23 22:50:12 +02:00
Sebastiaan van Stijn
8dd7bd9981
Merge pull request #234 from thaJeztah/19.03_backport_update_seccomp_test_for_aarch64
[19.03 backport] Update TestRunWithDaemonDefaultSeccompProfile for ARM64
2019-05-23 22:49:21 +02:00
Sebastiaan van Stijn
7cc3681ad6
Merge pull request #206 from thaJeztah/19.03_backport_no_retry_ping_on_errconn
[19.03 backport] client: do not fallback to GET if HEAD on _ping fail to connect
2019-05-23 22:48:02 +02:00
Sebastiaan van Stijn
e205cd89cd
Merge pull request #228 from thaJeztah/19.03_backport_bump_libnetwork
[19.03 backport] bump libnetwork 5ac07abef4eee176423fdc1b870d435258e2d381
2019-05-23 21:47:57 +02:00
Sebastiaan van Stijn
c56df1abf3
Merge pull request #235 from thaJeztah/19.03_backport_make_sure_to_hydrate_when_eating_pretzels
[19.03 backport] Fix error handling for bind mount spec parser.
2019-05-23 12:09:08 +02:00
Sebastiaan van Stijn
d8185417d9
Remove TestSearchCmdOptions test
This test is dependent on the search results returned by Docker Hub, which
can change at any moment, and causes this test to be unpredictable.

Removing this test instead of trying to catch up with Docker Hub any time
the results change, because it's effectively testing Docker Hub, and not
the daemon.

Unit tests are already in place to test the core functionality of the daemon,
so it should be safe to remove this test.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 21e662c774)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-23 10:58:06 +02:00
Brian Goff
7cb78b6259
Fix error handling for bind mount spec parser.
Errors were being ignored and always telling the user that the path
doesn't exist even if it was some other problem, such as a permission
error.
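A minimal sketch of the corrected error handling (hypothetical helper name, not the actual parser code): only report "does not exist" when the error really is ENOENT, and surface other errors instead of swallowing them.

```go
package main

import (
	"fmt"
	"os"
)

// validateBindSource is a hypothetical sketch of the fix described above.
func validateBindSource(path string) error {
	_, err := os.Stat(path)
	switch {
	case err == nil:
		return nil
	case os.IsNotExist(err):
		return fmt.Errorf("bind source path does not exist: %s", path)
	default:
		return err // previously this case was also reported as "does not exist"
	}
}

func main() {
	fmt.Println(validateBindSource("/no/such/path"))
}
```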

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit ebcef28834)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-23 01:09:37 +02:00
Sebastiaan van Stijn
79ac8f95af
Update TestRunWithDaemonDefaultSeccompProfile for ARM64
`chmod` is a legacy syscall, and not present on arm64, which
caused this test to fail.

Add `fchmodat` to the profile so that this test can run both
on x64 and arm64.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4bd8964b23)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-22 13:33:18 +02:00
Akihiro Suda
1c346f16a3 builder-next: support DOCKER_RAMDISK
For https://github.com/kubernetes/minikube/issues/4143

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit b4247b433e)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2019-05-22 06:58:44 +09:00
Deep Debroy
d347049802
Consider WINDOWS_BASE_IMAGE_TAG override when setting Windows base image for tests
Signed-off-by: Deep Debroy <ddebroy@docker.com>
(cherry picked from commit 15419d7ba0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-21 17:00:07 +02:00
Sebastiaan van Stijn
939aa52465
bump libnetwork 5ac07abef4eee176423fdc1b870d435258e2d381
full diff: 9ff9b57c34...5ac07abef4

brings in:

- docker/libnetwork#2376 Forcing a nil IP specified in PortBindings to IPv4zero (0.0.0.0)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a66ddd8ab8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-21 14:49:34 +02:00
Kir Kolyshkin
29c50668b3
int-cli/TestSearchCmdOptions: fail earlier
Sometimes this test fails (allegedly due to problems with Docker Hub),
but it fails later than it should, for example:

> 01:20:34.845 assertion failed: expression is false: strings.Count(outSearchCmdStars, "[OK]") <= strings.Count(outSearchCmd, "[OK]"): The quantity of images with stars should be less than that of all images: <...>

This, with a non-empty list of images following, means that the initial
`docker search busybox` command did not return enough results. So, add
a check that `docker search busybox` returns something.

While at it,
 * raise the number of stars to 10;
 * simplify check for number of lines (no need to count [OK]'s);
 * improve error message.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 4f80a1953d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-21 12:51:09 +02:00
Sebastiaan van Stijn
55c5381584
Merge pull request #220 from dperny/19.03-backport-constraintenforcer-fix
[19.03 backport] ConstraintEnforcer fix
2019-05-21 12:47:38 +02:00
frankyang
750e0ace06
bugfix: fetch the right device number which is greater than 255
Signed-off-by: frankyang <yyb196@gmail.com>
(cherry picked from commit b9f31912de)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-21 12:06:26 +02:00
Olly Pomeroy
29498693dd
Switch swarmmode services to NanoCpu
Today `$ docker service create --limit-cpu` configures a container's
`CpuPeriod` and `CpuQuota` variables; this commit switches this to
configure a container's `NanoCpu` variable instead.
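For illustration, a small sketch contrasting the two representations (the helper name and the 100ms CFS period are assumptions, not the actual service code):

```go
package main

import "fmt"

// cpuLimitToResources shows the same CPU limit expressed both as the old
// CpuPeriod/CpuQuota pair and as the single NanoCpus value used after this
// change.
func cpuLimitToResources(cpus float64) (nanoCPUs, period, quota int64) {
	nanoCPUs = int64(cpus * 1e9)          // e.g. 1.5 CPUs -> 1500000000
	period = 100000                       // default CFS period in microseconds
	quota = int64(cpus * float64(period)) // e.g. 1.5 CPUs -> 150000
	return nanoCPUs, period, quota
}

func main() {
	n, p, q := cpuLimitToResources(1.5)
	fmt.Printf("new: NanoCpus=%d   old: CpuPeriod=%d CpuQuota=%d\n", n, p, q)
}
```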

Signed-off-by: Olly Pomeroy <olly@docker.com>
(cherry picked from commit 8a60a1e14a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-21 11:17:37 +02:00
Drew Erny
56e92239a6 backport ConstraintEnforcer fix
Revendors docker/swarmkit to backport fixes made to the
ConstraintEnforcer (see docker/swarmkit#2857)

Signed-off-by: Drew Erny <drew.erny@docker.com>
2019-05-20 13:30:00 -05:00
Jim Ehrismann
11319732ab
explicitly set filesystem type for mount to avoid 'invalid argument' error on arm
Signed-off-by: Jim Ehrismann <jim.ehrismann@docker.com>
(cherry picked from commit d7de1a8b9f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-17 17:18:16 +02:00
Akihiro Suda
853816ae79 dockerd-rootless.sh: use exec
Killing the shell script process does not kill the forked process.

This commit switches to `exec` so that the executed process can be
easily killed.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 34cc5c24d0)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2019-05-16 22:06:01 +09:00
Akihiro Suda
8f61032ec4 bump up rootlesskit to v0.4.1
Now the child process is killed when the parent dies (rootless-containers/rootlesskit#66)

e92d5e7...27a0c7a

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 00c92a6719)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2019-05-16 22:05:48 +09:00
Sebastiaan van Stijn
bff7e300e6
Merge pull request #215 from thaJeztah/19.03_backport_buildkit_fixes
[19.03 backport] BuildKit fixes
2019-05-13 20:16:34 -07:00
Andrew Hsu
ff44133643
Merge pull request #214 from thaJeztah/19.03_backport_log-daemon-exit-before-tests-finish
[19.03 backport] Ensure all integration daemon logging happens before test exit
2019-05-13 19:13:34 -07:00
Andrew Hsu
9fdccf6a47
Merge pull request #212 from thaJeztah/19.03_backport_gcr_fix
[19.03 backport] builder-next: fix gcr workaround token cache
2019-05-13 19:11:55 -07:00
Andrew Hsu
3f4657f6db
Merge pull request #210 from thaJeztah/19.03_backport_bump_runc_1.0.0-rc.8
[19.03 backport] Bump runc 1.0.0-rc8, opencontainers/selinux v1.2.2
2019-05-13 19:06:00 -07:00
Sebastiaan van Stijn
dcc05fcf3e
Merge pull request #213 from thaJeztah/19.03_backport_remove_stale_lb_ep
[19.03 backport] Remove a network during task SHUTDOWN instead of REMOVE to
2019-05-13 18:52:53 -07:00
Tibor Vass
03ce4080a4
Merge pull request #208 from thaJeztah/19.03_backport_rootless_fixes
[19.03 backport] backport rootless fixes
2019-05-13 18:38:17 -07:00
Andrew Hsu
61828453db
Merge pull request #209 from thaJeztah/19.03_backport_bump_golang_1.12.5
[19.03 backport] Bump Golang 1.12.5
2019-05-13 18:05:15 -07:00
Sebastiaan van Stijn
d371b283c3
bump google.golang.org/grpc v1.20.1
full diff: https://github.com/grpc/grpc-go/compare/v1.12.2...v1.20.1

includes  grpc/grpc-go#2695 transport: do not close channel that can lead to panic
addresses moby/moby#39053

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 28ad54d84f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:59:40 -07:00
Tonis Tiigi
4784740273
builder-next: call stopprogress on download error
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 91a57f3e7f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:44:38 -07:00
Tonis Tiigi
31b0688de7
vendor: update buildkit to f238f1ef
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit a3cbd53ed2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:44:23 -07:00
Sebastiaan van Stijn
6896305b57
bump golang.org/x/crypto 88737f569e3a9c7ab309cdc09a07fe7fc87233c3
no local changes, just syncing with containerd

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d6d2b30fd2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:43:56 -07:00
Sebastiaan van Stijn
931c4c1023
bump gogo/googleapis v1.2.0
full diff: 08a7655d27...v1.2.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5d51ac544b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:43:45 -07:00
Sebastiaan van Stijn
6cc14f5854
bump gogo/protobuf v1.2.1
full diff: https://github.com/gogo/protobuf/compare/v1.2.0...v1.2.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 647f31b7d0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:43:25 -07:00
Sebastiaan van Stijn
1910607215
bump containerd/console 0650fd9eeb50bab4fc99dceb9f2e14cf58f36e7f
full diff: c12b1e7919...0650fd9eeb

- containerd/console#30 Add common project repo checks/README references

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3d7d8a579f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:43:14 -07:00
Sebastiaan van Stijn
dfa1031015
bump containerd 3a3f0aac8819165839a41fee77a4f4ac8b103097
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 25e6487fc2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:42:40 -07:00
Sebastiaan van Stijn
790388a8c5
bump containerd/continuity aaeac12a7ffcd198ae25440a9dff125c2e2703a7
- containerd/continuity#140 Fix directory comparison in changes

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 447cbff50a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:25:31 -07:00
Sebastiaan van Stijn
ea09008423
bump buildkit v0.5.0
full diff: 8818c67cff...v0.5.0

- moby/buildkit#909 exporter: support unpack opt for image exporter
- moby/buildkit#961 dockerfile: allow subdirs for remote contexts

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3e4723cf33)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:25:19 -07:00
Daniel Sweet
8d8904f02b
Ensure all integration daemon logging happens before test exit
As of Go 1.12, the `testing` package panics if a goroutine logs to a
`testing.T` after the relevant test has completed. This was not
documented as a change at all; see the commit
95d06ab6c982f58b127b14a52c3325acf0bd3926 in the Go repository for the
relevant change.

At any point in the integration tests, tests could panic with the
message "Log in goroutine after TEST_FUNCTION has completed". This was
exacerbated by less direct logging I/O, e.g. running `make test` with
its output piped instead of attached to a TTY.

The most common cause of panics was that there was a race condition
between an exit logging goroutine and the `StopWithError` method:
`StopWithError` could return, causing the calling test method to return,
causing the `testing.T` to be marked as finished, before the goroutine
could log that the test daemon had exited. The fix is simple: capture
the result of `cmd.Wait()`, _then_ log, _then_ send the captured
result over the `Wait` channel. This ensures that the message is
logged before `StopWithError` can return, blocking the test method
so that the target `testing.T` is not marked as finished.
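A minimal sketch of that ordering (hypothetical function and logger, not the actual test-daemon helper): capture the exit result, log it, and only then unblock the waiter.

```go
package main

import (
	"fmt"
	"os/exec"
)

// waitAndLog captures the result of cmd.Wait(), logs it, and only then sends
// it on the Wait channel, so the caller (and therefore the test) cannot
// finish before the log line has been written.
func waitAndLog(cmd *exec.Cmd, logf func(format string, args ...interface{}), wait chan<- error) {
	err := cmd.Wait()              // 1. capture the exit result
	logf("daemon exited: %v", err) // 2. log while the testing.T is still live
	wait <- err                    // 3. unblock StopWithError last
}

func main() {
	cmd := exec.Command("true")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	wait := make(chan error, 1)
	go waitAndLog(cmd, func(f string, a ...interface{}) { fmt.Printf(f+"\n", a...) }, wait)
	fmt.Println("wait result:", <-wait)
}
```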

Signed-off-by: Daniel Sweet <danieljsweet@icloud.com>
(cherry picked from commit 7546322e99)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:50:43 -07:00
Arko Dasgupta
2a7513a972
Remove a network during task SHUTDOWN instead of REMOVE to
make sure the LB sandbox is removed when a service is updated
with a --network-rm option

Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
(cherry picked from commit 680d0ba4ab)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:47:35 -07:00
Tonis Tiigi
c47f2a4a1a
builder-next: fix gcr workaround token cache
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit cfce0acd33)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:45:27 -07:00
Yash Murty
526a72fd77
Remove DiskQouta field.
Signed-off-by: Yash Murty <yashmurty@gmail.com>
(cherry picked from commit a31a088665)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:42:41 -07:00
Sebastiaan van Stijn
f76879dd64
Add "import" statement to generated API types
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 93886fcc5a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:42:22 -07:00
Sebastiaan van Stijn
e7a837120d
bump opencontainers/selinux v1.2.2
full diff: https://github.com/opencontainers/selinux/compare/v1.2.1...v1.2.2

- opencontainers/selinux#51 Older kernels do not support keyring labeling

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0d453115fe)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:40:02 -07:00
Sebastiaan van Stijn
04c51495da
bump runc binary v1.0.0-rc8
full diff: 029124da7a...425e105d5a

- opencontainers/runc#2043 Vendor in latest selinux code for keycreate errors

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4bc310c11b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:39:54 -07:00
Sebastiaan van Stijn
02baf07d77
bump runc vendor v1.0.0-rc8
full diff: 029124da7a...425e105d5a

- opencontainers/runc#2043 Vendor in latest selinux code for keycreate errors

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6df6fe6020)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:39:46 -07:00
Jintao Zhang
6d0823af0a
Bump Golang 1.12.5
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit 3a4c5b6a0d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:37:07 -07:00
Akihiro Suda
8493fb18ae
dockerd: fix rootless detection (alternative to #39024)
The `--rootless` flag had a couple of issues:
* #38702: euid=0, $USER="root" but no access to cgroup ("rootful" Docker in rootless Docker)
* #39009: euid=0 but $USER="docker" (rootful boot2docker)

To fix #38702, XDG dirs are ignored as in rootful Docker, unless the
dockerd is directly running under RootlessKit namespaces.

RootlessKit detection is implemented by checking whether `$ROOTLESSKIT_STATE_DIR` is set.

To fix #39009, the non-robust `$USER` check is now completely removed.

The entire logic can be illustrated as follows:

```
withRootlessKit := getenv("ROOTLESSKIT_STATE_DIR")
rootlessMode := withRootlessKit || cliFlag("--rootless")
honorXDG := withRootlessKit
useRootlessKitDockerProxy := withRootlessKit
removeCgroupSpec := rootlessMode
adjustOOMScoreAdj := rootlessMode
```

Close #39024
Fix #38702 #39009

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 3518383ed9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:31:31 -07:00
Akihiro Suda
e8b9a752d3
rootless: optional support for lxc-user-nic SUID binary
lxc-user-nic can eliminate slirp overhead but needs /etc/lxc/lxc-usernet to be configured for the current user.

To use lxc-user-nic, $DOCKERD_ROOTLESS_ROOTLESSKIT_NET=lxc-user-nic also needs to be set.

This commit also bumps up RootlessKit from v0.3.0 to v0.4.0:
70e0502f32...e92d5e772e

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 63a66b0eb0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:31:11 -07:00
Kir Kolyshkin
14bb71d508
Dockerfile.e2e: fix DOCKER_GITCOMMIT handling
1. There is no need to persist DOCKER_GITCOMMIT,
as it is only needed at build time, not at runtime.
So, remove the ENV instruction.

2. In case $GITCOMMIT is not defined at build time
(which happens when the .git directory is not present),
we still need some value to be set, so default it to
`undefined`. Otherwise we'll get something like

>  => ERROR [builder 2/3] RUN hack/make.sh build-integration-test-binary
> ------
>  > [builder 2/3] RUN hack/make.sh build-integration-test-binary:
> #32 0.488
> #32 0.505 error: .git directory missing and DOCKER_GITCOMMIT not specified
> #32 0.505   Please either build with the .git directory accessible, or specify the
> #32 0.505   exact (--short) commit hash you are building using DOCKER_GITCOMMIT for
> #32 0.505   future accountability in diagnosing build issues.  Thanks!

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit c3b24944ca)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:27:02 -07:00
Kir Kolyshkin
2e95499142
Dockerfile.e2e: copy test sources
Package "gotest.tools/assert" uses source introspection to
print more info in case of assertion failure. When source code
is not available, it prints an error instead.

In other words, before this commit:

> --- SKIP: TestCgroupDriverSystemdMemoryLimit (0.00s)
>     cgroupdriver_systemd_test.go:32: failed to parse source file: /go/src/github.com/docker/docker/integration/system/cgroupdriver_systemd_test.go: open /go/src/github.com/docker/docker/integration/system/cgroupdriver_systemd_test.go: no such file or directory
>     cgroupdriver_systemd_test.go:32:

and after:

> --- SKIP: TestCgroupDriverSystemdMemoryLimit (0.09s)
>    cgroupdriver_systemd_test.go:32: !hasSystemd()

This increases the resulting image size by about 2 MB
on my system (from 758 to 760 MB).
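
For illustration, a minimal test (hypothetical, not from the Moby tree) shows why the sources must be present in the image: gotest.tools (its assert and skip packages alike) re-reads the test's source file at runtime to print the original expression, e.g. `!hasSystemd()` in the SKIP output above. The import path assumes the gotest.tools v2 module vendored at the time.

```
package system

import (
	"os"
	"testing"

	"gotest.tools/skip"
)

// hasSystemd is a stand-in for the real helper used by the test above.
func hasSystemd() bool {
	_, err := os.Stat("/run/systemd/system")
	return err == nil
}

func TestCgroupDriverSystemdMemoryLimit(t *testing.T) {
	// Without the source file available, this skip is reported as a parse
	// error instead of the expression "!hasSystemd()".
	skip.If(t, !hasSystemd())
}
```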

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 0deb18ab42)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:26:53 -07:00
Kir Kolyshkin
8d428458a2
TestIpcModeOlderClient: skip if client < 1.40
This test case requires not just daemon >= 1.40, but also
client API >= 1.40. If an older client is used, we'll
get a failure from the very first check:

> ipcmode_linux_test.go:313: assertion failed: shareable (string) != private (string)
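
A hedged sketch of such a version gate, using the public `versions` helper package; the concrete helpers in the integration-cli suite differ, but the idea is the same:

```
package example

import (
	"testing"

	"github.com/docker/docker/api/types/versions"
	"github.com/docker/docker/client"
)

// skipIfClientOlderThan140 skips a test when the negotiated client API
// version is below 1.40 (illustrative helper, not the suite's real one).
func skipIfClientOlderThan140(t *testing.T, c *client.Client) {
	if versions.LessThan(c.ClientVersion(), "1.40") {
		t.Skip("requires client API version >= 1.40")
	}
}
```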

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 1ada1c8391)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:26:34 -07:00
Sebastiaan van Stijn
5f60a56544
Skip TestImagesFilterMultiReference on API < v1.40
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 83ac2b4c13)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:26:22 -07:00
Sebastiaan van Stijn
a3b4e92d66
Skip TestUUIDGeneration on API < v1.40
Older versions did not use a UUID as the ID

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 05bd9958f2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:26:10 -07:00
Sebastiaan van Stijn
cedf201aef
Skip TestPingCacheHeaders on API < v1.40
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d080a866cc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:25:55 -07:00
Sebastiaan van Stijn
545bc6b4d8
Skip TestBuildWithEmptyDockerfile on API < v1.40
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0e7b46aafe)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:25:47 -07:00
Sebastiaan van Stijn
620d9d3c75
Fix TestVolumesCreateAndList when running against a shared daemon
The daemon may already have other volumes, so filter those out
when running the test.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 566eea13e6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:25:34 -07:00
Sebastiaan van Stijn
e1b045c25e
Remove TestContainerAPICreateWithHostName
TestNISDomainname in the integration suite covers this

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2b5880c2eb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:25:25 -07:00
Sebastiaan van Stijn
11e2802015
Skip TestNISDomainname on API < 1.40
Older versions of the daemon would concatenate hostname and
domainname, so hostname "foobar" and domainname "baz.cyphar.com"
would produce `foobar.baz.cyphar.com` as hostname.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c91c3776ea)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:25:17 -07:00
Sebastiaan van Stijn
cb8d67505d
Dockerfile.e2e: builder: change output directory to simplify copy
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b73e3407e3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:25:07 -07:00
Sebastiaan van Stijn
7d3405b4ba
Dockerfile.e2e: move "contrib" to a separate build-stage
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3ededb850f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:24:58 -07:00
Sebastiaan van Stijn
d36c7de19e
Dockerfile.e2e: move dockercli to a separate build-stage
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e7784a6c7e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:24:50 -07:00
Sebastiaan van Stijn
6605a26c75
Dockerfile.e2e: use /build to be consistent with main Dockerfile
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 045beed6c8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:24:41 -07:00
Sebastiaan van Stijn
ce9cabf0f0
Dockerfile.e2e: re-order steps for caching
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 63aefbfbca)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:24:32 -07:00
Sebastiaan van Stijn
dc6d1ac663
Dockerfile.e2e: move frozen-images to a separate stage
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5554bd1a7b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:24:24 -07:00
Sebastiaan van Stijn
1fdd24579c
Dockerfile.e2e: use alpine 3.9
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 20262688df)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:24:15 -07:00
Sebastiaan van Stijn
3afbf83cc5
Dockerfile.e2e: fix TestBuildPreserveOwnership
The Dockerfile was missing some fixtures, which caused this test
to fail when running from this image.

I also noticed some other fixtures missing in integration-cli,
where the image had symlinks to some certificates, but the
original files were not included:

```
|-- integration-cli
    |-- fixtures
    |   |-- auth
    |   |   `-- docker-credential-shell-test
    |   |-- credentialspecs
    |   |   `-- valid.json
    |   |-- https
    |   |   |-- ca.pem -> ../../../integration/testdata/https/ca.pem
    |   |   |-- client-cert.pem -> ../../../integration/testdata/https/client-cert.pem
    |   |   |-- client-key.pem -> ../../../integration/testdata/https/client-key.pem
    |   |   |-- client-rogue-cert.pem
    |   |   |-- client-rogue-key.pem
    |   |   |-- server-cert.pem -> ../../../integration/testdata/https/server-cert.pem
    |   |   |-- server-key.pem -> ../../../integration/testdata/https/server-key.pem
    |   |   |-- server-rogue-cert.pem
    |   |   `-- server-rogue-key.pem
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 48fd0e921c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:23:48 -07:00
Ian Campbell
61a234d562
client: do not fall back to GET if HEAD on _ping fails to connect
When we see an `ECONNREFUSED` (or equivalent) from an attempted `HEAD` on the
`/_ping` endpoint, there is no point in trying again with `GET`, since the server
is not responding/available at all.

Once vendored into the cli this will partially mitigate https://github.com/docker/cli/issues/1739
("Docker commands take 1 minute to timeout if context endpoint is unreachable")
by cutting the effective timeout in half.
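
The idea can be sketched as follows (a simplified illustration, not the actual client code; the `ping` helper and its error handling are assumptions for the example):

```
package main

import (
	"errors"
	"net/http"
	"syscall"
)

// ping issues HEAD on the ping URL and only falls back to GET when HEAD
// failed for a reason other than a connection-level failure.
func ping(c *http.Client, url string) (*http.Response, error) {
	resp, err := c.Head(url)
	if err == nil {
		return resp, nil
	}
	// The daemon is unreachable; a GET to the same address cannot succeed
	// and would only double the time spent waiting.
	if errors.Is(err, syscall.ECONNREFUSED) {
		return nil, err
	}
	// Some proxies reject HEAD but allow GET, so fall back for other errors.
	return c.Get(url)
}
```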

Signed-off-by: Ian Campbell <ijc@docker.com>
(cherry picked from commit 8c8457b0f2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:17:51 -07:00
10277 changed files with 725259 additions and 1448855 deletions

21
.DEREK.yml Normal file

@ -0,0 +1,21 @@
curators:
- aboch
- alexellis
- andrewhsu
- anonymuse
- chanwit
- ehazlett
- fntlnz
- gianarb
- kolyshkin
- mgoelzer
- olljanat
- programmerq
- rheinwein
- ripcurld0
- thajeztah
features:
- comments
- pr_description_required


@ -1,21 +0,0 @@
{
"name": "moby",
"build": {
"context": "..",
"dockerfile": "../Dockerfile",
"target": "devcontainer"
},
"workspaceFolder": "/go/src/github.com/docker/docker",
"workspaceMount": "source=${localWorkspaceFolder},target=/go/src/github.com/docker/docker,type=bind,consistency=cached",
"remoteUser": "root",
"runArgs": ["--privileged"],
"customizations": {
"vscode": {
"extensions": [
"golang.go"
]
}
}
}


@ -1,4 +1,6 @@
+bundles
+.gopath
+vendor/pkg
+.go-pkg-cache
 .git
-bundles/
-cli/winresources/**/winres.json
-cli/winresources/**/*.syso

3
.gitattributes vendored

@ -1,3 +0,0 @@
Dockerfile* linguist-language=Dockerfile
vendor.mod linguist-language=Go-Module
vendor.sum linguist-language=Go-Checksums

5
.github/CODEOWNERS vendored

@ -5,8 +5,11 @@
 builder/** @tonistiigi
 contrib/mkimage/** @tianon
+daemon/graphdriver/devmapper/** @rhvgoyal
+daemon/graphdriver/lcow/** @johnstep @jhowardmsft
+daemon/graphdriver/overlay/** @dmcgowan
 daemon/graphdriver/overlay2/** @dmcgowan
-daemon/graphdriver/windows/** @johnstep
+daemon/graphdriver/windows/** @johnstep @jhowardmsft
 daemon/logger/awslogs/** @samuelkarp
 hack/** @tianon
 plugin/** @cpuguy83

70
.github/ISSUE_TEMPLATE.md vendored Normal file

@ -0,0 +1,70 @@
<!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.
For more information about reporting issues, see
https://github.com/moby/moby/blob/master/CONTRIBUTING.md#reporting-other-issues
---------------------------------------------------
GENERAL SUPPORT INFORMATION
---------------------------------------------------
The GitHub issue tracker is for bug reports and feature requests.
General support for **docker** can be found at the following locations:
- Docker Support Forums - https://forums.docker.com
- Slack - community.docker.com #general channel
- Post a question on StackOverflow, using the Docker tag
General support for **moby** can be found at the following locations:
- Moby Project Forums - https://forums.mobyproject.org
- Slack - community.docker.com #moby-project channel
- Post a question on StackOverflow, using the Moby tag
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->
**Description**
<!--
Briefly describe the problem you are having in a few paragraphs.
-->
**Steps to reproduce the issue:**
1.
2.
3.
**Describe the results you received:**
**Describe the results you expected:**
**Additional information you deem important (e.g. issue happens only occasionally):**
**Output of `docker version`:**
```
(paste your output here)
```
**Output of `docker info`:**
```
(paste your output here)
```
**Additional environment details (AWS, VirtualBox, physical, etc.):**


@ -1,146 +0,0 @@
name: Bug report
description: Create a report to help us improve
labels:
- kind/bug
- status/0-triage
body:
- type: markdown
attributes:
value: |
Thank you for taking the time to report a bug!
If this is a security issue please report it to the [Docker Security team](mailto:security@docker.com).
- type: textarea
id: description
attributes:
label: Description
description: Please give a clear and concise description of the bug
validations:
required: true
- type: textarea
id: repro
attributes:
label: Reproduce
description: Steps to reproduce the bug
placeholder: |
1. docker run ...
2. docker kill ...
3. docker rm ...
validations:
required: true
- type: textarea
id: expected
attributes:
label: Expected behavior
description: What is the expected behavior?
placeholder: |
E.g. "`docker rm` should remove the container and cleanup all associated data"
- type: textarea
id: version
attributes:
label: docker version
description: Output of `docker version`
render: bash
placeholder: |
Client:
Version: 20.10.17
API version: 1.41
Go version: go1.17.11
Git commit: 100c70180fde3601def79a59cc3e996aa553c9b9
Built: Mon Jun 6 21:36:39 UTC 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.17
API version: 1.41 (minimum version 1.12)
Go version: go1.17.11
Git commit: a89b84221c8560e7a3dee2a653353429e7628424
Built: Mon Jun 6 22:32:38 2022
OS/Arch: linux/amd64
Experimental: true
containerd:
Version: 1.6.6
GitCommit: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
runc:
Version: 1.1.2
GitCommit: a916309fff0f838eb94e928713dbc3c0d0ac7aa4
docker-init:
Version: 0.19.0
GitCommit:
validations:
required: true
- type: textarea
id: info
attributes:
label: docker info
description: Output of `docker info`
render: bash
placeholder: |
Client:
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc., 0.8.2)
compose: Docker Compose (Docker Inc., 2.6.0)
Server:
Containers: 4
Running: 2
Paused: 0
Stopped: 2
Images: 80
Server Version: 20.10.17
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: false
userxattr: false
Logging Driver: local
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
Default Runtime: runc
Init Binary: docker-init
containerd version: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
runc version: a916309fff0f838eb94e928713dbc3c0d0ac7aa4
init version:
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.13.0-1031-azure
Operating System: Ubuntu 20.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.63GiB
Name: dev
ID: UC44:2RFL:7NQ5:GGFW:34O5:DYRE:CLOH:VLGZ:64AZ:GFXC:PY6H:SAHY
Docker Root Dir: /var/lib/docker
Debug Mode: true
File Descriptors: 46
Goroutines: 134
System Time: 2022-07-06T18:07:54.812439392Z
EventsListeners: 0
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: true
validations:
required: true
- type: textarea
id: additional
attributes:
label: Additional Info
description: Additional info you want to provide such as logs, system info, environment, etc.
validations:
required: false


@ -1,8 +0,0 @@
blank_issues_enabled: false
contact_links:
- name: Security and Vulnerabilities
url: https://github.com/moby/moby/blob/master/SECURITY.md
about: Please report any security issues or vulnerabilities responsibly to the Docker security team. Please do not use the public issue tracker.
- name: Questions and Discussions
url: https://github.com/moby/moby/discussions/new
about: Use Github Discussions to ask questions and/or open discussion topics.


@ -1,13 +0,0 @@
name: Feature request
description: Missing functionality? Come tell us about it!
labels:
- kind/feature
- status/0-triage
body:
- type: textarea
id: description
attributes:
label: Description
description: What is the feature you want to see?
validations:
required: true


@ -22,12 +22,9 @@ Please provide the following information:
 **- Description for the changelog**
 <!--
 Write a short (one line) summary that describes the changes in this
-pull request for inclusion in the changelog.
-It must be placed inside the below triple backticks section:
+pull request for inclusion in the changelog:
 -->
-```markdown changelog
-```
 **- A picture of a cute animal (not mandatory but encouraged)**


@ -1,27 +0,0 @@
name: 'Setup Runner'
description: 'Composite action to set up the GitHub Runner for jobs in the test.yml workflow'
runs:
using: composite
steps:
- run: |
sudo modprobe ip_vs
sudo modprobe ipv6
sudo modprobe ip6table_filter
sudo modprobe -r overlay
sudo modprobe overlay redirect_dir=off
shell: bash
- run: |
if [ ! -e /etc/docker/daemon.json ]; then
echo '{}' | sudo tee /etc/docker/daemon.json >/dev/null
fi
DOCKERD_CONFIG=$(jq '.+{"experimental":true,"live-restore":true,"ipv6":true,"fixed-cidr-v6":"2001:db8:1::/64"}' /etc/docker/daemon.json)
sudo tee /etc/docker/daemon.json <<<"$DOCKERD_CONFIG" >/dev/null
sudo service docker restart
shell: bash
- run: |
./contrib/check-config.sh || true
shell: bash
- run: |
docker info
shell: bash


@ -1,14 +0,0 @@
name: 'Setup Tracing'
description: 'Composite action to set up the tracing for test jobs'
runs:
using: composite
steps:
- run: |
set -e
# Jaeger is set up on Windows through an inline run step. If you update Jaeger here, don't forget to update
# the version set in .github/workflows/.windows.yml.
docker run -d --net=host --name jaeger -e COLLECTOR_OTLP_ENABLED=true jaegertracing/all-in-one:1.46
docker0_ip="$(ip -f inet addr show docker0 | grep -Po 'inet \K[\d.]+')"
echo "OTEL_EXPORTER_OTLP_ENDPOINT=http://${docker0_ip}:4318" >> "${GITHUB_ENV}"
shell: bash


@ -1,48 +0,0 @@
# reusable workflow
name: .dco
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
on:
workflow_call:
env:
ALPINE_VERSION: 3.16
jobs:
run:
runs-on: ubuntu-20.04
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Dump context
uses: actions/github-script@v7
with:
script: |
console.log(JSON.stringify(context, null, 2));
-
name: Get base ref
id: base-ref
uses: actions/github-script@v7
with:
result-encoding: string
script: |
if (/^refs\/pull\//.test(context.ref) && context.payload?.pull_request?.base?.ref != undefined) {
return context.payload.pull_request.base.ref;
}
return context.ref.replace(/^refs\/heads\//g, '');
-
name: Validate
run: |
docker run --rm \
-v "$(pwd):/workspace" \
-e VALIDATE_REPO \
-e VALIDATE_BRANCH \
alpine:${{ env.ALPINE_VERSION }} sh -c 'apk add --no-cache -q bash git openssh-client && git config --system --add safe.directory /workspace && cd /workspace && hack/validate/dco'
env:
VALIDATE_REPO: ${{ github.server_url }}/${{ github.repository }}.git
VALIDATE_BRANCH: ${{ steps.base-ref.outputs.result }}


@ -1,35 +0,0 @@
# reusable workflow
name: .test-prepare
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
on:
workflow_call:
outputs:
matrix:
description: Test matrix
value: ${{ jobs.run.outputs.matrix }}
jobs:
run:
runs-on: ubuntu-20.04
outputs:
matrix: ${{ steps.set.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Create matrix
id: set
uses: actions/github-script@v7
with:
script: |
let matrix = ['graphdriver'];
if ("${{ contains(github.event.pull_request.labels.*.name, 'containerd-integration') || github.event_name != 'pull_request' }}" == "true") {
matrix.push('snapshotter');
}
await core.group(`Set matrix`, async () => {
core.info(`matrix: ${JSON.stringify(matrix)}`);
core.setOutput('matrix', JSON.stringify(matrix));
});


@ -1,445 +0,0 @@
# reusable workflow
name: .test
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
on:
workflow_call:
inputs:
storage:
required: true
type: string
default: "graphdriver"
env:
GO_VERSION: "1.21.9"
GOTESTLIST_VERSION: v0.3.1
TESTSTAT_VERSION: v0.1.25
ITG_CLI_MATRIX_SIZE: 6
DOCKER_EXPERIMENTAL: 1
DOCKER_GRAPHDRIVER: ${{ inputs.storage == 'snapshotter' && 'overlayfs' || 'overlay2' }}
TEST_INTEGRATION_USE_SNAPSHOTTER: ${{ inputs.storage == 'snapshotter' && '1' || '' }}
jobs:
unit:
runs-on: ubuntu-20.04
continue-on-error: ${{ github.event_name != 'pull_request' }}
timeout-minutes: 120
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v4
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make -o build test-unit
-
name: Prepare reports
if: always()
run: |
mkdir -p bundles /tmp/reports
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C /tmp/reports
sudo chown -R $(id -u):$(id -g) /tmp/reports
tree -nh /tmp/reports
-
name: Send to Codecov
uses: codecov/codecov-action@v4
with:
directory: ./bundles
env_vars: RUNNER_OS
flags: unit
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-unit-${{ inputs.storage }}
path: /tmp/reports/*
retention-days: 1
unit-report:
runs-on: ubuntu-20.04
continue-on-error: ${{ github.event_name != 'pull_request' }}
timeout-minutes: 10
if: always()
needs:
- unit
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download reports
uses: actions/download-artifact@v4
with:
name: test-reports-unit-${{ inputs.storage }}
path: /tmp/reports
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
docker-py:
runs-on: ubuntu-20.04
continue-on-error: ${{ github.event_name != 'pull_request' }}
timeout-minutes: 120
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up tracing
uses: ./.github/actions/setup-tracing
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v4
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make -o build test-docker-py
-
name: Prepare reports
if: always()
run: |
mkdir -p bundles /tmp/reports
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C /tmp/reports
sudo chown -R $(id -u):$(id -g) /tmp/reports
tree -nh /tmp/reports
curl -sSLf localhost:16686/api/traces?service=integration-test-client > /tmp/reports/jaeger-trace.json
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-docker-py/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-docker-py-${{ inputs.storage }}
path: /tmp/reports/*
retention-days: 1
integration-flaky:
runs-on: ubuntu-20.04
continue-on-error: ${{ github.event_name != 'pull_request' }}
timeout-minutes: 120
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v4
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make -o build test-integration-flaky
env:
TEST_SKIP_INTEGRATION_CLI: 1
integration:
runs-on: ${{ matrix.os }}
continue-on-error: ${{ github.event_name != 'pull_request' }}
timeout-minutes: 120
strategy:
fail-fast: false
matrix:
os:
- ubuntu-20.04
- ubuntu-22.04
mode:
- ""
- rootless
- systemd
#- rootless-systemd FIXME: https://github.com/moby/moby/issues/44084
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up tracing
uses: ./.github/actions/setup-tracing
-
name: Prepare
run: |
CACHE_DEV_SCOPE=dev
if [[ "${{ matrix.mode }}" == *"rootless"* ]]; then
echo "DOCKER_ROOTLESS=1" >> $GITHUB_ENV
fi
if [[ "${{ matrix.mode }}" == *"systemd"* ]]; then
echo "SYSTEMD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}systemd"
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v4
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=${{ env.CACHE_DEV_SCOPE }}
-
name: Test
run: |
make -o build test-integration
env:
TEST_SKIP_INTEGRATION_CLI: 1
TESTCOVERAGE: 1
-
name: Prepare reports
if: always()
run: |
reportsName=${{ matrix.os }}
if [ -n "${{ matrix.mode }}" ]; then
reportsName="$reportsName-${{ matrix.mode }}"
fi
reportsPath="/tmp/reports/$reportsName"
echo "TESTREPORTS_NAME=$reportsName" >> $GITHUB_ENV
mkdir -p bundles $reportsPath
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C $reportsPath
sudo chown -R $(id -u):$(id -g) $reportsPath
tree -nh $reportsPath
curl -sSLf localhost:16686/api/traces?service=integration-test-client > $reportsPath/jaeger-trace.json
-
name: Send to Codecov
uses: codecov/codecov-action@v4
with:
directory: ./bundles/test-integration
env_vars: RUNNER_OS
flags: integration,${{ matrix.mode }}
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-integration/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-integration-${{ inputs.storage }}-${{ env.TESTREPORTS_NAME }}
path: /tmp/reports/*
retention-days: 1
integration-report:
runs-on: ubuntu-20.04
continue-on-error: ${{ github.event_name != 'pull_request' }}
timeout-minutes: 10
if: always()
needs:
- integration
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download reports
uses: actions/download-artifact@v4
with:
path: /tmp/reports
pattern: test-reports-integration-${{ inputs.storage }}-*
merge-multiple: true
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
integration-cli-prepare:
runs-on: ubuntu-20.04
continue-on-error: ${{ github.event_name != 'pull_request' }}
outputs:
matrix: ${{ steps.tests.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Install gotestlist
run:
go install github.com/crazy-max/gotestlist/cmd/gotestlist@${{ env.GOTESTLIST_VERSION }}
-
name: Create matrix
id: tests
working-directory: ./integration-cli
run: |
# This step creates a matrix for integration-cli tests. Tests suites
# are distributed in integration-cli job through a matrix. There is
# also overrides being added to the matrix like "./..." to run
# "Test integration" step exclusively and specific tests suites that
# take a long time to run.
matrix="$(gotestlist -d ${{ env.ITG_CLI_MATRIX_SIZE }} -o "./..." -o "DockerSwarmSuite" -o "DockerNetworkSuite|DockerExternalVolumeSuite" ./...)"
echo "matrix=$matrix" >> $GITHUB_OUTPUT
-
name: Show matrix
run: |
echo ${{ steps.tests.outputs.matrix }}
integration-cli:
runs-on: ubuntu-20.04
continue-on-error: ${{ github.event_name != 'pull_request' }}
timeout-minutes: 120
needs:
- integration-cli-prepare
strategy:
fail-fast: false
matrix:
test: ${{ fromJson(needs.integration-cli-prepare.outputs.matrix) }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up tracing
uses: ./.github/actions/setup-tracing
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v4
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make -o build test-integration
env:
TEST_SKIP_INTEGRATION: 1
TESTCOVERAGE: 1
TESTFLAGS: "-test.run (${{ matrix.test }})/"
-
name: Prepare reports
if: always()
run: |
reportsName=$(echo -n "${{ matrix.test }}" | sha256sum | cut -d " " -f 1)
reportsPath=/tmp/reports/$reportsName
echo "TESTREPORTS_NAME=$reportsName" >> $GITHUB_ENV
mkdir -p bundles $reportsPath
echo "${{ matrix.test }}" | tr -s '|' '\n' | tee -a "$reportsPath/tests.txt"
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C $reportsPath
sudo chown -R $(id -u):$(id -g) $reportsPath
tree -nh $reportsPath
curl -sSLf localhost:16686/api/traces?service=integration-test-client > $reportsPath/jaeger-trace.json
-
name: Send to Codecov
uses: codecov/codecov-action@v4
with:
directory: ./bundles/test-integration
env_vars: RUNNER_OS
flags: integration-cli
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-integration/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-integration-cli-${{ inputs.storage }}-${{ env.TESTREPORTS_NAME }}
path: /tmp/reports/*
retention-days: 1
integration-cli-report:
runs-on: ubuntu-20.04
continue-on-error: ${{ github.event_name != 'pull_request' }}
timeout-minutes: 10
if: always()
needs:
- integration-cli
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download reports
uses: actions/download-artifact@v4
with:
path: /tmp/reports
pattern: test-reports-integration-cli-${{ inputs.storage }}-*
merge-multiple: true
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY


@ -1,551 +0,0 @@
# reusable workflow
name: .windows
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
on:
workflow_call:
inputs:
os:
required: true
type: string
storage:
required: true
type: string
default: "graphdriver"
send_coverage:
required: false
type: boolean
default: false
env:
GO_VERSION: "1.21.9"
GOTESTLIST_VERSION: v0.3.1
TESTSTAT_VERSION: v0.1.25
WINDOWS_BASE_IMAGE: mcr.microsoft.com/windows/servercore
WINDOWS_BASE_TAG_2019: ltsc2019
WINDOWS_BASE_TAG_2022: ltsc2022
TEST_IMAGE_NAME: moby:test
TEST_CTN_NAME: moby
DOCKER_BUILDKIT: 0
ITG_CLI_MATRIX_SIZE: 6
jobs:
build:
runs-on: ${{ inputs.os }}
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
BIN_OUT: ${{ github.workspace }}\out
defaults:
run:
working-directory: ${{ env.GOPATH }}/src/github.com/docker/docker
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
name: Env
run: |
Get-ChildItem Env: | Out-String
-
name: Init
run: |
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go-build"
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go\pkg\mod"
If ("${{ inputs.os }}" -eq "windows-2019") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2019 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
-
name: Cache
uses: actions/cache@v4
with:
path: |
~\AppData\Local\go-build
~\go\pkg\mod
${{ github.workspace }}\go-build
${{ env.GOPATH }}\pkg\mod
key: ${{ inputs.os }}-${{ github.job }}-${{ hashFiles('**/vendor.sum') }}
restore-keys: |
${{ inputs.os }}-${{ github.job }}-
-
name: Docker info
run: |
docker info
-
name: Build base image
run: |
& docker build `
--build-arg WINDOWS_BASE_IMAGE `
--build-arg WINDOWS_BASE_IMAGE_TAG `
--build-arg GO_VERSION `
-t ${{ env.TEST_IMAGE_NAME }} `
-f Dockerfile.windows .
-
name: Build binaries
run: |
& docker run --name ${{ env.TEST_CTN_NAME }} -e "DOCKER_GITCOMMIT=${{ github.sha }}" `
-v "${{ github.workspace }}\go-build:C:\Users\ContainerAdministrator\AppData\Local\go-build" `
-v "${{ github.workspace }}\go\pkg\mod:C:\gopath\pkg\mod" `
${{ env.TEST_IMAGE_NAME }} hack\make.ps1 -Daemon -Client
-
name: Copy artifacts
run: |
New-Item -ItemType "directory" -Path "${{ env.BIN_OUT }}"
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\src\github.com\docker\docker\bundles\docker.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\src\github.com\docker\docker\bundles\dockerd.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\bin\gotestsum.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\containerd\bin\containerd.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\containerd\bin\containerd-shim-runhcs-v1.exe" ${{ env.BIN_OUT }}\
-
name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: build-${{ inputs.storage }}-${{ inputs.os }}
path: ${{ env.BIN_OUT }}/*
if-no-files-found: error
retention-days: 2
unit-test:
runs-on: ${{ inputs.os }}
timeout-minutes: 120
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
defaults:
run:
working-directory: ${{ env.GOPATH }}/src/github.com/docker/docker
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
name: Env
run: |
Get-ChildItem Env: | Out-String
-
name: Init
run: |
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go-build"
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go\pkg\mod"
New-Item -ItemType "directory" -Path "bundles"
If ("${{ inputs.os }}" -eq "windows-2019") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2019 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
-
name: Cache
uses: actions/cache@v4
with:
path: |
~\AppData\Local\go-build
~\go\pkg\mod
${{ github.workspace }}\go-build
${{ env.GOPATH }}\pkg\mod
key: ${{ inputs.os }}-${{ github.job }}-${{ hashFiles('**/vendor.sum') }}
restore-keys: |
${{ inputs.os }}-${{ github.job }}-
-
name: Docker info
run: |
docker info
-
name: Build base image
run: |
& docker build `
--build-arg WINDOWS_BASE_IMAGE `
--build-arg WINDOWS_BASE_IMAGE_TAG `
--build-arg GO_VERSION `
-t ${{ env.TEST_IMAGE_NAME }} `
-f Dockerfile.windows .
-
name: Test
run: |
& docker run --name ${{ env.TEST_CTN_NAME }} -e "DOCKER_GITCOMMIT=${{ github.sha }}" `
-v "${{ github.workspace }}\go-build:C:\Users\ContainerAdministrator\AppData\Local\go-build" `
-v "${{ github.workspace }}\go\pkg\mod:C:\gopath\pkg\mod" `
-v "${{ env.GOPATH }}\src\github.com\docker\docker\bundles:C:\gopath\src\github.com\docker\docker\bundles" `
${{ env.TEST_IMAGE_NAME }} hack\make.ps1 -TestUnit
-
name: Send to Codecov
if: inputs.send_coverage
uses: codecov/codecov-action@v4
with:
working-directory: ${{ env.GOPATH }}\src\github.com\docker\docker
directory: bundles
env_vars: RUNNER_OS
flags: unit
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: ${{ inputs.os }}-${{ inputs.storage }}-unit-reports
path: ${{ env.GOPATH }}\src\github.com\docker\docker\bundles\*
retention-days: 1
unit-test-report:
runs-on: ubuntu-latest
if: always()
needs:
- unit-test
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download artifacts
uses: actions/download-artifact@v4
with:
name: ${{ inputs.os }}-${{ inputs.storage }}-unit-reports
path: /tmp/artifacts
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/artifacts -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
integration-test-prepare:
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.tests.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Install gotestlist
run:
go install github.com/crazy-max/gotestlist/cmd/gotestlist@${{ env.GOTESTLIST_VERSION }}
-
name: Create matrix
id: tests
working-directory: ./integration-cli
run: |
# This step creates a matrix for integration-cli tests. Tests suites
# are distributed in integration-test job through a matrix. There is
# also an override being added to the matrix like "./..." to run
# "Test integration" step exclusively.
matrix="$(gotestlist -d ${{ env.ITG_CLI_MATRIX_SIZE }} -o "./..." ./...)"
echo "matrix=$matrix" >> $GITHUB_OUTPUT
-
name: Show matrix
run: |
echo ${{ steps.tests.outputs.matrix }}
integration-test:
runs-on: ${{ inputs.os }}
continue-on-error: ${{ inputs.storage == 'snapshotter' && github.event_name != 'pull_request' }}
timeout-minutes: 120
needs:
- build
- integration-test-prepare
strategy:
fail-fast: false
matrix:
storage:
- ${{ inputs.storage }}
runtime:
- builtin
- containerd
test: ${{ fromJson(needs.integration-test-prepare.outputs.matrix) }}
exclude:
- storage: snapshotter
runtime: builtin
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
BIN_OUT: ${{ github.workspace }}\out
defaults:
run:
working-directory: ${{ env.GOPATH }}/src/github.com/docker/docker
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
name: Set up Jaeger
run: |
# Jaeger is set up on Linux through the setup-tracing action. If you update Jaeger here, don't forget to
# update the version set in .github/actions/setup-tracing/action.yml.
Invoke-WebRequest -Uri "https://github.com/jaegertracing/jaeger/releases/download/v1.46.0/jaeger-1.46.0-windows-amd64.tar.gz" -OutFile ".\jaeger-1.46.0-windows-amd64.tar.gz"
tar -zxvf ".\jaeger-1.46.0-windows-amd64.tar.gz"
Start-Process '.\jaeger-1.46.0-windows-amd64\jaeger-all-in-one.exe'
echo "OTEL_EXPORTER_OTLP_ENDPOINT=http://127.0.0.1:4318" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
shell: pwsh
-
name: Env
run: |
Get-ChildItem Env: | Out-String
-
name: Download artifacts
uses: actions/download-artifact@v4
with:
name: build-${{ inputs.storage }}-${{ inputs.os }}
path: ${{ env.BIN_OUT }}
-
name: Init
run: |
New-Item -ItemType "directory" -Path "bundles"
If ("${{ inputs.os }}" -eq "windows-2019") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2019 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
Write-Output "${{ env.BIN_OUT }}" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
$testName = ([System.BitConverter]::ToString((New-Object System.Security.Cryptography.SHA256Managed).ComputeHash([System.Text.Encoding]::UTF8.GetBytes("${{ matrix.test }}"))) -replace '-').ToLower()
echo "TESTREPORTS_NAME=$testName" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
-
# removes docker service that is currently installed on the runner. we
# could use Uninstall-Package but not yet available on Windows runners.
# more info: https://github.com/actions/virtual-environments/blob/d3a5bad25f3b4326c5666bab0011ac7f1beec95e/images/win/scripts/Installers/Install-Docker.ps1#L11
name: Removing current daemon
run: |
if (Get-Service docker -ErrorAction SilentlyContinue) {
$dockerVersion = (docker version -f "{{.Server.Version}}")
Write-Host "Current installed Docker version: $dockerVersion"
# remove service
Stop-Service -Force -Name docker
Remove-Service -Name docker
# removes event log entry. we could use "Remove-EventLog -LogName -Source docker"
# but this cmd is not available atm
$ErrorActionPreference = "SilentlyContinue"
& reg delete "HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Application\docker" /f 2>&1 | Out-Null
$ErrorActionPreference = "Stop"
Write-Host "Service removed"
}
-
name: Starting containerd
if: matrix.runtime == 'containerd'
run: |
Write-Host "Generating config"
& "${{ env.BIN_OUT }}\containerd.exe" config default | Out-File "$env:TEMP\ctn.toml" -Encoding ascii
Write-Host "Creating service"
New-Item -ItemType Directory "$env:TEMP\ctn-root" -ErrorAction SilentlyContinue | Out-Null
New-Item -ItemType Directory "$env:TEMP\ctn-state" -ErrorAction SilentlyContinue | Out-Null
Start-Process -Wait "${{ env.BIN_OUT }}\containerd.exe" `
-ArgumentList "--log-level=debug", `
"--config=$env:TEMP\ctn.toml", `
"--address=\\.\pipe\containerd-containerd", `
"--root=$env:TEMP\ctn-root", `
"--state=$env:TEMP\ctn-state", `
"--log-file=$env:TEMP\ctn.log", `
"--register-service"
Write-Host "Starting service"
Start-Service -Name containerd
Start-Sleep -Seconds 5
Write-Host "Service started successfully!"
-
name: Starting test daemon
run: |
Write-Host "Creating service"
If ("${{ matrix.runtime }}" -eq "containerd") {
$runtimeArg="--containerd=\\.\pipe\containerd-containerd"
echo "DOCKER_WINDOWS_CONTAINERD_RUNTIME=1" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
New-Item -ItemType Directory "$env:TEMP\moby-root" -ErrorAction SilentlyContinue | Out-Null
New-Item -ItemType Directory "$env:TEMP\moby-exec" -ErrorAction SilentlyContinue | Out-Null
Start-Process -Wait -NoNewWindow "${{ env.BIN_OUT }}\dockerd" `
-ArgumentList $runtimeArg, "--debug", `
"--host=npipe:////./pipe/docker_engine", `
"--data-root=$env:TEMP\moby-root", `
"--exec-root=$env:TEMP\moby-exec", `
"--pidfile=$env:TEMP\docker.pid", `
"--register-service"
If ("${{ inputs.storage }}" -eq "snapshotter") {
# Make the env-var visible to the service-managed dockerd, as there's no CLI flag for this option.
& reg add "HKLM\SYSTEM\CurrentControlSet\Services\docker" /v Environment /t REG_MULTI_SZ /s '@' /d TEST_INTEGRATION_USE_SNAPSHOTTER=1
echo "TEST_INTEGRATION_USE_SNAPSHOTTER=1" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
Write-Host "Starting service"
Start-Service -Name docker
Write-Host "Service started successfully!"
-
name: Waiting for test daemon to start
run: |
$tries=20
Write-Host "Waiting for the test daemon to start..."
While ($true) {
$ErrorActionPreference = "SilentlyContinue"
& "${{ env.BIN_OUT }}\docker" version
$ErrorActionPreference = "Stop"
If ($LastExitCode -eq 0) {
break
}
$tries--
If ($tries -le 0) {
Throw "Failed to get a response from the daemon"
}
Write-Host -NoNewline "."
Start-Sleep -Seconds 1
}
Write-Host "Test daemon started and replied!"
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Docker info
run: |
& "${{ env.BIN_OUT }}\docker" info
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Building contrib/busybox
run: |
& "${{ env.BIN_OUT }}\docker" build -t busybox `
--build-arg WINDOWS_BASE_IMAGE `
--build-arg WINDOWS_BASE_IMAGE_TAG `
.\contrib\busybox\
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: List images
run: |
& "${{ env.BIN_OUT }}\docker" images
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Test integration
if: matrix.test == './...'
run: |
.\hack\make.ps1 -TestIntegration
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
GO111MODULE: "off"
TEST_CLIENT_BINARY: ${{ env.BIN_OUT }}\docker
-
name: Test integration-cli
if: matrix.test != './...'
run: |
.\hack\make.ps1 -TestIntegrationCli
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
GO111MODULE: "off"
TEST_CLIENT_BINARY: ${{ env.BIN_OUT }}\docker
INTEGRATION_TESTRUN: ${{ matrix.test }}
-
name: Send to Codecov
if: inputs.send_coverage
uses: codecov/codecov-action@v4
with:
working-directory: ${{ env.GOPATH }}\src\github.com\docker\docker
directory: bundles
env_vars: RUNNER_OS
flags: integration,${{ matrix.runtime }}
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Docker info
run: |
& "${{ env.BIN_OUT }}\docker" info
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Stop containerd
if: always() && matrix.runtime == 'containerd'
run: |
$ErrorActionPreference = "SilentlyContinue"
Stop-Service -Force -Name containerd
$ErrorActionPreference = "Stop"
-
name: Containerd logs
if: always() && matrix.runtime == 'containerd'
run: |
Copy-Item "$env:TEMP\ctn.log" -Destination ".\bundles\containerd.log"
Get-Content "$env:TEMP\ctn.log" | Out-Host
-
name: Stop daemon
if: always()
run: |
$ErrorActionPreference = "SilentlyContinue"
Stop-Service -Force -Name docker
$ErrorActionPreference = "Stop"
-
# as the daemon is registered as a service we have to check the event
# logs against the docker provider.
name: Daemon event logs
if: always()
run: |
Get-WinEvent -ea SilentlyContinue `
-FilterHashtable @{ProviderName= "docker"; LogName = "application"} |
Sort-Object @{Expression="TimeCreated";Descending=$false} |
ForEach-Object {"$($_.TimeCreated.ToUniversalTime().ToString("o")) [$($_.LevelDisplayName)] $($_.Message)"} |
Tee-Object -file ".\bundles\daemon.log"
-
name: Download Jaeger traces
if: always()
run: |
Invoke-WebRequest `
-Uri "http://127.0.0.1:16686/api/traces?service=integration-test-client" `
-OutFile ".\bundles\jaeger-trace.json"
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: ${{ inputs.os }}-${{ inputs.storage }}-integration-reports-${{ matrix.runtime }}-${{ env.TESTREPORTS_NAME }}
path: ${{ env.GOPATH }}\src\github.com\docker\docker\bundles\*
retention-days: 1
integration-test-report:
runs-on: ubuntu-latest
continue-on-error: ${{ inputs.storage == 'snapshotter' && github.event_name != 'pull_request' }}
if: always()
needs:
- integration-test
strategy:
fail-fast: false
matrix:
storage:
- ${{ inputs.storage }}
runtime:
- builtin
- containerd
exclude:
- storage: snapshotter
runtime: builtin
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download reports
uses: actions/download-artifact@v4
with:
path: /tmp/reports
pattern: ${{ inputs.os }}-${{ inputs.storage }}-integration-reports-${{ matrix.runtime }}-*
merge-multiple: true
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY


@ -1,191 +0,0 @@
name: bin-image
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
tags:
- 'v*'
pull_request:
env:
MOBYBIN_REPO_SLUG: moby/moby-bin
DOCKER_GITCOMMIT: ${{ github.sha }}
VERSION: ${{ github.ref }}
PLATFORM: Moby Engine - Nightly
PRODUCT: moby-bin
PACKAGER_NAME: The Moby Project
jobs:
validate-dco:
if: ${{ !startsWith(github.ref, 'refs/tags/v') }}
uses: ./.github/workflows/.dco.yml
prepare:
runs-on: ubuntu-20.04
outputs:
platforms: ${{ steps.platforms.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: |
${{ env.MOBYBIN_REPO_SLUG }}
### versioning strategy
## push semver tag v23.0.0
# moby/moby-bin:23.0.0
# moby/moby-bin:latest
## push semver prelease tag v23.0.0-beta.1
# moby/moby-bin:23.0.0-beta.1
## push on master
# moby/moby-bin:master
## push on 23.0 branch
# moby/moby-bin:23.0
## any push
# moby/moby-bin:sha-ad132f5
tags: |
type=semver,pattern={{version}}
type=ref,event=branch
type=ref,event=pr
type=sha
-
name: Rename meta bake definition file
# see https://github.com/docker/metadata-action/issues/381#issuecomment-1918607161
run: |
bakeFile="${{ steps.meta.outputs.bake-file }}"
mv "${bakeFile#cwd://}" "/tmp/bake-meta.json"
-
name: Upload meta bake definition
uses: actions/upload-artifact@v4
with:
name: bake-meta
path: /tmp/bake-meta.json
if-no-files-found: error
retention-days: 1
-
name: Create platforms matrix
id: platforms
run: |
echo "matrix=$(docker buildx bake bin-image-cross --print | jq -cr '.target."bin-image-cross".platforms')" >>${GITHUB_OUTPUT}
build:
runs-on: ubuntu-20.04
needs:
- validate-dco
- prepare
if: always() && !contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled')
strategy:
fail-fast: false
matrix:
platform: ${{ fromJson(needs.prepare.outputs.platforms) }}
steps:
-
name: Prepare
run: |
platform=${{ matrix.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Download meta bake definition
uses: actions/download-artifact@v4
with:
name: bake-meta
path: /tmp
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Login to Docker Hub
if: github.event_name != 'pull_request' && github.repository == 'moby/moby'
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_MOBYBIN_USERNAME }}
password: ${{ secrets.DOCKERHUB_MOBYBIN_TOKEN }}
-
name: Build
id: bake
uses: docker/bake-action@v4
with:
files: |
./docker-bake.hcl
/tmp/bake-meta.json
targets: bin-image
set: |
*.platform=${{ matrix.platform }}
*.output=type=image,name=${{ env.MOBYBIN_REPO_SLUG }},push-by-digest=true,name-canonical=true,push=${{ github.event_name != 'pull_request' && github.repository == 'moby/moby' }}
*.tags=
-
name: Export digest
if: github.event_name != 'pull_request' && github.repository == 'moby/moby'
run: |
mkdir -p /tmp/digests
digest="${{ fromJSON(steps.bake.outputs.metadata)['bin-image']['containerimage.digest'] }}"
touch "/tmp/digests/${digest#sha256:}"
-
name: Upload digest
if: github.event_name != 'pull_request' && github.repository == 'moby/moby'
uses: actions/upload-artifact@v4
with:
name: digests-${{ env.PLATFORM_PAIR }}
path: /tmp/digests/*
if-no-files-found: error
retention-days: 1
merge:
runs-on: ubuntu-20.04
needs:
- build
if: always() && !contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && github.event_name != 'pull_request' && github.repository == 'moby/moby'
steps:
-
name: Download meta bake definition
uses: actions/download-artifact@v4
with:
name: bake-meta
path: /tmp
-
name: Download digests
uses: actions/download-artifact@v4
with:
path: /tmp/digests
pattern: digests-*
merge-multiple: true
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_MOBYBIN_USERNAME }}
password: ${{ secrets.DOCKERHUB_MOBYBIN_TOKEN }}
-
name: Create manifest list and push
working-directory: /tmp/digests
run: |
set -x
docker buildx imagetools create $(jq -cr '.target."docker-metadata-action".tags | map("-t " + .) | join(" ")' /tmp/bake-meta.json) \
$(printf '${{ env.MOBYBIN_REPO_SLUG }}@sha256:%s ' *)
-
name: Inspect image
run: |
set -x
docker buildx imagetools inspect ${{ env.MOBYBIN_REPO_SLUG }}:$(jq -cr '.target."docker-metadata-action".args.DOCKER_META_VERSION' /tmp/bake-meta.json)


@ -1,139 +0,0 @@
name: buildkit
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
pull_request:
env:
GO_VERSION: "1.21.9"
DESTDIR: ./build
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-20.04
needs:
- validate-dco
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build
uses: docker/bake-action@v4
with:
targets: binary
-
name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: binary
path: ${{ env.DESTDIR }}
if-no-files-found: error
retention-days: 1
test:
runs-on: ubuntu-20.04
timeout-minutes: 120
needs:
- build
env:
TEST_IMAGE_BUILD: "0"
TEST_IMAGE_ID: "buildkit-tests"
strategy:
fail-fast: false
matrix:
worker:
- dockerd
- dockerd-containerd
pkg:
- client
- cmd/buildctl
- solver
- frontend
- frontend/dockerfile
typ:
- integration
steps:
-
name: Prepare
run: |
disabledFeatures="cache_backend_azblob,cache_backend_s3"
if [ "${{ matrix.worker }}" = "dockerd" ]; then
disabledFeatures="${disabledFeatures},merge_diff"
fi
echo "BUILDKIT_TEST_DISABLE_FEATURES=${disabledFeatures}" >> $GITHUB_ENV
# Expose `ACTIONS_RUNTIME_TOKEN` and `ACTIONS_CACHE_URL`, which is used
# in BuildKit's test suite to skip/unskip cache exporters:
# https://github.com/moby/buildkit/blob/567a99433ca23402d5e9b9f9124005d2e59b8861/client/client_test.go#L5407-L5411
-
name: Expose GitHub Runtime
uses: crazy-max/ghaction-github-runtime@v3
-
name: Checkout
uses: actions/checkout@v4
with:
path: moby
-
name: BuildKit ref
run: |
echo "$(./hack/buildkit-ref)" >> $GITHUB_ENV
working-directory: moby
-
name: Checkout BuildKit ${{ env.BUILDKIT_REF }}
uses: actions/checkout@v4
with:
repository: ${{ env.BUILDKIT_REPO }}
ref: ${{ env.BUILDKIT_REF }}
path: buildkit
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Download binary artifacts
uses: actions/download-artifact@v4
with:
name: binary
path: ./buildkit/build/moby/
-
name: Update daemon.json
run: |
sudo rm -f /etc/docker/daemon.json
sudo service docker restart
docker version
docker info
-
name: Build test image
uses: docker/bake-action@v4
with:
workdir: ./buildkit
targets: integration-tests
set: |
*.output=type=docker,name=${{ env.TEST_IMAGE_ID }}
-
name: Test
run: |
./hack/test ${{ matrix.typ }}
env:
CONTEXT: "."
TEST_DOCKERD: "1"
TEST_DOCKERD_BINARY: "./build/moby/dockerd"
TESTPKGS: "./${{ matrix.pkg }}"
TESTFLAGS: "-v --parallel=1 --timeout=30m --run=//worker=${{ matrix.worker }}$"
working-directory: buildkit


@ -1,113 +0,0 @@
name: ci
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
pull_request:
env:
DESTDIR: ./build
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-20.04
needs:
- validate-dco
strategy:
fail-fast: false
matrix:
target:
- binary
- dynbinary
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build
uses: docker/bake-action@v4
with:
targets: ${{ matrix.target }}
-
name: List artifacts
run: |
tree -nh ${{ env.DESTDIR }}
-
name: Check artifacts
run: |
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
prepare-cross:
runs-on: ubuntu-latest
needs:
- validate-dco
outputs:
matrix: ${{ steps.platforms.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Create matrix
id: platforms
run: |
matrix="$(docker buildx bake binary-cross --print | jq -cr '.target."binary-cross".platforms')"
echo "matrix=$matrix" >> $GITHUB_OUTPUT
-
name: Show matrix
run: |
echo ${{ steps.platforms.outputs.matrix }}
cross:
runs-on: ubuntu-20.04
needs:
- validate-dco
- prepare-cross
strategy:
fail-fast: false
matrix:
platform: ${{ fromJson(needs.prepare-cross.outputs.matrix) }}
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Prepare
run: |
platform=${{ matrix.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build
uses: docker/bake-action@v4
with:
targets: all
set: |
*.platform=${{ matrix.platform }}
-
name: List artifacts
run: |
tree -nh ${{ env.DESTDIR }}
-
name: Check artifacts
run: |
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
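
A hedged sketch (editor's illustration, not part of the diff) of the prepare-cross pattern above: `docker buildx bake binary-cross --print` dumps the bake definition as JSON, jq pulls out the platform list that feeds the cross matrix, and the Prepare step in the cross job turns each platform into a PLATFORM_PAIR by replacing "/" with "-". The JSON below is a stand-in, not real bake output:

# stand-in for: docker buildx bake binary-cross --print
bakePrint='{"target":{"binary-cross":{"platforms":["linux/amd64","linux/arm64"]}}}'
matrix="$(echo "$bakePrint" | jq -cr '.target."binary-cross".platforms')"
echo "matrix=$matrix"                  # -> matrix=["linux/amd64","linux/arm64"]

platform="linux/arm64"
echo "PLATFORM_PAIR=${platform//\//-}" # -> PLATFORM_PAIR=linux-arm64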

@@ -1,177 +0,0 @@
name: test
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
pull_request:
env:
GO_VERSION: "1.21.9"
GIT_PAGER: "cat"
PAGER: "cat"
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build-dev:
runs-on: ubuntu-20.04
needs:
- validate-dco
strategy:
fail-fast: false
matrix:
mode:
- ""
- systemd
steps:
-
name: Prepare
run: |
if [ "${{ matrix.mode }}" = "systemd" ]; then
echo "SYSTEMD=true" >> $GITHUB_ENV
fi
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v4
with:
targets: dev
set: |
*.cache-from=type=gha,scope=dev${{ matrix.mode }}
*.cache-to=type=gha,scope=dev${{ matrix.mode }},mode=max
*.output=type=cacheonly
test:
needs:
- build-dev
- validate-dco
uses: ./.github/workflows/.test.yml
secrets: inherit
strategy:
fail-fast: false
matrix:
storage:
- graphdriver
- snapshotter
with:
storage: ${{ matrix.storage }}
validate-prepare:
runs-on: ubuntu-20.04
needs:
- validate-dco
outputs:
matrix: ${{ steps.scripts.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Create matrix
id: scripts
run: |
scripts=$(cd ./hack/validate && jq -nc '$ARGS.positional - ["all", "default", "dco"] | map(select(test("[.]")|not)) + ["generate-files"]' --args *)
echo "matrix=$scripts" >> $GITHUB_OUTPUT
-
name: Show matrix
run: |
echo ${{ steps.scripts.outputs.matrix }}
validate:
runs-on: ubuntu-20.04
timeout-minutes: 120
needs:
- validate-prepare
- build-dev
strategy:
fail-fast: true
matrix:
script: ${{ fromJson(needs.validate-prepare.outputs.matrix) }}
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v4
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Validate
run: |
make -o build validate-${{ matrix.script }}
smoke-prepare:
runs-on: ubuntu-20.04
needs:
- validate-dco
outputs:
matrix: ${{ steps.platforms.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Create matrix
id: platforms
run: |
matrix="$(docker buildx bake binary-smoketest --print | jq -cr '.target."binary-smoketest".platforms')"
echo "matrix=$matrix" >> $GITHUB_OUTPUT
-
name: Show matrix
run: |
echo ${{ steps.platforms.outputs.matrix }}
smoke:
runs-on: ubuntu-20.04
needs:
- smoke-prepare
strategy:
fail-fast: false
matrix:
platform: ${{ fromJson(needs.smoke-prepare.outputs.matrix) }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Prepare
run: |
platform=${{ matrix.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
-
name: Test
uses: docker/bake-action@v4
with:
targets: binary-smoketest
set: |
*.platform=${{ matrix.platform }}

@@ -1,62 +0,0 @@
name: validate-pr
on:
pull_request:
types: [opened, edited, labeled, unlabeled]
jobs:
check-area-label:
runs-on: ubuntu-20.04
steps:
- name: Missing `area/` label
if: contains(join(github.event.pull_request.labels.*.name, ','), 'impact/') && !contains(join(github.event.pull_request.labels.*.name, ','), 'area/')
run: |
echo "::error::Every PR with an 'impact/*' label should also have an 'area/*' label"
exit 1
- name: OK
run: exit 0
check-changelog:
if: contains(join(github.event.pull_request.labels.*.name, ','), 'impact/')
runs-on: ubuntu-20.04
env:
PR_BODY: |
${{ github.event.pull_request.body }}
steps:
- name: Check changelog description
run: |
# Extract the `markdown changelog` note code block
block=$(echo -n "$PR_BODY" | tr -d '\r' | awk '/^```markdown changelog$/{flag=1;next}/^```$/{flag=0}flag')
# Strip empty lines
desc=$(echo "$block" | awk NF)
if [ -z "$desc" ]; then
echo "::error::Changelog section is empty. Please provide a description for the changelog."
exit 1
fi
len=$(echo -n "$desc" | wc -c)
if [[ $len -le 6 ]]; then
echo "::error::Description looks too short: $desc"
exit 1
fi
echo "This PR will be included in the release notes with the following note:"
echo "$desc"
check-pr-branch:
runs-on: ubuntu-20.04
env:
PR_TITLE: ${{ github.event.pull_request.title }}
steps:
# Backports or PR that target a release branch directly should mention the target branch in the title, for example:
# [X.Y backport] Some change that needs backporting to X.Y
# [X.Y] Change directly targeting the X.Y branch
- name: Get branch from PR title
id: title_branch
run: echo "$PR_TITLE" | sed -n 's/^\[\([0-9]*\.[0-9]*\)[^]]*\].*/branch=\1/p' >> $GITHUB_OUTPUT
- name: Check release branch
if: github.event.pull_request.base.ref != steps.title_branch.outputs.branch && !(github.event.pull_request.base.ref == 'master' && steps.title_branch.outputs.branch == '')
run: echo "::error::PR title suggests targetting the ${{ steps.title_branch.outputs.branch }} branch, but is opened against ${{ github.event.pull_request.base.ref }}" && exit 1
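
A small shell sketch (editor's illustration, not part of the diff) of the title parsing used by check-pr-branch above; the PR titles are made-up examples:

parse_title() {
  echo "$1" | sed -n 's/^\[\([0-9]*\.[0-9]*\)[^]]*\].*/branch=\1/p'
}
parse_title "[24.0 backport] Fix a volume mount regression"  # -> branch=24.0
parse_title "[24.0] Change targeting the 24.0 branch"        # -> branch=24.0
parse_title "Ordinary PR opened against master"              # -> no output (title_branch stays empty)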

@@ -1,33 +0,0 @@
name: windows-2019
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
schedule:
- cron: '0 10 * * *'
workflow_dispatch:
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
test-prepare:
uses: ./.github/workflows/.test-prepare.yml
needs:
- validate-dco
run:
needs:
- test-prepare
uses: ./.github/workflows/.windows.yml
secrets: inherit
strategy:
fail-fast: false
matrix:
storage: ${{ fromJson(needs.test-prepare.outputs.matrix) }}
with:
os: windows-2019
storage: ${{ matrix.storage }}
send_coverage: false

@@ -1,36 +0,0 @@
name: windows-2022
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
pull_request:
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
test-prepare:
uses: ./.github/workflows/.test-prepare.yml
needs:
- validate-dco
run:
needs:
- test-prepare
uses: ./.github/workflows/.windows.yml
secrets: inherit
strategy:
fail-fast: false
matrix:
storage: ${{ fromJson(needs.test-prepare.outputs.matrix) }}
with:
os: windows-2022
storage: ${{ matrix.storage }}
send_coverage: true

.gitignore

@@ -1,28 +1,25 @@
-# If you want to ignore files created by your editor/tools, please consider a
-# [global .gitignore](https://help.github.com/articles/ignoring-files).
-
-*~
-*.bak
+# Docker project generated files to ignore
+#  if you want to ignore files created by your editor/tools,
+#  please consider a global .gitignore https://help.github.com/articles/ignoring-files
+*.exe
+*.exe~
+*.gz
 *.orig
+test.main
 .*.swp
 .DS_Store
-thumbs.db
-
-# local repository customization
-.envrc
+# a .bashrc may be added to customize the build environment
 .bashrc
 .editorconfig
-
-# build artifacts
+.gopath/
+.go-pkg-cache/
+autogen/
 bundles/
-cli/winresources/*/*.syso
-cli/winresources/*/winres.json
+cmd/dockerd/dockerd
 contrib/builder/rpm/*/changelog
-
-# ci artifacts
-*.exe
-*.gz
+dockerversion/version_autogen.go
+dockerversion/version_autogen_unix.go
+vendor/pkg/
 go-test-report.json
-junit-report.xml
 profile.out
-test.main
+junit-report.xml

@@ -1,137 +0,0 @@
linters:
enable:
- depguard
- dupword # Checks for duplicate words in the source code.
- goimports
- gosec
- gosimple
- govet
- importas
- ineffassign
- misspell
- revive
- staticcheck
- typecheck
- unconvert
- unused
disable:
- errcheck
run:
concurrency: 2
modules-download-mode: vendor
skip-dirs:
- docs
linters-settings:
dupword:
ignore:
- "true" # some tests use this as expected output
- "false" # some tests use this as expected output
- "root" # for tests using "ls" output with files owned by "root:root"
importas:
# Do not allow unaliased imports of aliased packages.
no-unaliased: true
alias:
# Enforce alias to prevent it accidentally being used instead of our
# own errdefs package (or vice-versa).
- pkg: github.com/containerd/containerd/errdefs
alias: cerrdefs
- pkg: github.com/opencontainers/image-spec/specs-go/v1
alias: ocispec
govet:
check-shadowing: false
depguard:
rules:
main:
deny:
- pkg: io/ioutil
desc: The io/ioutil package has been deprecated, see https://go.dev/doc/go1.16#ioutil
- pkg: "github.com/stretchr/testify/assert"
desc: Use "gotest.tools/v3/assert" instead
- pkg: "github.com/stretchr/testify/require"
desc: Use "gotest.tools/v3/assert" instead
- pkg: "github.com/stretchr/testify/suite"
desc: Do not use
revive:
rules:
# FIXME make sure all packages have a description. Currently, there's many packages without.
- name: package-comments
disabled: true
issues:
# The default exclusion rules are a bit too permissive, so copying the relevant ones below
exclude-use-default: false
exclude-rules:
# We prefer to use an "exclude-list" so that new "default" exclusions are not
# automatically inherited. We can decide whether or not to follow upstream
# defaults when updating golang-ci-lint versions.
# Unfortunately, this means we have to copy the whole exclusion pattern, as
# (unlike the "include" option), the "exclude" option does not take exclusion
# ID's.
#
# These exclusion patterns are copied from the default excluses at:
# https://github.com/golangci/golangci-lint/blob/v1.46.2/pkg/config/issues.go#L10-L104
# EXC0001
- text: "Error return value of .((os\\.)?std(out|err)\\..*|.*Close|.*Flush|os\\.Remove(All)?|.*print(f|ln)?|os\\.(Un)?Setenv). is not checked"
linters:
- errcheck
# EXC0006
- text: "Use of unsafe calls should be audited"
linters:
- gosec
# EXC0007
- text: "Subprocess launch(ed with variable|ing should be audited)"
linters:
- gosec
# EXC0008
# TODO: evaluate these and fix where needed: G307: Deferring unsafe method "*os.File" on type "Close" (gosec)
- text: "(G104|G307)"
linters:
- gosec
# EXC0009
- text: "(Expect directory permissions to be 0750 or less|Expect file permissions to be 0600 or less)"
linters:
- gosec
# EXC0010
- text: "Potential file inclusion via variable"
linters:
- gosec
# Looks like the match in "EXC0007" above doesn't catch this one
# TODO: consider upstreaming this to golangci-lint's default exclusion rules
- text: "G204: Subprocess launched with a potential tainted input or cmd arguments"
linters:
- gosec
# Looks like the match in "EXC0009" above doesn't catch this one
# TODO: consider upstreaming this to golangci-lint's default exclusion rules
- text: "G306: Expect WriteFile permissions to be 0600 or less"
linters:
- gosec
# Exclude some linters from running on tests files.
- path: _test\.go
linters:
- errcheck
- gosec
# Suppress golint complaining about generated types in api/types/
- text: "type name will be used as (container|volume)\\.(Container|Volume).* by other packages, and that stutters; consider calling this"
path: "api/types/(volume|container)/"
linters:
- revive
# FIXME temporarily suppress these (see https://github.com/gotestyourself/gotest.tools/issues/272)
- text: "SA1019: (assert|cmp|is)\\.ErrorType is deprecated"
linters:
- staticcheck
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-issues-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0

.mailmap

@ -1,24 +1,15 @@
# This file lists the canonical name and email of contributors, and is used to # Generate AUTHORS: hack/generate-authors.sh
# generate AUTHORS (in hack/generate-authors.sh).
#
# To find new duplicates, regenerate AUTHORS and scan for name duplicates, or
# run the following to find email duplicates:
# git log --format='%aE - %aN' | sort -uf | awk -v IGNORECASE=1 '$1 in a {print a[$1]; print}; {a[$1]=$0}'
#
# For an explanation of this file format, consult gitmailmap(5).
# Tip for finding duplicates (besides scanning the output of AUTHORS for name
# duplicates that aren't also email duplicates): scan the output of:
# git log --format='%aE - %aN' | sort -uf
#
# For explanation on this file format: man git-shortlog
<21551195@zju.edu.cn> <hsinko@users.noreply.github.com>
<mr.wrfly@gmail.com> <wrfly@users.noreply.github.com>
Aaron L. Xu <liker.xu@foxmail.com> Aaron L. Xu <liker.xu@foxmail.com>
Aaron L. Xu <liker.xu@foxmail.com> <likexu@harmonycloud.cn> Abhinandan Prativadi <abhi@docker.com>
Aaron Lehmann <alehmann@netflix.com>
Aaron Lehmann <alehmann@netflix.com> <aaron.lehmann@docker.com>
Abhinandan Prativadi <aprativadi@gmail.com>
Abhinandan Prativadi <aprativadi@gmail.com> <abhi@docker.com>
Abhinandan Prativadi <aprativadi@gmail.com> abhi <user.email>
Abhishek Chanda <abhishek.becs@gmail.com>
Abhishek Chanda <abhishek.becs@gmail.com> <abhishek.chanda@emc.com>
Ada Mancini <ada@docker.com>
Adam Dobrawy <naczelnik@jawnosc.tk>
Adam Dobrawy <naczelnik@jawnosc.tk> <ad-m@users.noreply.github.com>
Adrien Gallouët <adrien@gallouet.fr> <angt@users.noreply.github.com> Adrien Gallouët <adrien@gallouet.fr> <angt@users.noreply.github.com>
Ahmed Kamal <email.ahmedkamal@googlemail.com> Ahmed Kamal <email.ahmedkamal@googlemail.com>
Ahmet Alp Balkan <ahmetb@microsoft.com> <ahmetalpbalkan@gmail.com> Ahmet Alp Balkan <ahmetb@microsoft.com> <ahmetalpbalkan@gmail.com>
@ -27,40 +18,23 @@ AJ Bowen <aj@soulshake.net> <aj@gandi.net>
AJ Bowen <aj@soulshake.net> <amy@gandi.net> AJ Bowen <aj@soulshake.net> <amy@gandi.net>
Akihiro Matsushima <amatsusbit@gmail.com> <amatsus@users.noreply.github.com> Akihiro Matsushima <amatsusbit@gmail.com> <amatsus@users.noreply.github.com>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.akihiro@lab.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.kyoto@gmail.com> Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.kyoto@gmail.com>
Akshay Moghe <akshay.moghe@gmail.com> Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.akihiro@lab.ntt.co.jp>
Albin Kerouanton <albinker@gmail.com>
Albin Kerouanton <albinker@gmail.com> <albin@akerouanton.name>
Albin Kerouanton <albinker@gmail.com> <557933+akerouanton@users.noreply.github.com>
Aleksa Sarai <asarai@suse.de> Aleksa Sarai <asarai@suse.de>
Aleksa Sarai <asarai@suse.de> <asarai@suse.com> Aleksa Sarai <asarai@suse.de> <asarai@suse.com>
Aleksa Sarai <asarai@suse.de> <cyphar@cyphar.com> Aleksa Sarai <asarai@suse.de> <cyphar@cyphar.com>
Aleksandrs Fadins <aleks@s-ko.net> Aleksandrs Fadins <aleks@s-ko.net>
Alessandro Boch <aboch@tetrationanalytics.com>
Alessandro Boch <aboch@tetrationanalytics.com> <aboch@docker.com> Alessandro Boch <aboch@tetrationanalytics.com> <aboch@docker.com>
Alessandro Boch <aboch@tetrationanalytics.com> <aboch@socketplane.io>
Alessandro Boch <aboch@tetrationanalytics.com> <aboch@users.noreply.github.com>
Alex Chan <alex@alexwlchan.net>
Alex Chan <alex@alexwlchan.net> <alex.chan@metaswitch.com>
Alex Chen <alexchenunix@gmail.com> <root@localhost.localdomain> Alex Chen <alexchenunix@gmail.com> <root@localhost.localdomain>
Alex Ellis <alexellis2@gmail.com> Alex Ellis <alexellis2@gmail.com>
Alex Goodman <wagoodman@gmail.com> <wagoodman@users.noreply.github.com> Alex Goodman <wagoodman@gmail.com> <wagoodman@users.noreply.github.com>
Alexander Larsson <alexl@redhat.com> <alexander.larsson@gmail.com> Alexander Larsson <alexl@redhat.com> <alexander.larsson@gmail.com>
Alexander Morozov <lk4d4math@gmail.com> Alexander Morozov <lk4d4@docker.com>
Alexander Morozov <lk4d4math@gmail.com> <lk4d4@docker.com> Alexander Morozov <lk4d4@docker.com> <lk4d4math@gmail.com>
Alexandre Beslic <alexandre.beslic@gmail.com> <abronan@docker.com> Alexandre Beslic <alexandre.beslic@gmail.com> <abronan@docker.com>
Alexandre González <agonzalezro@gmail.com>
Alexis Ries <ries.alexis@gmail.com>
Alexis Ries <ries.alexis@gmail.com> <alexis.ries.ext@orange.com>
Alexis Thomas <fr.alexisthomas@gmail.com>
Alicia Lauerman <alicia@eta.im> <allydevour@me.com> Alicia Lauerman <alicia@eta.im> <allydevour@me.com>
Allen Sun <allensun.shl@alibaba-inc.com> <allen.sun@daocloud.io> Allen Sun <allensun.shl@alibaba-inc.com> <allen.sun@daocloud.io>
Allen Sun <allensun.shl@alibaba-inc.com> <shlallen1990@gmail.com> Allen Sun <allensun.shl@alibaba-inc.com> <shlallen1990@gmail.com>
Anca Iordache <anca.iordache@docker.com>
Andrea Denisse Gómez <crypto.andrea@protonmail.ch>
Andrew Kim <taeyeonkim90@gmail.com>
Andrew Kim <taeyeonkim90@gmail.com> <akim01@fortinet.com>
Andrew Weiss <andrew.weiss@docker.com> <andrew.weiss@microsoft.com> Andrew Weiss <andrew.weiss@docker.com> <andrew.weiss@microsoft.com>
Andrew Weiss <andrew.weiss@docker.com> <andrew.weiss@outlook.com> Andrew Weiss <andrew.weiss@docker.com> <andrew.weiss@outlook.com>
Andrey Kolomentsev <andrey.kolomentsev@docker.com> Andrey Kolomentsev <andrey.kolomentsev@docker.com>
@ -68,8 +42,6 @@ Andrey Kolomentsev <andrey.kolomentsev@docker.com> <andrey.kolomentsev@gmail.com
André Martins <aanm90@gmail.com> <martins@noironetworks.com> André Martins <aanm90@gmail.com> <martins@noironetworks.com>
Andy Rothfusz <github@developersupport.net> <github@metaliveblog.com> Andy Rothfusz <github@developersupport.net> <github@metaliveblog.com>
Andy Smith <github@anarkystic.com> Andy Smith <github@anarkystic.com>
Andy Zhang <andy.zhangtao@hotmail.com>
Andy Zhang <andy.zhangtao@hotmail.com> <ztao@tibco-support.com>
Ankush Agarwal <ankushagarwal11@gmail.com> <ankushagarwal@users.noreply.github.com> Ankush Agarwal <ankushagarwal11@gmail.com> <ankushagarwal@users.noreply.github.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <amurdaca@redhat.com> Antonio Murdaca <antonio.murdaca@gmail.com> <amurdaca@redhat.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <me@runcom.ninja> Antonio Murdaca <antonio.murdaca@gmail.com> <me@runcom.ninja>
@ -79,48 +51,30 @@ Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@users.noreply.github.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com> Anuj Bahuguna <anujbahuguna.dev@gmail.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com> <abahuguna@fiberlink.com> Anuj Bahuguna <anujbahuguna.dev@gmail.com> <abahuguna@fiberlink.com>
Anusha Ragunathan <anusha.ragunathan@docker.com> <anusha@docker.com> Anusha Ragunathan <anusha.ragunathan@docker.com> <anusha@docker.com>
Anyu Wang <wanganyu@outlook.com> Arnaud Porterie <arnaud.porterie@docker.com>
Arko Dasgupta <arko@tetrate.io> Arnaud Porterie <arnaud.porterie@docker.com> <icecrime@gmail.com>
Arko Dasgupta <arko@tetrate.io> <arko.dasgupta@docker.com>
Arko Dasgupta <arko@tetrate.io> <arkodg@users.noreply.github.com>
Arnaud Porterie <icecrime@gmail.com>
Arnaud Porterie <icecrime@gmail.com> <arnaud.porterie@docker.com>
Arnaud Rebillout <arnaud.rebillout@collabora.com>
Arnaud Rebillout <arnaud.rebillout@collabora.com> <elboulangero@gmail.com>
Arthur Gautier <baloo@gandi.net> <superbaloo+registrations.github@superbaloo.net> Arthur Gautier <baloo@gandi.net> <superbaloo+registrations.github@superbaloo.net>
Artur Meyster <arthurfbi@yahoo.com>
Avi Miller <avi.miller@oracle.com> <avi.miller@gmail.com> Avi Miller <avi.miller@oracle.com> <avi.miller@gmail.com>
Ben Bonnefoy <frenchben@docker.com> Ben Bonnefoy <frenchben@docker.com>
Ben Golub <ben.golub@dotcloud.com> Ben Golub <ben.golub@dotcloud.com>
Ben Toews <mastahyeti@gmail.com> <mastahyeti@users.noreply.github.com> Ben Toews <mastahyeti@gmail.com> <mastahyeti@users.noreply.github.com>
Benny Ng <benny.tpng@gmail.com>
Benoit Chesneau <bchesneau@gmail.com> Benoit Chesneau <bchesneau@gmail.com>
Bevisy Zhang <binbin36520@gmail.com> Bevisy Zhang <binbin36520@gmail.com>
Bhiraj Butala <abhiraj.butala@gmail.com> Bhiraj Butala <abhiraj.butala@gmail.com>
Bhumika Bayani <bhumikabayani@gmail.com> Bhumika Bayani <bhumikabayani@gmail.com>
Bilal Amarni <bilal.amarni@gmail.com> <bamarni@users.noreply.github.com> Bilal Amarni <bilal.amarni@gmail.com> <bamarni@users.noreply.github.com>
Bill Wang <ozbillwang@gmail.com> <SydOps@users.noreply.github.com>
Bily Zhang <xcoder@tenxcloud.com> Bily Zhang <xcoder@tenxcloud.com>
Bill Wang <ozbillwang@gmail.com> <SydOps@users.noreply.github.com>
Bin Liu <liubin0329@gmail.com> Bin Liu <liubin0329@gmail.com>
Bin Liu <liubin0329@gmail.com> <liubin0329@users.noreply.github.com> Bin Liu <liubin0329@gmail.com> <liubin0329@users.noreply.github.com>
Bingshen Wang <bingshen.wbs@alibaba-inc.com> Bingshen Wang <bingshen.wbs@alibaba-inc.com>
Bjorn Neergaard <bjorn@neersighted.com>
Bjorn Neergaard <bjorn@neersighted.com> <bjorn.neergaard@docker.com>
Bjorn Neergaard <bjorn@neersighted.com> <bneergaard@mirantis.com>
Boaz Shuster <ripcurld.github@gmail.com> Boaz Shuster <ripcurld.github@gmail.com>
Bojun Zhu <bojun.zhu@foxmail.com>
Boqin Qin <bobbqqin@gmail.com>
Boshi Lian <farmer1992@gmail.com>
Brandon Philips <brandon.philips@coreos.com> <brandon@ifup.co> Brandon Philips <brandon.philips@coreos.com> <brandon@ifup.co>
Brandon Philips <brandon.philips@coreos.com> <brandon@ifup.org> Brandon Philips <brandon.philips@coreos.com> <brandon@ifup.org>
Brent Salisbury <brent.salisbury@docker.com> <brent@docker.com> Brent Salisbury <brent.salisbury@docker.com> <brent@docker.com>
Brian Goff <cpuguy83@gmail.com> Brian Goff <cpuguy83@gmail.com>
Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.home> Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.home>
Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.local> Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.local>
Brian Goff <cpuguy83@gmail.com> <brian.goff@microsoft.com>
Brian Goff <cpuguy83@gmail.com> <cpuguy@hey.com>
Cameron Sparr <gh@sparr.email>
Carlos de Paula <me@carlosedp.com>
Chander Govindarajan <chandergovind@gmail.com> Chander Govindarajan <chandergovind@gmail.com>
Chao Wang <wangchao.fnst@cn.fujitsu.com> <chaowang@localhost.localdomain> Chao Wang <wangchao.fnst@cn.fujitsu.com> <chaowang@localhost.localdomain>
Charles Hooper <charles.hooper@dotcloud.com> <chooper@plumata.com> Charles Hooper <charles.hooper@dotcloud.com> <chooper@plumata.com>
@ -132,21 +86,13 @@ Chen Qiu <cheney-90@hotmail.com> <21321229@zju.edu.cn>
Chengfei Shang <cfshang@alauda.io> Chengfei Shang <cfshang@alauda.io>
Chris Dias <cdias@microsoft.com> Chris Dias <cdias@microsoft.com>
Chris McKinnel <chris.mckinnel@tangentlabs.co.uk> Chris McKinnel <chris.mckinnel@tangentlabs.co.uk>
Chris Price <cprice@mirantis.com>
Chris Price <cprice@mirantis.com> <chris.price@docker.com>
Chris Telfer <ctelfer@docker.com>
Chris Telfer <ctelfer@docker.com> <ctelfer@users.noreply.github.com>
Christopher Biscardi <biscarch@sketcht.com> Christopher Biscardi <biscarch@sketcht.com>
Christopher Latham <sudosurootdev@gmail.com> Christopher Latham <sudosurootdev@gmail.com>
Christy Norman <christy@linux.vnet.ibm.com>
Chun Chen <ramichen@tencent.com> <chenchun.feed@gmail.com> Chun Chen <ramichen@tencent.com> <chenchun.feed@gmail.com>
Corbin Coleman <corbin.coleman@docker.com> Corbin Coleman <corbin.coleman@docker.com>
Cristian Ariza <dev@cristianrz.com>
Cristian Staretu <cristian.staretu@gmail.com> Cristian Staretu <cristian.staretu@gmail.com>
Cristian Staretu <cristian.staretu@gmail.com> <unclejack@users.noreply.github.com> Cristian Staretu <cristian.staretu@gmail.com> <unclejack@users.noreply.github.com>
Cristian Staretu <cristian.staretu@gmail.com> <unclejacksons@gmail.com> Cristian Staretu <cristian.staretu@gmail.com> <unclejacksons@gmail.com>
cui fliter <imcusg@gmail.com>
cui fliter <imcusg@gmail.com> cuishuang <imcusg@gmail.com>
CUI Wei <ghostplant@qq.com> cuiwei13 <cuiwei13@pku.edu.cn> CUI Wei <ghostplant@qq.com> cuiwei13 <cuiwei13@pku.edu.cn>
Daehyeok Mun <daehyeok@gmail.com> Daehyeok Mun <daehyeok@gmail.com>
Daehyeok Mun <daehyeok@gmail.com> <daehyeok@daehyeok-ui-MacBook-Air.local> Daehyeok Mun <daehyeok@gmail.com> <daehyeok@daehyeok-ui-MacBook-Air.local>
@ -173,35 +119,22 @@ Dattatraya Kumbhar <dattatraya.kumbhar@gslab.com>
Dave Goodchild <buddhamagnet@gmail.com> Dave Goodchild <buddhamagnet@gmail.com>
Dave Henderson <dhenderson@gmail.com> <Dave.Henderson@ca.ibm.com> Dave Henderson <dhenderson@gmail.com> <Dave.Henderson@ca.ibm.com>
Dave Tucker <dt@docker.com> <dave@dtucker.co.uk> Dave Tucker <dt@docker.com> <dave@dtucker.co.uk>
David Dooling <dooling@gmail.com>
David Dooling <dooling@gmail.com> <david.dooling@docker.com>
David M. Karr <davidmichaelkarr@gmail.com> David M. Karr <davidmichaelkarr@gmail.com>
David Sheets <dsheets@docker.com> <sheets@alum.mit.edu> David Sheets <dsheets@docker.com> <sheets@alum.mit.edu>
David Sissitka <me@dsissitka.com> David Sissitka <me@dsissitka.com>
David Williamson <david.williamson@docker.com> <davidwilliamson@users.noreply.github.com> David Williamson <david.williamson@docker.com> <davidwilliamson@users.noreply.github.com>
Derek Ch <denc716@gmail.com>
Derek McGowan <derek@mcg.dev>
Derek McGowan <derek@mcg.dev> <derek@mcgstyle.net>
Deshi Xiao <dxiao@redhat.com> <dsxiao@dataman-inc.com> Deshi Xiao <dxiao@redhat.com> <dsxiao@dataman-inc.com>
Deshi Xiao <dxiao@redhat.com> <xiaods@gmail.com> Deshi Xiao <dxiao@redhat.com> <xiaods@gmail.com>
Dhilip Kumars <dhilip.kumar.s@huawei.com>
Diego Siqueira <dieg0@live.com> Diego Siqueira <dieg0@live.com>
Diogo Monica <diogo@docker.com> <diogo.monica@gmail.com> Diogo Monica <diogo@docker.com> <diogo.monica@gmail.com>
Dmitry Sharshakov <d3dx12.xx@gmail.com> Dmitry Sharshakov <d3dx12.xx@gmail.com>
Dmitry Sharshakov <d3dx12.xx@gmail.com> <sh7dm@outlook.com> Dmitry Sharshakov <d3dx12.xx@gmail.com> <sh7dm@outlook.com>
Dmytro Iakovliev <dmytro.iakovliev@zodiacsystems.com>
Dominic Yin <yindongchao@inspur.com>
Dominik Honnef <dominik@honnef.co> <dominikh@fork-bomb.org> Dominik Honnef <dominik@honnef.co> <dominikh@fork-bomb.org>
Doug Davis <dug@us.ibm.com> <duglin@users.noreply.github.com> Doug Davis <dug@us.ibm.com> <duglin@users.noreply.github.com>
Doug Tangren <d.tangren@gmail.com> Doug Tangren <d.tangren@gmail.com>
Drew Erny <derny@mirantis.com>
Drew Erny <derny@mirantis.com> <drew.erny@docker.com>
Elan Ruusamäe <glen@pld-linux.org> Elan Ruusamäe <glen@pld-linux.org>
Elan Ruusamäe <glen@pld-linux.org> <glen@delfi.ee> Elan Ruusamäe <glen@pld-linux.org> <glen@delfi.ee>
Elango Sivanandam <elango.siva@docker.com> Elango Sivanandam <elango.siva@docker.com>
Elango Sivanandam <elango.siva@docker.com> <elango@docker.com>
Eli Uriegas <seemethere101@gmail.com>
Eli Uriegas <seemethere101@gmail.com> <eli.uriegas@docker.com>
Eric G. Noriega <enoriega@vizuri.com> <egnoriega@users.noreply.github.com> Eric G. Noriega <enoriega@vizuri.com> <egnoriega@users.noreply.github.com>
Eric Hanchrow <ehanchrow@ine.com> <eric.hanchrow@gmail.com> Eric Hanchrow <ehanchrow@ine.com> <eric.hanchrow@gmail.com>
Eric Rosenberg <ehaydenr@gmail.com> <ehaydenr@users.noreply.github.com> Eric Rosenberg <ehaydenr@gmail.com> <ehaydenr@users.noreply.github.com>
@ -221,14 +154,10 @@ Felix Hupfeld <felix@quobyte.com> <quofelix@users.noreply.github.com>
Felix Ruess <felix.ruess@gmail.com> <felix.ruess@roboception.de> Felix Ruess <felix.ruess@gmail.com> <felix.ruess@roboception.de>
Feng Yan <fy2462@gmail.com> Feng Yan <fy2462@gmail.com>
Fengtu Wang <wangfengtu@huawei.com> <wangfengtu@huawei.com> Fengtu Wang <wangfengtu@huawei.com> <wangfengtu@huawei.com>
Filipe Pina <hzlu1ot0@duck.com>
Filipe Pina <hzlu1ot0@duck.com> <636320+fopina@users.noreply.github.com>
Francisco Carriedo <fcarriedo@gmail.com> Francisco Carriedo <fcarriedo@gmail.com>
Frank Rosquin <frank.rosquin+github@gmail.com> <frank.rosquin@gmail.com> Frank Rosquin <frank.rosquin+github@gmail.com> <frank.rosquin@gmail.com>
Frank Yang <yyb196@gmail.com>
Frederick F. Kautz IV <fkautz@redhat.com> <fkautz@alumni.cmu.edu> Frederick F. Kautz IV <fkautz@redhat.com> <fkautz@alumni.cmu.edu>
Fu JinLin <withlin@yeah.net> Fu JinLin <withlin@yeah.net>
Gabriel Goller <gabrielgoller123@gmail.com>
Gabriel Nicolas Avellaneda <avellaneda.gabriel@gmail.com> Gabriel Nicolas Avellaneda <avellaneda.gabriel@gmail.com>
Gaetan de Villele <gdevillele@gmail.com> Gaetan de Villele <gdevillele@gmail.com>
Gang Qiao <qiaohai8866@gmail.com> <1373319223@qq.com> Gang Qiao <qiaohai8866@gmail.com> <1373319223@qq.com>
@ -239,71 +168,43 @@ Giampaolo Mancini <giampaolo@trampolineup.com>
Giovan Isa Musthofa <giovanism@outlook.co.id> Giovan Isa Musthofa <giovanism@outlook.co.id>
Gopikannan Venugopalsamy <gopikannan.venugopalsamy@gmail.com> Gopikannan Venugopalsamy <gopikannan.venugopalsamy@gmail.com>
Gou Rao <gou@portworx.com> <gourao@users.noreply.github.com> Gou Rao <gou@portworx.com> <gourao@users.noreply.github.com>
Grant Millar <rid@cylo.io>
Grant Millar <rid@cylo.io> <grant@cylo.io>
Grant Millar <rid@cylo.io> <grant@seednet.eu>
Greg Stephens <greg@udon.org> Greg Stephens <greg@udon.org>
Guillaume J. Charmes <guillaume.charmes@docker.com> <charmes.guillaume@gmail.com> Guillaume J. Charmes <guillaume.charmes@docker.com> <charmes.guillaume@gmail.com>
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume.charmes@dotcloud.com> Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume.charmes@dotcloud.com>
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume@charmes.net> Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume@charmes.net>
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume@docker.com> Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume@docker.com>
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume@dotcloud.com> Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume@dotcloud.com>
Gunadhya S. <6939749+gunadhya@users.noreply.github.com>
Guoqiang QI <guoqiang.qi1@gmail.com>
Guri <odg0318@gmail.com> Guri <odg0318@gmail.com>
Gurjeet Singh <gurjeet@singh.im> <singh.gurjeet@gmail.com> Gurjeet Singh <gurjeet@singh.im> <singh.gurjeet@gmail.com>
Gustav Sinder <gustav.sinder@gmail.com> Gustav Sinder <gustav.sinder@gmail.com>
Günther Jungbluth <gunther@gameslabs.net> Günther Jungbluth <gunther@gameslabs.net>
Hakan Özler <hakan.ozler@kodcu.com> Hakan Özler <hakan.ozler@kodcu.com>
Hao Shu Wei <haoshuwei24@gmail.com> Hao Shu Wei <haosw@cn.ibm.com>
Hao Shu Wei <haoshuwei24@gmail.com> <haoshuwei1989@163.com> Hao Shu Wei <haosw@cn.ibm.com> <haoshuwei1989@163.com>
Hao Shu Wei <haoshuwei24@gmail.com> <haosw@cn.ibm.com>
Harald Albers <github@albersweb.de> <albers@users.noreply.github.com> Harald Albers <github@albersweb.de> <albers@users.noreply.github.com>
Harald Niesche <harald@niesche.de>
Harold Cooper <hrldcpr@gmail.com> Harold Cooper <hrldcpr@gmail.com>
Harry Zhang <harryz@hyper.sh>
Harry Zhang <harryz@hyper.sh> <harryzhang@zju.edu.cn> Harry Zhang <harryz@hyper.sh> <harryzhang@zju.edu.cn>
Harry Zhang <harryz@hyper.sh> <resouer@163.com> Harry Zhang <harryz@hyper.sh> <resouer@163.com>
Harry Zhang <harryz@hyper.sh> <resouer@gmail.com> Harry Zhang <harryz@hyper.sh> <resouer@gmail.com>
Harry Zhang <resouer@163.com>
Harshal Patil <harshal.patil@in.ibm.com> <harche@users.noreply.github.com> Harshal Patil <harshal.patil@in.ibm.com> <harche@users.noreply.github.com>
He Simei <hesimei@zju.edu.cn>
Helen Xie <chenjg@harmonycloud.cn> Helen Xie <chenjg@harmonycloud.cn>
Hiroyuki Sasagawa <hs19870702@gmail.com> Hiroyuki Sasagawa <hs19870702@gmail.com>
Hollie Teal <hollie@docker.com> Hollie Teal <hollie@docker.com>
Hollie Teal <hollie@docker.com> <hollie.teal@docker.com> Hollie Teal <hollie@docker.com> <hollie.teal@docker.com>
Hollie Teal <hollie@docker.com> <hollietealok@users.noreply.github.com> Hollie Teal <hollie@docker.com> <hollietealok@users.noreply.github.com>
hsinko <21551195@zju.edu.cn> <hsinko@users.noreply.github.com>
Hu Keping <hukeping@huawei.com> Hu Keping <hukeping@huawei.com>
Huajin Tong <fliterdashen@gmail.com>
Hui Kang <hkang.sunysb@gmail.com>
Hui Kang <hkang.sunysb@gmail.com> <kangh@us.ibm.com>
Huu Nguyen <huu@prismskylabs.com> <whoshuu@gmail.com> Huu Nguyen <huu@prismskylabs.com> <whoshuu@gmail.com>
Hyeongkyu Lee <hyeongkyu.lee@navercorp.com>
Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com> Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com>
Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com> <1187766782@qq.com> Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com> <1187766782@qq.com>
Ian Campbell <ian.campbell@docker.com>
Ian Campbell <ian.campbell@docker.com> <ijc@docker.com>
Ilya Khlopotov <ilya.khlopotov@gmail.com> Ilya Khlopotov <ilya.khlopotov@gmail.com>
Iskander Sharipov <quasilyte@gmail.com> Iskander Sharipov <quasilyte@gmail.com>
Ivan Babrou <ibobrik@gmail.com>
Ivan Markin <sw@nogoegst.net> <twim@riseup.net> Ivan Markin <sw@nogoegst.net> <twim@riseup.net>
Jack Laxson <jackjrabbit@gmail.com> Jack Laxson <jackjrabbit@gmail.com>
Jacob Atzen <jacob@jacobatzen.dk> <jatzen@gmail.com> Jacob Atzen <jacob@jacobatzen.dk> <jatzen@gmail.com>
Jacob Tomlinson <jacob@tom.linson.uk> <jacobtomlinson@users.noreply.github.com> Jacob Tomlinson <jacob@tom.linson.uk> <jacobtomlinson@users.noreply.github.com>
Jaivish Kothari <janonymous.codevulture@gmail.com> Jaivish Kothari <janonymous.codevulture@gmail.com>
Jake Moshenko <jake@devtable.com>
Jakub Drahos <jdrahos@pulsepoint.com>
Jakub Drahos <jdrahos@pulsepoint.com> <jack.drahos@gmail.com>
James Nesbitt <jnesbitt@mirantis.com>
James Nesbitt <jnesbitt@mirantis.com> <james.nesbitt@wunderkraut.com>
Jamie Hannaford <jamie@limetree.org> <jamie.hannaford@rackspace.com> Jamie Hannaford <jamie@limetree.org> <jamie.hannaford@rackspace.com>
Jan Götte <jaseg@jaseg.net>
Jana Radhakrishnan <mrjana@docker.com>
Jana Radhakrishnan <mrjana@docker.com> <mrjana@socketplane.io>
Javier Bassi <javierbassi@gmail.com>
Javier Bassi <javierbassi@gmail.com> <CrimsonGlory@users.noreply.github.com>
Jay Lim <jay@imjching.com>
Jay Lim <jay@imjching.com> <imjching@hotmail.com>
Jean Rouge <rougej+github@gmail.com> <jer329@cornell.edu> Jean Rouge <rougej+github@gmail.com> <jer329@cornell.edu>
Jean-Baptiste Barth <jeanbaptiste.barth@gmail.com> Jean-Baptiste Barth <jeanbaptiste.barth@gmail.com>
Jean-Baptiste Dalido <jeanbaptiste@appgratis.com> Jean-Baptiste Dalido <jeanbaptiste@appgratis.com>
@ -311,16 +212,15 @@ Jean-Tiare Le Bigot <jt@yadutaf.fr> <admin@jtlebi.fr>
Jeff Anderson <jeff@docker.com> <jefferya@programmerq.net> Jeff Anderson <jeff@docker.com> <jefferya@programmerq.net>
Jeff Nickoloff <jeff.nickoloff@gmail.com> <jeff@allingeek.com> Jeff Nickoloff <jeff.nickoloff@gmail.com> <jeff@allingeek.com>
Jeroen Franse <jeroenfranse@gmail.com> Jeroen Franse <jeroenfranse@gmail.com>
Jessica Frazelle <jess@oxide.computer> Jessica Frazelle <acidburn@microsoft.com>
Jessica Frazelle <jess@oxide.computer> <acidburn@docker.com> Jessica Frazelle <acidburn@microsoft.com> <acidburn@docker.com>
Jessica Frazelle <jess@oxide.computer> <acidburn@google.com> Jessica Frazelle <acidburn@microsoft.com> <acidburn@google.com>
Jessica Frazelle <jess@oxide.computer> <acidburn@microsoft.com> Jessica Frazelle <acidburn@microsoft.com> <jess@docker.com>
Jessica Frazelle <jess@oxide.computer> <jess@docker.com> Jessica Frazelle <acidburn@microsoft.com> <jess@mesosphere.com>
Jessica Frazelle <jess@oxide.computer> <jess@mesosphere.com> Jessica Frazelle <acidburn@microsoft.com> <jessfraz@google.com>
Jessica Frazelle <jess@oxide.computer> <jessfraz@google.com> Jessica Frazelle <acidburn@microsoft.com> <jfrazelle@users.noreply.github.com>
Jessica Frazelle <jess@oxide.computer> <jfrazelle@users.noreply.github.com> Jessica Frazelle <acidburn@microsoft.com> <me@jessfraz.com>
Jessica Frazelle <jess@oxide.computer> <me@jessfraz.com> Jessica Frazelle <acidburn@microsoft.com> <princess@docker.com>
Jessica Frazelle <jess@oxide.computer> <princess@docker.com>
Jian Liao <jliao@alauda.io> Jian Liao <jliao@alauda.io>
Jiang Jinyang <jjyruby@gmail.com> Jiang Jinyang <jjyruby@gmail.com>
Jiang Jinyang <jjyruby@gmail.com> <jiangjinyang@outlook.com> Jiang Jinyang <jjyruby@gmail.com> <jiangjinyang@outlook.com>
@ -332,17 +232,15 @@ Joffrey F <joffrey@docker.com> <f.joffrey@gmail.com>
Joffrey F <joffrey@docker.com> <joffrey@dotcloud.com> Joffrey F <joffrey@docker.com> <joffrey@dotcloud.com>
Johan Euphrosine <proppy@google.com> <proppy@aminche.com> Johan Euphrosine <proppy@google.com> <proppy@aminche.com>
John Harris <john@johnharris.io> John Harris <john@johnharris.io>
John Howard <github@lowenna.com> John Howard (VM) <John.Howard@microsoft.com>
John Howard <github@lowenna.com> <10522484+lowenna@users.noreply.github.com> John Howard (VM) <John.Howard@microsoft.com> <jhoward@microsoft.com>
John Howard <github@lowenna.com> <jhoward@microsoft.com> John Howard (VM) <John.Howard@microsoft.com> <jhoward@ntdev.microsoft.com>
John Howard <github@lowenna.com> <jhoward@ntdev.microsoft.com> John Howard (VM) <John.Howard@microsoft.com> <jhowardmsft@users.noreply.github.com>
John Howard <github@lowenna.com> <jhowardmsft@users.noreply.github.com> John Howard (VM) <John.Howard@microsoft.com> <john.howard@microsoft.com>
John Howard <github@lowenna.com> <john.howard@microsoft.com>
John Howard <github@lowenna.com> <john@lowenna.com>
John Stephens <johnstep@docker.com> <johnstep@users.noreply.github.com> John Stephens <johnstep@docker.com> <johnstep@users.noreply.github.com>
Jon Surrell <jon.surrell@gmail.com> <jon.surrell@automattic.com>
Jonathan Choy <jonathan.j.choy@gmail.com> Jonathan Choy <jonathan.j.choy@gmail.com>
Jonathan Choy <jonathan.j.choy@gmail.com> <oni@tetsujinlabs.com> Jonathan Choy <jonathan.j.choy@gmail.com> <oni@tetsujinlabs.com>
Jon Surrell <jon.surrell@gmail.com> <jon.surrell@automattic.com>
Jordan Arentsen <blissdev@gmail.com> Jordan Arentsen <blissdev@gmail.com>
Jordan Jennings <jjn2009@gmail.com> <jjn2009@users.noreply.github.com> Jordan Jennings <jjn2009@gmail.com> <jjn2009@users.noreply.github.com>
Jorit Kleine-Möllhoff <joppich@bricknet.de> <joppich@users.noreply.github.com> Jorit Kleine-Möllhoff <joppich@bricknet.de> <joppich@users.noreply.github.com>
@ -358,12 +256,9 @@ Josh Wilson <josh.wilson@fivestars.com> <jcwilson@users.noreply.github.com>
Joyce Jang <mail@joycejang.com> Joyce Jang <mail@joycejang.com>
Julien Bordellier <julienbordellier@gmail.com> <git@julienbordellier.com> Julien Bordellier <julienbordellier@gmail.com> <git@julienbordellier.com>
Julien Bordellier <julienbordellier@gmail.com> <me@julienbordellier.com> Julien Bordellier <julienbordellier@gmail.com> <me@julienbordellier.com>
Jun Du <dujun5@huawei.com>
Justin Cormack <justin.cormack@docker.com> Justin Cormack <justin.cormack@docker.com>
Justin Cormack <justin.cormack@docker.com> <justin.cormack@unikernel.com> Justin Cormack <justin.cormack@docker.com> <justin.cormack@unikernel.com>
Justin Cormack <justin.cormack@docker.com> <justin@specialbusservice.com> Justin Cormack <justin.cormack@docker.com> <justin@specialbusservice.com>
Justin Keller <85903732+jk-vb@users.noreply.github.com>
Justin Keller <85903732+jk-vb@users.noreply.github.com> <jkeller@vb-jkeller-mbp.local>
Justin Simonelis <justin.p.simonelis@gmail.com> <justin.simonelis@PTS-JSIMON2.toronto.exclamation.com> Justin Simonelis <justin.p.simonelis@gmail.com> <justin.simonelis@PTS-JSIMON2.toronto.exclamation.com>
Justin Terry <juterry@microsoft.com> Justin Terry <juterry@microsoft.com>
Jérôme Petazzoni <jerome.petazzoni@docker.com> <jerome.petazzoni@dotcloud.com> Jérôme Petazzoni <jerome.petazzoni@docker.com> <jerome.petazzoni@dotcloud.com>
@ -380,9 +275,6 @@ Ken Cochrane <kencochrane@gmail.com> <KenCochrane@gmail.com>
Ken Herner <kherner@progress.com> <chosenken@gmail.com> Ken Herner <kherner@progress.com> <chosenken@gmail.com>
Ken Reese <krrgithub@gmail.com> Ken Reese <krrgithub@gmail.com>
Kenfe-Mickaël Laventure <mickael.laventure@gmail.com> Kenfe-Mickaël Laventure <mickael.laventure@gmail.com>
Kevin Alvarez <github@crazymax.dev>
Kevin Alvarez <github@crazymax.dev> <1951866+crazy-max@users.noreply.github.com>
Kevin Alvarez <github@crazymax.dev> <crazy-max@users.noreply.github.com>
Kevin Feyrer <kevin.feyrer@btinternet.com> <kevinfeyrer@users.noreply.github.com> Kevin Feyrer <kevin.feyrer@btinternet.com> <kevinfeyrer@users.noreply.github.com>
Kevin Kern <kaiwentan@harmonycloud.cn> Kevin Kern <kaiwentan@harmonycloud.cn>
Kevin Meredith <kevin.m.meredith@gmail.com> Kevin Meredith <kevin.m.meredith@gmail.com>
@ -393,17 +285,11 @@ Konrad Kleine <konrad.wilhelm.kleine@gmail.com> <kwk@users.noreply.github.com>
Konstantin Gribov <grossws@gmail.com> Konstantin Gribov <grossws@gmail.com>
Konstantin Pelykh <kpelykh@zettaset.com> Konstantin Pelykh <kpelykh@zettaset.com>
Kotaro Yoshimatsu <kotaro.yoshimatsu@gmail.com> Kotaro Yoshimatsu <kotaro.yoshimatsu@gmail.com>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp> <btkushuwahak@KUNAL-PC.swh.swh.nttdata.co.jp>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp> <kunal.kushwaha@gmail.com> Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp> <kunal.kushwaha@gmail.com>
Kyle Squizzato <ksquizz@gmail.com>
Kyle Squizzato <ksquizz@gmail.com> <kyle.squizzato@docker.com>
Lajos Papp <lajos.papp@sequenceiq.com> <lalyos@yahoo.com> Lajos Papp <lajos.papp@sequenceiq.com> <lalyos@yahoo.com>
Lei Gong <lgong@alauda.io> Lei Gong <lgong@alauda.io>
Lei Jitang <leijitang@huawei.com> Lei Jitang <leijitang@huawei.com>
Lei Jitang <leijitang@huawei.com> <leijitang@gmail.com> Lei Jitang <leijitang@huawei.com> <leijitang@gmail.com>
Lei Jitang <leijitang@huawei.com> <leijitang@outlook.com>
Leiiwang <u2takey@gmail.com>
Liang Mingqiang <mqliang.zju@gmail.com> Liang Mingqiang <mqliang.zju@gmail.com>
Liang-Chi Hsieh <viirya@gmail.com> Liang-Chi Hsieh <viirya@gmail.com>
Liao Qingwei <liaoqingwei@huawei.com> Liao Qingwei <liaoqingwei@huawei.com>
@ -420,11 +306,8 @@ Lyn <energylyn@zju.edu.cn>
Lynda O'Leary <lyndaoleary29@gmail.com> Lynda O'Leary <lyndaoleary29@gmail.com>
Lynda O'Leary <lyndaoleary29@gmail.com> <lyndaoleary@hotmail.com> Lynda O'Leary <lyndaoleary29@gmail.com> <lyndaoleary@hotmail.com>
Ma Müller <mueller-ma@users.noreply.github.com> Ma Müller <mueller-ma@users.noreply.github.com>
Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com>
Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com> <madhanm@corp.microsoft.com>
Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com> <madhanm@microsoft.com> Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com> <madhanm@microsoft.com>
Madhu Venugopal <mavenugo@gmail.com> <madhu@docker.com> Madhu Venugopal <madhu@socketplane.io> <madhu@docker.com>
Madhu Venugopal <mavenugo@gmail.com> <madhu@socketplane.io>
Mageee <fangpuyi@foxmail.com> <21521230.zju.edu.cn> Mageee <fangpuyi@foxmail.com> <21521230.zju.edu.cn>
Mansi Nahar <mmn4185@rit.edu> <mansi.nahar@macbookpro-mansinahar.local> Mansi Nahar <mmn4185@rit.edu> <mansi.nahar@macbookpro-mansinahar.local>
Mansi Nahar <mmn4185@rit.edu> <mansinahar@users.noreply.github.com> Mansi Nahar <mmn4185@rit.edu> <mansinahar@users.noreply.github.com>
@ -437,12 +320,10 @@ Markan Patel <mpatel678@gmail.com>
Markus Kortlang <hyp3rdino@googlemail.com> <markus.kortlang@lhsystems.com> Markus Kortlang <hyp3rdino@googlemail.com> <markus.kortlang@lhsystems.com>
Martin Redmond <redmond.martin@gmail.com> <martin@tinychat.com> Martin Redmond <redmond.martin@gmail.com> <martin@tinychat.com>
Martin Redmond <redmond.martin@gmail.com> <xgithub@redmond5.com> Martin Redmond <redmond.martin@gmail.com> <xgithub@redmond5.com>
Maru Newby <mnewby@thesprawl.net>
Mary Anthony <mary.anthony@docker.com> <mary@docker.com> Mary Anthony <mary.anthony@docker.com> <mary@docker.com>
Mary Anthony <mary.anthony@docker.com> <moxieandmore@gmail.com> Mary Anthony <mary.anthony@docker.com> <moxieandmore@gmail.com>
Mary Anthony <mary.anthony@docker.com> moxiegirl <mary@docker.com> Mary Anthony <mary.anthony@docker.com> moxiegirl <mary@docker.com>
Masato Ohba <over.rye@gmail.com> Masato Ohba <over.rye@gmail.com>
Mathieu Paturel <mathieu.paturel@gmail.com>
Matt Bentley <matt.bentley@docker.com> <mbentley@mbentley.net> Matt Bentley <matt.bentley@docker.com> <mbentley@mbentley.net>
Matt Schurenko <matt.schurenko@gmail.com> Matt Schurenko <matt.schurenko@gmail.com>
Matt Williams <mattyw@me.com> Matt Williams <mattyw@me.com>
@ -454,48 +335,30 @@ Matthias Kühnle <git.nivoc@neverbox.com> <kuehnle@online.de>
Mauricio Garavaglia <mauricio@medallia.com> <mauriciogaravaglia@gmail.com> Mauricio Garavaglia <mauricio@medallia.com> <mauriciogaravaglia@gmail.com>
Maxwell <csuhp007@gmail.com> Maxwell <csuhp007@gmail.com>
Maxwell <csuhp007@gmail.com> <csuhqg@foxmail.com> Maxwell <csuhp007@gmail.com> <csuhqg@foxmail.com>
Menghui Chen <menghui.chen@alibaba-inc.com> Michael Crosby <michael@docker.com> <crosby.michael@gmail.com>
Michael Beskin <mrbeskin@gmail.com> Michael Crosby <michael@docker.com> <crosbymichael@gmail.com>
Michael Crosby <crosbymichael@gmail.com> Michael Crosby <michael@docker.com> <michael@crosbymichael.com>
Michael Crosby <crosbymichael@gmail.com> <crosby.michael@gmail.com> Michał Gryko <github@odkurzacz.org>
Michael Crosby <crosbymichael@gmail.com> <michael@crosbymichael.com>
Michael Crosby <crosbymichael@gmail.com> <michael@docker.com>
Michael Crosby <crosbymichael@gmail.com> <michael@thepasture.io>
Michael Hudson-Doyle <michael.hudson@canonical.com> <michael.hudson@linaro.org> Michael Hudson-Doyle <michael.hudson@canonical.com> <michael.hudson@linaro.org>
Michael Huettermann <michael@huettermann.net> Michael Huettermann <michael@huettermann.net>
Michael Käufl <docker@c.michael-kaeufl.de> <michael-k@users.noreply.github.com> Michael Käufl <docker@c.michael-kaeufl.de> <michael-k@users.noreply.github.com>
Michael Nussbaum <michael.nussbaum@getbraintree.com> Michael Nussbaum <michael.nussbaum@getbraintree.com>
Michael Nussbaum <michael.nussbaum@getbraintree.com> <code@getbraintree.com> Michael Nussbaum <michael.nussbaum@getbraintree.com> <code@getbraintree.com>
Michael Spetsiotis <michael_spets@hotmail.com> Michael Spetsiotis <michael_spets@hotmail.com>
Michael Stapelberg <michael+gh@stapelberg.de>
Michael Stapelberg <michael+gh@stapelberg.de> <stapelberg@google.com>
Michal Kostrzewa <michal.kostrzewa@codilime.com>
Michal Kostrzewa <michal.kostrzewa@codilime.com> <kostrzewa.michal@o2.pl>
Michal Minář <miminar@redhat.com> Michal Minář <miminar@redhat.com>
Michał Gryko <github@odkurzacz.org>
Michiel de Jong <michiel@unhosted.org> Michiel de Jong <michiel@unhosted.org>
Mickaël Fortunato <morsi.morsicus@gmail.com> Mickaël Fortunato <morsi.morsicus@gmail.com>
Miguel Angel Alvarez Cabrerizo <doncicuto@gmail.com> <30386061+doncicuto@users.noreply.github.com> Miguel Angel Alvarez Cabrerizo <doncicuto@gmail.com> <30386061+doncicuto@users.noreply.github.com>
Miguel Angel Fernández <elmendalerenda@gmail.com> Miguel Angel Fernández <elmendalerenda@gmail.com>
Mihai Borobocea <MihaiBorob@gmail.com> <MihaiBorobocea@gmail.com> Mihai Borobocea <MihaiBorob@gmail.com> <MihaiBorobocea@gmail.com>
Mikael Davranche <mikael.davranche@corp.ovh.com>
Mikael Davranche <mikael.davranche@corp.ovh.com> <mikael.davranche@corp.ovh.net>
Mike Casas <mkcsas0@gmail.com> <mikecasas@users.noreply.github.com> Mike Casas <mkcsas0@gmail.com> <mikecasas@users.noreply.github.com>
Mike Goelzer <mike.goelzer@docker.com> <mgoelzer@docker.com> Mike Goelzer <mike.goelzer@docker.com> <mgoelzer@docker.com>
Milas Bowman <devnull@milas.dev>
Milas Bowman <devnull@milas.dev> <milasb@gmail.com>
Milas Bowman <devnull@milas.dev> <milas.bowman@docker.com>
Milind Chawre <milindchawre@gmail.com> Milind Chawre <milindchawre@gmail.com>
Misty Stanley-Jones <misty@docker.com> <misty@apache.org> Misty Stanley-Jones <misty@docker.com> <misty@apache.org>
Mohammad Banikazemi <MBanikazemi@gmail.com>
Mohammad Banikazemi <MBanikazemi@gmail.com> <mb@us.ibm.com>
Mohd Sadiq <mohdsadiq058@gmail.com> <mohdsadiq058@gmail.com>
Mohd Sadiq <mohdsadiq058@gmail.com> <42430865+msadiq058@users.noreply.github.com>
Mohit Soni <mosoni@ebay.com> <mohitsoni1989@gmail.com> Mohit Soni <mosoni@ebay.com> <mohitsoni1989@gmail.com>
Moorthy RS <rsmoorthy@gmail.com> <rsmoorthy@users.noreply.github.com> Moorthy RS <rsmoorthy@gmail.com> <rsmoorthy@users.noreply.github.com>
Moysés Borges <moysesb@gmail.com> Moysés Borges <moysesb@gmail.com>
Moysés Borges <moysesb@gmail.com> <moyses.furtado@wplex.com.br> Moysés Borges <moysesb@gmail.com> <moyses.furtado@wplex.com.br>
mrfly <mr.wrfly@gmail.com> <wrfly@users.noreply.github.com>
Nace Oroz <orkica@gmail.com> Nace Oroz <orkica@gmail.com>
Natasha Jarus <linuxmercedes@gmail.com> Natasha Jarus <linuxmercedes@gmail.com>
Nathan LeClaire <nathan.leclaire@docker.com> <nathan.leclaire@gmail.com> Nathan LeClaire <nathan.leclaire@docker.com> <nathan.leclaire@gmail.com>
@ -512,8 +375,6 @@ Oh Jinkyun <tintypemolly@gmail.com> <tintypemolly@Ohui-MacBook-Pro.local>
Oliver Reason <oli@overrateddev.co> Oliver Reason <oli@overrateddev.co>
Olli Janatuinen <olli.janatuinen@gmail.com> Olli Janatuinen <olli.janatuinen@gmail.com>
Olli Janatuinen <olli.janatuinen@gmail.com> <olljanat@users.noreply.github.com> Olli Janatuinen <olli.janatuinen@gmail.com> <olljanat@users.noreply.github.com>
Onur Filiz <onur.filiz@microsoft.com>
Onur Filiz <onur.filiz@microsoft.com> <ofiliz@users.noreply.github.com>
Ouyang Liduo <oyld0210@163.com> Ouyang Liduo <oyld0210@163.com>
Patrick Stapleton <github@gdi2290.com> Patrick Stapleton <github@gdi2290.com>
Paul Liljenberg <liljenberg.paul@gmail.com> <letters@paulnotcom.se> Paul Liljenberg <liljenberg.paul@gmail.com> <letters@paulnotcom.se>
@ -524,63 +385,41 @@ Peter Dave Hello <hsu@peterdavehello.org> <PeterDaveHello@users.noreply.github.c
Peter Jaffe <pjaffe@nevo.com> Peter Jaffe <pjaffe@nevo.com>
Peter Nagy <xificurC@gmail.com> <pnagy@gratex.com> Peter Nagy <xificurC@gmail.com> <pnagy@gratex.com>
Peter Waller <p@pwaller.net> <peter@scraperwiki.com> Peter Waller <p@pwaller.net> <peter@scraperwiki.com>
Phil Estes <estesp@gmail.com> Phil Estes <estesp@linux.vnet.ibm.com> <estesp@gmail.com>
Phil Estes <estesp@gmail.com> <estesp@amazon.com>
Phil Estes <estesp@gmail.com> <estesp@linux.vnet.ibm.com>
Philip Alexander Etling <paetling@gmail.com> Philip Alexander Etling <paetling@gmail.com>
Philipp Gillé <philipp.gille@gmail.com> <philippgille@users.noreply.github.com> Philipp Gillé <philipp.gille@gmail.com> <philippgille@users.noreply.github.com>
Prasanna Gautam <prasannagautam@gmail.com>
Puneet Pruthi <puneet.pruthi@oracle.com>
Puneet Pruthi <puneet.pruthi@oracle.com> <puneetpruthi@gmail.com>
Qiang Huang <h.huangqiang@huawei.com> Qiang Huang <h.huangqiang@huawei.com>
Qiang Huang <h.huangqiang@huawei.com> <qhuang@10.0.2.15> Qiang Huang <h.huangqiang@huawei.com> <qhuang@10.0.2.15>
Qin TianHuan <tianhuan@bingotree.cn>
Ray Tsang <rayt@google.com> <saturnism@users.noreply.github.com> Ray Tsang <rayt@google.com> <saturnism@users.noreply.github.com>
Renaud Gaubert <rgaubert@nvidia.com> <renaud.gaubert@gmail.com> Renaud Gaubert <rgaubert@nvidia.com> <renaud.gaubert@gmail.com>
Richard Scothern <richard.scothern@gmail.com>
Robert Terhaar <rterhaar@atlanticdynamic.com> <robbyt@users.noreply.github.com> Robert Terhaar <rterhaar@atlanticdynamic.com> <robbyt@users.noreply.github.com>
Roberto G. Hashioka <roberto.hashioka@docker.com> <roberto_hashioka@hotmail.com> Roberto G. Hashioka <roberto.hashioka@docker.com> <roberto_hashioka@hotmail.com>
Roberto Muñoz Fernández <robertomf@gmail.com> <roberto.munoz.fernandez.contractor@bbva.com> Roberto Muñoz Fernández <robertomf@gmail.com> <roberto.munoz.fernandez.contractor@bbva.com>
Robin Thoni <robin@rthoni.com>
Roman Dudin <katrmr@gmail.com> <decadent@users.noreply.github.com> Roman Dudin <katrmr@gmail.com> <decadent@users.noreply.github.com>
Rong Zhang <rongzhang@alauda.io> Rong Zhang <rongzhang@alauda.io>
Rongxiang Song <tinysong1226@gmail.com> Rongxiang Song <tinysong1226@gmail.com>
Rony Weng <ronyweng@synology.com>
Ross Boucher <rboucher@gmail.com> Ross Boucher <rboucher@gmail.com>
Rui Cao <ruicao@alauda.io> Rui Cao <ruicao@alauda.io>
Runshen Zhu <runshen.zhu@gmail.com> Runshen Zhu <runshen.zhu@gmail.com>
Ryan Stelly <ryan.stelly@live.com> Ryan Stelly <ryan.stelly@live.com>
Ryoga Saito <contact@proelbtn.com>
Ryoga Saito <contact@proelbtn.com> <proelbtn@users.noreply.github.com>
Sainath Grandhi <sainath.grandhi@intel.com>
Sainath Grandhi <sainath.grandhi@intel.com> <saiallforums@gmail.com>
Sakeven Jiang <jc5930@sina.cn> Sakeven Jiang <jc5930@sina.cn>
Samuel Karp <me@samuelkarp.com> <skarp@amazon.com>
Sandeep Bansal <sabansal@microsoft.com> Sandeep Bansal <sabansal@microsoft.com>
Sandeep Bansal <sabansal@microsoft.com> <msabansal@microsoft.com> Sandeep Bansal <sabansal@microsoft.com> <msabansal@microsoft.com>
Santhosh Manohar <santhosh@docker.com>
Sargun Dhillon <sargun@netflix.com> <sargun@sargun.me> Sargun Dhillon <sargun@netflix.com> <sargun@sargun.me>
Satoshi Tagomori <tagomoris@gmail.com>
Sean Lee <seanlee@tw.ibm.com> <scaleoutsean@users.noreply.github.com> Sean Lee <seanlee@tw.ibm.com> <scaleoutsean@users.noreply.github.com>
Sebastiaan van Stijn <github@gone.nl>
Sebastiaan van Stijn <github@gone.nl> <moby@example.com>
Sebastiaan van Stijn <github@gone.nl> <sebastiaan@ws-key-sebas3.dpi1.dpi> Sebastiaan van Stijn <github@gone.nl> <sebastiaan@ws-key-sebas3.dpi1.dpi>
Sebastiaan van Stijn <github@gone.nl> <thaJeztah@users.noreply.github.com> Sebastiaan van Stijn <github@gone.nl> <thaJeztah@users.noreply.github.com>
Sebastian Thomschke <sebthom@users.noreply.github.com>
Seongyeol Lim <seongyeol37@gmail.com>
Serhii Nakon <serhii.n@thescimus.com>
Shaun Kaasten <shaunk@gmail.com> Shaun Kaasten <shaunk@gmail.com>
Shawn Landden <shawn@churchofgit.com> <shawnlandden@gmail.com> Shawn Landden <shawn@churchofgit.com> <shawnlandden@gmail.com>
Shengbo Song <thomassong@tencent.com> Shengbo Song <thomassong@tencent.com>
Shengbo Song <thomassong@tencent.com> <mymneo@163.com> Shengbo Song <thomassong@tencent.com> <mymneo@163.com>
Shih-Yuan Lee <fourdollars@gmail.com> Shih-Yuan Lee <fourdollars@gmail.com>
Shishir Mahajan <shishir.mahajan@redhat.com> <smahajan@redhat.com> Shishir Mahajan <shishir.mahajan@redhat.com> <smahajan@redhat.com>
Shu-Wai Chow <shu-wai.chow@seattlechildrens.org>
Shukui Yang <yangshukui@huawei.com> Shukui Yang <yangshukui@huawei.com>
Shuwei Hao <haosw@cn.ibm.com>
Shuwei Hao <haosw@cn.ibm.com> <haoshuwei24@gmail.com>
Sidhartha Mani <sidharthamn@gmail.com> Sidhartha Mani <sidharthamn@gmail.com>
Sjoerd Langkemper <sjoerd-github@linuxonly.nl> <sjoerd@byte.nl> Sjoerd Langkemper <sjoerd-github@linuxonly.nl> <sjoerd@byte.nl>
Smark Meng <smark@freecoop.net>
Smark Meng <smark@freecoop.net> <smarkm@users.noreply.github.com>
Solomon Hykes <solomon@docker.com> <s@docker.com> Solomon Hykes <solomon@docker.com> <s@docker.com>
Solomon Hykes <solomon@docker.com> <solomon.hykes@dotcloud.com> Solomon Hykes <solomon@docker.com> <solomon.hykes@dotcloud.com>
Solomon Hykes <solomon@docker.com> <solomon@dotcloud.com> Solomon Hykes <solomon@docker.com> <solomon@dotcloud.com>
@ -594,12 +433,9 @@ Stefan Berger <stefanb@linux.vnet.ibm.com>
Stefan Berger <stefanb@linux.vnet.ibm.com> <stefanb@us.ibm.com> Stefan Berger <stefanb@linux.vnet.ibm.com> <stefanb@us.ibm.com>
Stefan J. Wernli <swernli@microsoft.com> <swernli@ntdev.microsoft.com> Stefan J. Wernli <swernli@microsoft.com> <swernli@ntdev.microsoft.com>
Stefan S. <tronicum@user.github.com> Stefan S. <tronicum@user.github.com>
Stefan Scherer <stefan.scherer@docker.com>
Stefan Scherer <stefan.scherer@docker.com> <scherer_stefan@icloud.com>
Stephan Spindler <shutefan@gmail.com> <shutefan@users.noreply.github.com> Stephan Spindler <shutefan@gmail.com> <shutefan@users.noreply.github.com>
Stephen Day <stevvooe@gmail.com> Stephen Day <stephen.day@docker.com>
Stephen Day <stevvooe@gmail.com> <stephen.day@docker.com> Stephen Day <stephen.day@docker.com> <stevvooe@users.noreply.github.com>
Stephen Day <stevvooe@gmail.com> <stevvooe@users.noreply.github.com>
Steve Desmond <steve@vtsv.ca> <stevedesmond-ca@users.noreply.github.com> Steve Desmond <steve@vtsv.ca> <stevedesmond-ca@users.noreply.github.com>
Sun Gengze <690388648@qq.com> Sun Gengze <690388648@qq.com>
Sun Jianbo <wonderflow.sun@gmail.com> Sun Jianbo <wonderflow.sun@gmail.com>
@ -611,48 +447,28 @@ Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@fosiki.com>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@home.org.au> Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@home.org.au>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@users.noreply.github.com> Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@users.noreply.github.com>
Sven Dowideit <SvenDowideit@home.org.au> <¨SvenDowideit@home.org.au¨> Sven Dowideit <SvenDowideit@home.org.au> <¨SvenDowideit@home.org.au¨>
Sylvain Baubeau <lebauce@gmail.com>
Sylvain Baubeau <lebauce@gmail.com> <sbaubeau@redhat.com>
Sylvain Bellemare <sylvain@ascribe.io> Sylvain Bellemare <sylvain@ascribe.io>
Sylvain Bellemare <sylvain@ascribe.io> <sylvain.bellemare@ezeep.com> Sylvain Bellemare <sylvain@ascribe.io> <sylvain.bellemare@ezeep.com>
Takuto Sato <tockn.jp@gmail.com>
Tangi Colin <tangicolin@gmail.com> Tangi Colin <tangicolin@gmail.com>
Tejesh Mehta <tejesh.mehta@gmail.com> <tj@init.me> Tejesh Mehta <tejesh.mehta@gmail.com> <tj@init.me>
Terry Chu <zue.hterry@gmail.com>
Terry Chu <zue.hterry@gmail.com> <jubosh.tw@gmail.com>
Thatcher Peskens <thatcher@docker.com> Thatcher Peskens <thatcher@docker.com>
Thatcher Peskens <thatcher@docker.com> <thatcher@dotcloud.com> Thatcher Peskens <thatcher@docker.com> <thatcher@dotcloud.com>
Thatcher Peskens <thatcher@docker.com> <thatcher@gmx.net> Thatcher Peskens <thatcher@docker.com> <thatcher@gmx.net>
Thiago Alves Silva <thiago.alves@aurea.com>
Thiago Alves Silva <thiago.alves@aurea.com> <thiagoalves@users.noreply.github.com>
Thomas Gazagnaire <thomas@gazagnaire.org> <thomas@gazagnaire.com> Thomas Gazagnaire <thomas@gazagnaire.org> <thomas@gazagnaire.com>
Thomas Ledos <thomas.ledos92@gmail.com>
Thomas Léveil <thomasleveil@gmail.com> Thomas Léveil <thomasleveil@gmail.com>
Thomas Léveil <thomasleveil@gmail.com> <thomasleveil@users.noreply.github.com> Thomas Léveil <thomasleveil@gmail.com> <thomasleveil@users.noreply.github.com>
Tibor Vass <teabee89@gmail.com> <tibor@docker.com> Tibor Vass <teabee89@gmail.com> <tibor@docker.com>
Tibor Vass <teabee89@gmail.com> <tiborvass@users.noreply.github.com> Tibor Vass <teabee89@gmail.com> <tiborvass@users.noreply.github.com>
Till Claassen <pixelistik@users.noreply.github.com>
Tim Bart <tim@fewagainstmany.com> Tim Bart <tim@fewagainstmany.com>
Tim Bosse <taim@bosboot.org> <maztaim@users.noreply.github.com> Tim Bosse <taim@bosboot.org> <maztaim@users.noreply.github.com>
Tim Potter <tpot@hpe.com>
Tim Potter <tpot@hpe.com> <tpot@Tims-MacBook-Pro.local>
Tim Ruffles <oi@truffles.me.uk> <timruffles@googlemail.com> Tim Ruffles <oi@truffles.me.uk> <timruffles@googlemail.com>
Tim Terhorst <mynamewastaken+git@gmail.com> Tim Terhorst <mynamewastaken+git@gmail.com>
Tim Wagner <tim.wagner@freenet.ag>
Tim Wagner <tim.wagner@freenet.ag> <33624860+herrwagner@users.noreply.github.com>
Tim Zju <21651152@zju.edu.cn> Tim Zju <21651152@zju.edu.cn>
Timothy Hobbs <timothyhobbs@seznam.cz> Timothy Hobbs <timothyhobbs@seznam.cz>
Toli Kuznets <toli@docker.com> Toli Kuznets <toli@docker.com>
Tom Barlow <tomwbarlow@gmail.com> Tom Barlow <tomwbarlow@gmail.com>
Tom Denham <tom@tomdee.co.uk>
Tom Denham <tom@tomdee.co.uk> <tom.denham@metaswitch.com>
Tom Sweeney <tsweeney@redhat.com> Tom Sweeney <tsweeney@redhat.com>
Tom Wilkie <tom.wilkie@gmail.com>
Tom Wilkie <tom.wilkie@gmail.com> <tom@weave.works>
Tõnis Tiigi <tonistiigi@gmail.com> Tõnis Tiigi <tonistiigi@gmail.com>
Trace Andreason <tandreason@gmail.com>
Trapier Marshall <tmarshall@mirantis.com>
Trapier Marshall <tmarshall@mirantis.com> <trapier.marshall@docker.com>
Trishna Guha <trishnaguha17@gmail.com> Trishna Guha <trishnaguha17@gmail.com>
Tristan Carel <tristan@cogniteev.com> Tristan Carel <tristan@cogniteev.com>
Tristan Carel <tristan@cogniteev.com> <tristan.carel@gmail.com> Tristan Carel <tristan@cogniteev.com> <tristan.carel@gmail.com>
@ -666,25 +482,15 @@ Victor Vieux <victor.vieux@docker.com> <victor@docker.com>
Victor Vieux <victor.vieux@docker.com> <victor@dotcloud.com> Victor Vieux <victor.vieux@docker.com> <victor@dotcloud.com>
Victor Vieux <victor.vieux@docker.com> <victorvieux@gmail.com> Victor Vieux <victor.vieux@docker.com> <victorvieux@gmail.com>
Victor Vieux <victor.vieux@docker.com> <vieux@docker.com> Victor Vieux <victor.vieux@docker.com> <vieux@docker.com>
Vikas Choudhary <choudharyvikas16@gmail.com>
Vikram bir Singh <vsingh@mirantis.com>
Vikram bir Singh <vsingh@mirantis.com> <vikrambir.singh@docker.com>
Viktor Vojnovski <viktor.vojnovski@amadeus.com> <vojnovski@gmail.com> Viktor Vojnovski <viktor.vojnovski@amadeus.com> <vojnovski@gmail.com>
Vincent Batts <vbatts@redhat.com> <vbatts@hashbangbash.com> Vincent Batts <vbatts@redhat.com> <vbatts@hashbangbash.com>
Vincent Bernat <vincent@bernat.ch> Vincent Bernat <Vincent.Bernat@exoscale.ch> <bernat@luffy.cx>
Vincent Bernat <vincent@bernat.ch> <bernat@luffy.cx> Vincent Bernat <Vincent.Bernat@exoscale.ch> <vincent@bernat.im>
Vincent Bernat <vincent@bernat.ch> <Vincent.Bernat@exoscale.ch>
Vincent Bernat <vincent@bernat.ch> <vincent@bernat.im>
Vincent Boulineau <vincent.boulineau@datadoghq.com>
Vincent Demeester <vincent.demeester@docker.com> <vincent+github@demeester.fr> Vincent Demeester <vincent.demeester@docker.com> <vincent+github@demeester.fr>
Vincent Demeester <vincent.demeester@docker.com> <vincent@demeester.fr> Vincent Demeester <vincent.demeester@docker.com> <vincent@demeester.fr>
Vincent Demeester <vincent.demeester@docker.com> <vincent@sbr.pm> Vincent Demeester <vincent.demeester@docker.com> <vincent@sbr.pm>
Vishnu Kannan <vishnuk@google.com> Vishnu Kannan <vishnuk@google.com>
Vitaly Ostrosablin <vostrosablin@virtuozzo.com>
Vitaly Ostrosablin <vostrosablin@virtuozzo.com> <tmp6154@yandex.ru>
Vladimir Rutsky <altsysrq@gmail.com> <iamironbob@gmail.com> Vladimir Rutsky <altsysrq@gmail.com> <iamironbob@gmail.com>
Vladislav Kolesnikov <vkolesnikov@beget.ru>
Vladislav Kolesnikov <vkolesnikov@beget.ru> <prime@vladqa.ru>
Walter Stanish <walter@pratyeka.org> Walter Stanish <walter@pratyeka.org>
Wang Chao <chao.wang@ucloud.cn> Wang Chao <chao.wang@ucloud.cn>
Wang Chao <chao.wang@ucloud.cn> <wcwxyz@gmail.com> Wang Chao <chao.wang@ucloud.cn> <wcwxyz@gmail.com>
@ -696,27 +502,16 @@ Wang Yuexiao <wang.yuexiao@zte.com.cn>
Wayne Chang <wayne@neverfear.org> Wayne Chang <wayne@neverfear.org>
Wayne Song <wsong@docker.com> <wsong@users.noreply.github.com> Wayne Song <wsong@docker.com> <wsong@users.noreply.github.com>
Wei Wu <wuwei4455@gmail.com> cizixs <cizixs@163.com> Wei Wu <wuwei4455@gmail.com> cizixs <cizixs@163.com>
Wei-Ting Kuo <waitingkuo0527@gmail.com>
Wen Cheng Ma <wenchma@cn.ibm.com>
Wenjun Tang <tangwj2@lenovo.com> <dodia@163.com> Wenjun Tang <tangwj2@lenovo.com> <dodia@163.com>
Wewang Xiaorenfine <wang.xiaoren@zte.com.cn> Wewang Xiaorenfine <wang.xiaoren@zte.com.cn>
Will Weaver <monkey@buildingbananas.com> Will Weaver <monkey@buildingbananas.com>
Wing-Kam Wong <wingkwong.code@gmail.com>
WuLonghui <wlh6666@qq.com>
Xian Chaobo <xianchaobo@huawei.com> Xian Chaobo <xianchaobo@huawei.com>
Xian Chaobo <xianchaobo@huawei.com> <jimmyxian2004@yahoo.com.cn> Xian Chaobo <xianchaobo@huawei.com> <jimmyxian2004@yahoo.com.cn>
Xianglin Gao <xlgao@zju.edu.cn> Xianglin Gao <xlgao@zju.edu.cn>
Xianjie <guxianjie@gmail.com>
Xianjie <guxianjie@gmail.com> <datastream@datastream-laptop.local>
Xianlu Bird <xianlubird@gmail.com> Xianlu Bird <xianlubird@gmail.com>
Xiao YongBiao <xyb4638@gmail.com> Xiao YongBiao <xyb4638@gmail.com>
Xiao Zhang <xiaozhang0210@hotmail.com>
Xiaodong Liu <liuxiaodong@loongson.cn>
Xiaodong Zhang <a4012017@sina.com> Xiaodong Zhang <a4012017@sina.com>
Xiaohua Ding <xiao_hua_ding@sina.cn>
Xiaoyu Zhang <zhang.xiaoyu33@zte.com.cn> Xiaoyu Zhang <zhang.xiaoyu33@zte.com.cn>
Xinfeng Liu <XinfengLiu@icloud.com>
Xinfeng Liu <XinfengLiu@icloud.com> <xinfeng.liu@gmail.com>
Xuecong Liao <satorulogic@gmail.com> Xuecong Liao <satorulogic@gmail.com>
Yamasaki Masahide <masahide.y@gmail.com> Yamasaki Masahide <masahide.y@gmail.com>
Yao Zaiyong <yaozaiyong@hotmail.com> Yao Zaiyong <yaozaiyong@hotmail.com>
@ -733,23 +528,12 @@ Yu Changchun <yuchangchun1@huawei.com>
Yu Chengxia <yuchengxia@huawei.com> Yu Chengxia <yuchengxia@huawei.com>
Yu Peng <yu.peng36@zte.com.cn> Yu Peng <yu.peng36@zte.com.cn>
Yu Peng <yu.peng36@zte.com.cn> <yupeng36@zte.com.cn> Yu Peng <yu.peng36@zte.com.cn> <yupeng36@zte.com.cn>
Yuan Sun <sunyuan3@huawei.com>
Yue Zhang <zy675793960@yeah.net> Yue Zhang <zy675793960@yeah.net>
Yufei Xiong <yufei.xiong@qq.com>
Zach Gershman <zachgersh@gmail.com>
Zach Gershman <zachgersh@gmail.com> <zachgersh@users.noreply.github.com>
Zachary Jaffee <zjaffee@us.ibm.com> <zij@case.edu> Zachary Jaffee <zjaffee@us.ibm.com> <zij@case.edu>
Zachary Jaffee <zjaffee@us.ibm.com> <zjaffee@apache.org> Zachary Jaffee <zjaffee@us.ibm.com> <zjaffee@apache.org>
Zhang Kun <zkazure@gmail.com>
Zhang Wentao <zhangwentao234@huawei.com>
ZhangHang <stevezhang2014@gmail.com> ZhangHang <stevezhang2014@gmail.com>
Zhenkun Bi <bi.zhenkun@zte.com.cn> Zhenkun Bi <bi.zhenkun@zte.com.cn>
Zhou Hao <zhouhao@cn.fujitsu.com>
Zhoulin Xie <zhoulin.xie@daocloud.io> Zhoulin Xie <zhoulin.xie@daocloud.io>
Zhou Hao <zhouhao@cn.fujitsu.com>
Zhu Kunjia <zhu.kunjia@zte.com.cn> Zhu Kunjia <zhu.kunjia@zte.com.cn>
Ziheng Liu <lzhfromustc@gmail.com>
Zou Yu <zouyu7@huawei.com> Zou Yu <zouyu7@huawei.com>
Zuhayr Elahi <zuhayr.elahi@docker.com>
Zuhayr Elahi <zuhayr.elahi@docker.com> <elahi.zuhayr@gmail.com>
정재영 <jjy600901@gmail.com>
정재영 <jjy600901@gmail.com> <43400316+J-jaeyoung@users.noreply.github.com>
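The list above is the repository's `.mailmap`: each line maps one or more commit identities onto a canonical name and e-mail so that history, `git shortlog`, and the generated AUTHORS file credit one person consistently. A minimal shell sketch of how such entries behave, using an identity taken from the list above (the output shown is what the mapping implies, not captured from a real run):

```
# A .mailmap line has the form:
#   Canonical Name <canonical@email> [Old Name] <old@email>

# Ask git how a historical commit identity is rewritten by the mapping:
git check-mailmap "Solomon Hykes <solomon@dotcloud.com>"
# expected: Solomon Hykes <solomon@docker.com>

# Summarize contributors with the mapping applied:
git shortlog -sne --no-merges | head
```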

460
AUTHORS

File diff suppressed because it is too large

3609
CHANGELOG.md Normal file

File diff suppressed because it is too large

View file

@ -27,10 +27,10 @@ issue, please bring it to their attention right away!
Please **DO NOT** file a public issue, instead send your report privately to Please **DO NOT** file a public issue, instead send your report privately to
[security@docker.com](mailto:security@docker.com). [security@docker.com](mailto:security@docker.com).
Security reports are greatly appreciated and we will publicly thank you for it, Security reports are greatly appreciated and we will publicly thank you for it.
although we keep your name confidential if you request it. We also like to send We also like to send gifts&mdash;if you're into schwag, make sure to let
gifts&mdash;if you're into schwag, make sure to let us know. We currently do not us know. We currently do not offer a paid security bounty program, but are not
offer a paid security bounty program, but are not ruling it out in the future. ruling it out in the future.
## Reporting other issues ## Reporting other issues
@ -72,7 +72,7 @@ anybody starts working on it.
We are always thrilled to receive pull requests. We do our best to process them We are always thrilled to receive pull requests. We do our best to process them
quickly. If your pull request is not accepted on the first try, quickly. If your pull request is not accepted on the first try,
don't get discouraged! Our contributor's guide explains [the review process we don't get discouraged! Our contributor's guide explains [the review process we
use for simple changes](https://docs.docker.com/contribute/overview/). use for simple changes](https://docs.docker.com/opensource/workflow/make-a-contribution/).
### Design and cleanup proposals ### Design and cleanup proposals
@ -101,8 +101,9 @@ the contributors guide.
<td> <td>
<p> <p>
Register for the Docker Community Slack at Register for the Docker Community Slack at
<a href="https://dockr.ly/comm-slack" target="_blank">https://dockr.ly/comm-slack</a>. <a href="https://community.docker.com/registrations/groups/4316" target="_blank">https://community.docker.com/registrations/groups/4316</a>.
We use the #moby-project channel for general discussion, and there are separate channels for other Moby projects such as #containerd. We use the #moby-project channel for general discussion, and there are separate channels for other Moby projects such as #containerd.
Archives are available at <a href="https://dockercommunity.slackarchive.io/" target="_blank">https://dockercommunity.slackarchive.io/</a>.
</p> </p>
</td> </td>
</tr> </tr>
@ -309,6 +310,36 @@ Don't forget: being a maintainer is a time investment. Make sure you
will have time to make yourself available. You don't have to be a will have time to make yourself available. You don't have to be a
maintainer to make a difference on the project! maintainer to make a difference on the project!
### Manage issues and pull requests using the Derek bot
If you want to help label, assign, close or reopen issues or pull requests
without commit rights, ask a maintainer to add your GitHub handle to the
`.DEREK.yml` file. [Derek](https://github.com/alexellis/derek) is a bot that extends
GitHub's user permissions to help non-committers manage issues and pull requests simply by commenting.
For example:
* Labels
```
Derek add label: kind/question
Derek remove label: status/claimed
```
* Assign work
```
Derek assign: username
Derek unassign: me
```
* Manage issues and PRs
```
Derek close
Derek reopen
```
## Moby community guidelines ## Moby community guidelines
We want to keep the Moby community awesome, growing and collaborative. We need We want to keep the Moby community awesome, growing and collaborative. We need
@ -422,6 +453,6 @@ The rules:
guidelines. Since you've read all the rules, you now know that. guidelines. Since you've read all the rules, you now know that.
If you are having trouble getting into the mood of idiomatic Go, we recommend If you are having trouble getting into the mood of idiomatic Go, we recommend
reading through [Effective Go](https://go.dev/doc/effective_go). The reading through [Effective Go](https://golang.org/doc/effective_go.html). The
[Go Blog](https://go.dev/blog/) is also a great resource. Drinking the [Go Blog](https://blog.golang.org) is also a great resource. Drinking the
kool-aid is a lot easier than going thirsty. kool-aid is a lot easier than going thirsty.

View file

@ -1,671 +1,316 @@
# syntax=docker/dockerfile:1.7 # This file describes the standard way to build Docker, using docker
#
# Usage:
#
# # Use make to build a development environment image and run it in a container.
# # This is slow the first time.
# make BIND_DIR=. shell
#
# The following commands are executed inside the running container.
ARG GO_VERSION=1.21.9 # # Make a dockerd binary.
ARG BASE_DEBIAN_DISTRO="bookworm" # # hack/make.sh binary
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}" #
ARG XX_VERSION=1.4.0 # # Install dockerd to /usr/local/bin
# # make install
#
# # Run unit tests
# # hack/test/unit
#
# # Run tests e.g. integration, py
# # hack/make.sh binary test-integration test-docker-py
#
# Note: AppArmor used to mess with privileged mode, but this is no longer
# the case. Therefore, you don't have to disable it anymore.
#
ARG VPNKIT_VERSION=0.5.0 ARG CROSS="false"
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ARG GO_VERSION=1.13.15
ARG DEBIAN_FRONTEND=noninteractive
ARG VPNKIT_DIGEST=e508a17cfacc8fd39261d5b4e397df2b953690da577e2c987a47630cd0c42f8e
ARG DOCKERCLI_REPOSITORY="https://github.com/docker/cli.git" FROM golang:${GO_VERSION}-buster AS base
ARG DOCKERCLI_VERSION=v26.0.0 ARG APT_MIRROR
# cli version used for integration-cli tests RUN sed -ri "s/(httpredir|deb).debian.org/${APT_MIRROR:-deb.debian.org}/g" /etc/apt/sources.list \
ARG DOCKERCLI_INTEGRATION_REPOSITORY="https://github.com/docker/cli.git" && sed -ri "s/(security).debian.org/${APT_MIRROR:-security.debian.org}/g" /etc/apt/sources.list
ARG DOCKERCLI_INTEGRATION_VERSION=v17.06.2-ce
ARG BUILDX_VERSION=0.13.1
ARG COMPOSE_VERSION=v2.25.0
ARG SYSTEMD="false"
ARG DOCKER_STATIC=1
# REGISTRY_VERSION specifies the version of the registry to download from
# https://hub.docker.com/r/distribution/distribution. This version of
# the registry is used to test schema 2 manifests. Generally, the version
# specified here should match a current release.
ARG REGISTRY_VERSION=2.8.3
# delve is currently only supported on linux/amd64 and linux/arm64;
# https://github.com/go-delve/delve/blob/v1.8.1/pkg/proc/native/support_sentinel.go#L1-L6
ARG DELVE_SUPPORTED=${TARGETPLATFORM#linux/amd64} DELVE_SUPPORTED=${DELVE_SUPPORTED#linux/arm64}
ARG DELVE_SUPPORTED=${DELVE_SUPPORTED:+"unsupported"}
ARG DELVE_SUPPORTED=${DELVE_SUPPORTED:-"supported"}
# cross compilation helper
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
# dummy stage to make sure the image is built for deps that don't support some
# architectures
FROM --platform=$BUILDPLATFORM busybox AS build-dummy
RUN mkdir -p /build
FROM scratch AS binary-dummy
COPY --from=build-dummy /build /build
# base
FROM --platform=$BUILDPLATFORM ${GOLANG_IMAGE} AS base
COPY --from=xx / /
RUN echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN apt-get update && apt-get install --no-install-recommends -y file
ENV GO111MODULE=off ENV GO111MODULE=off
ENV GOTOOLCHAIN=local
FROM base AS criu FROM base AS criu
ADD --chmod=0644 https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_11/Release.key /etc/apt/trusted.gpg.d/criu.gpg.asc ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-criu-aptlib,target=/var/lib/apt \ # Install dependency packages specific to criu
--mount=type=cache,sharing=locked,id=moby-criu-aptcache,target=/var/cache/apt \ RUN apt-get update && apt-get install -y --no-install-recommends \
echo 'deb https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_12/ /' > /etc/apt/sources.list.d/criu.list \ libcap-dev \
&& apt-get update \ libnet-dev \
&& apt-get install -y --no-install-recommends criu \ libnl-3-dev \
&& install -D /usr/sbin/criu /build/criu \ libprotobuf-c-dev \
&& /build/criu --version libprotobuf-dev \
protobuf-c-compiler \
protobuf-compiler \
python-protobuf \
&& rm -rf /var/lib/apt/lists/*
# registry # Install CRIU for checkpoint/restore support
FROM base AS registry-src ARG CRIU_VERSION=3.14
WORKDIR /usr/src/registry RUN mkdir -p /usr/src/criu \
RUN git init . && git remote add origin "https://github.com/distribution/distribution.git" && curl -sSL https://github.com/checkpoint-restore/criu/archive/v${CRIU_VERSION}.tar.gz | tar -C /usr/src/criu/ -xz --strip-components=1 \
&& cd /usr/src/criu \
&& make \
&& make PREFIX=/build/ install-criu
FROM base AS registry FROM base AS registry
WORKDIR /go/src/github.com/docker/distribution # Install two versions of the registry. The first is an older version that
# only supports schema1 manifests. The second is a newer version that supports
# REGISTRY_VERSION_SCHEMA1 specifies the version of the registry to build and # both. This allows integration-cli tests to cover push/pull with both schema1
# install from the https://github.com/docker/distribution repository. This is # and schema2 manifests.
# an older (pre v2.3.0) version of the registry that only supports schema1 ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
# manifests. This version of the registry is not working on arm64, so installation ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
# is skipped on that architecture. RUN set -x \
ARG REGISTRY_VERSION_SCHEMA1=v2.1.0 && export GOPATH="$(mktemp -d)" \
ARG TARGETPLATFORM && git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution" \
RUN --mount=from=registry-src,src=/usr/src/registry,rw \ && (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT") \
--mount=type=cache,target=/root/.cache/go-build,id=registry-build-$TARGETPLATFORM \ && GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
--mount=type=cache,target=/go/pkg/mod \ go build -buildmode=pie -o /build/registry-v2 github.com/docker/distribution/cmd/registry \
--mount=type=tmpfs,target=/go/src <<EOT && case $(dpkg --print-architecture) in \
set -ex amd64|ppc64*|s390x) \
export GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1"); \
# Make the /build directory no matter what so that it doesn't fail on arm64 or GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"; \
# any other platform where we don't build this registry go build -buildmode=pie -o /build/registry-v2-schema1 github.com/docker/distribution/cmd/registry; \
mkdir /build ;; \
case $TARGETPLATFORM in esac \
linux/amd64|linux/arm/v7|linux/ppc64le|linux/s390x) && rm -rf "$GOPATH"
git fetch -q --depth 1 origin "${REGISTRY_VERSION_SCHEMA1}" +refs/tags/*:refs/tags/*
git checkout -q FETCH_HEAD
CGO_ENABLED=0 xx-go build -o /build/registry-v2-schema1 -v ./cmd/registry
xx-verify /build/registry-v2-schema1
;;
esac
EOT
FROM distribution/distribution:$REGISTRY_VERSION AS registry-v2
RUN mkdir /build && mv /bin/registry /build/registry-v2
# go-swagger
FROM base AS swagger-src
WORKDIR /usr/src/swagger
# Currently uses a fork from https://github.com/kolyshkin/go-swagger/tree/golang-1.13-fix
# TODO: move to under moby/ or fix upstream go-swagger to work for us.
RUN git init . && git remote add origin "https://github.com/kolyshkin/go-swagger.git"
# GO_SWAGGER_COMMIT specifies the version of the go-swagger binary to build and
# install. Go-swagger is used in CI for validating swagger.yaml in hack/validate/swagger-gen
ARG GO_SWAGGER_COMMIT=c56166c036004ba7a3a321e5951ba472b9ae298c
RUN git fetch -q --depth 1 origin "${GO_SWAGGER_COMMIT}" && git checkout -q FETCH_HEAD
FROM base AS swagger FROM base AS swagger
WORKDIR /go/src/github.com/go-swagger/go-swagger # Install go-swagger for validating swagger.yaml
ARG TARGETPLATFORM # This is https://github.com/kolyshkin/go-swagger/tree/golang-1.13-fix
RUN --mount=from=swagger-src,src=/usr/src/swagger,rw \ # TODO: move to under moby/ or fix upstream go-swagger to work for us.
--mount=type=cache,target=/root/.cache/go-build,id=swagger-build-$TARGETPLATFORM \ ENV GO_SWAGGER_COMMIT 5793aa66d4b4112c2602c716516e24710e4adbb5
--mount=type=cache,target=/go/pkg/mod \ RUN set -x \
--mount=type=tmpfs,target=/go/src/ <<EOT && export GOPATH="$(mktemp -d)" \
set -e && git clone https://github.com/kolyshkin/go-swagger.git "$GOPATH/src/github.com/go-swagger/go-swagger" \
xx-go build -o /build/swagger ./cmd/swagger && (cd "$GOPATH/src/github.com/go-swagger/go-swagger" && git checkout -q "$GO_SWAGGER_COMMIT") \
xx-verify /build/swagger && go build -o /build/swagger github.com/go-swagger/go-swagger/cmd/swagger \
EOT && rm -rf "$GOPATH"
# frozen-images FROM base AS frozen-images
# See also frozenImages in "testutil/environment/protect.go" (which needs to ARG DEBIAN_FRONTEND
# be updated when adding images to this list) RUN apt-get update && apt-get install -y --no-install-recommends \
FROM debian:${BASE_DEBIAN_DISTRO} AS frozen-images
RUN --mount=type=cache,sharing=locked,id=moby-frozen-images-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-frozen-images-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \ ca-certificates \
curl \ jq \
jq && rm -rf /var/lib/apt/lists/*
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling # Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh / COPY contrib/download-frozen-image-v2.sh /
ARG TARGETARCH
ARG TARGETVARIANT
RUN /download-frozen-image-v2.sh /build \ RUN /download-frozen-image-v2.sh /build \
busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \ buildpack-deps:jessie@sha256:dd86dced7c9cd2a724e779730f0a53f93b7ef42228d4344b25ce9a42a1486251 \
busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \ busybox:latest@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 \
debian:bookworm-slim@sha256:2bc5c236e9b262645a323e9088dfa3bb1ecb16cc75811daf40a23a824d665be9 \ busybox:glibc@sha256:0b55a30394294ab23b9afd58fab94e61a923f5834fba7ddbae7f8e0c11ba85e6 \
hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \ debian:jessie@sha256:287a20c5f73087ab406e6b364833e3fb7b3ae63ca0eb3486555dc27ed32c6e60 \
arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1 hello-world:latest@sha256:be0cd392e45be79ffeffa6b05338b98ebb16c87b255f48e297ec7f98e123905c
# See also ensureFrozenImagesLinux() in "integration-cli/fixtures_linux_daemon_test.go" (which needs to be updated when adding images to this list)
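The frozen-images stage above pins a handful of images by digest and fetches them with `contrib/download-frozen-image-v2.sh`, so tests can `docker load` them instead of pulling from a registry. A hedged sketch of using that script directly, with the busybox digest copied from the stage above; the target directory is an arbitrary example, and the final `tar | docker load` step assumes the save-format layout the script produces:

```
./contrib/download-frozen-image-v2.sh /tmp/frozen-images \
    busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209

# The directory is laid out like "docker save" output, so it can be loaded
# into a local daemon without network access:
tar -cC /tmp/frozen-images . | docker load
```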
# delve FROM base AS cross-false
FROM base AS delve-src
WORKDIR /usr/src/delve
RUN git init . && git remote add origin "https://github.com/go-delve/delve.git"
# DELVE_VERSION specifies the version of the Delve debugger binary
# from the https://github.com/go-delve/delve repository.
# It can be used to run Docker with a possibility of
# attaching debugger to it.
ARG DELVE_VERSION=v1.21.1
RUN git fetch -q --depth 1 origin "${DELVE_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS delve-supported FROM base AS cross-true
WORKDIR /usr/src/delve ARG DEBIAN_FRONTEND
ARG TARGETPLATFORM RUN dpkg --add-architecture arm64
RUN --mount=from=delve-src,src=/usr/src/delve,rw \ RUN dpkg --add-architecture armel
--mount=type=cache,target=/root/.cache/go-build,id=delve-build-$TARGETPLATFORM \ RUN dpkg --add-architecture armhf
--mount=type=cache,target=/go/pkg/mod <<EOT RUN if [ "$(go env GOHOSTARCH)" = "amd64" ]; then \
set -e apt-get update && apt-get install -y --no-install-recommends \
GO111MODULE=on xx-go build -o /build/dlv ./cmd/dlv crossbuild-essential-arm64 \
xx-verify /build/dlv crossbuild-essential-armel \
EOT crossbuild-essential-armhf \
&& rm -rf /var/lib/apt/lists/*; \
fi
FROM binary-dummy AS delve-unsupported FROM cross-${CROSS} as dev-base
FROM delve-${DELVE_SUPPORTED} AS delve
FROM base AS tomll FROM dev-base AS runtime-dev-cross-false
# GOTOML_VERSION specifies the version of the tomll binary to build and install ARG DEBIAN_FRONTEND
# from the https://github.com/pelletier/go-toml repository. This binary is used RUN apt-get update && apt-get install -y --no-install-recommends \
# in CI in the hack/validate/toml script. libapparmor-dev \
# libseccomp-dev \
# When updating this version, consider updating the github.com/pelletier/go-toml && rm -rf /var/lib/apt/lists/*
# dependency in vendor.mod accordingly.
ARG GOTOML_VERSION=v1.8.1
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "github.com/pelletier/go-toml/cmd/tomll@${GOTOML_VERSION}" \
&& /build/tomll --help
FROM base AS gowinres FROM cross-true AS runtime-dev-cross-true
# GOWINRES_VERSION defines go-winres tool version ARG DEBIAN_FRONTEND
ARG GOWINRES_VERSION=v0.3.1 # These crossbuild packages rely on gcc-<arch>, but this doesn't want to install
RUN --mount=type=cache,target=/root/.cache/go-build \ # on non-amd64 systems.
--mount=type=cache,target=/go/pkg/mod \ # Additionally, the crossbuild-amd64 is currently only on debian:buster, so
GOBIN=/build/ GO111MODULE=on go install "github.com/tc-hib/go-winres@${GOWINRES_VERSION}" \ other architectures cannot crossbuild amd64.
&& /build/go-winres --help RUN if [ "$(go env GOHOSTARCH)" = "amd64" ]; then \
apt-get update && apt-get install -y --no-install-recommends \
libapparmor-dev:arm64 \
libapparmor-dev:armel \
libapparmor-dev:armhf \
libseccomp-dev:arm64 \
libseccomp-dev:armel \
libseccomp-dev:armhf \
# install this arches seccomp here due to compat issues with the v0 builder
# This is as opposed to inheriting from runtime-dev-cross-false
libapparmor-dev \
libseccomp-dev \
&& rm -rf /var/lib/apt/lists/*; \
fi
# containerd FROM runtime-dev-cross-${CROSS} AS runtime-dev
FROM base AS containerd-src
WORKDIR /usr/src/containerd
RUN git init . && git remote add origin "https://github.com/containerd/containerd.git"
# CONTAINERD_VERSION is used to build containerd binaries, and used for the
# integration tests. The distributed docker .deb and .rpm packages depend on a
# separate (containerd.io) package, which may be a different version as is
# specified here. The containerd golang package is also pinned in vendor.mod.
# When updating the binary version you may also need to update the vendor
# version to pick up bug fixes or new APIs, however, usually the Go packages
# are built from a commit from the master branch.
ARG CONTAINERD_VERSION=v1.7.15
RUN git fetch -q --depth 1 origin "${CONTAINERD_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS containerd-build FROM base AS tomlv
WORKDIR /go/src/github.com/containerd/containerd ENV INSTALL_BINARY_NAME=tomlv
ARG TARGETPLATFORM ARG TOMLV_COMMIT
RUN --mount=type=cache,sharing=locked,id=moby-containerd-aptlib,target=/var/lib/apt \ COPY hack/dockerfile/install/install.sh ./install.sh
--mount=type=cache,sharing=locked,id=moby-containerd-aptcache,target=/var/cache/apt \ COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
apt-get update && xx-apt-get install -y --no-install-recommends \ RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
gcc \
FROM base AS vndr
ENV INSTALL_BINARY_NAME=vndr
ARG VNDR_COMMIT
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
FROM dev-base AS containerd
ARG DEBIAN_FRONTEND
ARG CONTAINERD_COMMIT
RUN apt-get update && apt-get install -y --no-install-recommends \
libbtrfs-dev \ libbtrfs-dev \
libsecret-1-dev \ && rm -rf /var/lib/apt/lists/*
pkg-config ENV INSTALL_BINARY_NAME=containerd
ARG DOCKER_STATIC COPY hack/dockerfile/install/install.sh ./install.sh
RUN --mount=from=containerd-src,src=/usr/src/containerd,rw \ COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
--mount=type=cache,target=/root/.cache/go-build,id=containerd-build-$TARGETPLATFORM <<EOT RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
set -e
export CC=$(xx-info)-gcc
export CGO_ENABLED=$([ "$DOCKER_STATIC" = "1" ] && echo "0" || echo "1")
xx-go --wrap
make $([ "$DOCKER_STATIC" = "1" ] && echo "STATIC=1") binaries
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") bin/containerd
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") bin/containerd-shim-runc-v2
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") bin/ctr
mkdir /build
mv bin/containerd bin/containerd-shim-runc-v2 bin/ctr /build
EOT
FROM containerd-build AS containerd-linux FROM dev-base AS proxy
FROM binary-dummy AS containerd-windows ENV INSTALL_BINARY_NAME=proxy
FROM containerd-${TARGETOS} AS containerd ARG LIBNETWORK_COMMIT
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
FROM base AS golangci_lint FROM base AS gometalinter
ARG GOLANGCI_LINT_VERSION=v1.55.2 ENV INSTALL_BINARY_NAME=gometalinter
RUN --mount=type=cache,target=/root/.cache/go-build \ COPY hack/dockerfile/install/install.sh ./install.sh
--mount=type=cache,target=/go/pkg/mod \ COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
GOBIN=/build/ GO111MODULE=on go install "github.com/golangci/golangci-lint/cmd/golangci-lint@${GOLANGCI_LINT_VERSION}" \ RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
&& /build/golangci-lint --version
FROM base AS gotestsum FROM base AS gotestsum
ARG GOTESTSUM_VERSION=v1.8.2 ENV INSTALL_BINARY_NAME=gotestsum
RUN --mount=type=cache,target=/root/.cache/go-build \ ARG GOTESTSUM_COMMIT
--mount=type=cache,target=/go/pkg/mod \ COPY hack/dockerfile/install/install.sh ./install.sh
GOBIN=/build/ GO111MODULE=on go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}" \ COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
&& /build/gotestsum --version RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
FROM base AS shfmt FROM dev-base AS dockercli
ARG SHFMT_VERSION=v3.8.0 ENV INSTALL_BINARY_NAME=dockercli
RUN --mount=type=cache,target=/root/.cache/go-build \ ARG DOCKERCLI_CHANNEL
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "mvdan.cc/sh/v3/cmd/shfmt@${SHFMT_VERSION}" \
&& /build/shfmt --version
FROM base AS gopls
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "golang.org/x/tools/gopls@latest" \
&& /build/gopls version
FROM base AS dockercli
WORKDIR /go/src/github.com/docker/cli
ARG DOCKERCLI_REPOSITORY
ARG DOCKERCLI_VERSION ARG DOCKERCLI_VERSION
ARG TARGETPLATFORM COPY hack/dockerfile/install/install.sh ./install.sh
RUN --mount=source=hack/dockerfile/cli.sh,target=/download-or-build-cli.sh \ COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
--mount=type=cache,id=dockercli-git-$TARGETPLATFORM,sharing=locked,target=./.git \ RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
--mount=type=cache,target=/root/.cache/go-build,id=dockercli-build-$TARGETPLATFORM \
rm -f ./.git/*.lock \
&& /download-or-build-cli.sh ${DOCKERCLI_VERSION} ${DOCKERCLI_REPOSITORY} /build \
&& /build/docker --version
FROM base AS dockercli-integration FROM runtime-dev AS runc
WORKDIR /go/src/github.com/docker/cli ENV INSTALL_BINARY_NAME=runc
ARG DOCKERCLI_INTEGRATION_REPOSITORY ARG RUNC_COMMIT
ARG DOCKERCLI_INTEGRATION_VERSION ARG RUNC_BUILDTAGS
ARG TARGETPLATFORM COPY hack/dockerfile/install/install.sh ./install.sh
RUN --mount=source=hack/dockerfile/cli.sh,target=/download-or-build-cli.sh \ COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
--mount=type=cache,id=dockercli-git-$TARGETPLATFORM,sharing=locked,target=./.git \ RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
--mount=type=cache,target=/root/.cache/go-build,id=dockercli-build-$TARGETPLATFORM \
rm -f ./.git/*.lock \
&& /download-or-build-cli.sh ${DOCKERCLI_INTEGRATION_VERSION} ${DOCKERCLI_INTEGRATION_REPOSITORY} /build \
&& /build/docker --version
# runc FROM dev-base AS tini
FROM base AS runc-src ARG DEBIAN_FRONTEND
WORKDIR /usr/src/runc ARG TINI_COMMIT
RUN git init . && git remote add origin "https://github.com/opencontainers/runc.git" RUN apt-get update && apt-get install -y --no-install-recommends \
# RUNC_VERSION should match the version that is used by the containerd version cmake \
# that is used. If you need to update runc, open a pull request in the containerd vim-common \
# project first, and update both after that is merged. When updating RUNC_VERSION, && rm -rf /var/lib/apt/lists/*
# consider updating runc in vendor.mod accordingly. COPY hack/dockerfile/install/install.sh ./install.sh
ARG RUNC_VERSION=v1.1.12 ENV INSTALL_BINARY_NAME=tini
RUN git fetch -q --depth 1 origin "${RUNC_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
FROM base AS runc-build FROM dev-base AS rootlesskit
WORKDIR /go/src/github.com/opencontainers/runc ENV INSTALL_BINARY_NAME=rootlesskit
ARG TARGETPLATFORM ARG ROOTLESSKIT_COMMIT
RUN --mount=type=cache,sharing=locked,id=moby-runc-aptlib,target=/var/lib/apt \ COPY hack/dockerfile/install/install.sh ./install.sh
--mount=type=cache,sharing=locked,id=moby-runc-aptcache,target=/var/cache/apt \ COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
apt-get update && xx-apt-get install -y --no-install-recommends \ RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
dpkg-dev \ COPY ./contrib/dockerd-rootless.sh /build
gcc \
libc6-dev \
libseccomp-dev \
pkg-config
ARG DOCKER_STATIC
RUN --mount=from=runc-src,src=/usr/src/runc,rw \
--mount=type=cache,target=/root/.cache/go-build,id=runc-build-$TARGETPLATFORM <<EOT
set -e
xx-go --wrap
CGO_ENABLED=1 make "$([ "$DOCKER_STATIC" = "1" ] && echo "static" || echo "runc")"
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") runc
mkdir /build
mv runc /build/
EOT
FROM runc-build AS runc-linux FROM djs55/vpnkit@sha256:${VPNKIT_DIGEST} AS vpnkit
FROM binary-dummy AS runc-windows
FROM runc-${TARGETOS} AS runc
# tini # TODO: Some of this is only really needed for testing, it would be nice to split this up
FROM base AS tini-src FROM runtime-dev AS dev
WORKDIR /usr/src/tini ARG DEBIAN_FRONTEND
RUN git init . && git remote add origin "https://github.com/krallin/tini.git"
# TINI_VERSION specifies the version of tini (docker-init) to build. This
# binary is used when starting containers with the `--init` option.
ARG TINI_VERSION=v0.19.0
RUN git fetch -q --depth 1 origin "${TINI_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS tini-build
WORKDIR /go/src/github.com/krallin/tini
RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends cmake
ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \
xx-apt-get install -y --no-install-recommends \
gcc \
libc6-dev \
pkg-config
RUN --mount=from=tini-src,src=/usr/src/tini,rw \
--mount=type=cache,target=/root/.cache/go-build,id=tini-build-$TARGETPLATFORM <<EOT
set -e
CC=$(xx-info)-gcc cmake .
make tini-static
xx-verify --static tini-static
mkdir /build
mv tini-static /build/docker-init
EOT
FROM tini-build AS tini-linux
FROM binary-dummy AS tini-windows
FROM tini-${TARGETOS} AS tini
# rootlesskit
FROM base AS rootlesskit-src
WORKDIR /usr/src/rootlesskit
RUN git init . && git remote add origin "https://github.com/rootless-containers/rootlesskit.git"
# When updating, also update vendor.mod and hack/dockerfile/install/rootlesskit.installer accordingly.
ARG ROOTLESSKIT_VERSION=v2.0.2
RUN git fetch -q --depth 1 origin "${ROOTLESSKIT_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS rootlesskit-build
WORKDIR /go/src/github.com/rootless-containers/rootlesskit
ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-rootlesskit-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-rootlesskit-aptcache,target=/var/cache/apt \
apt-get update && xx-apt-get install -y --no-install-recommends \
gcc \
libc6-dev \
pkg-config
ENV GO111MODULE=on
ARG DOCKER_STATIC
RUN --mount=from=rootlesskit-src,src=/usr/src/rootlesskit,rw \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build,id=rootlesskit-build-$TARGETPLATFORM <<EOT
set -e
export CGO_ENABLED=$([ "$DOCKER_STATIC" = "1" ] && echo "0" || echo "1")
xx-go build -o /build/rootlesskit -ldflags="$([ "$DOCKER_STATIC" != "1" ] && echo "-linkmode=external")" ./cmd/rootlesskit
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /build/rootlesskit
xx-go build -o /build/rootlesskit-docker-proxy -ldflags="$([ "$DOCKER_STATIC" != "1" ] && echo "-linkmode=external")" ./cmd/rootlesskit-docker-proxy
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /build/rootlesskit-docker-proxy
EOT
COPY --link ./contrib/dockerd-rootless.sh /build/
COPY --link ./contrib/dockerd-rootless-setuptool.sh /build/
FROM rootlesskit-build AS rootlesskit-linux
FROM binary-dummy AS rootlesskit-windows
FROM rootlesskit-${TARGETOS} AS rootlesskit
FROM base AS crun
ARG CRUN_VERSION=1.12
RUN --mount=type=cache,sharing=locked,id=moby-crun-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-crun-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
autoconf \
automake \
build-essential \
libcap-dev \
libprotobuf-c-dev \
libseccomp-dev \
libsystemd-dev \
libtool \
libudev-dev \
libyajl-dev \
python3 \
;
RUN --mount=type=tmpfs,target=/tmp/crun-build \
git clone https://github.com/containers/crun.git /tmp/crun-build && \
cd /tmp/crun-build && \
git checkout -q "${CRUN_VERSION}" && \
./autogen.sh && \
./configure --bindir=/build && \
make -j install
# vpnkit
# use dummy scratch stage to avoid build to fail for unsupported platforms
FROM scratch AS vpnkit-windows
FROM scratch AS vpnkit-linux-386
FROM scratch AS vpnkit-linux-arm
FROM scratch AS vpnkit-linux-ppc64le
FROM scratch AS vpnkit-linux-riscv64
FROM scratch AS vpnkit-linux-s390x
FROM djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-linux-amd64
FROM djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-linux-arm64
FROM vpnkit-linux-${TARGETARCH} AS vpnkit-linux
FROM vpnkit-${TARGETOS} AS vpnkit
# containerutility
FROM base AS containerutil-src
WORKDIR /usr/src/containerutil
RUN git init . && git remote add origin "https://github.com/docker-archive/windows-container-utility.git"
ARG CONTAINERUTILITY_VERSION=aa1ba87e99b68e0113bd27ec26c60b88f9d4ccd9
RUN git fetch -q --depth 1 origin "${CONTAINERUTILITY_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS containerutil-build
WORKDIR /usr/src/containerutil
ARG TARGETPLATFORM
RUN xx-apt-get install -y --no-install-recommends \
gcc \
g++ \
libc6-dev \
pkg-config
RUN --mount=from=containerutil-src,src=/usr/src/containerutil,rw \
--mount=type=cache,target=/root/.cache/go-build,id=containerutil-build-$TARGETPLATFORM <<EOT
set -e
CC="$(xx-info)-gcc" CXX="$(xx-info)-g++" make
xx-verify --static containerutility.exe
mkdir /build
mv containerutility.exe /build/
EOT
FROM binary-dummy AS containerutil-linux
FROM containerutil-build AS containerutil-windows-amd64
FROM containerutil-windows-${TARGETARCH} AS containerutil-windows
FROM containerutil-${TARGETOS} AS containerutil
FROM docker/buildx-bin:${BUILDX_VERSION} as buildx
FROM docker/compose-bin:${COMPOSE_VERSION} as compose
FROM base AS dev-systemd-false
COPY --link --from=frozen-images /build/ /docker-frozen-images
COPY --link --from=swagger /build/ /usr/local/bin/
COPY --link --from=delve /build/ /usr/local/bin/
COPY --link --from=tomll /build/ /usr/local/bin/
COPY --link --from=gowinres /build/ /usr/local/bin/
COPY --link --from=tini /build/ /usr/local/bin/
COPY --link --from=registry /build/ /usr/local/bin/
COPY --link --from=registry-v2 /build/ /usr/local/bin/
# Skip the CRIU stage for now, as the opensuse package repository is sometimes
# unstable, and we're currently not using it in CI.
#
# FIXME(thaJeztah): re-enable this stage when https://github.com/moby/moby/issues/38963 is resolved (see https://github.com/moby/moby/pull/38984)
# COPY --link --from=criu /build/ /usr/local/bin/
COPY --link --from=gotestsum /build/ /usr/local/bin/
COPY --link --from=golangci_lint /build/ /usr/local/bin/
COPY --link --from=shfmt /build/ /usr/local/bin/
COPY --link --from=runc /build/ /usr/local/bin/
COPY --link --from=containerd /build/ /usr/local/bin/
COPY --link --from=rootlesskit /build/ /usr/local/bin/
COPY --link --from=vpnkit / /usr/local/bin/
COPY --link --from=containerutil /build/ /usr/local/bin/
COPY --link --from=crun /build/ /usr/local/bin/
COPY --link hack/dockerfile/etc/docker/ /etc/docker/
COPY --link --from=buildx /buildx /usr/local/libexec/docker/cli-plugins/docker-buildx
COPY --link --from=compose /docker-compose /usr/libexec/docker/cli-plugins/docker-compose
ENV PATH=/usr/local/cli:$PATH
ENV TEST_CLIENT_BINARY=/usr/local/cli-integration/docker
ENV CONTAINERD_ADDRESS=/run/docker/containerd/containerd.sock
ENV CONTAINERD_NAMESPACE=moby
WORKDIR /go/src/github.com/docker/docker
VOLUME /var/lib/docker
VOLUME /home/unprivilegeduser/.local/share/docker
# Wrap all commands in the "docker-in-docker" script to allow nested containers
ENTRYPOINT ["hack/dind"]
FROM dev-systemd-false AS dev-systemd-true
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
dbus \
dbus-user-session \
systemd \
systemd-sysv
ENTRYPOINT ["hack/dind-systemd"]
FROM dev-systemd-${SYSTEMD} AS dev-base
RUN groupadd -r docker RUN groupadd -r docker
RUN useradd --create-home --gid docker unprivilegeduser \ RUN useradd --create-home --gid docker unprivilegeduser
&& mkdir -p /home/unprivilegeduser/.local/share/docker \
&& chown -R unprivilegeduser /home/unprivilegeduser
# Let us use a .bashrc file # Let us use a .bashrc file
RUN ln -sfv /go/src/github.com/docker/docker/.bashrc ~/.bashrc RUN ln -sfv /go/src/github.com/docker/docker/.bashrc ~/.bashrc
# Activate bash completion and include Docker's completion if mounted with DOCKER_BASH_COMPLETION_PATH # Activate bash completion and include Docker's completion if mounted with DOCKER_BASH_COMPLETION_PATH
RUN echo "source /usr/share/bash-completion/bash_completion" >> /etc/bash.bashrc RUN echo "source /usr/share/bash-completion/bash_completion" >> /etc/bash.bashrc
RUN ln -s /usr/local/completion/bash/docker /etc/bash_completion.d/docker RUN ln -s /usr/local/completion/bash/docker /etc/bash_completion.d/docker
RUN ldconfig RUN ldconfig
# Set dev environment as safe git directory to prevent "dubious ownership" errors
# when bind-mounting the source into the dev-container. See https://github.com/moby/moby/pull/44930
RUN git config --global --add safe.directory $GOPATH/src/github.com/docker/docker
# This should only install packages that are specifically needed for the dev environment and nothing else # This should only install packages that are specifically needed for the dev environment and nothing else
# Do you really need to add another package here? Can it be done in a different build stage? # Do you really need to add another package here? Can it be done in a different build stage?
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \ RUN apt-get update && apt-get install -y --no-install-recommends \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
apparmor \ apparmor \
aufs-tools \
bash-completion \ bash-completion \
binutils-mingw-w64 \
libbtrfs-dev \
bzip2 \ bzip2 \
inetutils-ping \ g++-mingw-w64-x86-64 \
iproute2 \
iptables \ iptables \
jq \ jq \
libcap2-bin \ libcap2-bin \
libdevmapper-dev \
libnet1 \ libnet1 \
libnl-3-200 \ libnl-3-200 \
libprotobuf-c1 \ libprotobuf-c1 \
libyajl2 \ libsystemd-dev \
libudev-dev \
net-tools \ net-tools \
patch \
pigz \ pigz \
sudo \ python3-pip \
systemd-journal-remote \ python3-setuptools \
python3-wheel \
thin-provisioning-tools \ thin-provisioning-tools \
uidmap \
vim \ vim \
vim-common \ vim-common \
xfsprogs \ xfsprogs \
xz-utils \ xz-utils \
zip \ zip \
zstd && rm -rf /var/lib/apt/lists/*
# Switch to use iptables instead of nftables (to match the CI hosts)
# TODO use some kind of runtime auto-detection instead if/when nftables is supported (https://github.com/moby/moby/issues/26824) # Switch to use iptables instead of nftables (to match the host machine)
RUN update-alternatives --set iptables /usr/sbin/iptables-legacy || true \ RUN update-alternatives --set iptables /usr/sbin/iptables-legacy || true \
&& update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy || true \ && update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy || true \
&& update-alternatives --set arptables /usr/sbin/arptables-legacy || true && update-alternatives --set arptables /usr/sbin/arptables-legacy || true
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install --no-install-recommends -y \
gcc \
pkg-config \
dpkg-dev \
libapparmor-dev \
libseccomp-dev \
libsecret-1-dev \
libsystemd-dev \
libudev-dev \
yamllint
COPY --link --from=dockercli /build/ /usr/local/cli
COPY --link --from=dockercli-integration /build/ /usr/local/cli-integration
FROM base AS build RUN pip3 install yamllint==1.16.0
COPY --from=gowinres /build/ /usr/local/bin/
COPY --from=dockercli /build/ /usr/local/cli
COPY --from=frozen-images /build/ /docker-frozen-images
COPY --from=swagger /build/ /usr/local/bin/
COPY --from=tomlv /build/ /usr/local/bin/
COPY --from=tini /build/ /usr/local/bin/
COPY --from=registry /build/ /usr/local/bin/
COPY --from=criu /build/ /usr/local/
COPY --from=vndr /build/ /usr/local/bin/
COPY --from=gotestsum /build/ /usr/local/bin/
COPY --from=gometalinter /build/ /usr/local/bin/
COPY --from=runc /build/ /usr/local/bin/
COPY --from=containerd /build/ /usr/local/bin/
COPY --from=rootlesskit /build/ /usr/local/bin/
COPY --from=vpnkit /vpnkit /usr/local/bin/vpnkit.x86_64
COPY --from=proxy /build/ /usr/local/bin/
ENV PATH=/usr/local/cli:$PATH
ENV DOCKER_BUILDTAGS apparmor seccomp selinux
WORKDIR /go/src/github.com/docker/docker WORKDIR /go/src/github.com/docker/docker
ENV GO111MODULE=off VOLUME /var/lib/docker
ENV CGO_ENABLED=1 # Wrap all commands in the "docker-in-docker" script to allow nested containers
RUN --mount=type=cache,sharing=locked,id=moby-build-aptlib,target=/var/lib/apt \ ENTRYPOINT ["hack/dind"]
--mount=type=cache,sharing=locked,id=moby-build-aptcache,target=/var/cache/apt \
apt-get update && apt-get install --no-install-recommends -y \
clang \
lld \
llvm
ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-build-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-build-aptcache,target=/var/cache/apt \
xx-apt-get install --no-install-recommends -y \
dpkg-dev \
gcc \
libapparmor-dev \
libc6-dev \
libseccomp-dev \
libsecret-1-dev \
libsystemd-dev \
libudev-dev \
pkg-config
ARG DOCKER_BUILDTAGS
ARG DOCKER_DEBUG
ARG DOCKER_GITCOMMIT=HEAD
ARG DOCKER_LDFLAGS
ARG DOCKER_STATIC
ARG VERSION
ARG PLATFORM
ARG PRODUCT
ARG DEFAULT_PRODUCT_LICENSE
ARG PACKAGER_NAME
# PREFIX overrides DEST dir in make.sh script otherwise it fails because of
# read only mount in current work dir
ENV PREFIX=/tmp
RUN <<EOT
# in bullseye arm64 target does not link with lld so configure it to use ld instead
if [ "$(xx-info arch)" = "arm64" ]; then
XX_CC_PREFER_LINKER=ld xx-clang --setup-target-triple
fi
EOT
RUN --mount=type=bind,target=.,rw \
--mount=type=tmpfs,target=cli/winresources/dockerd \
--mount=type=tmpfs,target=cli/winresources/docker-proxy \
--mount=type=cache,target=/root/.cache/go-build,id=moby-build-$TARGETPLATFORM <<EOT
set -e
target=$([ "$DOCKER_STATIC" = "1" ] && echo "binary" || echo "dynbinary")
xx-go --wrap
PKG_CONFIG=$(xx-go env PKG_CONFIG) ./hack/make.sh $target
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /tmp/bundles/${target}-daemon/dockerd$([ "$(xx-info os)" = "windows" ] && echo ".exe")
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /tmp/bundles/${target}-daemon/docker-proxy$([ "$(xx-info os)" = "windows" ] && echo ".exe")
mkdir /build
mv /tmp/bundles/${target}-daemon/* /build/
EOT
# usage: FROM dev AS final
# > docker buildx bake binary # Upload docker source
# > DOCKER_STATIC=0 docker buildx bake binary COPY . /go/src/github.com/docker/docker
# or
# > make binary
# > make dynbinary
FROM scratch AS binary
COPY --from=build /build/ /
# usage:
# > docker buildx bake all
FROM scratch AS all
COPY --link --from=tini /build/ /
COPY --link --from=runc /build/ /
COPY --link --from=containerd /build/ /
COPY --link --from=rootlesskit /build/ /
COPY --link --from=containerutil /build/ /
COPY --link --from=vpnkit / /
COPY --link --from=build /build /
# smoke tests
# usage:
# > docker buildx bake binary-smoketest
FROM --platform=$TARGETPLATFORM base AS smoketest
WORKDIR /usr/local/bin
COPY --from=build /build .
RUN <<EOT
set -ex
file dockerd
dockerd --version
file docker-proxy
docker-proxy --version
EOT
# devcontainer is a stage used by .devcontainer/devcontainer.json
FROM dev-base AS devcontainer
COPY --link . .
COPY --link --from=gopls /build/ /usr/local/bin/
# usage:
# > make shell
# > SYSTEMD=true make shell
FROM dev-base AS dev
COPY --link . .
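The Dockerfile ends here. Its own comments (in both the old and the new version shown above) describe how it is meant to be driven; a consolidated, hedged sketch of those invocations follows. The exact output paths and Makefile wiring are not visible in this diff, so treat these as the documented entry points rather than a verified recipe:

```
# Older workflow, from the comment block at the top of the previous Dockerfile:
make BIND_DIR=. shell                       # build the dev image and start a container
hack/make.sh binary                         # inside the container: build dockerd
hack/test/unit                              # run unit tests
hack/make.sh binary test-integration test-docker-py

# Newer workflow, from the usage comments in the current Dockerfile:
docker buildx bake binary                   # static binaries via the "binary" stage
DOCKER_STATIC=0 docker buildx bake binary   # dynamically linked variant
make binary                                 # make wrappers around the same targets
make dynbinary
make shell                                  # dev container (SYSTEMD=true make shell for systemd)
```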

84
Dockerfile.e2e Normal file
View file

@ -0,0 +1,84 @@
ARG GO_VERSION=1.13.15
FROM golang:${GO_VERSION}-alpine AS base
ENV GO111MODULE=off
RUN apk --no-cache add \
bash \
btrfs-progs-dev \
build-base \
curl \
lvm2-dev \
jq
RUN mkdir -p /build/
RUN mkdir -p /go/src/github.com/docker/docker/
WORKDIR /go/src/github.com/docker/docker/
FROM base AS frozen-images
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /
RUN /download-frozen-image-v2.sh /build \
buildpack-deps:jessie@sha256:dd86dced7c9cd2a724e779730f0a53f93b7ef42228d4344b25ce9a42a1486251 \
busybox:latest@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 \
busybox:glibc@sha256:0b55a30394294ab23b9afd58fab94e61a923f5834fba7ddbae7f8e0c11ba85e6 \
debian:jessie@sha256:287a20c5f73087ab406e6b364833e3fb7b3ae63ca0eb3486555dc27ed32c6e60 \
hello-world:latest@sha256:be0cd392e45be79ffeffa6b05338b98ebb16c87b255f48e297ec7f98e123905c
# See also ensureFrozenImagesLinux() in "integration-cli/fixtures_linux_daemon_test.go" (which needs to be updated when adding images to this list)
FROM base AS dockercli
ENV INSTALL_BINARY_NAME=dockercli
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
# Build DockerSuite.TestBuild* dependency
FROM base AS contrib
COPY contrib/syscall-test /build/syscall-test
COPY contrib/httpserver/Dockerfile /build/httpserver/Dockerfile
COPY contrib/httpserver contrib/httpserver
RUN CGO_ENABLED=0 go build -buildmode=pie -o /build/httpserver/httpserver github.com/docker/docker/contrib/httpserver
# Build the integration tests and copy the resulting binaries to /build/tests
FROM base AS builder
# Set tag and add sources
COPY . .
# Copy test sources so that tests that use assert can print errors
RUN mkdir -p /build${PWD} && find integration integration-cli -name \*_test.go -exec cp --parents '{}' /build${PWD} \;
# Build and install test binaries
ARG DOCKER_GITCOMMIT=undefined
RUN hack/make.sh build-integration-test-binary
RUN mkdir -p /build/tests && find . -name test.main -exec cp --parents '{}' /build/tests \;
## Generate testing image
FROM alpine:3.10 as runner
ENV DOCKER_REMOTE_DAEMON=1
ENV DOCKER_INTEGRATION_DAEMON_DEST=/
ENTRYPOINT ["/scripts/run.sh"]
# Add an unprivileged user to be used for tests which need it
RUN addgroup docker && adduser -D -G docker unprivilegeduser -s /bin/ash
# GNU tar is used for generating the emptyfs image
RUN apk --no-cache add \
bash \
ca-certificates \
g++ \
git \
iptables \
pigz \
tar \
xz
COPY hack/test/e2e-run.sh /scripts/run.sh
COPY hack/make/.ensure-emptyfs /scripts/ensure-emptyfs.sh
COPY integration/testdata /tests/integration/testdata
COPY integration/build/testdata /tests/integration/build/testdata
COPY integration-cli/fixtures /tests/integration-cli/fixtures
COPY --from=frozen-images /build/ /docker-frozen-images
COPY --from=dockercli /build/ /usr/bin/
COPY --from=contrib /build/ /tests/contrib/
COPY --from=builder /build/ /
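Dockerfile.e2e above bundles the pre-built integration test binaries, fixtures, frozen images, and a docker CLI into a small Alpine runner whose entrypoint is `hack/test/e2e-run.sh`. A hedged sketch of how such an image might be built and pointed at a daemon under test; the image tag and the `DOCKER_HOST` address are illustrative assumptions, not values defined by this file:

```
# Build the e2e runner image (tag is an arbitrary example)
docker build -f Dockerfile.e2e -t moby-e2e-test .

# Run the packaged tests against an already-running daemon; the address below
# is an assumed example for the daemon under test.
docker run --rm -e DOCKER_HOST=tcp://172.17.0.1:2375 moby-e2e-test
```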

View file

@ -5,24 +5,27 @@
# This represents the bare minimum required to build and test Docker. # This represents the bare minimum required to build and test Docker.
ARG GO_VERSION=1.21.9 ARG GO_VERSION=1.13.15
ARG BASE_DEBIAN_DISTRO="bookworm" FROM golang:${GO_VERSION}-stretch
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
FROM ${GOLANG_IMAGE}
ENV GO111MODULE=off ENV GO111MODULE=off
ENV GOTOOLCHAIN=local
# allow replacing httpredir or deb mirror
ARG APT_MIRROR=deb.debian.org
RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
# Compile and runtime deps # Compile and runtime deps
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#build-dependencies # https://github.com/docker/docker/blob/master/project/PACKAGERS.md#build-dependencies
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies # https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \ RUN apt-get update && apt-get install -y --no-install-recommends \
btrfs-tools \
build-essential \ build-essential \
curl \ curl \
cmake \ cmake \
gcc \
git \ git \
libapparmor-dev \ libapparmor-dev \
libdevmapper-dev \
libseccomp-dev \ libseccomp-dev \
ca-certificates \ ca-certificates \
e2fsprogs \ e2fsprogs \
@ -33,6 +36,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
xfsprogs \ xfsprogs \
xz-utils \ xz-utils \
\ \
aufs-tools \
vim-common \ vim-common \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/*
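The hunk above comes from the minimal build Dockerfile (its comment calls it "the bare minimum required to build and test Docker"; the compare view does not show the file name, but this matches Dockerfile.simple). A hedged sketch of a typical invocation under that assumption; the tag, bind mount, and `--privileged` flag are illustrative:

```
# Assuming the file is Dockerfile.simple:
docker build -t docker-dev-simple -f Dockerfile.simple .

# Build binaries inside it using the repository's hack scripts:
docker run --rm --privileged \
    -v "$(pwd)":/go/src/github.com/docker/docker \
    docker-dev-simple hack/make.sh binary
```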

View file

@ -154,31 +154,27 @@
# The number of build steps below are explicitly minimised to improve performance. # The number of build steps below are explicitly minimised to improve performance.
ARG WINDOWS_BASE_IMAGE=mcr.microsoft.com/windows/servercore # Extremely important - do not change the following line to reference a "specific" image,
ARG WINDOWS_BASE_IMAGE_TAG=ltsc2022 # such as `mcr.microsoft.com/windows/servercore:ltsc2019`. If using this Dockerfile in process
FROM ${WINDOWS_BASE_IMAGE}:${WINDOWS_BASE_IMAGE_TAG} # isolated containers, the kernel of the host must match the container image, and hence
# would fail between Windows Server 2016 (aka RS1) and Windows Server 2019 (aka RS5).
# It is expected that the image `microsoft/windowsservercore:latest` is present, and matches
# the host's kernel version before doing a build.
FROM microsoft/windowsservercore
# Use PowerShell as the default shell # Use PowerShell as the default shell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"] SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ARG GO_VERSION=1.21.9 ARG GO_VERSION=1.13.15
ARG GOTESTSUM_VERSION=v1.8.2
ARG GOWINRES_VERSION=v0.3.1
ARG CONTAINERD_VERSION=v1.7.15
# Environment variable notes: # Environment variable notes:
# - GO_VERSION must be consistent with 'Dockerfile' used by Linux. # - GO_VERSION must be consistent with 'Dockerfile' used by Linux.
# - CONTAINERD_VERSION must be consistent with 'hack/dockerfile/install/containerd.installer' used by Linux.
# - FROM_DOCKERFILE is used for detection of building within a container. # - FROM_DOCKERFILE is used for detection of building within a container.
ENV GO_VERSION=${GO_VERSION} ` ENV GO_VERSION=${GO_VERSION} `
CONTAINERD_VERSION=${CONTAINERD_VERSION} `
GIT_VERSION=2.11.1 ` GIT_VERSION=2.11.1 `
GOPATH=C:\gopath ` GOPATH=C:\gopath `
GO111MODULE=off ` GO111MODULE=off `
GOTOOLCHAIN=local ` FROM_DOCKERFILE=1
FROM_DOCKERFILE=1 `
GOTESTSUM_VERSION=${GOTESTSUM_VERSION} `
GOWINRES_VERSION=${GOWINRES_VERSION}
RUN ` RUN `
Function Test-Nano() { ` Function Test-Nano() { `
@ -207,21 +203,20 @@ RUN `
Throw ("Failed to download " + $source) ` Throw ("Failed to download " + $source) `
}` }`
} else { ` } else { `
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; `
$webClient = New-Object System.Net.WebClient; ` $webClient = New-Object System.Net.WebClient; `
$webClient.DownloadFile($source, $target); ` $webClient.DownloadFile($source, $target); `
} ` } `
} ` } `
` `
setx /M PATH $('C:\git\cmd;C:\git\usr\bin;'+$Env:PATH+';C:\gcc\bin;C:\go\bin;C:\containerd\bin'); ` setx /M PATH $('C:\git\cmd;C:\git\usr\bin;'+$Env:PATH+';C:\gcc\bin;C:\go\bin'); `
` `
Write-Host INFO: Downloading git...; ` Write-Host INFO: Downloading git...; `
$location='https://www.nuget.org/api/v2/package/GitForWindows/'+$Env:GIT_VERSION; ` $location='https://www.nuget.org/api/v2/package/GitForWindows/'+$Env:GIT_VERSION; `
Download-File $location C:\gitsetup.zip; ` Download-File $location C:\gitsetup.zip; `
` `
Write-Host INFO: Downloading go...; ` Write-Host INFO: Downloading go...; `
$dlGoVersion=$Env:GO_VERSION; ` $dlGoVersion=$Env:GO_VERSION -replace '\.0$',''; `
Download-File "https://go.dev/dl/go${dlGoVersion}.windows-amd64.zip" C:\go.zip; ` Download-File "https://golang.org/dl/go${dlGoVersion}.windows-amd64.zip" C:\go.zip; `
` `
Write-Host INFO: Downloading compiler 1 of 3...; ` Write-Host INFO: Downloading compiler 1 of 3...; `
Download-File https://raw.githubusercontent.com/moby/docker-tdmgcc/master/gcc.zip C:\gcc.zip; ` Download-File https://raw.githubusercontent.com/moby/docker-tdmgcc/master/gcc.zip C:\gcc.zip; `
@ -254,55 +249,13 @@ RUN `
Remove-Item C:\binutils.zip; `
Remove-Item C:\gitsetup.zip; `
`
Write-Host INFO: Downloading containerd; `
Write-Host INFO: Creating source directory...; `
Install-Package -Force 7Zip4PowerShell; `
New-Item -ItemType Directory -Path ${GOPATH}\src\github.com\docker\docker | Out-Null; `
$location='https://github.com/containerd/containerd/releases/download/'+$Env:CONTAINERD_VERSION+'/containerd-'+$Env:CONTAINERD_VERSION.TrimStart('v')+'-windows-amd64.tar.gz'; `
Download-File $location C:\containerd.tar.gz; `
New-Item -Path C:\containerd -ItemType Directory; `
Expand-7Zip C:\containerd.tar.gz C:\; `
Expand-7Zip C:\containerd.tar C:\containerd; `
Remove-Item C:\containerd.tar.gz; `
Remove-Item C:\containerd.tar; `
`
# Ensure all directories exist that we will require below....
$srcDir = """$Env:GOPATH`\src\github.com\docker\docker\bundles"""; `
Write-Host INFO: Ensuring existence of directory $srcDir...; `
New-Item -Force -ItemType Directory -Path $srcDir | Out-Null; `
`
Write-Host INFO: Configuring git core.autocrlf...; `
C:\git\cmd\git config --global core.autocrlf true;
C:\git\cmd\git config --global core.autocrlf true; `
RUN `
Function Install-GoTestSum() { `
$Env:GO111MODULE = 'on'; `
$tmpGobin = "${Env:GOBIN_TMP}"; `
$Env:GOBIN = """${Env:GOPATH}`\bin"""; `
Write-Host "INFO: Installing gotestsum version $Env:GOTESTSUM_VERSION in $Env:GOBIN"; `
&go install "gotest.tools/gotestsum@${Env:GOTESTSUM_VERSION}"; `
$Env:GOBIN = "${tmpGobin}"; `
$Env:GO111MODULE = 'off'; `
if ($LASTEXITCODE -ne 0) { `
Throw '"gotestsum install failed..."'; `
} `
} `
`
Install-GoTestSum
Write-Host INFO: Completed
RUN `
Function Install-GoWinres() { `
$Env:GO111MODULE = 'on'; `
$tmpGobin = "${Env:GOBIN_TMP}"; `
$Env:GOBIN = """${Env:GOPATH}`\bin"""; `
Write-Host "INFO: Installing go-winres version $Env:GOWINRES_VERSION in $Env:GOBIN"; `
&go install "github.com/tc-hib/go-winres@${Env:GOWINRES_VERSION}"; `
$Env:GOBIN = "${tmpGobin}"; `
$Env:GO111MODULE = 'off'; `
if ($LASTEXITCODE -ne 0) { `
Throw '"go-winres install failed..."'; `
} `
} `
`
Install-GoWinres
# Make PowerShell the default entrypoint
ENTRYPOINT ["powershell.exe"]

Jenkinsfile

@ -8,14 +8,20 @@ pipeline {
timestamps()
}
parameters {
booleanParam(name: 'arm64', defaultValue: true, description: 'ARM (arm64) Build/Test')
booleanParam(name: 'unit_validate', defaultValue: true, description: 'amd64 (x86_64) unit tests and vendor check')
booleanParam(name: 'dco', defaultValue: true, description: 'Run the DCO check')
booleanParam(name: 'amd64', defaultValue: true, description: 'amd64 (x86_64) Build/Test')
booleanParam(name: 's390x', defaultValue: true, description: 'IBM Z (s390x) Build/Test')
booleanParam(name: 'ppc64le', defaultValue: true, description: 'PowerPC (ppc64le) Build/Test')
booleanParam(name: 'windowsRS1', defaultValue: false, description: 'Windows 2016 (RS1) Build/Test')
booleanParam(name: 'windowsRS5', defaultValue: true, description: 'Windows 2019 (RS5) Build/Test')
booleanParam(name: 'skip_dco', defaultValue: false, description: 'Skip the DCO check')
}
environment {
DOCKER_BUILDKIT = '1'
DOCKER_EXPERIMENTAL = '1'
DOCKER_GRAPHDRIVER = 'overlay2'
CHECK_CONFIG_COMMIT = '33a3680e08d1007e72c3b3f1454f823d8e9948ee'
APT_MIRROR = 'cdn-fastly.deb.debian.org'
CHECK_CONFIG_COMMIT = '78405559cfe5987174aa2cb6463b9b2c1b917255'
TESTDEBUG = '0'
TIMEOUT = '120m'
}
@ -34,30 +40,27 @@ pipeline {
stage('DCO-check') {
when {
beforeAgent true
expression { params.dco }
expression { !params.skip_dco }
}
agent { label 'arm64 && ubuntu-2004' }
agent { label 'amd64 && ubuntu-1804 && overlay2' }
steps {
sh '''
docker run --rm \
-v "$WORKSPACE:/workspace" \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
alpine sh -c 'apk add --no-cache -q bash git openssh-client && git config --system --add safe.directory /workspace && cd /workspace && hack/validate/dco'
alpine sh -c 'apk add --no-cache -q bash git openssh-client && cd /workspace && hack/validate/dco'
'''
}
}
stage('Build') {
parallel {
stage('arm64') {
stage('unit-validate') {
when {
beforeAgent true
expression { params.arm64 }
expression { params.unit_validate }
}
agent { label 'arm64 && ubuntu-2004' }
environment {
TEST_SKIP_INTEGRATION_CLI = '1'
}
agent { label 'amd64 && ubuntu-1804 && overlay2' }
stages {
stage("Print info") {
@ -73,14 +76,98 @@ pipeline {
}
stage("Build dev image") {
steps {
sh 'docker build --force-rm -t docker:${GIT_COMMIT} .'
sh 'docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .'
}
}
stage("Unit tests") {
stage("Validate") {
steps {
sh '''
sudo modprobe ip6table_filter
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/validate/default
'''
}
}
stage("Docker-py") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary-daemon \
test-docker-py
'''
}
post {
always {
junit testResults: 'bundles/test-docker-py/junit-report.xml', allowEmptyResults: true
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo 'Chowning /workspace to jenkins user'
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=docker-py
echo "Creating ${bundleName}-bundles.tar.gz"
tar -czf ${bundleName}-bundles.tar.gz bundles/test-docker-py/*.xml bundles/test-docker-py/*.log
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
}
}
stage("Static") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
docker:${GIT_COMMIT} \
hack/make.sh binary-daemon
'''
}
}
stage("Cross") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
docker:${GIT_COMMIT} \
hack/make.sh cross
'''
}
}
// needs to be last stage that calls make.sh for the junit report to work
stage("Unit tests") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
@ -96,7 +183,237 @@ pipeline {
}
post {
always {
junit testResults: 'bundles/junit-report*.xml', allowEmptyResults: true
junit testResults: 'bundles/junit-report.xml', allowEmptyResults: true
}
}
}
stage("Validate vendor") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/validate/vendor
'''
}
}
stage("Build e2e image") {
steps {
sh '''
echo "Building e2e image"
docker build --build-arg DOCKER_GITCOMMIT=${GIT_COMMIT} -t moby-e2e-test -f Dockerfile.e2e .
'''
}
}
}
post {
always {
sh '''
echo 'Ensuring container killed.'
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo 'Chowning /workspace to jenkins user'
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=unit
echo "Creating ${bundleName}-bundles.tar.gz"
tar -czvf ${bundleName}-bundles.tar.gz bundles/junit-report.xml bundles/go-test-report.json bundles/profile.out
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('amd64') {
when {
beforeAgent true
expression { params.amd64 }
}
agent { label 'amd64 && ubuntu-1804 && overlay2' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh '''
# todo: include ip_vs in base image
sudo modprobe ip_vs
docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
'''
}
}
stage("Run tests") {
steps {
sh '''#!/bin/bash
# bash is needed so 'jobs -p' works properly
# it also accepts setting inline envvars for functions without explicitly exporting
set -x
run_tests() {
[ -n "$TESTDEBUG" ] && rm= || rm=--rm;
docker run $rm -t --privileged \
-v "$WORKSPACE/bundles/${TEST_INTEGRATION_DEST}:/go/src/github.com/docker/docker/bundles" \
-v "$WORKSPACE/bundles/dynbinary-daemon:/go/src/github.com/docker/docker/bundles/dynbinary-daemon" \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name "$CONTAINER_NAME" \
-e KEEPBUNDLE=1 \
-e TESTDEBUG \
-e TESTFLAGS \
-e TEST_SKIP_INTEGRATION \
-e TEST_SKIP_INTEGRATION_CLI \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
"$1" \
test-integration
}
trap "exit" INT TERM
trap 'pids=$(jobs -p); echo "Remaining pids to kill: [$pids]"; [ -z "$pids" ] || kill $pids' EXIT
CONTAINER_NAME=docker-pr$BUILD_NUMBER
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name ${CONTAINER_NAME}-build \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary-daemon
# flaky + integration
TEST_INTEGRATION_DEST=1 CONTAINER_NAME=${CONTAINER_NAME}-1 TEST_SKIP_INTEGRATION_CLI=1 run_tests test-integration-flaky &
# integration-cli first set
TEST_INTEGRATION_DEST=2 CONTAINER_NAME=${CONTAINER_NAME}-2 TEST_SKIP_INTEGRATION=1 TESTFLAGS="-test.run Test(DockerSuite|DockerNetworkSuite|DockerHubPullSuite|DockerRegistrySuite|DockerSchema1RegistrySuite|DockerRegistryAuthTokenSuite|DockerRegistryAuthHtpasswdSuite)/" run_tests &
# integration-cli second set
TEST_INTEGRATION_DEST=3 CONTAINER_NAME=${CONTAINER_NAME}-3 TEST_SKIP_INTEGRATION=1 TESTFLAGS="-test.run Test(DockerSwarmSuite|DockerDaemonSuite|DockerExternalVolumeSuite)/" run_tests &
c=0
for job in $(jobs -p); do
wait ${job} || c=$?
done
exit $c
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
cids=$(docker ps -aq -f name=docker-pr${BUILD_NUMBER}-*)
[ -n "$cids" ] && docker rm -vf $cids || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=amd64
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('s390x') {
when {
beforeAgent true
expression { params.s390x }
}
agent { label 's390x-ubuntu-1804' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh '''
docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
'''
}
}
stage("Unit tests") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/test/unit
'''
}
post {
always {
junit testResults: 'bundles/junit-report.xml', allowEmptyResults: true
}
}
}
@ -111,7 +428,6 @@ pipeline {
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TESTDEBUG \
-e TEST_INTEGRATION_USE_SNAPSHOTTER \
-e TEST_SKIP_INTEGRATION_CLI \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
@ -144,7 +460,7 @@ pipeline {
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=arm64-integration
bundleName=s390x-integration
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
@ -159,6 +475,402 @@ pipeline {
}
}
}
stage('s390x integration-cli') {
when {
beforeAgent true
not { changeRequest() }
expression { params.s390x }
}
agent { label 's390x-ubuntu-1804' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh '''
docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
'''
}
}
stage("Integration-cli tests") {
environment { TEST_SKIP_INTEGRATION = '1' }
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TEST_SKIP_INTEGRATION \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=s390x-integration-cli
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('ppc64le') {
when {
beforeAgent true
expression { params.ppc64le }
}
agent { label 'ppc64le-ubuntu-1604' }
// ppc64le machines run on Docker 18.06, and buildkit has some bugs on that version
environment { DOCKER_BUILDKIT = '0' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh 'docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .'
}
}
stage("Unit tests") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/test/unit
'''
}
post {
always {
junit testResults: 'bundles/junit-report.xml', allowEmptyResults: true
}
}
}
stage("Integration tests") {
environment { TEST_SKIP_INTEGRATION_CLI = '1' }
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TESTDEBUG \
-e TEST_SKIP_INTEGRATION_CLI \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=ppc64le-integration
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('ppc64le integration-cli') {
when {
beforeAgent true
not { changeRequest() }
expression { params.ppc64le }
}
agent { label 'ppc64le-ubuntu-1604' }
// ppc64le machines run on Docker 18.06, and buildkit has some bugs on that version
environment { DOCKER_BUILDKIT = '0' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh 'docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .'
}
}
stage("Integration-cli tests") {
environment { TEST_SKIP_INTEGRATION = '1' }
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TEST_SKIP_INTEGRATION \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=ppc64le-integration-cli
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('win-RS1') {
when {
beforeAgent true
// Skip this stage on PRs unless the windowsRS1 checkbox is selected
anyOf {
not { changeRequest() }
expression { params.windowsRS1 }
}
}
environment {
DOCKER_BUILDKIT = '0'
DOCKER_DUT_DEBUG = '1'
SKIP_VALIDATION_TESTS = '1'
SOURCES_DRIVE = 'd'
SOURCES_SUBDIR = 'gopath'
TESTRUN_DRIVE = 'd'
TESTRUN_SUBDIR = "CI"
WINDOWS_BASE_IMAGE = 'mcr.microsoft.com/windows/servercore'
WINDOWS_BASE_IMAGE_TAG = 'ltsc2016'
}
agent {
node {
customWorkspace 'd:\\gopath\\src\\github.com\\docker\\docker'
label 'windows-2016'
}
}
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
}
}
stage("Run tests") {
steps {
powershell '''
$ErrorActionPreference = 'Stop'
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Invoke-WebRequest https://github.com/moby/docker-ci-zap/blob/master/docker-ci-zap.exe?raw=true -OutFile C:/Windows/System32/docker-ci-zap.exe
./hack/ci/windows.ps1
exit $LastExitCode
'''
}
}
}
post {
always {
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
powershell '''
$bundleName="windowsRS1-integration"
Write-Host -ForegroundColor Green "Creating ${bundleName}-bundles.zip"
# archiveArtifacts does not support env-vars to , so save the artifacts in a fixed location
Compress-Archive -Path "${env:TEMP}/CIDUT.out", "${env:TEMP}/CIDUT.err" -CompressionLevel Optimal -DestinationPath "${bundleName}-bundles.zip"
'''
archiveArtifacts artifacts: '*-bundles.zip', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('win-RS5') {
when {
beforeAgent true
expression { params.windowsRS5 }
}
environment {
DOCKER_BUILDKIT = '0'
DOCKER_DUT_DEBUG = '1'
SKIP_VALIDATION_TESTS = '1'
SOURCES_DRIVE = 'd'
SOURCES_SUBDIR = 'gopath'
TESTRUN_DRIVE = 'd'
TESTRUN_SUBDIR = "CI"
WINDOWS_BASE_IMAGE = 'mcr.microsoft.com/windows/servercore'
WINDOWS_BASE_IMAGE_TAG = 'ltsc2019'
}
agent {
node {
customWorkspace 'd:\\gopath\\src\\github.com\\docker\\docker'
label 'windows-2019'
}
}
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
}
}
stage("Run tests") {
steps {
powershell '''
$ErrorActionPreference = 'Stop'
Invoke-WebRequest https://github.com/moby/docker-ci-zap/blob/master/docker-ci-zap.exe?raw=true -OutFile C:/Windows/System32/docker-ci-zap.exe
./hack/ci/windows.ps1
exit $LastExitCode
'''
}
}
}
post {
always {
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
powershell '''
$bundleName="windowsRS5-integration"
Write-Host -ForegroundColor Green "Creating ${bundleName}-bundles.zip"
# archiveArtifacts does not support env-vars to , so save the artifacts in a fixed location
Compress-Archive -Path "${env:TEMP}/CIDUT.out", "${env:TEMP}/CIDUT.err" -CompressionLevel Optimal -DestinationPath "${bundleName}-bundles.zip"
'''
archiveArtifacts artifacts: '*-bundles.zip', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
}
}
}


@ -24,23 +24,22 @@
# subsystem maintainers accountable. If ownership is unclear, they are the de facto owners.
people = [
"akerouanton",
"aaronlehmann",
"akihirosuda",
"anusha",
"coolljt0725",
"corhere",
"cpuguy83",
"crazy-max",
"crosbymichael",
"dnephin",
"duglin",
"estesp",
"jhowardmsft",
"johnstep",
"justincormack",
"kolyshkin",
"laurazard",
"mhbauer",
"neersighted",
"mlaventure",
"rumpl",
"runcom",
"samuelkarp",
"stevvooe",
"thajeztah",
"tianon",
@ -49,7 +48,6 @@
"unclejack",
"vdemeester",
"vieux",
"vvoland",
"yongtang"
]
@ -68,16 +66,14 @@
people = [
"alexellis",
"andrewhsu",
"bsousaa",
"anonymuse",
"dmcgowan",
"chanwit",
"fntlnz",
"gianarb",
"olljanat",
"programmerq",
"rheinwein",
"ripcurld",
"robmry",
"sam-thibault",
"samwhited",
"thajeztah"
]
@ -88,12 +84,6 @@
# Thank you! # Thank you!
people = [ people = [
# Aaron Lehmann was a maintainer for swarmkit, the registry, and the engine,
# and contributed many improvements, features, and bugfixes in those areas,
# among which "automated service rollbacks", templated secrets and configs,
# and resumable image layer downloads.
"aaronlehmann",
# Harald Albers is the mastermind behind the bash completion scripts for the # Harald Albers is the mastermind behind the bash completion scripts for the
# Docker CLI. The completion scripts moved to the Docker CLI repository, so # Docker CLI. The completion scripts moved to the Docker CLI repository, so
# you can now find him perform his magic in the https://github.com/docker/cli repository. # you can now find him perform his magic in the https://github.com/docker/cli repository.
@ -113,30 +103,6 @@
# and tweets as @calavera. # and tweets as @calavera.
"calavera", "calavera",
# Michael Crosby was "chief maintainer" of the Docker project.
# During his time as a maintainer, Michael contributed to many
# milestones of the project; he was release captain of Docker v1.0.0,
# started the development of "libcontainer" (what later became runc)
# and containerd, as well as demoing cool hacks such as live migrating
# a game server container with checkpoint/restore.
#
# Michael is currently a maintainer of containerd, but you may see
# him around in other projects on GitHub.
"crosbymichael",
# Before becoming a maintainer, Daniel Nephin was a core contributor
# to "Fig" (now known as Docker Compose). As a maintainer for both the
# Engine and Docker CLI, Daniel contributed many features, among which
# the `docker stack` commands, allowing users to deploy their Docker
# Compose projects as a Swarm service.
"dnephin",
# Doug Davis contributed many features and fixes for the classic builder,
# such as "wildcard" copy, the dockerignore file, custom paths/names
# for the Dockerfile, as well as enhancements to the API and documentation.
# Follow Doug on Twitter, where he tweets as @duginabox.
"duglin",
# As a maintainer, Erik was responsible for the "builder", and # As a maintainer, Erik was responsible for the "builder", and
# started the first designs for the new networking model in # started the first designs for the new networking model in
# Docker. Erik is now working on all kinds of plugins for Docker # Docker. Erik is now working on all kinds of plugins for Docker
@ -187,16 +153,6 @@
# check out her open source projects on GitHub https://github.com/jessfraz (a must-try). # check out her open source projects on GitHub https://github.com/jessfraz (a must-try).
"jessfraz", "jessfraz",
# As a maintainer, John Howard managed to make the impossible possible;
# to run Docker on Windows. After facing many challenges, teaching
# fellow-maintainers that 'Windows is not Linux', and many changes in
# Windows Server to facilitate containers, native Windows containers
# saw the light of day in 2015.
#
# John is now enjoying life without containers: playing piano, painting,
# and walking his dogs, but you may occasionally see him drop by on GitHub.
"lowenna",
# Alexander Morozov contributed many features to Docker, worked on the premise of # Alexander Morozov contributed many features to Docker, worked on the premise of
# what later became containerd (and worked on that too), and made a "stupid" Go # what later became containerd (and worked on that too), and made a "stupid" Go
# vendor tool specifically for docker/docker needs: vndr (https://github.com/LK4D4/vndr). # vendor tool specifically for docker/docker needs: vndr (https://github.com/LK4D4/vndr).
@ -210,13 +166,6 @@
# Swarm mode networking. # Swarm mode networking.
"mavenugo", "mavenugo",
# As a maintainer, Kenfe-Mickaël Laventure worked on the container runtime,
# integrating containerd 1.0 with the daemon, and adding support for custom
# OCI runtimes, as well as implementing the `docker prune` subcommands,
# which was a welcome feature to be added. You can keep up with Mickaél on
# Twitter (@kmlaventure).
"mlaventure",
# As a docs maintainer, Mary Anthony contributed greatly to the Docker # As a docs maintainer, Mary Anthony contributed greatly to the Docker
# docs. She wrote the Docker Contributor Guide and Getting Started # docs. She wrote the Docker Contributor Guide and Getting Started
# Guides. She helped create a doc build system independent of # Guides. She helped create a doc build system independent of
@ -284,11 +233,6 @@
Email = "aaron.lehmann@docker.com" Email = "aaron.lehmann@docker.com"
GitHub = "aaronlehmann" GitHub = "aaronlehmann"
[people.akerouanton]
Name = "Albin Kerouanton"
Email = "albinker@gmail.com"
GitHub = "akerouanton"
[people.alexellis] [people.alexellis]
Name = "Alex Ellis" Name = "Alex Ellis"
Email = "alexellis2@gmail.com" Email = "alexellis2@gmail.com"
@ -314,16 +258,16 @@
Email = "andrewhsu@docker.com" Email = "andrewhsu@docker.com"
GitHub = "andrewhsu" GitHub = "andrewhsu"
[people.anonymuse]
Name = "Jesse White"
Email = "anonymuse@gmail.com"
GitHub = "anonymuse"
[people.anusha] [people.anusha]
Name = "Anusha Ragunathan" Name = "Anusha Ragunathan"
Email = "anusha@docker.com" Email = "anusha@docker.com"
GitHub = "anusha-ragunathan" GitHub = "anusha-ragunathan"
[people.bsousaa]
Name = "Bruno de Sousa"
Email = "bruno.sousa@docker.com"
GitHub = "bsousaa"
[people.calavera] [people.calavera]
Name = "David Calavera" Name = "David Calavera"
Email = "david.calavera@gmail.com" Email = "david.calavera@gmail.com"
@ -334,20 +278,15 @@
Email = "leijitang@huawei.com" Email = "leijitang@huawei.com"
GitHub = "coolljt0725" GitHub = "coolljt0725"
[people.corhere]
Name = "Cory Snider"
Email = "csnider@mirantis.com"
GitHub = "corhere"
[people.cpuguy83] [people.cpuguy83]
Name = "Brian Goff" Name = "Brian Goff"
Email = "cpuguy83@gmail.com" Email = "cpuguy83@gmail.com"
GitHub = "cpuguy83" GitHub = "cpuguy83"
[people.crazy-max] [people.chanwit]
Name = "Kevin Alvarez" Name = "Chanwit Kaewkasi"
Email = "contact@crazymax.dev" Email = "chanwit@gmail.com"
GitHub = "crazy-max" GitHub = "chanwit"
[people.crosbymichael] [people.crosbymichael]
Name = "Michael Crosby" Name = "Michael Crosby"
@ -359,11 +298,6 @@
Email = "dnephin@gmail.com" Email = "dnephin@gmail.com"
GitHub = "dnephin" GitHub = "dnephin"
[people.dmcgowan]
Name = "Derek McGowan"
Email = "derek@mcgstyle.net"
GitHub = "dmcgowan"
[people.duglin] [people.duglin]
Name = "Doug Davis" Name = "Doug Davis"
Email = "dug@us.ibm.com" Email = "dug@us.ibm.com"
@ -404,6 +338,11 @@
Email = "james@lovedthanlost.net" Email = "james@lovedthanlost.net"
GitHub = "jamtur01" GitHub = "jamtur01"
[people.jhowardmsft]
Name = "John Howard"
Email = "jhoward@microsoft.com"
GitHub = "jhowardmsft"
[people.jessfraz] [people.jessfraz]
Name = "Jessie Frazelle" Name = "Jessie Frazelle"
Email = "jess@linux.com" Email = "jess@linux.com"
@ -424,21 +363,11 @@
Email = "kolyshkin@gmail.com" Email = "kolyshkin@gmail.com"
GitHub = "kolyshkin" GitHub = "kolyshkin"
[people.laurazard]
Name = "Laura Brehm"
Email = "laura.brehm@docker.com"
GitHub = "laurazard"
[people.lk4d4] [people.lk4d4]
Name = "Alexander Morozov" Name = "Alexander Morozov"
Email = "lk4d4@docker.com" Email = "lk4d4@docker.com"
GitHub = "lk4d4" GitHub = "lk4d4"
[people.lowenna]
Name = "John Howard"
Email = "github@lowenna.com"
GitHub = "lowenna"
[people.mavenugo] [people.mavenugo]
Name = "Madhu Venugopal" Name = "Madhu Venugopal"
Email = "madhu@docker.com" Email = "madhu@docker.com"
@ -464,11 +393,6 @@
Email = "mrjana@docker.com" Email = "mrjana@docker.com"
GitHub = "mrjana" GitHub = "mrjana"
[people.neersighted]
Name = "Bjorn Neergaard"
Email = "bjorn@neersighted.com"
GitHub = "neersighted"
[people.olljanat] [people.olljanat]
Name = "Olli Janatuinen" Name = "Olli Janatuinen"
Email = "olli.janatuinen@gmail.com" Email = "olli.janatuinen@gmail.com"
@ -479,41 +403,21 @@
Email = "jeff@docker.com" Email = "jeff@docker.com"
GitHub = "programmerq" GitHub = "programmerq"
[people.robmry] [people.rheinwein]
Name = "Rob Murray" Name = "Laura Frank"
Email = "rob.murray@docker.com" Email = "laura@codeship.com"
GitHub = "robmry" GitHub = "rheinwein"
[people.ripcurld] [people.ripcurld]
Name = "Boaz Shuster" Name = "Boaz Shuster"
Email = "ripcurld.github@gmail.com" Email = "ripcurld.github@gmail.com"
GitHub = "ripcurld" GitHub = "ripcurld"
[people.rumpl]
Name = "Djordje Lukic"
Email = "djordje.lukic@docker.com"
GitHub = "rumpl"
[people.runcom] [people.runcom]
Name = "Antonio Murdaca" Name = "Antonio Murdaca"
Email = "runcom@redhat.com" Email = "runcom@redhat.com"
GitHub = "runcom" GitHub = "runcom"
[people.sam-thibault]
Name = "Sam Thibault"
Email = "sam.thibault@docker.com"
GitHub = "sam-thibault"
[people.samuelkarp]
Name = "Samuel Karp"
Email = "me@samuelkarp.com"
GitHub = "samuelkarp"
[people.samwhited]
Name = "Sam Whited"
Email = "sam@samwhited.com"
GitHub = "samwhited"
[people.shykes] [people.shykes]
Name = "Solomon Hykes" Name = "Solomon Hykes"
Email = "solomon@docker.com" Email = "solomon@docker.com"
@ -574,11 +478,6 @@
Email = "vishnuk@google.com" Email = "vishnuk@google.com"
GitHub = "vishh" GitHub = "vishh"
[people.vvoland]
Name = "Paweł Gronowski"
Email = "pawel.gronowski@docker.com"
GitHub = "vvoland"
[people.yongtang] [people.yongtang]
Name = "Yong Tang" Name = "Yong Tang"
Email = "yong.tang.github@outlook.com" Email = "yong.tang.github@outlook.com"

Makefile

@ -1,13 +1,17 @@
.PHONY: all binary dynbinary build cross help install manpages run shell test test-docker-py test-integration test-unit validate validate-% win
.PHONY: all binary dynbinary build cross help install manpages run shell test test-docker-py test-integration test-unit validate win
DOCKER ?= docker
BUILDX ?= $(DOCKER) buildx
# set the graph driver as the current graphdriver if not set
DOCKER_GRAPHDRIVER := $(if $(DOCKER_GRAPHDRIVER),$(DOCKER_GRAPHDRIVER),$(shell docker info -f '{{ .Driver }}' 2>&1))
DOCKER_GRAPHDRIVER := $(if $(DOCKER_GRAPHDRIVER),$(DOCKER_GRAPHDRIVER),$(shell docker info 2>&1 | grep "Storage Driver" | sed 's/.*: //'))
export DOCKER_GRAPHDRIVER
DOCKER_GITCOMMIT := $(shell git rev-parse HEAD)
# enable/disable cross-compile
DOCKER_CROSS ?= false
# get OS/Arch of docker engine
DOCKER_OSARCH := $(shell bash -c 'source hack/make/.detect-daemon-osarch && echo $${DOCKER_ENGINE_OSARCH}')
DOCKERFILE := $(shell bash -c 'source hack/make/.detect-daemon-osarch && echo $${DOCKERFILE}')
DOCKER_GITCOMMIT := $(shell git rev-parse --short HEAD || echo unsupported)
export DOCKER_GITCOMMIT
# allow overriding the repository and branch that validation scripts are running # allow overriding the repository and branch that validation scripts are running
@ -16,9 +20,6 @@ export VALIDATE_REPO
export VALIDATE_BRANCH export VALIDATE_BRANCH
export VALIDATE_ORIGIN_BRANCH export VALIDATE_ORIGIN_BRANCH
export PAGER
export GIT_PAGER
# env vars passed through directly to Docker's build scripts # env vars passed through directly to Docker's build scripts
# to allow things like `make KEEPBUNDLE=1 binary` easily # to allow things like `make KEEPBUNDLE=1 binary` easily
# `project/PACKAGERS.md` have some limited documentation of some of these # `project/PACKAGERS.md` have some limited documentation of some of these
@ -27,9 +28,11 @@ export GIT_PAGER
# option of "go build". For example, a built-in graphdriver priority list # option of "go build". For example, a built-in graphdriver priority list
# can be changed during build time like this: # can be changed during build time like this:
# #
# make DOCKER_LDFLAGS="-X github.com/docker/docker/daemon/graphdriver.priority=overlay2,zfs" dynbinary # make DOCKER_LDFLAGS="-X github.com/docker/docker/daemon/graphdriver.priority=overlay2,devicemapper" dynbinary
# #
DOCKER_ENVS := \ DOCKER_ENVS := \
-e DOCKER_CROSSPLATFORMS \
-e BUILD_APT_MIRROR \
-e BUILDFLAGS \ -e BUILDFLAGS \
-e KEEPBUNDLE \ -e KEEPBUNDLE \
-e DOCKER_BUILD_ARGS \ -e DOCKER_BUILD_ARGS \
@ -39,10 +42,6 @@ DOCKER_ENVS := \
-e DOCKER_BUILDKIT \ -e DOCKER_BUILDKIT \
-e DOCKER_BASH_COMPLETION_PATH \ -e DOCKER_BASH_COMPLETION_PATH \
-e DOCKER_CLI_PATH \ -e DOCKER_CLI_PATH \
-e DOCKERCLI_VERSION \
-e DOCKERCLI_REPOSITORY \
-e DOCKERCLI_INTEGRATION_VERSION \
-e DOCKERCLI_INTEGRATION_REPOSITORY \
-e DOCKER_DEBUG \ -e DOCKER_DEBUG \
-e DOCKER_EXPERIMENTAL \ -e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT \ -e DOCKER_GITCOMMIT \
@ -50,21 +49,13 @@ DOCKER_ENVS := \
-e DOCKER_LDFLAGS \ -e DOCKER_LDFLAGS \
-e DOCKER_PORT \ -e DOCKER_PORT \
-e DOCKER_REMAP_ROOT \ -e DOCKER_REMAP_ROOT \
-e DOCKER_ROOTLESS \
-e DOCKER_STORAGE_OPTS \ -e DOCKER_STORAGE_OPTS \
-e DOCKER_TEST_HOST \ -e DOCKER_TEST_HOST \
-e DOCKER_USERLANDPROXY \ -e DOCKER_USERLANDPROXY \
-e DOCKERD_ARGS \ -e DOCKERD_ARGS \
-e DELVE_PORT \
-e GITHUB_ACTIONS \
-e TEST_FORCE_VALIDATE \
-e TEST_INTEGRATION_DIR \ -e TEST_INTEGRATION_DIR \
-e TEST_INTEGRATION_USE_SNAPSHOTTER \
-e TEST_INTEGRATION_FAIL_FAST \
-e TEST_SKIP_INTEGRATION \ -e TEST_SKIP_INTEGRATION \
-e TEST_SKIP_INTEGRATION_CLI \ -e TEST_SKIP_INTEGRATION_CLI \
-e TEST_IGNORE_CGROUP_CHECK \
-e TESTCOVERAGE \
-e TESTDEBUG \ -e TESTDEBUG \
-e TESTDIRS \ -e TESTDIRS \
-e TESTFLAGS \ -e TESTFLAGS \
@ -75,16 +66,16 @@ DOCKER_ENVS := \
-e VALIDATE_REPO \ -e VALIDATE_REPO \
-e VALIDATE_BRANCH \ -e VALIDATE_BRANCH \
-e VALIDATE_ORIGIN_BRANCH \ -e VALIDATE_ORIGIN_BRANCH \
-e HTTP_PROXY \
-e HTTPS_PROXY \
-e NO_PROXY \
-e http_proxy \
-e https_proxy \
-e no_proxy \
-e VERSION \ -e VERSION \
-e PLATFORM \ -e PLATFORM \
-e DEFAULT_PRODUCT_LICENSE \ -e DEFAULT_PRODUCT_LICENSE \
-e PRODUCT \ -e PRODUCT
-e PACKAGER_NAME \
-e PAGER \
-e GIT_PAGER \
-e OTEL_EXPORTER_OTLP_ENDPOINT \
-e OTEL_EXPORTER_OTLP_PROTOCOL \
-e OTEL_SERVICE_NAME
# note: we _cannot_ add "-e DOCKER_BUILDTAGS" here because even if it's unset in the shell, that would shadow the "ENV DOCKER_BUILDTAGS" set in our Dockerfile, which is very important for our official builds # note: we _cannot_ add "-e DOCKER_BUILDTAGS" here because even if it's unset in the shell, that would shadow the "ENV DOCKER_BUILDTAGS" set in our Dockerfile, which is very important for our official builds
# to allow `make BIND_DIR=. shell` or `make BIND_DIR= test` # to allow `make BIND_DIR=. shell` or `make BIND_DIR= test`
@ -102,7 +93,7 @@ DOCKER_MOUNT := $(if $(DOCKER_BINDDIR_MOUNT_OPTS),$(DOCKER_MOUNT):$(DOCKER_BINDD
# Note that `BIND_DIR` will already be set to `bundles` if `DOCKER_HOST` is not set (see above BIND_DIR line), in such case this will do nothing since `DOCKER_MOUNT` will already be set. # Note that `BIND_DIR` will already be set to `bundles` if `DOCKER_HOST` is not set (see above BIND_DIR line), in such case this will do nothing since `DOCKER_MOUNT` will already be set.
DOCKER_MOUNT := $(if $(DOCKER_MOUNT),$(DOCKER_MOUNT),-v /go/src/github.com/docker/docker/bundles) -v "$(CURDIR)/.git:/go/src/github.com/docker/docker/.git" DOCKER_MOUNT := $(if $(DOCKER_MOUNT),$(DOCKER_MOUNT),-v /go/src/github.com/docker/docker/bundles) -v "$(CURDIR)/.git:/go/src/github.com/docker/docker/.git"
DOCKER_MOUNT_CACHE := -v docker-dev-cache:/root/.cache -v docker-mod-cache:/go/pkg/mod/ DOCKER_MOUNT_CACHE := -v docker-dev-cache:/root/.cache
DOCKER_MOUNT_CLI := $(if $(DOCKER_CLI_PATH),-v $(shell dirname $(DOCKER_CLI_PATH)):/usr/local/cli,) DOCKER_MOUNT_CLI := $(if $(DOCKER_CLI_PATH),-v $(shell dirname $(DOCKER_CLI_PATH)):/usr/local/cli,)
DOCKER_MOUNT_BASH_COMPLETION := $(if $(DOCKER_BASH_COMPLETION_PATH),-v $(shell dirname $(DOCKER_BASH_COMPLETION_PATH)):/usr/local/completion/bash,) DOCKER_MOUNT_BASH_COMPLETION := $(if $(DOCKER_BASH_COMPLETION_PATH),-v $(shell dirname $(DOCKER_BASH_COMPLETION_PATH)):/usr/local/completion/bash,)
DOCKER_MOUNT := $(DOCKER_MOUNT) $(DOCKER_MOUNT_CACHE) $(DOCKER_MOUNT_CLI) $(DOCKER_MOUNT_BASH_COMPLETION) DOCKER_MOUNT := $(DOCKER_MOUNT) $(DOCKER_MOUNT_CACHE) $(DOCKER_MOUNT_CLI) $(DOCKER_MOUNT_BASH_COMPLETION)
@ -111,11 +102,14 @@ endif # ifndef DOCKER_MOUNT
# This allows to set the docker-dev container name # This allows to set the docker-dev container name
DOCKER_CONTAINER_NAME := $(if $(CONTAINER_NAME),--name $(CONTAINER_NAME),) DOCKER_CONTAINER_NAME := $(if $(CONTAINER_NAME),--name $(CONTAINER_NAME),)
DOCKER_IMAGE := docker-dev GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
GIT_BRANCH_CLEAN := $(shell echo $(GIT_BRANCH) | sed -e "s/[^[:alnum:]]/-/g")
DOCKER_IMAGE := docker-dev$(if $(GIT_BRANCH_CLEAN),:$(GIT_BRANCH_CLEAN))
DOCKER_PORT_FORWARD := $(if $(DOCKER_PORT),-p "$(DOCKER_PORT)",) DOCKER_PORT_FORWARD := $(if $(DOCKER_PORT),-p "$(DOCKER_PORT)",)
DELVE_PORT_FORWARD := $(if $(DELVE_PORT),-p "$(DELVE_PORT)",)
DOCKER_FLAGS := $(DOCKER) run --rm --privileged $(DOCKER_CONTAINER_NAME) $(DOCKER_ENVS) $(DOCKER_MOUNT) $(DOCKER_PORT_FORWARD) $(DELVE_PORT_FORWARD) DOCKER_FLAGS := docker run --rm -i --privileged $(DOCKER_CONTAINER_NAME) $(DOCKER_ENVS) $(DOCKER_MOUNT) $(DOCKER_PORT_FORWARD)
BUILD_APT_MIRROR := $(if $(DOCKER_BUILD_APT_MIRROR),--build-arg APT_MIRROR=$(DOCKER_BUILD_APT_MIRROR))
export BUILD_APT_MIRROR
SWAGGER_DOCS_PORT ?= 9000 SWAGGER_DOCS_PORT ?= 9000
@ -132,42 +126,36 @@ ifeq ($(INTERACTIVE), 1)
DOCKER_FLAGS += -t DOCKER_FLAGS += -t
endif endif
# on GitHub Runners input device is not a TTY but we allocate a pseudo-one,
# otherwise keep STDIN open even if not attached if not a GitHub Runner.
ifeq ($(GITHUB_ACTIONS),true)
DOCKER_FLAGS += -t
else
DOCKER_FLAGS += -i
endif
DOCKER_RUN_DOCKER := $(DOCKER_FLAGS) "$(DOCKER_IMAGE)" DOCKER_RUN_DOCKER := $(DOCKER_FLAGS) "$(DOCKER_IMAGE)"
DOCKER_BUILD_ARGS += --build-arg=GO_VERSION
DOCKER_BUILD_ARGS += --build-arg=DOCKERCLI_VERSION
DOCKER_BUILD_ARGS += --build-arg=DOCKERCLI_REPOSITORY
DOCKER_BUILD_ARGS += --build-arg=DOCKERCLI_INTEGRATION_VERSION
DOCKER_BUILD_ARGS += --build-arg=DOCKERCLI_INTEGRATION_REPOSITORY
ifdef DOCKER_SYSTEMD
DOCKER_BUILD_ARGS += --build-arg=SYSTEMD=true
endif
BUILD_OPTS := ${DOCKER_BUILD_ARGS} ${DOCKER_BUILD_OPTS}
BUILD_CMD := $(BUILDX) build
BAKE_CMD := $(BUILDX) bake
default: binary default: binary
all: build ## validate all checks, build linux binaries, run all tests,\ncross build non-linux binaries, and generate archives all: build ## validate all checks, build linux binaries, run all tests\ncross build non-linux binaries and generate archives
$(DOCKER_RUN_DOCKER) bash -c 'hack/validate/default && hack/make.sh' $(DOCKER_RUN_DOCKER) bash -c 'hack/validate/default && hack/make.sh'
binary: bundles ## build statically linked linux binaries binary: build ## build the linux binaries
$(BAKE_CMD) binary $(DOCKER_RUN_DOCKER) hack/make.sh binary
dynbinary: bundles ## build dynamically linked linux binaries dynbinary: build ## build the linux dynbinaries
$(BAKE_CMD) dynbinary $(DOCKER_RUN_DOCKER) hack/make.sh dynbinary
cross: bundles ## cross build the binaries
$(BAKE_CMD) binary-cross
cross: DOCKER_CROSS := true
cross: build ## cross build the binaries for darwin, freebsd and\nwindows
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary binary cross
ifdef DOCKER_CROSSPLATFORMS
build: DOCKER_CROSS := true
endif
ifeq ($(BIND_DIR), .)
build: DOCKER_BUILD_OPTS += --target=dev
endif
build: DOCKER_BUILD_ARGS += --build-arg=CROSS=$(DOCKER_CROSS)
build: DOCKER_BUILDKIT ?= 1
build: bundles
$(warning The docker client CLI has moved to github.com/docker/cli. For a dev-test cycle involving the CLI, run:${\n} DOCKER_CLI_PATH=/host/path/to/cli/binary make shell ${\n} then change the cli and compile into a binary at the same location.${\n})
DOCKER_BUILDKIT="${DOCKER_BUILDKIT}" docker build --build-arg=GO_VERSION ${BUILD_APT_MIRROR} ${DOCKER_BUILD_ARGS} ${DOCKER_BUILD_OPTS} -t "$(DOCKER_IMAGE)" -f "$(DOCKERFILE)" .
bundles: bundles:
mkdir bundles mkdir bundles
@ -176,8 +164,8 @@ bundles:
clean: clean-cache clean: clean-cache
.PHONY: clean-cache .PHONY: clean-cache
clean-cache: ## remove the docker volumes that are used for caching in the dev-container clean-cache:
docker volume rm -f docker-dev-cache docker-mod-cache docker volume rm -f docker-dev-cache
help: ## this help help: ## this help
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z0-9_-]+:.*?## / {gsub("\\\\n",sprintf("\n%22c",""), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST) @awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z0-9_-]+:.*?## / {gsub("\\\\n",sprintf("\n%22c",""), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
@ -188,20 +176,11 @@ install: ## install the linux binaries
run: build ## run the docker daemon in a container run: build ## run the docker daemon in a container
$(DOCKER_RUN_DOCKER) sh -c "KEEPBUNDLE=1 hack/make.sh install-binary run" $(DOCKER_RUN_DOCKER) sh -c "KEEPBUNDLE=1 hack/make.sh install-binary run"
.PHONY: build
ifeq ($(BIND_DIR), .)
build: shell_target := --target=dev-base
else
build: shell_target := --target=dev
endif
build: bundles
$(BUILD_CMD) $(BUILD_OPTS) $(shell_target) --load -t "$(DOCKER_IMAGE)" .
shell: build ## start a shell inside the build env shell: build ## start a shell inside the build env
$(DOCKER_RUN_DOCKER) bash $(DOCKER_RUN_DOCKER) bash
test: build test-unit ## run the unit, integration and docker-py tests test: build test-unit ## run the unit, integration and docker-py tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-integration test-docker-py $(DOCKER_RUN_DOCKER) hack/make.sh dynbinary cross test-integration test-docker-py
test-docker-py: build ## run the docker-py tests test-docker-py: build ## run the docker-py tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-docker-py $(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-docker-py
@ -225,16 +204,8 @@ test-unit: build ## run the unit tests
validate: build ## validate DCO, Seccomp profile generation, gofmt,\n./pkg/ isolation, golint, tests, tomls, go vet and vendor validate: build ## validate DCO, Seccomp profile generation, gofmt,\n./pkg/ isolation, golint, tests, tomls, go vet and vendor
$(DOCKER_RUN_DOCKER) hack/validate/all $(DOCKER_RUN_DOCKER) hack/validate/all
validate-generate-files: win: build ## cross build the binary for windows
$(BUILD_CMD) --target "validate" \ $(DOCKER_RUN_DOCKER) DOCKER_CROSSPLATFORMS=windows/amd64 hack/make.sh cross
--output "type=cacheonly" \
--file "./hack/dockerfiles/generate-files.Dockerfile" .
validate-%: build ## validate specific check
$(DOCKER_RUN_DOCKER) hack/validate/$*
win: bundles ## cross build the binary for windows
$(BAKE_CMD) --set *.platform=windows/amd64 binary
.PHONY: swagger-gen .PHONY: swagger-gen
swagger-gen: swagger-gen:
@ -250,17 +221,4 @@ swagger-docs: ## preview the API documentation
@docker run --rm -v $(PWD)/api/swagger.yaml:/usr/share/nginx/html/swagger.yaml \ @docker run --rm -v $(PWD)/api/swagger.yaml:/usr/share/nginx/html/swagger.yaml \
-e 'REDOC_OPTIONS=hide-hostname="true" lazy-rendering' \ -e 'REDOC_OPTIONS=hide-hostname="true" lazy-rendering' \
-p $(SWAGGER_DOCS_PORT):80 \ -p $(SWAGGER_DOCS_PORT):80 \
bfirsh/redoc:1.14.0 bfirsh/redoc:1.6.2
.PHONY: generate-files
generate-files:
$(eval $@_TMP_OUT := $(shell mktemp -d -t moby-output.XXXXXXXXXX))
@if [ -z "$($@_TMP_OUT)" ]; then \
echo "Temp dir is not set"; \
exit 1; \
fi
$(BUILD_CMD) --target "update" \
--output "type=local,dest=$($@_TMP_OUT)" \
--file "./hack/dockerfiles/generate-files.Dockerfile" .
cp -R "$($@_TMP_OUT)"/. .
rm -rf "$($@_TMP_OUT)"/*


@ -14,7 +14,7 @@ Moby is an open project guided by strong principles, aiming to be modular, flexi
It is open to the community to help set its direction.
- Modular: the project includes lots of components that have well-defined functions and APIs that work together.
- Batteries included but swappable: Moby includes enough components to build fully featured container systems, but its modular architecture ensures that most of the components can be swapped by different implementations.
- Batteries included but swappable: Moby includes enough components to build fully featured container system, but its modular architecture ensures that most of the components can be swapped by different implementations.
- Usable security: Moby provides secure defaults without compromising usability.
- Developer focused: The APIs are intended to be functional and useful to build powerful tools.
They are not necessarily intended as end user tools but as components aimed at developers.


@ -1,9 +0,0 @@
# Reporting security issues
The Moby maintainers take security seriously. If you discover a security issue, please bring it to their attention right away!
### Reporting a Vulnerability
Please **DO NOT** file a public issue, instead send your report privately to security@docker.com.
Security reports are greatly appreciated and we will publicly thank you for it, although we keep your name confidential if you request it. We also like to send gifts—if you're into schwag, make sure to let us know. We currently do not offer a paid security bounty program, but are not ruling it out in the future.


@ -28,7 +28,7 @@ Most code changes will fall into one of the following categories.
### Writing tests for new features
New code should be covered by unit tests. If the code is difficult to test with
unit tests, then that is a good sign that it should be refactored to make it a
unit tests then that is a good sign that it should be refactored to make it
easier to reuse and maintain. Consider accepting unexported interfaces instead
of structs so that fakes can be provided for dependencies.
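(Editor's note: the guidance above is easier to see in code. The following is a minimal sketch, not taken from the repository; the `store`, `Describe`, and `fakeStore` names are hypothetical and only illustrate why accepting a small interface lets a test inject a fake instead of a concrete dependency.)

```go
package example

// store is a small, unexported interface accepted by the code under test;
// production code passes a concrete implementation, tests pass a fake.
type store interface {
	Get(id string) (string, error)
}

// Describe depends only on the store interface, not on a concrete struct.
func Describe(s store, id string) (string, error) {
	v, err := s.Get(id)
	if err != nil {
		return "", err
	}
	return "value: " + v, nil
}

// fakeStore is a trivial test double that satisfies store.
type fakeStore struct{ val string }

func (f fakeStore) Get(string) (string, error) { return f.val, nil }
```

A test can then call `Describe(fakeStore{val: "x"}, "id")` without touching any real backing store.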
@ -44,23 +44,16 @@ case. Error cases should be handled by unit tests.
Bugs fixes should include a unit test case which exercises the bug.
A bug fix may also include new assertions in existing integration tests for the
A bug fix may also include new assertions in an existing integration tests for the
API endpoint.
### Writing new integration tests
Note the `integration-cli` tests are deprecated; new tests will be rejected by
the CI.
Instead, implement new tests under `integration/`.
### Integration tests environment considerations
When adding new tests or modifying existing tests under `integration/`, testing
When adding new tests or modifying existing test under `integration/`, testing
environment should be properly considered. `skip.If` from
[gotest.tools/skip](https://godoc.org/gotest.tools/skip) can be used to make the
test run conditionally. Full testing environment conditions can be found at
[environment.go](https://github.com/moby/moby/blob/6b6eeed03b963a27085ea670f40cd5ff8a61f32e/testutil/environment/environment.go)
[environment.go](https://github.com/moby/moby/blob/cb37987ee11655ed6bbef663d245e55922354c68/internal/test/environment/environment.go)
Here is a quick example. If the test needs to interact with a docker daemon on
the same host, the following condition should be checked within the test code
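(Editor's note: the hunk cuts off before the example that follows in the file. As a stand-in, here is a minimal sketch of the kind of conditional skip described above, using the `gotest.tools/skip` package linked in the text; the `isRemoteDaemon` variable is a hypothetical placeholder for the repository's real test-environment helper, whose exact name and location differ between the two branches being compared.)

```go
package example

import (
	"testing"

	"gotest.tools/skip"
)

// isRemoteDaemon stands in for the real environment check provided by the
// shared test environment helper in the repository.
var isRemoteDaemon = false

// TestNeedsLocalDaemon only runs when the daemon under test is on the same host.
func TestNeedsLocalDaemon(t *testing.T) {
	skip.If(t, isRemoteDaemon, "test requires a daemon on the same host")
	// ... test body that talks to the local daemon goes here ...
}
```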
@ -111,9 +104,9 @@ TEST_SKIP_INTEGRATION and/or TEST_SKIP_INTEGRATION_CLI environment variables.
Flags specific to each suite can be set in the TESTFLAGS_INTEGRATION and
TESTFLAGS_INTEGRATION_CLI environment variables.
If all you want is to specify a test filter to run, you can set the
`TEST_FILTER` environment variable. This ends up getting passed directly to `go
test -run` (or `go test -check-f`, depending on the test suite). It will also
automatically set the other environment variables mentioned above accordingly.
### Go Version

View file

@ -37,6 +37,6 @@ There is hopefully enough example material in the file for you to copy a similar
When you make edits to `swagger.yaml`, you may want to check the generated API documentation to ensure it renders correctly.
Run `make swagger-docs` and a preview will be running at `http://localhost:9000`. Some of the styling may be incorrect, but you'll be able to ensure that it is generating the correct documentation.
The production documentation is generated by vendoring `swagger.yaml` into [docker/docker.github.io](https://github.com/docker/docker.github.io).

View file

@ -2,17 +2,8 @@ package api // import "github.com/docker/docker/api"
// Common constants for daemon and client. // Common constants for daemon and client.
const ( const (
// DefaultVersion of the current REST API. // DefaultVersion of Current REST API
DefaultVersion = "1.45" DefaultVersion = "1.40"
// MinSupportedAPIVersion is the minimum API version that can be supported
// by the API server, specified as "major.minor". Note that the daemon
// may be configured with a different minimum API version, as returned
// in [github.com/docker/docker/api/types.Version.MinAPIVersion].
//
// API requests for API versions lower than the configured version produce
// an error.
MinSupportedAPIVersion = "1.24"
// NoBaseImageSpecifier is the symbol used by the FROM // NoBaseImageSpecifier is the symbol used by the FROM
// command to specify that no base image is to be used. // command to specify that no base image is to be used.

api/common_unix.go Normal file
View file

@ -0,0 +1,6 @@
// +build !windows
package api // import "github.com/docker/docker/api"
// MinVersion represents Minimum REST API version supported
const MinVersion = "1.12"

api/common_windows.go Normal file
View file

@ -0,0 +1,8 @@
package api // import "github.com/docker/docker/api"
// MinVersion represents Minimum REST API version supported
// Technically the first daemon API version released on Windows is v1.25 in
// engine version 1.13. However, some clients are explicitly using downlevel
// APIs (e.g. docker-compose v2.1 file format) and that is just too restrictive.
// Hence also allowing 1.24 on Windows.
const MinVersion string = "1.24"

View file

@ -3,25 +3,24 @@ package build // import "github.com/docker/docker/api/server/backend/build"
import ( import (
"context" "context"
"fmt" "fmt"
"strconv"
"github.com/distribution/reference" "github.com/docker/distribution/reference"
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend" "github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/events"
"github.com/docker/docker/builder" "github.com/docker/docker/builder"
buildkit "github.com/docker/docker/builder/builder-next" buildkit "github.com/docker/docker/builder/builder-next"
daemonevents "github.com/docker/docker/daemon/events" "github.com/docker/docker/builder/fscache"
"github.com/docker/docker/image" "github.com/docker/docker/image"
"github.com/docker/docker/pkg/stringid" "github.com/docker/docker/pkg/stringid"
"github.com/pkg/errors" "github.com/pkg/errors"
"golang.org/x/sync/errgroup"
"google.golang.org/grpc" "google.golang.org/grpc"
) )
// ImageComponent provides an interface for working with images // ImageComponent provides an interface for working with images
type ImageComponent interface { type ImageComponent interface {
SquashImage(from string, to string) (string, error) SquashImage(from string, to string) (string, error)
TagImage(context.Context, image.ID, reference.Named) error TagImageWithReference(image.ID, reference.Named) error
} }
// Builder defines interface for running a build // Builder defines interface for running a build
@ -32,14 +31,14 @@ type Builder interface {
// Backend provides build functionality to the API router // Backend provides build functionality to the API router
type Backend struct { type Backend struct {
builder Builder builder Builder
fsCache *fscache.FSCache
imageComponent ImageComponent imageComponent ImageComponent
buildkit *buildkit.Builder buildkit *buildkit.Builder
eventsService *daemonevents.Events
} }
// NewBackend creates a new build backend from components // NewBackend creates a new build backend from components
func NewBackend(components ImageComponent, builder Builder, buildkit *buildkit.Builder, es *daemonevents.Events) (*Backend, error) { func NewBackend(components ImageComponent, builder Builder, fsCache *fscache.FSCache, buildkit *buildkit.Builder) (*Backend, error) {
return &Backend{imageComponent: components, builder: builder, buildkit: buildkit, eventsService: es}, nil return &Backend{imageComponent: components, builder: builder, fsCache: fsCache, buildkit: buildkit}, nil
} }
// RegisterGRPC registers buildkit controller to the grpc server. // RegisterGRPC registers buildkit controller to the grpc server.
@ -54,7 +53,7 @@ func (b *Backend) Build(ctx context.Context, config backend.BuildConfig) (string
options := config.Options options := config.Options
useBuildKit := options.Version == types.BuilderBuildKit useBuildKit := options.Version == types.BuilderBuildKit
tags, err := sanitizeRepoAndTags(options.Tags) tagger, err := NewTagger(b.imageComponent, config.ProgressWriter.StdoutFormatter, options.Tags)
if err != nil { if err != nil {
return "", err return "", err
} }
@ -76,7 +75,7 @@ func (b *Backend) Build(ctx context.Context, config backend.BuildConfig) (string
return "", nil return "", nil
} }
imageID := build.ImageID var imageID = build.ImageID
if options.Squash { if options.Squash {
if imageID, err = squashBuild(build, b.imageComponent); err != nil { if imageID, err = squashBuild(build, b.imageComponent); err != nil {
return "", err return "", err
@ -92,24 +91,42 @@ func (b *Backend) Build(ctx context.Context, config backend.BuildConfig) (string
stdout := config.ProgressWriter.StdoutFormatter stdout := config.ProgressWriter.StdoutFormatter
fmt.Fprintf(stdout, "Successfully built %s\n", stringid.TruncateID(imageID)) fmt.Fprintf(stdout, "Successfully built %s\n", stringid.TruncateID(imageID))
} }
if imageID != "" && !useBuildKit { if imageID != "" {
err = tagImages(ctx, b.imageComponent, config.ProgressWriter.StdoutFormatter, image.ID(imageID), tags) err = tagger.TagImages(image.ID(imageID))
} }
return imageID, err return imageID, err
} }
// PruneCache removes all cached build sources // PruneCache removes all cached build sources
func (b *Backend) PruneCache(ctx context.Context, opts types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error) { func (b *Backend) PruneCache(ctx context.Context, opts types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error) {
buildCacheSize, cacheIDs, err := b.buildkit.Prune(ctx, opts) eg, ctx := errgroup.WithContext(ctx)
var fsCacheSize uint64
eg.Go(func() error {
var err error
fsCacheSize, err = b.fsCache.Prune(ctx)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to prune build cache") return errors.Wrap(err, "failed to prune fscache")
} }
b.eventsService.Log(events.ActionPrune, events.BuilderEventType, events.Actor{ return nil
Attributes: map[string]string{
"reclaimed": strconv.FormatInt(buildCacheSize, 10),
},
}) })
return &types.BuildCachePruneReport{SpaceReclaimed: uint64(buildCacheSize), CachesDeleted: cacheIDs}, nil
var buildCacheSize int64
var cacheIDs []string
eg.Go(func() error {
var err error
buildCacheSize, cacheIDs, err = b.buildkit.Prune(ctx, opts)
if err != nil {
return errors.Wrap(err, "failed to prune build cache")
}
return nil
})
if err := eg.Wait(); err != nil {
return nil, err
}
return &types.BuildCachePruneReport{SpaceReclaimed: fsCacheSize + uint64(buildCacheSize), CachesDeleted: cacheIDs}, nil
} }
// Cancel cancels the build by ID // Cancel cancels the build by ID
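The right-hand `PruneCache` above fans the two prune operations out with an errgroup and sums the reclaimed space afterwards; a standalone sketch of that pattern, with placeholder prune functions, looks roughly like this.

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// pruneFSCache and pruneBuildCache are placeholders for the real prunes.
func pruneFSCache(context.Context) (uint64, error)    { return 0, nil }
func pruneBuildCache(context.Context) (uint64, error) { return 0, nil }

func main() {
	eg, ctx := errgroup.WithContext(context.Background())

	var fsReclaimed, buildReclaimed uint64
	eg.Go(func() error {
		var err error
		fsReclaimed, err = pruneFSCache(ctx)
		return err
	})
	eg.Go(func() error {
		var err error
		buildReclaimed, err = pruneBuildCache(ctx)
		return err
	})

	if err := eg.Wait(); err != nil {
		fmt.Println("prune failed:", err)
		return
	}
	fmt.Println("space reclaimed:", fsReclaimed+buildReclaimed)
}
```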

View file

@ -1,31 +1,55 @@
package build // import "github.com/docker/docker/api/server/backend/build" package build // import "github.com/docker/docker/api/server/backend/build"
import ( import (
"context"
"fmt" "fmt"
"io" "io"
"github.com/distribution/reference" "github.com/docker/distribution/reference"
"github.com/docker/docker/image" "github.com/docker/docker/image"
"github.com/pkg/errors" "github.com/pkg/errors"
) )
// tagImages creates image tags for the imageID. // Tagger is responsible for tagging an image created by a builder
func tagImages(ctx context.Context, ic ImageComponent, stdout io.Writer, imageID image.ID, repoAndTags []reference.Named) error { type Tagger struct {
for _, rt := range repoAndTags { imageComponent ImageComponent
if err := ic.TagImage(ctx, imageID, rt); err != nil { stdout io.Writer
repoAndTags []reference.Named
}
// NewTagger returns a new Tagger for tagging the images of a build.
// If any of the names are invalid tags an error is returned.
func NewTagger(backend ImageComponent, stdout io.Writer, names []string) (*Tagger, error) {
reposAndTags, err := sanitizeRepoAndTags(names)
if err != nil {
return nil, err
}
return &Tagger{
imageComponent: backend,
stdout: stdout,
repoAndTags: reposAndTags,
}, nil
}
// TagImages creates image tags for the imageID
func (bt *Tagger) TagImages(imageID image.ID) error {
for _, rt := range bt.repoAndTags {
if err := bt.imageComponent.TagImageWithReference(imageID, rt); err != nil {
return err return err
} }
_, _ = fmt.Fprintln(stdout, "Successfully tagged", reference.FamiliarString(rt)) fmt.Fprintf(bt.stdout, "Successfully tagged %s\n", reference.FamiliarString(rt))
} }
return nil return nil
} }
// sanitizeRepoAndTags parses the raw "t" parameter received from the client // sanitizeRepoAndTags parses the raw "t" parameter received from the client
// to a slice of repoAndTag. It removes duplicates, and validates each name // to a slice of repoAndTag.
// to not contain a digest. // It also validates each repoName and tag.
func sanitizeRepoAndTags(names []string) (repoAndTags []reference.Named, err error) { func sanitizeRepoAndTags(names []string) ([]reference.Named, error) {
uniqNames := map[string]struct{}{} var (
repoAndTags []reference.Named
// This map is used for deduplicating the "-t" parameter.
uniqNames = make(map[string]struct{})
)
for _, repo := range names { for _, repo := range names {
if repo == "" { if repo == "" {
continue continue
@ -36,12 +60,14 @@ func sanitizeRepoAndTags(names []string) (repoAndTags []reference.Named, err err
return nil, err return nil, err
} }
if _, ok := ref.(reference.Digested); ok { if _, isCanonical := ref.(reference.Canonical); isCanonical {
return nil, errors.New("build tag cannot contain a digest") return nil, errors.New("build tag cannot contain a digest")
} }
ref = reference.TagNameOnly(ref) ref = reference.TagNameOnly(ref)
nameWithTag := ref.String() nameWithTag := ref.String()
if _, exists := uniqNames[nameWithTag]; !exists { if _, exists := uniqNames[nameWithTag]; !exists {
uniqNames[nameWithTag] = struct{}{} uniqNames[nameWithTag] = struct{}{}
repoAndTags = append(repoAndTags, ref) repoAndTags = append(repoAndTags, ref)
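The `sanitizeRepoAndTags` helper above leans on the reference package for normalization, digest rejection, and default tagging; a standalone sketch of that behavior follows. The `normalizeTag` helper is hypothetical, and the import path is the older `docker/distribution` one used on the right-hand side (the left-hand side uses `github.com/distribution/reference`).

```go
package main

import (
	"fmt"

	"github.com/docker/distribution/reference"
)

// normalizeTag mirrors the checks sanitizeRepoAndTags performs on each "-t"
// value: parse and normalize the name, reject digests, default the tag.
func normalizeTag(name string) (string, error) {
	ref, err := reference.ParseNormalizedNamed(name)
	if err != nil {
		return "", err
	}
	if _, ok := ref.(reference.Digested); ok {
		return "", fmt.Errorf("build tag cannot contain a digest")
	}
	return reference.TagNameOnly(ref).String(), nil
}

func main() {
	for _, name := range []string{"myrepo/myimage", "myrepo/myimage:v1"} {
		s, err := normalizeTag(name)
		if err != nil {
			fmt.Println("invalid tag:", err)
			continue
		}
		fmt.Println(s) // e.g. docker.io/myrepo/myimage:latest
	}
}
```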

View file

@ -1,152 +0,0 @@
package httpstatus // import "github.com/docker/docker/api/server/httpstatus"
import (
"context"
"fmt"
"net/http"
cerrdefs "github.com/containerd/containerd/errdefs"
"github.com/containerd/log"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/docker/errdefs"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
type causer interface {
Cause() error
}
// FromError retrieves status code from error message.
func FromError(err error) int {
if err == nil {
log.G(context.TODO()).WithError(err).Error("unexpected HTTP error handling")
return http.StatusInternalServerError
}
var statusCode int
// Stop right there
// Are you sure you should be adding a new error class here? Do one of the existing ones work?
// Note that the below functions are already checking the error causal chain for matches.
switch {
case errdefs.IsNotFound(err):
statusCode = http.StatusNotFound
case errdefs.IsInvalidParameter(err):
statusCode = http.StatusBadRequest
case errdefs.IsConflict(err):
statusCode = http.StatusConflict
case errdefs.IsUnauthorized(err):
statusCode = http.StatusUnauthorized
case errdefs.IsUnavailable(err):
statusCode = http.StatusServiceUnavailable
case errdefs.IsForbidden(err):
statusCode = http.StatusForbidden
case errdefs.IsNotModified(err):
statusCode = http.StatusNotModified
case errdefs.IsNotImplemented(err):
statusCode = http.StatusNotImplemented
case errdefs.IsSystem(err) || errdefs.IsUnknown(err) || errdefs.IsDataLoss(err) || errdefs.IsDeadline(err) || errdefs.IsCancelled(err):
statusCode = http.StatusInternalServerError
default:
statusCode = statusCodeFromGRPCError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
statusCode = statusCodeFromContainerdError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
statusCode = statusCodeFromDistributionError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
if e, ok := err.(causer); ok {
return FromError(e.Cause())
}
log.G(context.TODO()).WithFields(log.Fields{
"module": "api",
"error": err,
"error_type": fmt.Sprintf("%T", err),
}).Debug("FIXME: Got an API for which error does not match any expected type!!!")
}
if statusCode == 0 {
statusCode = http.StatusInternalServerError
}
return statusCode
}
// statusCodeFromGRPCError returns status code according to gRPC error
func statusCodeFromGRPCError(err error) int {
switch status.Code(err) {
case codes.InvalidArgument: // code 3
return http.StatusBadRequest
case codes.NotFound: // code 5
return http.StatusNotFound
case codes.AlreadyExists: // code 6
return http.StatusConflict
case codes.PermissionDenied: // code 7
return http.StatusForbidden
case codes.FailedPrecondition: // code 9
return http.StatusBadRequest
case codes.Unauthenticated: // code 16
return http.StatusUnauthorized
case codes.OutOfRange: // code 11
return http.StatusBadRequest
case codes.Unimplemented: // code 12
return http.StatusNotImplemented
case codes.Unavailable: // code 14
return http.StatusServiceUnavailable
default:
// codes.Canceled(1)
// codes.Unknown(2)
// codes.DeadlineExceeded(4)
// codes.ResourceExhausted(8)
// codes.Aborted(10)
// codes.Internal(13)
// codes.DataLoss(15)
return http.StatusInternalServerError
}
}
// statusCodeFromDistributionError returns status code according to registry errcode
// code is loosely based on errcode.ServeJSON() in docker/distribution
func statusCodeFromDistributionError(err error) int {
switch errs := err.(type) {
case errcode.Errors:
if len(errs) < 1 {
return http.StatusInternalServerError
}
if _, ok := errs[0].(errcode.ErrorCoder); ok {
return statusCodeFromDistributionError(errs[0])
}
case errcode.ErrorCoder:
return errs.ErrorCode().Descriptor().HTTPStatusCode
}
return http.StatusInternalServerError
}
// statusCodeFromContainerdError returns status code for containerd errors when
// consumed directly (not through gRPC)
func statusCodeFromContainerdError(err error) int {
switch {
case cerrdefs.IsInvalidArgument(err):
return http.StatusBadRequest
case cerrdefs.IsNotFound(err):
return http.StatusNotFound
case cerrdefs.IsAlreadyExists(err):
return http.StatusConflict
case cerrdefs.IsFailedPrecondition(err):
return http.StatusPreconditionFailed
case cerrdefs.IsUnavailable(err):
return http.StatusServiceUnavailable
case cerrdefs.IsNotImplemented(err):
return http.StatusNotImplemented
default:
return http.StatusInternalServerError
}
}
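To illustrate how `FromError` is meant to be paired with `errdefs`, here is a sketch assuming the `httpstatus` package as it appears on the side of this comparison that contains it; the wrapped error is invented for the example.

```go
package main

import (
	"errors"
	"fmt"

	"github.com/docker/docker/api/server/httpstatus"
	"github.com/docker/docker/errdefs"
)

func main() {
	// Classify the failure once, close to where it happens...
	err := errdefs.NotFound(errors.New("no such container: web"))

	// ...and let the API layer derive the HTTP status from that class.
	fmt.Println(httpstatus.FromError(err)) // 404
}
```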

View file

@ -12,4 +12,5 @@ import (
// container configuration. // container configuration.
type ContainerDecoder interface { type ContainerDecoder interface {
DecodeConfig(src io.Reader) (*container.Config, *container.HostConfig, *network.NetworkingConfig, error) DecodeConfig(src io.Reader) (*container.Config, *container.HostConfig, *network.NetworkingConfig, error)
DecodeHostConfig(src io.Reader) (*container.HostConfig, error)
} }

View file

@ -0,0 +1,9 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import "github.com/docker/docker/errdefs"
// GetHTTPErrorStatusCode retrieves status code from error message.
//
// Deprecated: use errdefs.GetHTTPErrorStatusCode
func GetHTTPErrorStatusCode(err error) int {
return errdefs.GetHTTPErrorStatusCode(err)
}

View file

@ -1,12 +1,9 @@
package httputils // import "github.com/docker/docker/api/server/httputils" package httputils // import "github.com/docker/docker/api/server/httputils"
import ( import (
"fmt"
"net/http" "net/http"
"strconv" "strconv"
"strings" "strings"
"github.com/distribution/reference"
) )
// BoolValue transforms a form value in different formats into a boolean type. // BoolValue transforms a form value in different formats into a boolean type.
@ -44,38 +41,6 @@ func Int64ValueOrDefault(r *http.Request, field string, def int64) (int64, error
return def, nil return def, nil
} }
// RepoTagReference parses form values "repo" and "tag" and returns a valid
// reference with repository and tag.
// If repo is empty, then a nil reference is returned.
// If no tag is given, then the default "latest" tag is set.
func RepoTagReference(repo, tag string) (reference.NamedTagged, error) {
if repo == "" {
return nil, nil
}
ref, err := reference.ParseNormalizedNamed(repo)
if err != nil {
return nil, err
}
if _, isDigested := ref.(reference.Digested); isDigested {
return nil, fmt.Errorf("cannot import digest reference")
}
if tag != "" {
return reference.WithTag(ref, tag)
}
withDefaultTag := reference.TagNameOnly(ref)
namedTagged, ok := withDefaultTag.(reference.NamedTagged)
if !ok {
return nil, fmt.Errorf("unexpected reference: %q", ref.String())
}
return namedTagged, nil
}
// ArchiveOptions stores archive information for different operations. // ArchiveOptions stores archive information for different operations.
type ArchiveOptions struct { type ArchiveOptions struct {
Name string Name string

View file

@ -23,7 +23,7 @@ func TestBoolValue(t *testing.T) {
for c, e := range cases { for c, e := range cases {
v := url.Values{} v := url.Values{}
v.Set("test", c) v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", nil) r, _ := http.NewRequest("POST", "", nil)
r.Form = v r.Form = v
a := BoolValue(r, "test") a := BoolValue(r, "test")
@ -34,14 +34,14 @@ func TestBoolValue(t *testing.T) {
} }
func TestBoolValueOrDefault(t *testing.T) { func TestBoolValueOrDefault(t *testing.T) {
r, _ := http.NewRequest(http.MethodGet, "", nil) r, _ := http.NewRequest("GET", "", nil)
if !BoolValueOrDefault(r, "queryparam", true) { if !BoolValueOrDefault(r, "queryparam", true) {
t.Fatal("Expected to get true default value, got false") t.Fatal("Expected to get true default value, got false")
} }
v := url.Values{} v := url.Values{}
v.Set("param", "") v.Set("param", "")
r, _ = http.NewRequest(http.MethodGet, "", nil) r, _ = http.NewRequest("GET", "", nil)
r.Form = v r.Form = v
if BoolValueOrDefault(r, "param", true) { if BoolValueOrDefault(r, "param", true) {
t.Fatal("Expected not to get true") t.Fatal("Expected not to get true")
@ -59,7 +59,7 @@ func TestInt64ValueOrZero(t *testing.T) {
for c, e := range cases { for c, e := range cases {
v := url.Values{} v := url.Values{}
v.Set("test", c) v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", nil) r, _ := http.NewRequest("POST", "", nil)
r.Form = v r.Form = v
a := Int64ValueOrZero(r, "test") a := Int64ValueOrZero(r, "test")
@ -79,7 +79,7 @@ func TestInt64ValueOrDefault(t *testing.T) {
for c, e := range cases { for c, e := range cases {
v := url.Values{} v := url.Values{}
v.Set("test", c) v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", nil) r, _ := http.NewRequest("POST", "", nil)
r.Form = v r.Form = v
a, err := Int64ValueOrDefault(r, "test", -1) a, err := Int64ValueOrDefault(r, "test", -1)
@ -95,7 +95,7 @@ func TestInt64ValueOrDefault(t *testing.T) {
func TestInt64ValueOrDefaultWithError(t *testing.T) { func TestInt64ValueOrDefaultWithError(t *testing.T) {
v := url.Values{} v := url.Values{}
v.Set("test", "invalid") v.Set("test", "invalid")
r, _ := http.NewRequest(http.MethodPost, "", nil) r, _ := http.NewRequest("POST", "", nil)
r.Form = v r.Form = v
_, err := Int64ValueOrDefault(r, "test", -1) _, err := Int64ValueOrDefault(r, "test", -1)

View file

@ -2,14 +2,18 @@ package httputils // import "github.com/docker/docker/api/server/httputils"
import ( import (
"context" "context"
"encoding/json"
"io" "io"
"mime" "mime"
"net/http" "net/http"
"strings" "strings"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs" "github.com/docker/docker/errdefs"
"github.com/gorilla/mux"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/status"
) )
// APIVersionKey is the client's requested API version. // APIVersionKey is the client's requested API version.
@ -27,7 +31,7 @@ func HijackConnection(w http.ResponseWriter) (io.ReadCloser, io.Writer, error) {
return nil, nil, err return nil, nil, err
} }
// Flush the options to make sure the client sets the raw mode // Flush the options to make sure the client sets the raw mode
_, _ = conn.Write([]byte{}) conn.Write([]byte{})
return conn, conn, nil return conn, conn, nil
} }
@ -37,9 +41,9 @@ func CloseStreams(streams ...interface{}) {
if tcpc, ok := stream.(interface { if tcpc, ok := stream.(interface {
CloseWrite() error CloseWrite() error
}); ok { }); ok {
_ = tcpc.CloseWrite() tcpc.CloseWrite()
} else if closer, ok := stream.(io.Closer); ok { } else if closer, ok := stream.(io.Closer); ok {
_ = closer.Close() closer.Close()
} }
} }
} }
@ -49,50 +53,17 @@ func CheckForJSON(r *http.Request) error {
ct := r.Header.Get("Content-Type") ct := r.Header.Get("Content-Type")
// No Content-Type header is ok as long as there's no Body // No Content-Type header is ok as long as there's no Body
if ct == "" && (r.Body == nil || r.ContentLength == 0) { if ct == "" {
if r.Body == nil || r.ContentLength == 0 {
return nil return nil
} }
}
// Otherwise it better be json // Otherwise it better be json
return matchesContentType(ct, "application/json") if matchesContentType(ct, "application/json") {
}
// ReadJSON validates the request to have the correct content-type, and decodes
// the request's Body into out.
func ReadJSON(r *http.Request, out interface{}) error {
err := CheckForJSON(r)
if err != nil {
return err
}
if r.Body == nil || r.ContentLength == 0 {
// an empty body is not invalid, so don't return an error; see
// https://lists.w3.org/Archives/Public/ietf-http-wg/2010JulSep/0272.html
return nil return nil
} }
return errdefs.InvalidParameter(errors.Errorf("Content-Type specified (%s) must be 'application/json'", ct))
dec := json.NewDecoder(r.Body)
err = dec.Decode(out)
defer r.Body.Close()
if err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("invalid JSON: got EOF while reading request body"))
}
return errdefs.InvalidParameter(errors.Wrap(err, "invalid JSON"))
}
if dec.More() {
return errdefs.InvalidParameter(errors.New("unexpected content after JSON"))
}
return nil
}
// WriteJSON writes the value v to the http response stream as json with standard json encoding.
func WriteJSON(w http.ResponseWriter, code int, v interface{}) error {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
enc := json.NewEncoder(w)
enc.SetEscapeHTML(false)
return enc.Encode(v)
} }
// ParseForm ensures the request form is parsed even with invalid content types. // ParseForm ensures the request form is parsed even with invalid content types.
@ -121,14 +92,33 @@ func VersionFromContext(ctx context.Context) string {
return "" return ""
} }
// MakeErrorHandler makes an HTTP handler that decodes a Docker error and
// returns it in the response.
func MakeErrorHandler(err error) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
statusCode := errdefs.GetHTTPErrorStatusCode(err)
vars := mux.Vars(r)
if apiVersionSupportsJSONErrors(vars["version"]) {
response := &types.ErrorResponse{
Message: err.Error(),
}
WriteJSON(w, statusCode, response)
} else {
http.Error(w, status.Convert(err).Message(), statusCode)
}
}
}
func apiVersionSupportsJSONErrors(version string) bool {
const firstAPIVersionWithJSONErrors = "1.23"
return version == "" || versions.GreaterThan(version, firstAPIVersionWithJSONErrors)
}
// matchesContentType validates the content type against the expected one // matchesContentType validates the content type against the expected one
func matchesContentType(contentType, expectedType string) error { func matchesContentType(contentType, expectedType string) bool {
mimetype, _, err := mime.ParseMediaType(contentType) mimetype, _, err := mime.ParseMediaType(contentType)
if err != nil { if err != nil {
return errdefs.InvalidParameter(errors.Wrapf(err, "malformed Content-Type header (%s)", contentType)) logrus.Errorf("Error parsing media type: %s error: %v", contentType, err)
} }
if mimetype != expectedType { return err == nil && mimetype == expectedType
return errdefs.InvalidParameter(errors.Errorf("unsupported Content-Type header (%s): must be '%s'", contentType, expectedType))
}
return nil
} }
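A short sketch of how the newer `ReadJSON`/`WriteJSON` helpers above pair up inside a plain HTTP handler; the route, payload shape, and port are invented for the example.

```go
package main

import (
	"net/http"

	"github.com/docker/docker/api/server/httputils"
)

type renameRequest struct {
	Name string
}

// renameHandler decodes a JSON body (with content-type validation) and writes
// a JSON response, mirroring how API routes use these helpers.
func renameHandler(w http.ResponseWriter, r *http.Request) {
	var req renameRequest
	if err := httputils.ReadJSON(r, &req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	_ = httputils.WriteJSON(w, http.StatusOK, map[string]string{"renamed": req.Name})
}

func main() {
	http.HandleFunc("/rename", renameHandler)
	_ = http.ListenAndServe("127.0.0.1:8080", nil)
}
```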

View file

@ -1,130 +1,18 @@
package httputils // import "github.com/docker/docker/api/server/httputils" package httputils // import "github.com/docker/docker/api/server/httputils"
import ( import "testing"
"net/http"
"strings"
"testing"
)
// matchesContentType // matchesContentType
func TestJsonContentType(t *testing.T) { func TestJsonContentType(t *testing.T) {
err := matchesContentType("application/json", "application/json") if !matchesContentType("application/json", "application/json") {
if err != nil { t.Fail()
t.Error(err)
} }
err = matchesContentType("application/json; charset=utf-8", "application/json") if !matchesContentType("application/json; charset=utf-8", "application/json") {
if err != nil { t.Fail()
t.Error(err)
} }
expected := "unsupported Content-Type header (dockerapplication/json): must be 'application/json'" if matchesContentType("dockerapplication/json", "application/json") {
err = matchesContentType("dockerapplication/json", "application/json") t.Fail()
if err == nil || err.Error() != expected {
t.Errorf(`expected "%s", got "%v"`, expected, err)
}
expected = "malformed Content-Type header (foo;;;bar): mime: invalid media parameter"
err = matchesContentType("foo;;;bar", "application/json")
if err == nil || err.Error() != expected {
t.Errorf(`expected "%s", got "%v"`, expected, err)
} }
} }
func TestReadJSON(t *testing.T) {
t.Run("nil body", func(t *testing.T) {
req, err := http.NewRequest(http.MethodPost, "https://example.com/some/path", nil)
if err != nil {
t.Error(err)
}
foo := struct{}{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
})
t.Run("empty body", func(t *testing.T) {
req, err := http.NewRequest(http.MethodPost, "https://example.com/some/path", strings.NewReader(""))
if err != nil {
t.Error(err)
}
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
if foo.SomeField != "" {
t.Errorf("expected: '', got: %s", foo.SomeField)
}
})
t.Run("with valid request", func(t *testing.T) {
req, err := http.NewRequest(http.MethodPost, "https://example.com/some/path", strings.NewReader(`{"SomeField":"some value"}`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
if foo.SomeField != "some value" {
t.Errorf("expected: 'some value', got: %s", foo.SomeField)
}
})
t.Run("with whitespace", func(t *testing.T) {
req, err := http.NewRequest(http.MethodPost, "https://example.com/some/path", strings.NewReader(`
{"SomeField":"some value"}
`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
if foo.SomeField != "some value" {
t.Errorf("expected: 'some value', got: %s", foo.SomeField)
}
})
t.Run("with extra content", func(t *testing.T) {
req, err := http.NewRequest(http.MethodPost, "https://example.com/some/path", strings.NewReader(`{"SomeField":"some value"} and more content`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err == nil {
t.Error("expected an error, got none")
}
expected := "unexpected content after JSON"
if err.Error() != expected {
t.Errorf("expected: '%s', got: %s", expected, err.Error())
}
})
t.Run("invalid JSON", func(t *testing.T) {
req, err := http.NewRequest(http.MethodPost, "https://example.com/some/path", strings.NewReader(`{invalid json`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err == nil {
t.Error("expected an error, got none")
}
expected := "invalid JSON: invalid character 'i' looking for beginning of object key string"
if err.Error() != expected {
t.Errorf("expected: '%s', got: %s", expected, err.Error())
}
})
}

View file

@ -0,0 +1,15 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"encoding/json"
"net/http"
)
// WriteJSON writes the value v to the http response stream as json with standard json encoding.
func WriteJSON(w http.ResponseWriter, code int, v interface{}) error {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
enc := json.NewEncoder(w)
enc.SetEscapeHTML(false)
return enc.Encode(v)
}

View file

@ -7,8 +7,8 @@ import (
"net/url" "net/url"
"sort" "sort"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend" "github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/pkg/ioutils" "github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/jsonmessage" "github.com/docker/docker/pkg/jsonmessage"
"github.com/docker/docker/pkg/stdcopy" "github.com/docker/docker/pkg/stdcopy"
@ -16,7 +16,7 @@ import (
// WriteLogStream writes an encoded byte stream of log messages from the // WriteLogStream writes an encoded byte stream of log messages from the
// messages channel, multiplexing them with a stdcopy.Writer if mux is true // messages channel, multiplexing them with a stdcopy.Writer if mux is true
func WriteLogStream(_ context.Context, w io.Writer, msgs <-chan *backend.LogMessage, config *container.LogsOptions, mux bool) { func WriteLogStream(_ context.Context, w io.Writer, msgs <-chan *backend.LogMessage, config *types.ContainerLogsOptions, mux bool) {
wf := ioutils.NewWriteFlusher(w) wf := ioutils.NewWriteFlusher(w)
defer wf.Close() defer wf.Close()
@ -51,10 +51,10 @@ func WriteLogStream(_ context.Context, w io.Writer, msgs <-chan *backend.LogMess
logLine = append([]byte(msg.Timestamp.Format(jsonmessage.RFC3339NanoFixed)+" "), logLine...) logLine = append([]byte(msg.Timestamp.Format(jsonmessage.RFC3339NanoFixed)+" "), logLine...)
} }
if msg.Source == "stdout" && config.ShowStdout { if msg.Source == "stdout" && config.ShowStdout {
_, _ = outStream.Write(logLine) outStream.Write(logLine)
} }
if msg.Source == "stderr" && config.ShowStderr { if msg.Source == "stderr" && config.ShowStderr {
_, _ = errStream.Write(logLine) errStream.Write(logLine)
} }
} }
} }
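For reference, a minimal sketch of the stdout/stderr multiplexing that `WriteLogStream` performs through `stdcopy` writers; here both streams are written to the terminal just for demonstration.

```go
package main

import (
	"os"

	"github.com/docker/docker/pkg/stdcopy"
)

func main() {
	// Each writer prefixes its payload with a header identifying the stream,
	// so a single connection can carry both stdout and stderr.
	outStream := stdcopy.NewStdWriter(os.Stdout, stdcopy.Stdout)
	errStream := stdcopy.NewStdWriter(os.Stdout, stdcopy.Stderr)

	_, _ = outStream.Write([]byte("a stdout log line\n"))
	_, _ = errStream.Write([]byte("a stderr log line\n"))
}
```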

View file

@ -1,9 +1,9 @@
package server // import "github.com/docker/docker/api/server" package server // import "github.com/docker/docker/api/server"
import ( import (
"github.com/containerd/log"
"github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/middleware" "github.com/docker/docker/api/server/middleware"
"github.com/sirupsen/logrus"
) )
// handlerWithGlobalMiddlewares wraps the handler function for a request with // handlerWithGlobalMiddlewares wraps the handler function for a request with
@ -16,7 +16,7 @@ func (s *Server) handlerWithGlobalMiddlewares(handler httputils.APIFunc) httputi
next = m.WrapHandler(next) next = m.WrapHandler(next)
} }
if log.GetLevel() == log.DebugLevel { if s.cfg.Logging && logrus.GetLevel() == logrus.DebugLevel {
next = middleware.DebugRequestMiddleware(next) next = middleware.DebugRequestMiddleware(next)
} }

View file

@ -4,8 +4,7 @@ import (
"context" "context"
"net/http" "net/http"
"github.com/containerd/log" "github.com/sirupsen/logrus"
"github.com/docker/docker/api/types/registry"
) )
// CORSMiddleware injects CORS headers to each request // CORSMiddleware injects CORS headers to each request
@ -29,9 +28,9 @@ func (c CORSMiddleware) WrapHandler(handler func(ctx context.Context, w http.Res
corsHeaders = "*" corsHeaders = "*"
} }
log.G(ctx).Debugf("CORS header is enabled and set to: %s", corsHeaders) logrus.Debugf("CORS header is enabled and set to: %s", corsHeaders)
w.Header().Add("Access-Control-Allow-Origin", corsHeaders) w.Header().Add("Access-Control-Allow-Origin", corsHeaders)
w.Header().Add("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept, "+registry.AuthHeader) w.Header().Add("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept, X-Registry-Auth")
w.Header().Add("Access-Control-Allow-Methods", "HEAD, GET, POST, DELETE, PUT, OPTIONS") w.Header().Add("Access-Control-Allow-Methods", "HEAD, GET, POST, DELETE, PUT, OPTIONS")
return handler(ctx, w, r, vars) return handler(ctx, w, r, vars)
} }

View file

@ -8,17 +8,17 @@ import (
"net/http" "net/http"
"strings" "strings"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/pkg/ioutils" "github.com/docker/docker/pkg/ioutils"
"github.com/sirupsen/logrus"
) )
// DebugRequestMiddleware dumps the request to logger // DebugRequestMiddleware dumps the request to logger
func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
log.G(ctx).Debugf("Calling %s %s", r.Method, r.RequestURI) logrus.Debugf("Calling %s %s", r.Method, r.RequestURI)
if r.Method != http.MethodPost { if r.Method != "POST" {
return handler(ctx, w, r, vars) return handler(ctx, w, r, vars)
} }
if err := httputils.CheckForJSON(r); err != nil { if err := httputils.CheckForJSON(r); err != nil {
@ -44,9 +44,9 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri
maskSecretKeys(postForm) maskSecretKeys(postForm)
formStr, errMarshal := json.Marshal(postForm) formStr, errMarshal := json.Marshal(postForm)
if errMarshal == nil { if errMarshal == nil {
log.G(ctx).Debugf("form data: %s", string(formStr)) logrus.Debugf("form data: %s", string(formStr))
} else { } else {
log.G(ctx).Debugf("form data: %q", postForm) logrus.Debugf("form data: %q", postForm)
} }
} }

View file

@ -3,8 +3,8 @@ package middleware // import "github.com/docker/docker/api/server/middleware"
import ( import (
"testing" "testing"
"gotest.tools/v3/assert" "gotest.tools/assert"
is "gotest.tools/v3/assert/cmp" is "gotest.tools/assert/cmp"
) )
func TestMaskSecretKeys(t *testing.T) { func TestMaskSecretKeys(t *testing.T) {

View file

@ -6,7 +6,6 @@ import (
"net/http" "net/http"
"runtime" "runtime"
"github.com/docker/docker/api"
"github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/versions" "github.com/docker/docker/api/types/versions"
) )
@ -15,39 +14,18 @@ import (
// validates the client and server versions. // validates the client and server versions.
type VersionMiddleware struct { type VersionMiddleware struct {
serverVersion string serverVersion string
defaultVersion string
// defaultAPIVersion is the default API version provided by the API server, minVersion string
// specified as "major.minor". It is usually configured to the latest API
// version [github.com/docker/docker/api.DefaultVersion].
//
// API requests for API versions greater than this version are rejected by
// the server and produce a [versionUnsupportedError].
defaultAPIVersion string
// minAPIVersion is the minimum API version provided by the API server,
// specified as "major.minor".
//
// API requests for API versions lower than this version are rejected by
// the server and produce a [versionUnsupportedError].
minAPIVersion string
} }
// NewVersionMiddleware creates a VersionMiddleware with the given versions. // NewVersionMiddleware creates a new VersionMiddleware
func NewVersionMiddleware(serverVersion, defaultAPIVersion, minAPIVersion string) (*VersionMiddleware, error) { // with the default versions.
if versions.LessThan(defaultAPIVersion, api.MinSupportedAPIVersion) || versions.GreaterThan(defaultAPIVersion, api.DefaultVersion) { func NewVersionMiddleware(s, d, m string) VersionMiddleware {
return nil, fmt.Errorf("invalid default API version (%s): must be between %s and %s", defaultAPIVersion, api.MinSupportedAPIVersion, api.DefaultVersion) return VersionMiddleware{
serverVersion: s,
defaultVersion: d,
minVersion: m,
} }
if versions.LessThan(minAPIVersion, api.MinSupportedAPIVersion) || versions.GreaterThan(minAPIVersion, api.DefaultVersion) {
return nil, fmt.Errorf("invalid minimum API version (%s): must be between %s and %s", minAPIVersion, api.MinSupportedAPIVersion, api.DefaultVersion)
}
if versions.GreaterThan(minAPIVersion, defaultAPIVersion) {
return nil, fmt.Errorf("invalid API version: the minimum API version (%s) is higher than the default version (%s)", minAPIVersion, defaultAPIVersion)
}
return &VersionMiddleware{
serverVersion: serverVersion,
defaultAPIVersion: defaultAPIVersion,
minAPIVersion: minAPIVersion,
}, nil
} }
type versionUnsupportedError struct { type versionUnsupportedError struct {
@ -67,20 +45,21 @@ func (e versionUnsupportedError) InvalidParameter() {}
func (v VersionMiddleware) WrapHandler(handler func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (v VersionMiddleware) WrapHandler(handler func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
w.Header().Set("Server", fmt.Sprintf("Docker/%s (%s)", v.serverVersion, runtime.GOOS)) w.Header().Set("Server", fmt.Sprintf("Docker/%s (%s)", v.serverVersion, runtime.GOOS))
w.Header().Set("API-Version", v.defaultAPIVersion) w.Header().Set("API-Version", v.defaultVersion)
w.Header().Set("OSType", runtime.GOOS) w.Header().Set("OSType", runtime.GOOS)
apiVersion := vars["version"] apiVersion := vars["version"]
if apiVersion == "" { if apiVersion == "" {
apiVersion = v.defaultAPIVersion apiVersion = v.defaultVersion
} }
if versions.LessThan(apiVersion, v.minAPIVersion) { if versions.LessThan(apiVersion, v.minVersion) {
return versionUnsupportedError{version: apiVersion, minVersion: v.minAPIVersion} return versionUnsupportedError{version: apiVersion, minVersion: v.minVersion}
} }
if versions.GreaterThan(apiVersion, v.defaultAPIVersion) { if versions.GreaterThan(apiVersion, v.defaultVersion) {
return versionUnsupportedError{version: apiVersion, maxVersion: v.defaultAPIVersion} return versionUnsupportedError{version: apiVersion, maxVersion: v.defaultVersion}
} }
ctx = context.WithValue(ctx, httputils.APIVersionKey{}, apiVersion) ctx = context.WithValue(ctx, httputils.APIVersionKey{}, apiVersion)
return handler(ctx, w, r, vars) return handler(ctx, w, r, vars)
} }
} }
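A sketch of wiring the middleware with the left-hand constructor and constants shown above; the server version string and the wrapped handler are placeholders.

```go
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/docker/docker/api"
	"github.com/docker/docker/api/server/middleware"
)

func main() {
	// The constructor validates that the default and minimum versions fall
	// within the range the server can actually support.
	vm, err := middleware.NewVersionMiddleware("24.0.0", api.DefaultVersion, api.MinSupportedAPIVersion)
	if err != nil {
		log.Fatal(err)
	}

	// WrapHandler stores the negotiated version in the request context and
	// rejects requests outside [MinSupportedAPIVersion, DefaultVersion].
	h := vm.WrapHandler(func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
		w.WriteHeader(http.StatusOK)
		return nil
	})
	_ = h // the real server registers h on its route table
}
```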

View file

@ -2,85 +2,30 @@ package middleware // import "github.com/docker/docker/api/server/middleware"
import ( import (
"context" "context"
"fmt"
"net/http" "net/http"
"net/http/httptest" "net/http/httptest"
"runtime" "runtime"
"testing" "testing"
"github.com/docker/docker/api"
"github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/httputils"
"gotest.tools/v3/assert" "gotest.tools/assert"
is "gotest.tools/v3/assert/cmp" is "gotest.tools/assert/cmp"
) )
func TestNewVersionMiddlewareValidation(t *testing.T) {
tests := []struct {
doc, defaultVersion, minVersion, expectedErr string
}{
{
doc: "defaults",
defaultVersion: api.DefaultVersion,
minVersion: api.MinSupportedAPIVersion,
},
{
doc: "invalid default lower than min",
defaultVersion: api.MinSupportedAPIVersion,
minVersion: api.DefaultVersion,
expectedErr: fmt.Sprintf("invalid API version: the minimum API version (%s) is higher than the default version (%s)", api.DefaultVersion, api.MinSupportedAPIVersion),
},
{
doc: "invalid default too low",
defaultVersion: "0.1",
minVersion: api.MinSupportedAPIVersion,
expectedErr: fmt.Sprintf("invalid default API version (0.1): must be between %s and %s", api.MinSupportedAPIVersion, api.DefaultVersion),
},
{
doc: "invalid default too high",
defaultVersion: "9999.9999",
minVersion: api.DefaultVersion,
expectedErr: fmt.Sprintf("invalid default API version (9999.9999): must be between %s and %s", api.MinSupportedAPIVersion, api.DefaultVersion),
},
{
doc: "invalid minimum too low",
defaultVersion: api.MinSupportedAPIVersion,
minVersion: "0.1",
expectedErr: fmt.Sprintf("invalid minimum API version (0.1): must be between %s and %s", api.MinSupportedAPIVersion, api.DefaultVersion),
},
{
doc: "invalid minimum too high",
defaultVersion: api.DefaultVersion,
minVersion: "9999.9999",
expectedErr: fmt.Sprintf("invalid minimum API version (9999.9999): must be between %s and %s", api.MinSupportedAPIVersion, api.DefaultVersion),
},
}
for _, tc := range tests {
tc := tc
t.Run(tc.doc, func(t *testing.T) {
_, err := NewVersionMiddleware("1.2.3", tc.defaultVersion, tc.minVersion)
if tc.expectedErr == "" {
assert.Check(t, err)
} else {
assert.Check(t, is.Error(err, tc.expectedErr))
}
})
}
}
func TestVersionMiddlewareVersion(t *testing.T) { func TestVersionMiddlewareVersion(t *testing.T) {
expectedVersion := "<not set>" defaultVersion := "1.10.0"
minVersion := "1.2.0"
expectedVersion := defaultVersion
handler := func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { handler := func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v := httputils.VersionFromContext(ctx) v := httputils.VersionFromContext(ctx)
assert.Check(t, is.Equal(expectedVersion, v)) assert.Check(t, is.Equal(expectedVersion, v))
return nil return nil
} }
m, err := NewVersionMiddleware("1.2.3", api.DefaultVersion, api.MinSupportedAPIVersion) m := NewVersionMiddleware(defaultVersion, defaultVersion, minVersion)
assert.NilError(t, err)
h := m.WrapHandler(handler) h := m.WrapHandler(handler)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil) req, _ := http.NewRequest("GET", "/containers/json", nil)
resp := httptest.NewRecorder() resp := httptest.NewRecorder()
ctx := context.Background() ctx := context.Background()
@ -90,19 +35,19 @@ func TestVersionMiddlewareVersion(t *testing.T) {
errString string errString string
}{ }{
{ {
expectedVersion: api.DefaultVersion, expectedVersion: "1.10.0",
}, },
{ {
reqVersion: api.MinSupportedAPIVersion, reqVersion: "1.9.0",
expectedVersion: api.MinSupportedAPIVersion, expectedVersion: "1.9.0",
}, },
{ {
reqVersion: "0.1", reqVersion: "0.1",
errString: fmt.Sprintf("client version 0.1 is too old. Minimum supported API version is %s, please upgrade your client to a newer version", api.MinSupportedAPIVersion), errString: "client version 0.1 is too old. Minimum supported API version is 1.2.0, please upgrade your client to a newer version",
}, },
{ {
reqVersion: "9999.9999", reqVersion: "9999.9999",
errString: fmt.Sprintf("client version 9999.9999 is too new. Maximum supported API version is %s", api.DefaultVersion), errString: "client version 9999.9999 is too new. Maximum supported API version is 1.10.0",
}, },
} }
@ -126,21 +71,22 @@ func TestVersionMiddlewareWithErrorsReturnsHeaders(t *testing.T) {
return nil return nil
} }
m, err := NewVersionMiddleware("1.2.3", api.DefaultVersion, api.MinSupportedAPIVersion) defaultVersion := "1.10.0"
assert.NilError(t, err) minVersion := "1.2.0"
m := NewVersionMiddleware(defaultVersion, defaultVersion, minVersion)
h := m.WrapHandler(handler) h := m.WrapHandler(handler)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil) req, _ := http.NewRequest("GET", "/containers/json", nil)
resp := httptest.NewRecorder() resp := httptest.NewRecorder()
ctx := context.Background() ctx := context.Background()
vars := map[string]string{"version": "0.1"} vars := map[string]string{"version": "0.1"}
err = h(ctx, resp, req, vars) err := h(ctx, resp, req, vars)
assert.Check(t, is.ErrorContains(err, "")) assert.Check(t, is.ErrorContains(err, ""))
hdr := resp.Result().Header hdr := resp.Result().Header
assert.Check(t, is.Contains(hdr.Get("Server"), "Docker/1.2.3")) assert.Check(t, is.Contains(hdr.Get("Server"), "Docker/"+defaultVersion))
assert.Check(t, is.Contains(hdr.Get("Server"), runtime.GOOS)) assert.Check(t, is.Contains(hdr.Get("Server"), runtime.GOOS))
assert.Check(t, is.Equal(hdr.Get("API-Version"), api.DefaultVersion)) assert.Check(t, is.Equal(hdr.Get("API-Version"), defaultVersion))
assert.Check(t, is.Equal(hdr.Get("OSType"), runtime.GOOS)) assert.Check(t, is.Equal(hdr.Get("OSType"), runtime.GOOS))
} }

View file

@ -15,6 +15,7 @@ type Backend interface {
// Prune build cache // Prune build cache
PruneCache(context.Context, types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error) PruneCache(context.Context, types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error)
Cancel(context.Context, string) error Cancel(context.Context, string) error
} }

View file

@ -1,8 +1,6 @@
package build // import "github.com/docker/docker/api/server/router/build" package build // import "github.com/docker/docker/api/server/router/build"
import ( import (
"runtime"
"github.com/docker/docker/api/server/router" "github.com/docker/docker/api/server/router"
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
) )
@ -12,13 +10,15 @@ type buildRouter struct {
backend Backend backend Backend
daemon experimentalProvider daemon experimentalProvider
routes []router.Route routes []router.Route
features *map[string]bool
} }
// NewRouter initializes a new build router // NewRouter initializes a new build router
func NewRouter(b Backend, d experimentalProvider) router.Router { func NewRouter(b Backend, d experimentalProvider, features *map[string]bool) router.Router {
r := &buildRouter{ r := &buildRouter{
backend: b, backend: b,
daemon: d, daemon: d,
features: features,
} }
r.initRoutes() r.initRoutes()
return r return r
@ -37,24 +37,17 @@ func (r *buildRouter) initRoutes() {
} }
} }
// BuilderVersion derives the default docker builder version from the config. // BuilderVersion derives the default docker builder version from the config
// // Note: it is valid to have BuilderVersion unset which means it is up to the
// The default on Linux is version "2" (BuildKit), but the daemon can be // client to choose which builder to use.
// configured to recommend version "1" (classic Builder). Windows does not
// yet support BuildKit for native Windows images, and uses "1" (classic builder)
// as a default.
//
// This value is only a recommendation as advertised by the daemon, and it is
// up to the client to choose which builder to use.
func BuilderVersion(features map[string]bool) types.BuilderVersion { func BuilderVersion(features map[string]bool) types.BuilderVersion {
// TODO(thaJeztah) move the default to daemon/config var bv types.BuilderVersion
if runtime.GOOS == "windows" { if v, ok := features["buildkit"]; ok {
return types.BuilderV1 if v {
} bv = types.BuilderBuildKit
} else {
bv := types.BuilderBuildKit
if v, ok := features["buildkit"]; ok && !v {
bv = types.BuilderV1 bv = types.BuilderV1
} }
}
return bv return bv
} }
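As a usage sketch: with the `buildkit` feature flag enabled in the daemon configuration, both variants above advertise BuildKit ("2"); they differ only in the default when the flag is absent.

```go
package main

import (
	"fmt"

	buildrouter "github.com/docker/docker/api/server/router/build"
)

func main() {
	// The features map comes from the daemon's "features" config option.
	features := map[string]bool{"buildkit": true}
	fmt.Println(buildrouter.BuilderVersion(features)) // "2" (BuildKit)
}
```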

View file

@ -14,100 +14,90 @@ import (
"strings" "strings"
"sync" "sync"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend" "github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/versions" "github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/ioutils" "github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/progress" "github.com/docker/docker/pkg/progress"
"github.com/docker/docker/pkg/streamformatter" "github.com/docker/docker/pkg/streamformatter"
units "github.com/docker/go-units" units "github.com/docker/go-units"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus"
) )
type invalidParam struct { type invalidIsolationError string
error
func (e invalidIsolationError) Error() string {
return fmt.Sprintf("Unsupported isolation: %q", string(e))
} }
func (e invalidParam) InvalidParameter() {} func (e invalidIsolationError) InvalidParameter() {}
func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBuildOptions, error) { func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBuildOptions, error) {
options := &types.ImageBuildOptions{ version := httputils.VersionFromContext(ctx)
Version: types.BuilderV1, // Builder V1 is the default, but can be overridden options := &types.ImageBuildOptions{}
Dockerfile: r.FormValue("dockerfile"), if httputils.BoolValue(r, "forcerm") && versions.GreaterThanOrEqualTo(version, "1.12") {
SuppressOutput: httputils.BoolValue(r, "q"),
NoCache: httputils.BoolValue(r, "nocache"),
ForceRemove: httputils.BoolValue(r, "forcerm"),
PullParent: httputils.BoolValue(r, "pull"),
MemorySwap: httputils.Int64ValueOrZero(r, "memswap"),
Memory: httputils.Int64ValueOrZero(r, "memory"),
CPUShares: httputils.Int64ValueOrZero(r, "cpushares"),
CPUPeriod: httputils.Int64ValueOrZero(r, "cpuperiod"),
CPUQuota: httputils.Int64ValueOrZero(r, "cpuquota"),
CPUSetCPUs: r.FormValue("cpusetcpus"),
CPUSetMems: r.FormValue("cpusetmems"),
CgroupParent: r.FormValue("cgroupparent"),
NetworkMode: r.FormValue("networkmode"),
Tags: r.Form["t"],
ExtraHosts: r.Form["extrahosts"],
SecurityOpt: r.Form["securityopt"],
Squash: httputils.BoolValue(r, "squash"),
Target: r.FormValue("target"),
RemoteContext: r.FormValue("remote"),
SessionID: r.FormValue("session"),
BuildID: r.FormValue("buildid"),
}
if runtime.GOOS != "windows" && options.SecurityOpt != nil {
// SecurityOpt only supports "credentials-spec" on Windows, and not used on other platforms.
return nil, invalidParam{errors.New("security options are not supported on " + runtime.GOOS)}
}
if httputils.BoolValue(r, "forcerm") {
options.Remove = true options.Remove = true
} else if r.FormValue("rm") == "" { } else if r.FormValue("rm") == "" && versions.GreaterThanOrEqualTo(version, "1.12") {
options.Remove = true options.Remove = true
} else { } else {
options.Remove = httputils.BoolValue(r, "rm") options.Remove = httputils.BoolValue(r, "rm")
} }
version := httputils.VersionFromContext(ctx) if httputils.BoolValue(r, "pull") && versions.GreaterThanOrEqualTo(version, "1.16") {
options.PullParent = true
}
options.Dockerfile = r.FormValue("dockerfile")
options.SuppressOutput = httputils.BoolValue(r, "q")
options.NoCache = httputils.BoolValue(r, "nocache")
options.ForceRemove = httputils.BoolValue(r, "forcerm")
options.MemorySwap = httputils.Int64ValueOrZero(r, "memswap")
options.Memory = httputils.Int64ValueOrZero(r, "memory")
options.CPUShares = httputils.Int64ValueOrZero(r, "cpushares")
options.CPUPeriod = httputils.Int64ValueOrZero(r, "cpuperiod")
options.CPUQuota = httputils.Int64ValueOrZero(r, "cpuquota")
options.CPUSetCPUs = r.FormValue("cpusetcpus")
options.CPUSetMems = r.FormValue("cpusetmems")
options.CgroupParent = r.FormValue("cgroupparent")
options.NetworkMode = r.FormValue("networkmode")
options.Tags = r.Form["t"]
options.ExtraHosts = r.Form["extrahosts"]
options.SecurityOpt = r.Form["securityopt"]
options.Squash = httputils.BoolValue(r, "squash")
options.Target = r.FormValue("target")
options.RemoteContext = r.FormValue("remote")
if versions.GreaterThanOrEqualTo(version, "1.32") { if versions.GreaterThanOrEqualTo(version, "1.32") {
options.Platform = r.FormValue("platform") options.Platform = r.FormValue("platform")
} }
if versions.GreaterThanOrEqualTo(version, "1.40") {
outputsJSON := r.FormValue("outputs")
if outputsJSON != "" {
var outputs []types.ImageBuildOutput
if err := json.Unmarshal([]byte(outputsJSON), &outputs); err != nil {
return nil, invalidParam{errors.Wrap(err, "invalid outputs specified")}
}
options.Outputs = outputs
}
}
if s := r.Form.Get("shmsize"); s != "" { if r.Form.Get("shmsize") != "" {
shmSize, err := strconv.ParseInt(s, 10, 64) shmSize, err := strconv.ParseInt(r.Form.Get("shmsize"), 10, 64)
if err != nil { if err != nil {
return nil, err return nil, err
} }
options.ShmSize = shmSize options.ShmSize = shmSize
} }
if i := r.FormValue("isolation"); i != "" { if i := container.Isolation(r.FormValue("isolation")); i != "" {
options.Isolation = container.Isolation(i) if !container.Isolation.IsValid(i) {
if !options.Isolation.IsValid() { return nil, invalidIsolationError(i)
return nil, invalidParam{errors.Errorf("unsupported isolation: %q", i)}
} }
options.Isolation = i
} }
if ulimitsJSON := r.FormValue("ulimits"); ulimitsJSON != "" { if runtime.GOOS != "windows" && options.SecurityOpt != nil {
buildUlimits := []*units.Ulimit{} return nil, errdefs.InvalidParameter(errors.New("The daemon on this platform does not support setting security options on build"))
}
var buildUlimits = []*units.Ulimit{}
ulimitsJSON := r.FormValue("ulimits")
if ulimitsJSON != "" {
if err := json.Unmarshal([]byte(ulimitsJSON), &buildUlimits); err != nil { if err := json.Unmarshal([]byte(ulimitsJSON), &buildUlimits); err != nil {
return nil, invalidParam{errors.Wrap(err, "error reading ulimit settings")} return nil, errors.Wrap(errdefs.InvalidParameter(err), "error reading ulimit settings")
} }
options.Ulimits = buildUlimits options.Ulimits = buildUlimits
} }
@ -124,59 +114,71 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
// the fact they mentioned it, we need to pass that along to the builder
// so that it can print a warning about "foo" being unused if there is
// no "ARG foo" in the Dockerfile.
-if buildArgsJSON := r.FormValue("buildargs"); buildArgsJSON != "" {
-buildArgs := map[string]*string{}
+buildArgsJSON := r.FormValue("buildargs")
+if buildArgsJSON != "" {
+var buildArgs = map[string]*string{}
if err := json.Unmarshal([]byte(buildArgsJSON), &buildArgs); err != nil {
-return nil, invalidParam{errors.Wrap(err, "error reading build args")}
+return nil, errors.Wrap(errdefs.InvalidParameter(err), "error reading build args")
}
options.BuildArgs = buildArgs
}
-if labelsJSON := r.FormValue("labels"); labelsJSON != "" {
-labels := map[string]string{}
+labelsJSON := r.FormValue("labels")
+if labelsJSON != "" {
+var labels = map[string]string{}
if err := json.Unmarshal([]byte(labelsJSON), &labels); err != nil {
-return nil, invalidParam{errors.Wrap(err, "error reading labels")}
+return nil, errors.Wrap(errdefs.InvalidParameter(err), "error reading labels")
}
options.Labels = labels
}
-if cacheFromJSON := r.FormValue("cachefrom"); cacheFromJSON != "" {
-cacheFrom := []string{}
+cacheFromJSON := r.FormValue("cachefrom")
+if cacheFromJSON != "" {
+var cacheFrom = []string{}
if err := json.Unmarshal([]byte(cacheFromJSON), &cacheFrom); err != nil {
-return nil, invalidParam{errors.Wrap(err, "error reading cache-from")}
+return nil, err
}
options.CacheFrom = cacheFrom
}
options.SessionID = r.FormValue("session")
-if bv := r.FormValue("version"); bv != "" {
-v, err := parseVersion(bv)
+options.BuildID = r.FormValue("buildid")
+builderVersion, err := parseVersion(r.FormValue("version"))
if err != nil {
return nil, err
}
-options.Version = v
+options.Version = builderVersion
+if versions.GreaterThanOrEqualTo(version, "1.40") {
+outputsJSON := r.FormValue("outputs")
+if outputsJSON != "" {
+var outputs []types.ImageBuildOutput
+if err := json.Unmarshal([]byte(outputsJSON), &outputs); err != nil {
+return nil, err
+}
+options.Outputs = outputs
+}
}
return options, nil
}
func parseVersion(s string) (types.BuilderVersion, error) {
-switch types.BuilderVersion(s) {
-case types.BuilderV1:
+if s == "" || s == string(types.BuilderV1) {
return types.BuilderV1, nil
-case types.BuilderBuildKit:
-return types.BuilderBuildKit, nil
-default:
-return "", invalidParam{errors.Errorf("invalid version %q", s)}
}
+if s == string(types.BuilderBuildKit) {
+return types.BuilderBuildKit, nil
+}
+return "", errors.Errorf("invalid version %s", s)
}
func (br *buildRouter) postPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
-fltrs, err := filters.FromJSON(r.Form.Get("filters"))
+filters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
-return err
+return errors.Wrap(err, "could not parse filters")
}
ksfv := r.FormValue("keep-storage")
if ksfv == "" {
@ -184,12 +186,12 @@ func (br *buildRouter) postPrune(ctx context.Context, w http.ResponseWriter, r *
}
ks, err := strconv.Atoi(ksfv)
if err != nil {
-return invalidParam{errors.Wrapf(err, "keep-storage is in bytes and expects an integer, got %v", ksfv)}
+return errors.Wrapf(err, "keep-storage is in bytes and expects an integer, got %v", ksfv)
}
opts := types.BuildCachePruneOptions{
All: httputils.BoolValue(r, "all"),
-Filters: fltrs,
+Filters: filters,
KeepStorage: int64(ks),
}
@ -205,7 +207,7 @@ func (br *buildRouter) postCancel(ctx context.Context, w http.ResponseWriter, r
id := r.FormValue("id")
if id == "" {
-return invalidParam{errors.New("build ID not provided")}
+return errors.Errorf("build ID not provided")
}
return br.backend.Cancel(ctx, id)
@ -232,11 +234,12 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
}
output := ioutils.NewWriteFlusher(ww)
-defer func() { _ = output.Close() }()
+defer output.Close()
errf := func(err error) error {
if httputils.BoolValue(r, "q") && notVerboseBuffer.Len() > 0 {
-_, _ = output.Write(notVerboseBuffer.Bytes())
+output.Write(notVerboseBuffer.Bytes())
}
// Do not write the error in the http output if it's still empty.
@ -246,7 +249,7 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
}
_, err = output.Write(streamformatter.FormatError(err))
if err != nil {
-log.G(ctx).Warnf("could not write error response: %v", err)
+logrus.Warnf("could not write error response: %v", err)
}
return nil
}
@ -258,7 +261,7 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
buildOptions.AuthConfigs = getAuthConfigs(r.Header)
if buildOptions.Squash && !br.daemon.HasExperimental() {
-return invalidParam{errors.New("squash is only supported with experimental mode")}
+return errdefs.InvalidParameter(errors.New("squash is only supported with experimental mode"))
}
out := io.Writer(output)
@ -287,13 +290,13 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
// Everything worked so if -q was provided the output from the daemon
// should be just the image ID and we'll print that to stdout.
if buildOptions.SuppressOutput {
-_, _ = fmt.Fprintln(streamformatter.NewStdoutWriter(output), imgID)
+fmt.Fprintln(streamformatter.NewStdoutWriter(output), imgID)
}
return nil
}
-func getAuthConfigs(header http.Header) map[string]registry.AuthConfig {
-authConfigs := map[string]registry.AuthConfig{}
+func getAuthConfigs(header http.Header) map[string]types.AuthConfig {
+authConfigs := map[string]types.AuthConfig{}
authConfigsEncoded := header.Get("X-Registry-Config")
if authConfigsEncoded == "" {
@ -303,7 +306,7 @@ func getAuthConfigs(header http.Header) map[string]registry.AuthConfig {
authConfigsJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authConfigsEncoded))
// Pulling an image does not error when no auth is provided so to remain
// consistent with the existing api decode errors are ignored
-_ = json.NewDecoder(authConfigsJSON).Decode(&authConfigs)
+json.NewDecoder(authConfigsJSON).Decode(&authConfigs)
return authConfigs
}
@ -425,7 +428,7 @@ func (w *wcf) notify() {
w.mu.Lock()
if !w.ready {
if w.buf.Len() > 0 {
-_, _ = io.Copy(w.Writer, w.buf)
+io.Copy(w.Writer, w.buf)
}
if w.flushed {
w.flusher.Flush()


@ -1,10 +1,10 @@
package checkpoint // import "github.com/docker/docker/api/server/router/checkpoint"
-import "github.com/docker/docker/api/types/checkpoint"
+import "github.com/docker/docker/api/types"
// Backend for Checkpoint
type Backend interface {
-CheckpointCreate(container string, config checkpoint.CreateOptions) error
-CheckpointDelete(container string, config checkpoint.DeleteOptions) error
-CheckpointList(container string, config checkpoint.ListOptions) ([]checkpoint.Summary, error)
+CheckpointCreate(container string, config types.CheckpointCreateOptions) error
+CheckpointDelete(container string, config types.CheckpointDeleteOptions) error
+CheckpointList(container string, config types.CheckpointListOptions) ([]types.Checkpoint, error)
}


@ -2,10 +2,11 @@ package checkpoint // import "github.com/docker/docker/api/server/router/checkpo
import (
"context"
+"encoding/json"
"net/http"
"github.com/docker/docker/api/server/httputils"
-"github.com/docker/docker/api/types/checkpoint"
+"github.com/docker/docker/api/types"
)
func (s *checkpointRouter) postContainerCheckpoint(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@ -13,8 +14,10 @@ func (s *checkpointRouter) postContainerCheckpoint(ctx context.Context, w http.R
return err
}
-var options checkpoint.CreateOptions
-if err := httputils.ReadJSON(r, &options); err != nil {
+var options types.CheckpointCreateOptions
+decoder := json.NewDecoder(r.Body)
+if err := decoder.Decode(&options); err != nil {
return err
}
@ -32,9 +35,10 @@ func (s *checkpointRouter) getContainerCheckpoints(ctx context.Context, w http.R
return err
}
-checkpoints, err := s.backend.CheckpointList(vars["name"], checkpoint.ListOptions{
+checkpoints, err := s.backend.CheckpointList(vars["name"], types.CheckpointListOptions{
CheckpointDir: r.Form.Get("dir"),
})
if err != nil {
return err
}
@ -47,10 +51,11 @@ func (s *checkpointRouter) deleteContainerCheckpoint(ctx context.Context, w http
return err
}
-err := s.backend.CheckpointDelete(vars["name"], checkpoint.DeleteOptions{
+err := s.backend.CheckpointDelete(vars["name"], types.CheckpointDeleteOptions{
CheckpointDir: r.Form.Get("dir"),
CheckpointID: vars["checkpoint"],
})
if err != nil {
return err
}


@ -17,29 +17,30 @@ type execBackend interface {
ContainerExecCreate(name string, config *types.ExecConfig) (string, error)
ContainerExecInspect(id string) (*backend.ExecInspect, error)
ContainerExecResize(name string, height, width int) error
-ContainerExecStart(ctx context.Context, name string, options container.ExecStartOptions) error
+ContainerExecStart(ctx context.Context, name string, stdin io.Reader, stdout io.Writer, stderr io.Writer) error
ExecExists(name string) (bool, error)
}
// copyBackend includes functions to implement to provide container copy functionality.
type copyBackend interface {
ContainerArchivePath(name string, path string) (content io.ReadCloser, stat *types.ContainerPathStat, err error)
-ContainerExport(ctx context.Context, name string, out io.Writer) error
+ContainerCopy(name string, res string) (io.ReadCloser, error)
+ContainerExport(name string, out io.Writer) error
ContainerExtractToDir(name, path string, copyUIDGID, noOverwriteDirNonDir bool, content io.Reader) error
ContainerStatPath(name string, path string) (stat *types.ContainerPathStat, err error)
}
// stateBackend includes functions to implement to provide container state lifecycle functionality.
type stateBackend interface {
-ContainerCreate(ctx context.Context, config backend.ContainerCreateConfig) (container.CreateResponse, error)
-ContainerKill(name string, signal string) error
+ContainerCreate(config types.ContainerCreateConfig) (container.ContainerCreateCreatedBody, error)
+ContainerKill(name string, sig uint64) error
ContainerPause(name string) error
ContainerRename(oldName, newName string) error
ContainerResize(name string, height, width int) error
-ContainerRestart(ctx context.Context, name string, options container.StopOptions) error
-ContainerRm(name string, config *backend.ContainerRmConfig) error
-ContainerStart(ctx context.Context, name string, checkpoint string, checkpointDir string) error
-ContainerStop(ctx context.Context, name string, options container.StopOptions) error
+ContainerRestart(name string, seconds *int) error
+ContainerRm(name string, config *types.ContainerRmConfig) error
+ContainerStart(name string, hostConfig *container.HostConfig, checkpoint string, checkpointDir string) error
+ContainerStop(name string, seconds *int) error
ContainerUnpause(name string) error
ContainerUpdate(name string, hostConfig *container.HostConfig) (container.ContainerUpdateOKBody, error)
ContainerWait(ctx context.Context, name string, condition containerpkg.WaitCondition) (<-chan containerpkg.StateStatus, error)
@ -47,12 +48,13 @@ type stateBackend interface {
// monitorBackend includes functions to implement to provide containers monitoring functionality.
type monitorBackend interface {
-ContainerChanges(ctx context.Context, name string) ([]archive.Change, error)
-ContainerInspect(ctx context.Context, name string, size bool, version string) (interface{}, error)
-ContainerLogs(ctx context.Context, name string, config *container.LogsOptions) (msgs <-chan *backend.LogMessage, tty bool, err error)
+ContainerChanges(name string) ([]archive.Change, error)
+ContainerInspect(name string, size bool, version string) (interface{}, error)
+ContainerLogs(ctx context.Context, name string, config *types.ContainerLogsOptions) (msgs <-chan *backend.LogMessage, tty bool, err error)
ContainerStats(ctx context.Context, name string, config *backend.ContainerStatsConfig) error
ContainerTop(name string, psArgs string) (*container.ContainerTopOKBody, error)
-Containers(ctx context.Context, config *container.ListOptions) ([]*types.Container, error)
+Containers(config *types.ContainerListOptions) ([]*types.Container, error)
}
// attachBackend includes function to implement to provide container attaching functionality.
@ -66,7 +68,7 @@ type systemBackend interface {
}
type commitBackend interface {
-CreateImageFromContainer(ctx context.Context, name string, config *backend.CreateImageConfig) (imageID string, err error)
+CreateImageFromContainer(name string, config *backend.CreateImageConfig) (imageID string, err error)
}
// Backend is all the methods that need to be implemented to provide container specific functionality.


@ -10,15 +10,13 @@ type containerRouter struct {
backend Backend
decoder httputils.ContainerDecoder
routes []router.Route
-cgroup2 bool
}
// NewRouter initializes a new container router
-func NewRouter(b Backend, decoder httputils.ContainerDecoder, cgroup2 bool) router.Router {
+func NewRouter(b Backend, decoder httputils.ContainerDecoder) router.Router {
r := &containerRouter{
backend: b,
decoder: decoder,
-cgroup2: cgroup2,
}
r.initRoutes()
return r
@ -56,6 +54,7 @@ func (r *containerRouter) initRoutes() {
router.NewPostRoute("/containers/{name:.*}/wait", r.postContainersWait),
router.NewPostRoute("/containers/{name:.*}/resize", r.postContainersResize),
router.NewPostRoute("/containers/{name:.*}/attach", r.postContainersAttach),
+router.NewPostRoute("/containers/{name:.*}/copy", r.postContainersCopy), // Deprecated since 1.8, Errors out since 1.12
router.NewPostRoute("/containers/{name:.*}/exec", r.postContainerExecCreate),
router.NewPostRoute("/exec/{name:.*}/start", r.postContainerExecStart),
router.NewPostRoute("/exec/{name:.*}/resize", r.postContainerExecResize),


@ -6,27 +6,21 @@ import (
"fmt"
"io"
"net/http"
-"runtime"
"strconv"
-"strings"
-"github.com/containerd/containerd/platforms"
-"github.com/containerd/log"
-"github.com/docker/docker/api/server/httpstatus"
+"syscall"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
-"github.com/docker/docker/api/types/mount"
-"github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/versions"
containerpkg "github.com/docker/docker/container"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/ioutils"
-"github.com/docker/docker/runconfig"
-ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+"github.com/docker/docker/pkg/signal"
"github.com/pkg/errors"
+"github.com/sirupsen/logrus"
"golang.org/x/net/websocket"
)
@ -39,24 +33,29 @@ func (s *containerRouter) postCommit(ctx context.Context, w http.ResponseWriter,
return err
}
+// TODO: remove pause arg, and always pause in backend
+pause := httputils.BoolValue(r, "pause")
+version := httputils.VersionFromContext(ctx)
+if r.FormValue("pause") == "" && versions.GreaterThanOrEqualTo(version, "1.13") {
+pause = true
+}
config, _, _, err := s.decoder.DecodeConfig(r.Body)
-if err != nil && !errors.Is(err, io.EOF) { // Do not fail if body is empty.
+if err != nil && err != io.EOF { //Do not fail if body is empty.
return err
}
-ref, err := httputils.RepoTagReference(r.Form.Get("repo"), r.Form.Get("tag"))
-if err != nil {
-return errdefs.InvalidParameter(err)
-}
-imgID, err := s.backend.CreateImageFromContainer(ctx, r.Form.Get("container"), &backend.CreateImageConfig{
-Pause: httputils.BoolValueOrDefault(r, "pause", true), // TODO(dnephin): remove pause arg, and always pause in backend
-Tag: ref,
+commitCfg := &backend.CreateImageConfig{
+Pause: pause,
+Repo: r.Form.Get("repo"),
+Tag: r.Form.Get("tag"),
Author: r.Form.Get("author"),
Comment: r.Form.Get("comment"),
Config: config,
Changes: r.Form["changes"],
-})
+}
+imgID, err := s.backend.CreateImageFromContainer(r.Form.Get("container"), commitCfg)
if err != nil {
return err
}
@ -73,7 +72,7 @@ func (s *containerRouter) getContainersJSON(ctx context.Context, w http.Response
return err
}
-config := &container.ListOptions{
+config := &types.ContainerListOptions{
All: httputils.BoolValue(r, "all"),
Size: httputils.BoolValue(r, "size"),
Since: r.Form.Get("since"),
@ -89,7 +88,7 @@ func (s *containerRouter) getContainersJSON(ctx context.Context, w http.Response
config.Limit = limit
}
-containers, err := s.backend.Containers(ctx, config)
+containers, err := s.backend.Containers(config)
if err != nil {
return err
}
@ -106,16 +105,14 @@ func (s *containerRouter) getContainersStats(ctx context.Context, w http.Respons
if !stream {
w.Header().Set("Content-Type", "application/json")
}
-var oneShot bool
-if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.41") {
-oneShot = httputils.BoolValueOrDefault(r, "one-shot", false)
+config := &backend.ContainerStatsConfig{
+Stream: stream,
+OutStream: w,
+Version: httputils.VersionFromContext(ctx),
}
-return s.backend.ContainerStats(ctx, vars["name"], &backend.ContainerStatsConfig{
-Stream: stream,
-OneShot: oneShot,
-OutStream: w,
-})
+return s.backend.ContainerStats(ctx, vars["name"], config)
}
func (s *containerRouter) getContainersLogs(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *containerRouter) getContainersLogs(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@ -134,7 +131,7 @@ func (s *containerRouter) getContainersLogs(ctx context.Context, w http.Response
}
containerName := vars["name"]
-logsConfig := &container.LogsOptions{
+logsConfig := &types.ContainerLogsOptions{
Follow: httputils.BoolValue(r, "follow"),
Timestamps: httputils.BoolValue(r, "timestamps"),
Since: r.Form.Get("since"),
@ -150,12 +147,6 @@ func (s *containerRouter) getContainersLogs(ctx context.Context, w http.Response
return err
}
-contentType := types.MediaTypeRawStream
-if !tty && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
-contentType = types.MediaTypeMultiplexedStream
-}
-w.Header().Set("Content-Type", contentType)
// if has a tty, we're not muxing streams. if it doesn't, we are. simple.
// this is the point of no return for writing a response. once we call
// WriteLogStream, the response has been started and errors will be
@ -165,9 +156,17 @@ func (s *containerRouter) getContainersExport(ctx context.Context, w http.Respons
}
func (s *containerRouter) getContainersExport(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
-return s.backend.ContainerExport(ctx, vars["name"], w)
+return s.backend.ContainerExport(vars["name"], w)
}
type bodyOnStartError struct{}
func (bodyOnStartError) Error() string {
return "starting container with non-empty request body was deprecated since API v1.22 and removed in v1.24"
}
func (bodyOnStartError) InvalidParameter() {}
func (s *containerRouter) postContainersStart(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *containerRouter) postContainersStart(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// If contentLength is -1, we can assumed chunked encoding // If contentLength is -1, we can assumed chunked encoding
// or more technically that the length is unknown // or more technically that the length is unknown
@ -175,17 +174,33 @@ func (s *containerRouter) postContainersStart(ctx context.Context, w http.Respon
// net/http otherwise seems to swallow any headers related to chunked encoding // net/http otherwise seems to swallow any headers related to chunked encoding
// including r.TransferEncoding // including r.TransferEncoding
// allow a nil body for backwards compatibility // allow a nil body for backwards compatibility
//
version := httputils.VersionFromContext(ctx)
var hostConfig *container.HostConfig
// A non-nil json object is at least 7 characters. // A non-nil json object is at least 7 characters.
if r.ContentLength > 7 || r.ContentLength == -1 { if r.ContentLength > 7 || r.ContentLength == -1 {
return errdefs.InvalidParameter(errors.New("starting container with non-empty request body was deprecated since API v1.22 and removed in v1.24")) if versions.GreaterThanOrEqualTo(version, "1.24") {
return bodyOnStartError{}
}
if err := httputils.CheckForJSON(r); err != nil {
return err
}
c, err := s.decoder.DecodeHostConfig(r.Body)
if err != nil {
return err
}
hostConfig = c
} }
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
if err := s.backend.ContainerStart(ctx, vars["name"], r.Form.Get("checkpoint"), r.Form.Get("checkpoint-dir")); err != nil { checkpoint := r.Form.Get("checkpoint")
checkpointDir := r.Form.Get("checkpoint-dir")
if err := s.backend.ContainerStart(vars["name"], hostConfig, checkpoint, checkpointDir); err != nil {
return err return err
} }
@ -198,37 +213,52 @@ func (s *containerRouter) postContainersStop(ctx context.Context, w http.Respons
return err
}
-var (
-options container.StopOptions
-version = httputils.VersionFromContext(ctx)
-)
-if versions.GreaterThanOrEqualTo(version, "1.42") {
-options.Signal = r.Form.Get("signal")
-}
+var seconds *int
if tmpSeconds := r.Form.Get("t"); tmpSeconds != "" {
valSeconds, err := strconv.Atoi(tmpSeconds)
if err != nil {
return err
}
-options.Timeout = &valSeconds
+seconds = &valSeconds
}
-if err := s.backend.ContainerStop(ctx, vars["name"], options); err != nil {
+if err := s.backend.ContainerStop(vars["name"], seconds); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
-func (s *containerRouter) postContainersKill(_ context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
+func (s *containerRouter) postContainersKill(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
+var sig syscall.Signal
name := vars["name"]
-if err := s.backend.ContainerKill(name, r.Form.Get("signal")); err != nil {
-return errors.Wrapf(err, "cannot kill container: %s", name)
+// If we have a signal, look at it. Otherwise, do nothing
+if sigStr := r.Form.Get("signal"); sigStr != "" {
+var err error
+if sig, err = signal.ParseSignal(sigStr); err != nil {
+return errdefs.InvalidParameter(err)
+}
+}
+if err := s.backend.ContainerKill(name, uint64(sig)); err != nil {
+var isStopped bool
+if errdefs.IsConflict(err) {
+isStopped = true
+}
+// Return error that's not caused because the container is stopped.
+// Return error if the container is not running and the api is >= 1.20
+// to keep backwards compatibility.
+version := httputils.VersionFromContext(ctx)
+if versions.GreaterThanOrEqualTo(version, "1.20") || !isStopped {
+return errors.Wrapf(err, "Cannot kill container: %s", name)
+}
}
w.WriteHeader(http.StatusNoContent)
@ -240,26 +270,21 @@ func (s *containerRouter) postContainersRestart(ctx context.Context, w http.Resp
return err
}
-var (
-options container.StopOptions
-version = httputils.VersionFromContext(ctx)
-)
-if versions.GreaterThanOrEqualTo(version, "1.42") {
-options.Signal = r.Form.Get("signal")
-}
+var seconds *int
if tmpSeconds := r.Form.Get("t"); tmpSeconds != "" {
valSeconds, err := strconv.Atoi(tmpSeconds)
if err != nil {
return err
}
-options.Timeout = &valSeconds
+seconds = &valSeconds
}
-if err := s.backend.ContainerRestart(ctx, vars["name"], options); err != nil {
+if err := s.backend.ContainerRestart(vars["name"], seconds); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
@ -304,18 +329,12 @@ func (s *containerRouter) postContainersWait(ctx context.Context, w http.Respons
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
if v := r.Form.Get("condition"); v != "" { switch container.WaitCondition(r.Form.Get("condition")) {
switch container.WaitCondition(v) {
case container.WaitConditionNotRunning:
waitCondition = containerpkg.WaitConditionNotRunning
case container.WaitConditionNextExit: case container.WaitConditionNextExit:
waitCondition = containerpkg.WaitConditionNextExit waitCondition = containerpkg.WaitConditionNextExit
case container.WaitConditionRemoved: case container.WaitConditionRemoved:
waitCondition = containerpkg.WaitConditionRemoved waitCondition = containerpkg.WaitConditionRemoved
legacyRemovalWaitPre134 = versions.LessThan(version, "1.34") legacyRemovalWaitPre134 = versions.LessThan(version, "1.34")
default:
return errdefs.InvalidParameter(errors.Errorf("invalid condition: %q", v))
}
} }
} }
@ -345,19 +364,19 @@ func (s *containerRouter) postContainersWait(ctx context.Context, w http.Respons
return nil
}
-var waitError *container.WaitExitError
+var waitError *container.ContainerWaitOKBodyError
if status.Err() != nil {
-waitError = &container.WaitExitError{Message: status.Err().Error()}
+waitError = &container.ContainerWaitOKBodyError{Message: status.Err().Error()}
}
-return json.NewEncoder(w).Encode(&container.WaitResponse{
+return json.NewEncoder(w).Encode(&container.ContainerWaitOKBody{
StatusCode: int64(status.ExitCode()),
Error: waitError,
})
}
func (s *containerRouter) getContainersChanges(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
-changes, err := s.backend.ContainerChanges(ctx, vars["name"])
+changes, err := s.backend.ContainerChanges(vars["name"])
if err != nil {
return err
}
@ -396,20 +415,19 @@ func (s *containerRouter) postContainerUpdate(ctx context.Context, w http.Respon
if err := httputils.ParseForm(r); err != nil {
return err
}
+if err := httputils.CheckForJSON(r); err != nil {
+return err
+}
var updateConfig container.UpdateConfig
-if err := httputils.ReadJSON(r, &updateConfig); err != nil {
+decoder := json.NewDecoder(r.Body)
+if err := decoder.Decode(&updateConfig); err != nil {
return err
}
if versions.LessThan(httputils.VersionFromContext(ctx), "1.40") {
updateConfig.PidsLimit = nil
}
-if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
-// Ignore KernelMemory removed in API 1.42.
-updateConfig.KernelMemory = 0
-}
if updateConfig.PidsLimit != nil && *updateConfig.PidsLimit <= 0 {
// Both `0` and `-1` are accepted to set "unlimited" when updating.
// Historically, any negative value was accepted, so treat them as
@ -444,182 +462,36 @@ func (s *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
config, hostConfig, networkingConfig, err := s.decoder.DecodeConfig(r.Body) config, hostConfig, networkingConfig, err := s.decoder.DecodeConfig(r.Body)
if err != nil { if err != nil {
if errors.Is(err, io.EOF) {
return errdefs.InvalidParameter(errors.New("invalid JSON: got EOF while reading request body"))
}
return err return err
} }
if config == nil {
return errdefs.InvalidParameter(runconfig.ErrEmptyConfig)
}
if hostConfig == nil {
hostConfig = &container.HostConfig{}
}
if networkingConfig == nil {
networkingConfig = &network.NetworkingConfig{}
}
if networkingConfig.EndpointsConfig == nil {
networkingConfig.EndpointsConfig = make(map[string]*network.EndpointSettings)
}
// The NetworkMode "default" is used as a way to express a container should
// be attached to the OS-dependant default network, in an OS-independent
// way. Doing this conversion as soon as possible ensures we have less
// NetworkMode to handle down the path (including in the
// backward-compatibility layer we have just below).
//
// Note that this is not the only place where this conversion has to be
// done (as there are various other places where containers get created).
if hostConfig.NetworkMode == "" || hostConfig.NetworkMode.IsDefault() {
hostConfig.NetworkMode = runconfig.DefaultDaemonNetworkMode()
if nw, ok := networkingConfig.EndpointsConfig[network.NetworkDefault]; ok {
networkingConfig.EndpointsConfig[hostConfig.NetworkMode.NetworkName()] = nw
delete(networkingConfig.EndpointsConfig, network.NetworkDefault)
}
}
version := httputils.VersionFromContext(ctx) version := httputils.VersionFromContext(ctx)
adjustCPUShares := versions.LessThan(version, "1.19")
// When using API 1.24 and under, the client is responsible for removing the container // When using API 1.24 and under, the client is responsible for removing the container
if versions.LessThan(version, "1.25") { if hostConfig != nil && versions.LessThan(version, "1.25") {
hostConfig.AutoRemove = false hostConfig.AutoRemove = false
} }
if versions.LessThan(version, "1.40") { if hostConfig != nil && versions.LessThan(version, "1.40") {
// Ignore BindOptions.NonRecursive because it was added in API 1.40. // Ignore BindOptions.NonRecursive because it was added in API 1.40.
for _, m := range hostConfig.Mounts { for _, m := range hostConfig.Mounts {
if bo := m.BindOptions; bo != nil { if bo := m.BindOptions; bo != nil {
bo.NonRecursive = false bo.NonRecursive = false
} }
} }
// Ignore KernelMemoryTCP because it was added in API 1.40. // Ignore KernelMemoryTCP because it was added in API 1.40.
hostConfig.KernelMemoryTCP = 0 hostConfig.KernelMemoryTCP = 0
// Ignore Capabilities because it was added in API 1.40.
hostConfig.Capabilities = nil
// Older clients (API < 1.40) expects the default to be shareable, make them happy // Older clients (API < 1.40) expects the default to be shareable, make them happy
if hostConfig.IpcMode.IsEmpty() { if hostConfig.IpcMode.IsEmpty() {
hostConfig.IpcMode = container.IPCModeShareable hostConfig.IpcMode = container.IpcMode("shareable")
} }
} }
if versions.LessThan(version, "1.41") { if hostConfig != nil && hostConfig.PidsLimit != nil && *hostConfig.PidsLimit <= 0 {
// Older clients expect the default to be "host" on cgroup v1 hosts
if !s.cgroup2 && hostConfig.CgroupnsMode.IsEmpty() {
hostConfig.CgroupnsMode = container.CgroupnsModeHost
}
}
var platform *ocispec.Platform
if versions.GreaterThanOrEqualTo(version, "1.41") {
if v := r.Form.Get("platform"); v != "" {
p, err := platforms.Parse(v)
if err != nil {
return errdefs.InvalidParameter(err)
}
platform = &p
}
}
if versions.LessThan(version, "1.42") {
for _, m := range hostConfig.Mounts {
// Ignore BindOptions.CreateMountpoint because it was added in API 1.42.
if bo := m.BindOptions; bo != nil {
bo.CreateMountpoint = false
}
// These combinations are invalid, but weren't validated in API < 1.42.
// We reset them here, so that validation doesn't produce an error.
if o := m.VolumeOptions; o != nil && m.Type != mount.TypeVolume {
m.VolumeOptions = nil
}
if o := m.TmpfsOptions; o != nil && m.Type != mount.TypeTmpfs {
m.TmpfsOptions = nil
}
if bo := m.BindOptions; bo != nil {
// Ignore BindOptions.CreateMountpoint because it was added in API 1.42.
bo.CreateMountpoint = false
}
}
if runtime.GOOS == "linux" {
// ConsoleSize is not respected by Linux daemon before API 1.42
hostConfig.ConsoleSize = [2]uint{0, 0}
}
}
if versions.GreaterThanOrEqualTo(version, "1.42") {
// Ignore KernelMemory removed in API 1.42.
hostConfig.KernelMemory = 0
for _, m := range hostConfig.Mounts {
if o := m.VolumeOptions; o != nil && m.Type != mount.TypeVolume {
return errdefs.InvalidParameter(fmt.Errorf("VolumeOptions must not be specified on mount type %q", m.Type))
}
if o := m.BindOptions; o != nil && m.Type != mount.TypeBind {
return errdefs.InvalidParameter(fmt.Errorf("BindOptions must not be specified on mount type %q", m.Type))
}
if o := m.TmpfsOptions; o != nil && m.Type != mount.TypeTmpfs {
return errdefs.InvalidParameter(fmt.Errorf("TmpfsOptions must not be specified on mount type %q", m.Type))
}
}
}
if versions.LessThan(version, "1.43") {
// Ignore Annotations because it was added in API v1.43.
hostConfig.Annotations = nil
}
defaultReadOnlyNonRecursive := false
if versions.LessThan(version, "1.44") {
if config.Healthcheck != nil {
// StartInterval was added in API 1.44
config.Healthcheck.StartInterval = 0
}
// Set ReadOnlyNonRecursive to true because it was added in API 1.44
// Before that all read-only mounts were non-recursive.
// Keep that behavior for clients on older APIs.
defaultReadOnlyNonRecursive = true
for _, m := range hostConfig.Mounts {
if m.Type == mount.TypeBind {
if m.BindOptions != nil && m.BindOptions.ReadOnlyForceRecursive {
// NOTE: that technically this is a breaking change for older
// API versions, and we should ignore the new field.
// However, this option may be incorrectly set by a client with
// the expectation that the failing to apply recursive read-only
// is enforced, so we decided to produce an error instead,
// instead of silently ignoring.
return errdefs.InvalidParameter(errors.New("BindOptions.ReadOnlyForceRecursive needs API v1.44 or newer"))
}
}
}
// Creating a container connected to several networks is not supported until v1.44.
if len(networkingConfig.EndpointsConfig) > 1 {
l := make([]string, 0, len(networkingConfig.EndpointsConfig))
for k := range networkingConfig.EndpointsConfig {
l = append(l, k)
}
return errdefs.InvalidParameter(errors.Errorf("Container cannot be created with multiple network endpoints: %s", strings.Join(l, ", ")))
}
}
if versions.LessThan(version, "1.45") {
for _, m := range hostConfig.Mounts {
if m.VolumeOptions != nil && m.VolumeOptions.Subpath != "" {
return errdefs.InvalidParameter(errors.New("VolumeOptions.Subpath needs API v1.45 or newer"))
}
}
}
var warnings []string
if warn, err := handleMACAddressBC(config, hostConfig, networkingConfig, version); err != nil {
return err
} else if warn != "" {
warnings = append(warnings, warn)
}
if hostConfig.PidsLimit != nil && *hostConfig.PidsLimit <= 0 {
// Don't set a limit if either no limit was specified, or "unlimited" was // Don't set a limit if either no limit was specified, or "unlimited" was
// explicitly set. // explicitly set.
// Both `0` and `-1` are accepted as "unlimited", and historically any // Both `0` and `-1` are accepted as "unlimited", and historically any
@ -627,107 +499,27 @@ func (s *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
hostConfig.PidsLimit = nil hostConfig.PidsLimit = nil
} }
ccr, err := s.backend.ContainerCreate(ctx, backend.ContainerCreateConfig{ ccr, err := s.backend.ContainerCreate(types.ContainerCreateConfig{
Name: name, Name: name,
Config: config, Config: config,
HostConfig: hostConfig, HostConfig: hostConfig,
NetworkingConfig: networkingConfig, NetworkingConfig: networkingConfig,
Platform: platform, AdjustCPUShares: adjustCPUShares,
DefaultReadOnlyNonRecursive: defaultReadOnlyNonRecursive,
}) })
if err != nil { if err != nil {
return err return err
} }
ccr.Warnings = append(ccr.Warnings, warnings...)
return httputils.WriteJSON(w, http.StatusCreated, ccr) return httputils.WriteJSON(w, http.StatusCreated, ccr)
} }
// handleMACAddressBC takes care of backward-compatibility for the container-wide MAC address by mutating the
// networkingConfig to set the endpoint-specific MACAddress field introduced in API v1.44. It returns a warning message
// or an error if the container-wide field was specified for API >= v1.44.
func handleMACAddressBC(config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, version string) (string, error) {
deprecatedMacAddress := config.MacAddress //nolint:staticcheck // ignore SA1019: field is deprecated, but still used on API < v1.44.
// For older versions of the API, migrate the container-wide MAC address to EndpointsConfig.
if versions.LessThan(version, "1.44") {
if deprecatedMacAddress == "" {
// If a MAC address is supplied in EndpointsConfig, discard it because the old API
// would have ignored it.
for _, ep := range networkingConfig.EndpointsConfig {
ep.MacAddress = ""
}
return "", nil
}
if !hostConfig.NetworkMode.IsBridge() && !hostConfig.NetworkMode.IsUserDefined() {
return "", runconfig.ErrConflictContainerNetworkAndMac
}
// There cannot be more than one entry in EndpointsConfig with API < 1.44.
// If there's no EndpointsConfig, create a place to store the configured address. It is
// safe to use NetworkMode as the network name, whether it's a name or id/short-id, as
// it will be normalised later and there is no other EndpointSettings object that might
// refer to this network/endpoint.
if len(networkingConfig.EndpointsConfig) == 0 {
nwName := hostConfig.NetworkMode.NetworkName()
networkingConfig.EndpointsConfig[nwName] = &network.EndpointSettings{}
}
// There's exactly one network in EndpointsConfig, either from the API or just-created.
// Migrate the container-wide setting to it.
// No need to check for a match between NetworkMode and the names/ids in EndpointsConfig,
// the old version of the API would have applied the address to this network anyway.
for _, ep := range networkingConfig.EndpointsConfig {
ep.MacAddress = deprecatedMacAddress
}
return "", nil
}
// The container-wide MacAddress parameter is deprecated and should now be specified in EndpointsConfig.
if deprecatedMacAddress == "" {
return "", nil
}
var warning string
if hostConfig.NetworkMode.IsBridge() || hostConfig.NetworkMode.IsUserDefined() {
nwName := hostConfig.NetworkMode.NetworkName()
// If there's no endpoint config, create a place to store the configured address.
if len(networkingConfig.EndpointsConfig) == 0 {
networkingConfig.EndpointsConfig[nwName] = &network.EndpointSettings{
MacAddress: deprecatedMacAddress,
}
} else {
// There is existing endpoint config - if it's not indexed by NetworkMode.Name(), we
// can't tell which network the container-wide settings was intended for. NetworkMode,
// the keys in EndpointsConfig and the NetworkID in EndpointsConfig may mix network
// name/id/short-id. It's not safe to create EndpointsConfig under the NetworkMode
// name to store the container-wide MAC address, because that may result in two sets
// of EndpointsConfig for the same network and one set will be discarded later. So,
// reject the request ...
ep, ok := networkingConfig.EndpointsConfig[nwName]
if !ok {
return "", errdefs.InvalidParameter(errors.New("if a container-wide MAC address is supplied, HostConfig.NetworkMode must match the identity of a network in NetworkSettings.Networks"))
}
// ep is the endpoint that needs the container-wide MAC address; migrate the address
// to it, or bail out if there's a mismatch.
if ep.MacAddress == "" {
ep.MacAddress = deprecatedMacAddress
} else if ep.MacAddress != deprecatedMacAddress {
return "", errdefs.InvalidParameter(errors.New("the container-wide MAC address must match the endpoint-specific MAC address for the main network, or be left empty"))
}
}
}
warning = "The container-wide MacAddress field is now deprecated. It should be specified in EndpointsConfig instead."
config.MacAddress = "" //nolint:staticcheck // ignore SA1019: field is deprecated, but still used on API < v1.44.
return warning, nil
}
func (s *containerRouter) deleteContainers(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *containerRouter) deleteContainers(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
name := vars["name"] name := vars["name"]
config := &backend.ContainerRmConfig{ config := &types.ContainerRmConfig{
ForceRemove: httputils.BoolValue(r, "force"), ForceRemove: httputils.BoolValue(r, "force"),
RemoveVolume: httputils.BoolValue(r, "v"), RemoveVolume: httputils.BoolValue(r, "v"),
RemoveLink: httputils.BoolValue(r, "link"), RemoveLink: httputils.BoolValue(r, "link"),
@ -774,8 +566,7 @@ func (s *containerRouter) postContainersAttach(ctx context.Context, w http.Respo
return errdefs.InvalidParameter(errors.Errorf("error attaching to container %s, hijack connection missing", containerName)) return errdefs.InvalidParameter(errors.Errorf("error attaching to container %s, hijack connection missing", containerName))
} }
contentType := types.MediaTypeRawStream setupStreams := func() (io.ReadCloser, io.Writer, io.Writer, error) {
setupStreams := func(multiplexed bool) (io.ReadCloser, io.Writer, io.Writer, error) {
conn, _, err := hijacker.Hijack() conn, _, err := hijacker.Hijack()
if err != nil { if err != nil {
return nil, nil, nil, err return nil, nil, nil, err
@ -785,10 +576,7 @@ func (s *containerRouter) postContainersAttach(ctx context.Context, w http.Respo
conn.Write([]byte{}) conn.Write([]byte{})
if upgrade { if upgrade {
if multiplexed && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") { fmt.Fprintf(conn, "HTTP/1.1 101 UPGRADED\r\nContent-Type: application/vnd.docker.raw-stream\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n\r\n")
contentType = types.MediaTypeMultiplexedStream
}
fmt.Fprintf(conn, "HTTP/1.1 101 UPGRADED\r\nContent-Type: "+contentType+"\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n\r\n")
} else { } else {
fmt.Fprintf(conn, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n") fmt.Fprintf(conn, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n")
} }
@ -812,16 +600,16 @@ func (s *containerRouter) postContainersAttach(ctx context.Context, w http.Respo
} }
if err = s.backend.ContainerAttach(containerName, attachConfig); err != nil { if err = s.backend.ContainerAttach(containerName, attachConfig); err != nil {
log.G(ctx).WithError(err).Errorf("Handler for %s %s returned error", r.Method, r.URL.Path) logrus.Errorf("Handler for %s %s returned error: %v", r.Method, r.URL.Path, err)
// Remember to close stream if error happens // Remember to close stream if error happens
conn, _, errHijack := hijacker.Hijack() conn, _, errHijack := hijacker.Hijack()
if errHijack != nil { if errHijack == nil {
log.G(ctx).WithError(err).Errorf("Handler for %s %s: unable to close stream; error when hijacking connection", r.Method, r.URL.Path) statusCode := errdefs.GetHTTPErrorStatusCode(err)
} else {
statusCode := httpstatus.FromError(err)
statusText := http.StatusText(statusCode) statusText := http.StatusText(statusCode)
fmt.Fprintf(conn, "HTTP/1.1 %d %s\r\nContent-Type: %s\r\n\r\n%s\r\n", statusCode, statusText, contentType, err.Error()) fmt.Fprintf(conn, "HTTP/1.1 %d %s\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n%s\r\n", statusCode, statusText, err.Error())
httputils.CloseStreams(conn) httputils.CloseStreams(conn)
} else {
logrus.Errorf("Error Hijacking: %v", err)
} }
} }
return nil return nil
@ -841,7 +629,7 @@ func (s *containerRouter) wsContainersAttach(ctx context.Context, w http.Respons
version := httputils.VersionFromContext(ctx) version := httputils.VersionFromContext(ctx)
setupStreams := func(multiplexed bool) (io.ReadCloser, io.Writer, io.Writer, error) { setupStreams := func() (io.ReadCloser, io.Writer, io.Writer, error) {
wsChan := make(chan *websocket.Conn) wsChan := make(chan *websocket.Conn)
h := func(conn *websocket.Conn) { h := func(conn *websocket.Conn) {
wsChan <- conn wsChan <- conn
@ -863,22 +651,15 @@ func (s *containerRouter) wsContainersAttach(ctx context.Context, w http.Respons
return conn, conn, conn, nil return conn, conn, conn, nil
} }
useStdin, useStdout, useStderr := true, true, true
if versions.GreaterThanOrEqualTo(version, "1.42") {
useStdin = httputils.BoolValue(r, "stdin")
useStdout = httputils.BoolValue(r, "stdout")
useStderr = httputils.BoolValue(r, "stderr")
}
attachConfig := &backend.ContainerAttachConfig{ attachConfig := &backend.ContainerAttachConfig{
GetStreams: setupStreams, GetStreams: setupStreams,
UseStdin: useStdin,
UseStdout: useStdout,
UseStderr: useStderr,
Logs: httputils.BoolValue(r, "logs"), Logs: httputils.BoolValue(r, "logs"),
Stream: httputils.BoolValue(r, "stream"), Stream: httputils.BoolValue(r, "stream"),
DetachKeys: detachKeys, DetachKeys: detachKeys,
MuxStreams: false, // never multiplex, as we rely on websocket to manage distinct streams UseStdin: true,
UseStdout: true,
UseStderr: true,
MuxStreams: false, // TODO: this should be true since it's a single stream for both stdout and stderr
} }
err = s.backend.ContainerAttach(containerName, attachConfig) err = s.backend.ContainerAttach(containerName, attachConfig)
@ -886,9 +667,9 @@ func (s *containerRouter) wsContainersAttach(ctx context.Context, w http.Respons
select { select {
case <-started: case <-started:
if err != nil { if err != nil {
log.G(ctx).Errorf("Error attaching websocket: %s", err) logrus.Errorf("Error attaching websocket: %s", err)
} else { } else {
log.G(ctx).Debug("websocket connection was closed by client") logrus.Debug("websocket connection was closed by client")
} }
return nil return nil
default: default:
@ -903,7 +684,7 @@ func (s *containerRouter) postContainersPrune(ctx context.Context, w http.Respon
pruneFilters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
-return err
+return errdefs.InvalidParameter(err)
}
pruneReport, err := s.backend.ContainersPrune(ctx, pruneFilters) pruneReport, err := s.backend.ContainersPrune(ctx, pruneFilters)


@ -1,160 +0,0 @@
package container
import (
"testing"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/network"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
func TestHandleMACAddressBC(t *testing.T) {
testcases := []struct {
name string
apiVersion string
ctrWideMAC string
networkMode container.NetworkMode
epConfig map[string]*network.EndpointSettings
expEpWithCtrWideMAC string
expEpWithNoMAC string
expCtrWideMAC string
expWarning string
expError string
}{
{
name: "old api ctr-wide mac mix id and name",
apiVersion: "1.43",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetId",
epConfig: map[string]*network.EndpointSettings{"aNetName": {}},
expEpWithCtrWideMAC: "aNetName",
expCtrWideMAC: "11:22:33:44:55:66",
},
{
name: "old api clear ep mac",
apiVersion: "1.43",
networkMode: "aNetId",
epConfig: map[string]*network.EndpointSettings{"aNetName": {MacAddress: "11:22:33:44:55:66"}},
expEpWithNoMAC: "aNetName",
},
{
name: "old api no-network ctr-wide mac",
apiVersion: "1.43",
networkMode: "none",
ctrWideMAC: "11:22:33:44:55:66",
expError: "conflicting options: mac-address and the network mode",
expCtrWideMAC: "11:22:33:44:55:66",
},
{
name: "old api create ep",
apiVersion: "1.43",
networkMode: "aNetId",
ctrWideMAC: "11:22:33:44:55:66",
epConfig: map[string]*network.EndpointSettings{},
expEpWithCtrWideMAC: "aNetId",
expCtrWideMAC: "11:22:33:44:55:66",
},
{
name: "old api migrate ctr-wide mac",
apiVersion: "1.43",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetName",
epConfig: map[string]*network.EndpointSettings{"aNetName": {}},
expEpWithCtrWideMAC: "aNetName",
expCtrWideMAC: "11:22:33:44:55:66",
},
{
name: "new api no macs",
apiVersion: "1.44",
networkMode: "aNetId",
epConfig: map[string]*network.EndpointSettings{"aNetName": {}},
},
{
name: "new api ep specific mac",
apiVersion: "1.44",
networkMode: "aNetName",
epConfig: map[string]*network.EndpointSettings{"aNetName": {MacAddress: "11:22:33:44:55:66"}},
},
{
name: "new api migrate ctr-wide mac to new ep",
apiVersion: "1.44",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetName",
epConfig: map[string]*network.EndpointSettings{},
expEpWithCtrWideMAC: "aNetName",
expWarning: "The container-wide MacAddress field is now deprecated",
expCtrWideMAC: "",
},
{
name: "new api migrate ctr-wide mac to existing ep",
apiVersion: "1.44",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetName",
epConfig: map[string]*network.EndpointSettings{"aNetName": {}},
expEpWithCtrWideMAC: "aNetName",
expWarning: "The container-wide MacAddress field is now deprecated",
expCtrWideMAC: "",
},
{
name: "new api mode vs name mismatch",
apiVersion: "1.44",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetId",
epConfig: map[string]*network.EndpointSettings{"aNetName": {}},
expError: "if a container-wide MAC address is supplied, HostConfig.NetworkMode must match the identity of a network in NetworkSettings.Networks",
expCtrWideMAC: "11:22:33:44:55:66",
},
{
name: "new api mac mismatch",
apiVersion: "1.44",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetName",
epConfig: map[string]*network.EndpointSettings{"aNetName": {MacAddress: "00:11:22:33:44:55"}},
expError: "the container-wide MAC address must match the endpoint-specific MAC address",
expCtrWideMAC: "11:22:33:44:55:66",
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
cfg := &container.Config{
MacAddress: tc.ctrWideMAC, //nolint:staticcheck // ignore SA1019: field is deprecated, but still used on API < v1.44.
}
hostCfg := &container.HostConfig{
NetworkMode: tc.networkMode,
}
epConfig := make(map[string]*network.EndpointSettings, len(tc.epConfig))
for k, v := range tc.epConfig {
v := v
epConfig[k] = v
}
netCfg := &network.NetworkingConfig{
EndpointsConfig: epConfig,
}
warning, err := handleMACAddressBC(cfg, hostCfg, netCfg, tc.apiVersion)
if tc.expError == "" {
assert.Check(t, err)
} else {
assert.Check(t, is.ErrorContains(err, tc.expError))
}
if tc.expWarning == "" {
assert.Check(t, is.Equal(warning, ""))
} else {
assert.Check(t, is.Contains(warning, tc.expWarning))
}
if tc.expEpWithCtrWideMAC != "" {
got := netCfg.EndpointsConfig[tc.expEpWithCtrWideMAC].MacAddress
assert.Check(t, is.Equal(got, tc.ctrWideMAC))
}
if tc.expEpWithNoMAC != "" {
got := netCfg.EndpointsConfig[tc.expEpWithNoMAC].MacAddress
assert.Check(t, is.Equal(got, ""))
}
gotCtrWideMAC := cfg.MacAddress //nolint:staticcheck // ignore SA1019: field is deprecated, but still used on API < v1.44.
assert.Check(t, is.Equal(gotCtrWideMAC, tc.expCtrWideMAC))
})
}
}


@@ -6,15 +6,61 @@ import (
 	"context"
 	"encoding/base64"
 	"encoding/json"
+	"errors"
 	"io"
 	"net/http"
 	"github.com/docker/docker/api/server/httputils"
 	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/versions"
+	"github.com/docker/docker/errdefs"
 	gddohttputil "github.com/golang/gddo/httputil"
 )
-// setContainerPathStatHeader encodes the stat to JSON, base64 encode, and place in a header.
+type pathError struct{}
+func (pathError) Error() string {
+	return "Path cannot be empty"
+}
+func (pathError) InvalidParameter() {}
+// postContainersCopy is deprecated in favor of getContainersArchive.
+func (s *containerRouter) postContainersCopy(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
+	// Deprecated since 1.8, Errors out since 1.12
+	version := httputils.VersionFromContext(ctx)
+	if versions.GreaterThanOrEqualTo(version, "1.24") {
+		w.WriteHeader(http.StatusNotFound)
+		return nil
+	}
+	if err := httputils.CheckForJSON(r); err != nil {
+		return err
+	}
+	cfg := types.CopyConfig{}
+	if err := json.NewDecoder(r.Body).Decode(&cfg); err != nil {
+		if err == io.EOF {
+			return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
+		}
+		return errdefs.InvalidParameter(err)
+	}
+	if cfg.Resource == "" {
+		return pathError{}
+	}
+	data, err := s.backend.ContainerCopy(vars["name"], cfg.Resource)
+	if err != nil {
+		return err
+	}
+	defer data.Close()
+	w.Header().Set("Content-Type", "application/x-tar")
+	_, err = io.Copy(w, data)
+	return err
+}
+// // Encode the stat to JSON, base64 encode, and place in a header.
 func setContainerPathStatHeader(stat *types.ContainerPathStat, header http.Header) error {
 	statJSON, err := json.Marshal(stat)
 	if err != nil {

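The older copy handler above returns a bare pathError{} value and relies on the API layer to turn it into a 400 because the error implements an InvalidParameter() marker method. A minimal standalone sketch of that errdefs-style classification pattern (simplified names, not the daemon's actual error mapping):

package main

import (
	"errors"
	"fmt"
	"net/http"
)

// invalidParameter is the marker interface checked by the HTTP layer.
type invalidParameter interface{ InvalidParameter() }

type pathError struct{}

func (pathError) Error() string     { return "Path cannot be empty" }
func (pathError) InvalidParameter() {}

// statusFromError maps an error to an HTTP status based on its marker interface.
func statusFromError(err error) int {
	var ip invalidParameter
	if errors.As(err, &ip) {
		return http.StatusBadRequest
	}
	return http.StatusInternalServerError
}

func main() {
	fmt.Println(statusFromError(pathError{})) // 400
}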
@ -2,18 +2,19 @@ package container // import "github.com/docker/docker/api/server/router/containe
import ( import (
"context" "context"
"encoding/json"
"errors"
"fmt" "fmt"
"io" "io"
"net/http" "net/http"
"strconv" "strconv"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/versions" "github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs" "github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/stdcopy" "github.com/docker/docker/pkg/stdcopy"
"github.com/sirupsen/logrus"
) )
func (s *containerRouter) getExecByID(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *containerRouter) getExecByID(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@ -37,26 +38,27 @@ func (s *containerRouter) postContainerExecCreate(ctx context.Context, w http.Re
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
if err := httputils.CheckForJSON(r); err != nil {
return err
}
name := vars["name"]
execConfig := &types.ExecConfig{} execConfig := &types.ExecConfig{}
if err := httputils.ReadJSON(r, execConfig); err != nil { if err := json.NewDecoder(r.Body).Decode(execConfig); err != nil {
return err if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
} }
if len(execConfig.Cmd) == 0 { if len(execConfig.Cmd) == 0 {
return execCommandError{} return execCommandError{}
} }
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.42") {
// Not supported by API versions before 1.42
execConfig.ConsoleSize = nil
}
// Register an instance of Exec in container. // Register an instance of Exec in container.
id, err := s.backend.ContainerExecCreate(vars["name"], execConfig) id, err := s.backend.ContainerExecCreate(name, execConfig)
if err != nil { if err != nil {
log.G(ctx).Errorf("Error setting up exec command in container %s: %v", vars["name"], err) logrus.Errorf("Error setting up exec command in container %s: %v", name, err)
return err return err
} }
@ -71,6 +73,13 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
return err return err
} }
version := httputils.VersionFromContext(ctx)
if versions.GreaterThan(version, "1.21") {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
}
var ( var (
execName = vars["name"] execName = vars["name"]
stdin, inStream io.ReadCloser stdin, inStream io.ReadCloser
@ -78,28 +87,17 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
) )
execStartCheck := &types.ExecStartCheck{} execStartCheck := &types.ExecStartCheck{}
if err := httputils.ReadJSON(r, execStartCheck); err != nil { if err := json.NewDecoder(r.Body).Decode(execStartCheck); err != nil {
return err if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
} }
if exists, err := s.backend.ExecExists(execName); !exists { if exists, err := s.backend.ExecExists(execName); !exists {
return err return err
} }
if execStartCheck.ConsoleSize != nil {
version := httputils.VersionFromContext(ctx)
// Not supported before 1.42
if versions.LessThan(version, "1.42") {
execStartCheck.ConsoleSize = nil
}
// No console without tty
if !execStartCheck.Tty {
execStartCheck.ConsoleSize = nil
}
}
if !execStartCheck.Detach { if !execStartCheck.Detach {
var err error var err error
// Setting up the streaming http interface. // Setting up the streaming http interface.
@ -110,11 +108,7 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
defer httputils.CloseStreams(inStream, outStream) defer httputils.CloseStreams(inStream, outStream)
if _, ok := r.Header["Upgrade"]; ok { if _, ok := r.Header["Upgrade"]; ok {
contentType := types.MediaTypeRawStream fmt.Fprint(outStream, "HTTP/1.1 101 UPGRADED\r\nContent-Type: application/vnd.docker.raw-stream\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n")
if !execStartCheck.Tty && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
contentType = types.MediaTypeMultiplexedStream
}
fmt.Fprint(outStream, "HTTP/1.1 101 UPGRADED\r\nContent-Type: "+contentType+"\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n")
} else { } else {
fmt.Fprint(outStream, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n") fmt.Fprint(outStream, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n")
} }
@ -133,21 +127,14 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
} }
} }
options := container.ExecStartOptions{
Stdin: stdin,
Stdout: stdout,
Stderr: stderr,
ConsoleSize: execStartCheck.ConsoleSize,
}
// Now run the user process in container. // Now run the user process in container.
// Maybe we should we pass ctx here if we're not detaching? // Maybe we should we pass ctx here if we're not detaching?
if err := s.backend.ContainerExecStart(context.Background(), execName, options); err != nil { if err := s.backend.ContainerExecStart(context.Background(), execName, stdin, stdout, stderr); err != nil {
if execStartCheck.Detach { if execStartCheck.Detach {
return err return err
} }
stdout.Write([]byte(err.Error() + "\r\n")) stdout.Write([]byte(err.Error() + "\r\n"))
log.G(ctx).Errorf("Error running exec %s in container: %v", execName, err) logrus.Errorf("Error running exec %s in container: %v", execName, err)
} }
return nil return nil
} }

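One semantic change in the exec hunk above is how the 101 "UPGRADED" response advertises its stream framing: the newer side switches to a multiplexed content type for non-TTY execs on API 1.42 and later, while the older side always sends the raw stream type. A hedged sketch of that negotiation only; the constant values mirror types.MediaTypeRawStream / types.MediaTypeMultiplexedStream and should be treated as assumptions:

package main

import "fmt"

const (
	mediaTypeRawStream         = "application/vnd.docker.raw-stream"
	mediaTypeMultiplexedStream = "application/vnd.docker.multiplexed-stream"
)

// upgradeContentType picks the framing advertised on the 101 response:
// only API >= 1.42 clients without a TTY get the multiplexed framing.
func upgradeContentType(apiVersionGTE142, tty bool) string {
	if !tty && apiVersionGTE142 {
		return mediaTypeMultiplexedStream
	}
	return mediaTypeRawStream
}

func main() {
	fmt.Println(upgradeContentType(true, false))  // multiplexed stream
	fmt.Println(upgradeContentType(false, false)) // raw stream
}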
@@ -12,7 +12,7 @@ func (s *containerRouter) getContainersByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
 	displaySize := httputils.BoolValue(r, "size")
 	version := httputils.VersionFromContext(ctx)
-	json, err := s.backend.ContainerInspect(ctx, vars["name"], displaySize, version)
+	json, err := s.backend.ContainerInspect(vars["name"], displaySize, version)
 	if err != nil {
 		return err
 	}

@@ -3,13 +3,13 @@ package distribution // import "github.com/docker/docker/api/server/router/distribution"
 import (
 	"context"
-	"github.com/distribution/reference"
 	"github.com/docker/distribution"
-	"github.com/docker/docker/api/types/registry"
+	"github.com/docker/distribution/reference"
+	"github.com/docker/docker/api/types"
 )
 // Backend is all the methods that need to be implemented
 // to provide image specific functionality.
 type Backend interface {
-	GetRepositories(context.Context, reference.Named, *registry.AuthConfig) ([]distribution.Repository, error)
+	GetRepository(context.Context, reference.Named, *types.AuthConfig) (distribution.Repository, bool, error)
 }

@ -2,20 +2,20 @@ package distribution // import "github.com/docker/docker/api/server/router/distr
import ( import (
"context" "context"
"encoding/base64"
"encoding/json" "encoding/json"
"net/http" "net/http"
"os" "strings"
"github.com/distribution/reference"
"github.com/docker/distribution"
"github.com/docker/distribution/manifest/manifestlist" "github.com/docker/distribution/manifest/manifestlist"
"github.com/docker/distribution/manifest/schema1" "github.com/docker/distribution/manifest/schema1"
"github.com/docker/distribution/manifest/schema2" "github.com/docker/distribution/manifest/schema2"
"github.com/docker/distribution/reference"
"github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/registry" "github.com/docker/docker/api/types"
distributionpkg "github.com/docker/docker/distribution" registrytypes "github.com/docker/docker/api/types/registry"
"github.com/docker/docker/errdefs" "github.com/docker/docker/errdefs"
ocispec "github.com/opencontainers/image-spec/specs-go/v1" "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors" "github.com/pkg/errors"
) )
@ -26,10 +26,25 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json")
imgName := vars["name"] var (
config = &types.AuthConfig{}
authEncoded = r.Header.Get("X-Registry-Auth")
distributionInspect registrytypes.DistributionInspect
)
if authEncoded != "" {
authJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
if err := json.NewDecoder(authJSON).Decode(&config); err != nil {
// for a search it is not an error if no auth was given
// to increase compatibility with the existing api it is defaulting to be empty
config = &types.AuthConfig{}
}
}
image := vars["name"]
// TODO why is reference.ParseAnyReference() / reference.ParseNormalizedNamed() not using the reference.ErrTagInvalidFormat (and so on) errors? // TODO why is reference.ParseAnyReference() / reference.ParseNormalizedNamed() not using the reference.ErrTagInvalidFormat (and so on) errors?
ref, err := reference.ParseAnyReference(imgName) ref, err := reference.ParseAnyReference(image)
if err != nil { if err != nil {
return errdefs.InvalidParameter(err) return errdefs.InvalidParameter(err)
} }
@ -39,58 +54,28 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
// full image ID // full image ID
return errors.Errorf("no manifest found for full image ID") return errors.Errorf("no manifest found for full image ID")
} }
return errdefs.InvalidParameter(errors.Errorf("unknown image reference format: %s", imgName)) return errdefs.InvalidParameter(errors.Errorf("unknown image reference format: %s", image))
} }
// For a search it is not an error if no auth was given. Ignore invalid distrepo, _, err := s.backend.GetRepository(ctx, namedRef, config)
// AuthConfig to increase compatibility with the existing API.
authConfig, _ := registry.DecodeAuthConfig(r.Header.Get(registry.AuthHeader))
repos, err := s.backend.GetRepositories(ctx, namedRef, authConfig)
if err != nil { if err != nil {
return err return err
} }
blobsrvc := distrepo.Blobs(ctx)
// Fetch the manifest; if a mirror is configured, try the mirror first,
// but continue with upstream on failure.
//
// FIXME(thaJeztah): construct "repositories" on-demand;
// GetRepositories() will attempt to connect to all endpoints (registries),
// but we may only need the first one if it contains the manifest we're
// looking for, or if the configured mirror is a pull-through mirror.
//
// Logic for this could be implemented similar to "distribution.Pull()",
// which uses the "pullEndpoints" utility to iterate over the list
// of endpoints;
//
// - https://github.com/moby/moby/blob/12c7411b6b7314bef130cd59f1c7384a7db06d0b/distribution/pull.go#L17-L31
// - https://github.com/moby/moby/blob/12c7411b6b7314bef130cd59f1c7384a7db06d0b/distribution/pull.go#L76-L152
var lastErr error
for _, repo := range repos {
distributionInspect, err := s.fetchManifest(ctx, repo, namedRef)
if err != nil {
lastErr = err
continue
}
return httputils.WriteJSON(w, http.StatusOK, distributionInspect)
}
return lastErr
}
func (s *distributionRouter) fetchManifest(ctx context.Context, distrepo distribution.Repository, namedRef reference.Named) (registry.DistributionInspect, error) {
var distributionInspect registry.DistributionInspect
if canonicalRef, ok := namedRef.(reference.Canonical); !ok { if canonicalRef, ok := namedRef.(reference.Canonical); !ok {
namedRef = reference.TagNameOnly(namedRef) namedRef = reference.TagNameOnly(namedRef)
taggedRef, ok := namedRef.(reference.NamedTagged) taggedRef, ok := namedRef.(reference.NamedTagged)
if !ok { if !ok {
return registry.DistributionInspect{}, errdefs.InvalidParameter(errors.Errorf("image reference not tagged: %s", namedRef)) return errdefs.InvalidParameter(errors.Errorf("image reference not tagged: %s", image))
} }
descriptor, err := distrepo.Tags(ctx).Get(ctx, taggedRef.Tag()) descriptor, err := distrepo.Tags(ctx).Get(ctx, taggedRef.Tag())
if err != nil { if err != nil {
return registry.DistributionInspect{}, err return err
} }
distributionInspect.Descriptor = ocispec.Descriptor{ distributionInspect.Descriptor = v1.Descriptor{
MediaType: descriptor.MediaType, MediaType: descriptor.MediaType,
Digest: descriptor.Digest, Digest: descriptor.Digest,
Size: descriptor.Size, Size: descriptor.Size,
@ -105,7 +90,7 @@ func (s *distributionRouter) fetchManifest(ctx context.Context, distrepo distrib
// we have a digest, so we can retrieve the manifest // we have a digest, so we can retrieve the manifest
mnfstsrvc, err := distrepo.Manifests(ctx) mnfstsrvc, err := distrepo.Manifests(ctx)
if err != nil { if err != nil {
return registry.DistributionInspect{}, err return err
} }
mnfst, err := mnfstsrvc.Get(ctx, distributionInspect.Descriptor.Digest) mnfst, err := mnfstsrvc.Get(ctx, distributionInspect.Descriptor.Digest)
if err != nil { if err != nil {
@ -117,14 +102,14 @@ func (s *distributionRouter) fetchManifest(ctx context.Context, distrepo distrib
reference.ErrNameEmpty, reference.ErrNameEmpty,
reference.ErrNameTooLong, reference.ErrNameTooLong,
reference.ErrNameNotCanonical: reference.ErrNameNotCanonical:
return registry.DistributionInspect{}, errdefs.InvalidParameter(err) return errdefs.InvalidParameter(err)
} }
return registry.DistributionInspect{}, err return err
} }
mediaType, payload, err := mnfst.Payload() mediaType, payload, err := mnfst.Payload()
if err != nil { if err != nil {
return registry.DistributionInspect{}, err return err
} }
// update MediaType because registry might return something incorrect // update MediaType because registry might return something incorrect
distributionInspect.Descriptor.MediaType = mediaType distributionInspect.Descriptor.MediaType = mediaType
@ -136,7 +121,7 @@ func (s *distributionRouter) fetchManifest(ctx context.Context, distrepo distrib
switch mnfstObj := mnfst.(type) { switch mnfstObj := mnfst.(type) {
case *manifestlist.DeserializedManifestList: case *manifestlist.DeserializedManifestList:
for _, m := range mnfstObj.Manifests { for _, m := range mnfstObj.Manifests {
distributionInspect.Platforms = append(distributionInspect.Platforms, ocispec.Platform{ distributionInspect.Platforms = append(distributionInspect.Platforms, v1.Platform{
Architecture: m.Platform.Architecture, Architecture: m.Platform.Architecture,
OS: m.Platform.OS, OS: m.Platform.OS,
OSVersion: m.Platform.OSVersion, OSVersion: m.Platform.OSVersion,
@ -145,9 +130,8 @@ func (s *distributionRouter) fetchManifest(ctx context.Context, distrepo distrib
}) })
} }
case *schema2.DeserializedManifest: case *schema2.DeserializedManifest:
blobStore := distrepo.Blobs(ctx) configJSON, err := blobsrvc.Get(ctx, mnfstObj.Config.Digest)
configJSON, err := blobStore.Get(ctx, mnfstObj.Config.Digest) var platform v1.Platform
var platform ocispec.Platform
if err == nil { if err == nil {
err := json.Unmarshal(configJSON, &platform) err := json.Unmarshal(configJSON, &platform)
if err == nil && (platform.OS != "" || platform.Architecture != "") { if err == nil && (platform.OS != "" || platform.Architecture != "") {
@ -155,14 +139,12 @@ func (s *distributionRouter) fetchManifest(ctx context.Context, distrepo distrib
} }
} }
case *schema1.SignedManifest: case *schema1.SignedManifest:
if os.Getenv("DOCKER_ENABLE_DEPRECATED_PULL_SCHEMA_1_IMAGE") == "" { platform := v1.Platform{
return registry.DistributionInspect{}, distributionpkg.DeprecatedSchema1ImageError(namedRef)
}
platform := ocispec.Platform{
Architecture: mnfstObj.Architecture, Architecture: mnfstObj.Architecture,
OS: "linux", OS: "linux",
} }
distributionInspect.Platforms = append(distributionInspect.Platforms, platform) distributionInspect.Platforms = append(distributionInspect.Platforms, platform)
} }
return distributionInspect, nil
return httputils.WriteJSON(w, http.StatusOK, distributionInspect)
} }

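Both versions of the distribution handler above decode the X-Registry-Auth header the same way: a base64url-encoded JSON auth config, where an empty or malformed value silently falls back to anonymous access for compatibility. A standalone sketch of that decoding; the struct fields here are assumptions, the daemon uses its own AuthConfig type (newer code via registry.DecodeAuthConfig):

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

type authConfig struct {
	Username string `json:"username"`
	Password string `json:"password"`
}

// decodeRegistryAuth decodes the X-Registry-Auth header value; invalid auth
// is ignored rather than rejected, matching the behaviour shown in the diff.
func decodeRegistryAuth(header string) *authConfig {
	cfg := &authConfig{}
	if header == "" {
		return cfg
	}
	dec := json.NewDecoder(base64.NewDecoder(base64.URLEncoding, strings.NewReader(header)))
	if err := dec.Decode(cfg); err != nil {
		return &authConfig{}
	}
	return cfg
}

func main() {
	raw, _ := json.Marshal(authConfig{Username: "someuser", Password: "secret"})
	hdr := base64.URLEncoding.EncodeToString(raw)
	fmt.Println(decodeRegistryAuth(hdr).Username) // someuser
}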
@ -1,13 +1,7 @@
package grpc // import "github.com/docker/docker/api/server/router/grpc" package grpc // import "github.com/docker/docker/api/server/router/grpc"
import ( import (
"context"
"strings"
"github.com/docker/docker/api/server/router" "github.com/docker/docker/api/server/router"
grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
"github.com/moby/buildkit/util/grpcerrors"
"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
"golang.org/x/net/http2" "golang.org/x/net/http2"
"google.golang.org/grpc" "google.golang.org/grpc"
) )
@ -20,12 +14,9 @@ type grpcRouter struct {
// NewRouter initializes a new grpc http router // NewRouter initializes a new grpc http router
func NewRouter(backends ...Backend) router.Router { func NewRouter(backends ...Backend) router.Router {
unary := grpc.UnaryInterceptor(grpc_middleware.ChainUnaryServer(unaryInterceptor(), grpcerrors.UnaryServerInterceptor))
stream := grpc.StreamInterceptor(grpc_middleware.ChainStreamServer(otelgrpc.StreamServerInterceptor(), grpcerrors.StreamServerInterceptor)) //nolint:staticcheck // TODO(thaJeztah): ignore SA1019 for deprecated options: see https://github.com/moby/moby/issues/47437
r := &grpcRouter{ r := &grpcRouter{
h2Server: &http2.Server{}, h2Server: &http2.Server{},
grpcServer: grpc.NewServer(unary, stream), grpcServer: grpc.NewServer(),
} }
for _, b := range backends { for _, b := range backends {
b.RegisterGRPC(r.grpcServer) b.RegisterGRPC(r.grpcServer)
@ -35,26 +26,12 @@ func NewRouter(backends ...Backend) router.Router {
} }
// Routes returns the available routers to the session controller // Routes returns the available routers to the session controller
func (gr *grpcRouter) Routes() []router.Route { func (r *grpcRouter) Routes() []router.Route {
return gr.routes return r.routes
} }
func (gr *grpcRouter) initRoutes() { func (r *grpcRouter) initRoutes() {
gr.routes = []router.Route{ r.routes = []router.Route{
router.NewPostRoute("/grpc", gr.serveGRPC), router.NewPostRoute("/grpc", r.serveGRPC),
}
}
func unaryInterceptor() grpc.UnaryServerInterceptor {
withTrace := otelgrpc.UnaryServerInterceptor() //nolint:staticcheck // TODO(thaJeztah): ignore SA1019 for deprecated options: see https://github.com/moby/moby/issues/47437
return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp interface{}, err error) {
// This method is used by the clients to send their traces to buildkit so they can be included
// in the daemon trace and stored in the build history record. This method can not be traced because
// it would cause an infinite loop.
if strings.HasSuffix(info.FullMethod, "opentelemetry.proto.collector.trace.v1.TraceService/Export") {
return handler(ctx, req)
}
return withTrace(ctx, req, info, handler)
} }
} }

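The newer side of the gRPC router hunk above chains a tracing interceptor with BuildKit's error interceptor, and deliberately skips tracing for the OpenTelemetry export method to avoid an infinite loop. A hedged, self-contained sketch of that selective-chaining idea using only plain grpc-go (otelgrpc and grpcerrors are left out; the traced interceptor here is a stand-in):

package main

import (
	"context"
	"strings"

	"google.golang.org/grpc"
)

// selective wraps a tracing interceptor so that trace-export calls bypass it;
// tracing the export RPC itself would recurse indefinitely.
func selective(traced grpc.UnaryServerInterceptor) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		if strings.HasSuffix(info.FullMethod, "TraceService/Export") {
			return handler(ctx, req)
		}
		return traced(ctx, req, info, handler)
	}
}

func main() {
	noop := func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		return handler(ctx, req)
	}
	_ = grpc.NewServer(grpc.ChainUnaryInterceptor(selective(noop)))
}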
@@ -4,14 +4,11 @@ import (
 	"context"
 	"io"
-	"github.com/distribution/reference"
 	"github.com/docker/docker/api/types"
-	"github.com/docker/docker/api/types/backend"
 	"github.com/docker/docker/api/types/filters"
 	"github.com/docker/docker/api/types/image"
 	"github.com/docker/docker/api/types/registry"
-	dockerimage "github.com/docker/docker/image"
-	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
+	specs "github.com/opencontainers/image-spec/specs-go/v1"
 )
 // Backend is all the methods that need to be implemented
@@ -23,25 +20,22 @@ type Backend interface {
 }
 type imageBackend interface {
-	ImageDelete(ctx context.Context, imageRef string, force, prune bool) ([]image.DeleteResponse, error)
-	ImageHistory(ctx context.Context, imageName string) ([]*image.HistoryResponseItem, error)
-	Images(ctx context.Context, opts image.ListOptions) ([]*image.Summary, error)
-	GetImage(ctx context.Context, refOrID string, options backend.GetImageOpts) (*dockerimage.Image, error)
-	TagImage(ctx context.Context, id dockerimage.ID, newRef reference.Named) error
+	ImageDelete(imageRef string, force, prune bool) ([]types.ImageDeleteResponseItem, error)
+	ImageHistory(imageName string) ([]*image.HistoryResponseItem, error)
+	Images(imageFilters filters.Args, all bool, withExtraAttrs bool) ([]*types.ImageSummary, error)
+	LookupImage(name string) (*types.ImageInspect, error)
+	TagImage(imageName, repository, tag string) (string, error)
 	ImagesPrune(ctx context.Context, pruneFilters filters.Args) (*types.ImagesPruneReport, error)
 }
 type importExportBackend interface {
-	LoadImage(ctx context.Context, inTar io.ReadCloser, outStream io.Writer, quiet bool) error
-	ImportImage(ctx context.Context, ref reference.Named, platform *ocispec.Platform, msg string, layerReader io.Reader, changes []string) (dockerimage.ID, error)
-	ExportImage(ctx context.Context, names []string, outStream io.Writer) error
+	LoadImage(inTar io.ReadCloser, outStream io.Writer, quiet bool) error
+	ImportImage(src string, repository, platform string, tag string, msg string, inConfig io.ReadCloser, outStream io.Writer, changes []string) error
+	ExportImage(names []string, outStream io.Writer) error
 }
 type registryBackend interface {
-	PullImage(ctx context.Context, ref reference.Named, platform *ocispec.Platform, metaHeaders map[string][]string, authConfig *registry.AuthConfig, outStream io.Writer) error
-	PushImage(ctx context.Context, ref reference.Named, metaHeaders map[string][]string, authConfig *registry.AuthConfig, outStream io.Writer) error
-}
-type Searcher interface {
-	Search(ctx context.Context, searchFilters filters.Args, term string, limit int, authConfig *registry.AuthConfig, headers map[string][]string) ([]registry.SearchResult, error)
+	PullImage(ctx context.Context, image, tag string, platform *specs.Platform, metaHeaders map[string][]string, authConfig *types.AuthConfig, outStream io.Writer) error
+	PushImage(ctx context.Context, image, tag string, metaHeaders map[string][]string, authConfig *types.AuthConfig, outStream io.Writer) error
+	SearchRegistryForImages(ctx context.Context, filtersArgs string, term string, limit int, authConfig *types.AuthConfig, metaHeaders map[string][]string) (*registry.SearchResults, error)
 }

@ -2,56 +2,43 @@ package image // import "github.com/docker/docker/api/server/router/image"
import ( import (
"github.com/docker/docker/api/server/router" "github.com/docker/docker/api/server/router"
"github.com/docker/docker/image"
"github.com/docker/docker/layer"
"github.com/docker/docker/reference"
) )
// imageRouter is a router to talk with the image controller // imageRouter is a router to talk with the image controller
type imageRouter struct { type imageRouter struct {
backend Backend backend Backend
searcher Searcher
referenceBackend reference.Store
imageStore image.Store
layerStore layer.Store
routes []router.Route routes []router.Route
} }
// NewRouter initializes a new image router // NewRouter initializes a new image router
func NewRouter(backend Backend, searcher Searcher, referenceBackend reference.Store, imageStore image.Store, layerStore layer.Store) router.Router { func NewRouter(backend Backend) router.Router {
ir := &imageRouter{ r := &imageRouter{backend: backend}
backend: backend, r.initRoutes()
searcher: searcher, return r
referenceBackend: referenceBackend,
imageStore: imageStore,
layerStore: layerStore,
}
ir.initRoutes()
return ir
} }
// Routes returns the available routes to the image controller // Routes returns the available routes to the image controller
func (ir *imageRouter) Routes() []router.Route { func (r *imageRouter) Routes() []router.Route {
return ir.routes return r.routes
} }
// initRoutes initializes the routes in the image router // initRoutes initializes the routes in the image router
func (ir *imageRouter) initRoutes() { func (r *imageRouter) initRoutes() {
ir.routes = []router.Route{ r.routes = []router.Route{
// GET // GET
router.NewGetRoute("/images/json", ir.getImagesJSON), router.NewGetRoute("/images/json", r.getImagesJSON),
router.NewGetRoute("/images/search", ir.getImagesSearch), router.NewGetRoute("/images/search", r.getImagesSearch),
router.NewGetRoute("/images/get", ir.getImagesGet), router.NewGetRoute("/images/get", r.getImagesGet),
router.NewGetRoute("/images/{name:.*}/get", ir.getImagesGet), router.NewGetRoute("/images/{name:.*}/get", r.getImagesGet),
router.NewGetRoute("/images/{name:.*}/history", ir.getImagesHistory), router.NewGetRoute("/images/{name:.*}/history", r.getImagesHistory),
router.NewGetRoute("/images/{name:.*}/json", ir.getImagesByName), router.NewGetRoute("/images/{name:.*}/json", r.getImagesByName),
// POST // POST
router.NewPostRoute("/images/load", ir.postImagesLoad), router.NewPostRoute("/images/load", r.postImagesLoad),
router.NewPostRoute("/images/create", ir.postImagesCreate), router.NewPostRoute("/images/create", r.postImagesCreate),
router.NewPostRoute("/images/{name:.*}/push", ir.postImagesPush), router.NewPostRoute("/images/{name:.*}/push", r.postImagesPush),
router.NewPostRoute("/images/{name:.*}/tag", ir.postImagesTag), router.NewPostRoute("/images/{name:.*}/tag", r.postImagesTag),
router.NewPostRoute("/images/prune", ir.postImagesPrune), router.NewPostRoute("/images/prune", r.postImagesPrune),
// DELETE // DELETE
router.NewDeleteRoute("/images/{name:.*}", ir.deleteImages), router.NewDeleteRoute("/images/{name:.*}", r.deleteImages),
} }
} }

@ -2,50 +2,41 @@ package image // import "github.com/docker/docker/api/server/router/image"
import ( import (
"context" "context"
"fmt" "encoding/base64"
"io" "encoding/json"
"net/http" "net/http"
"net/url"
"strconv" "strconv"
"strings" "strings"
"time"
"github.com/containerd/containerd/platforms" "github.com/containerd/containerd/platforms"
"github.com/distribution/reference"
"github.com/docker/docker/api"
"github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/filters"
imagetypes "github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/versions" "github.com/docker/docker/api/types/versions"
"github.com/docker/docker/builder/remotecontext"
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/errdefs" "github.com/docker/docker/errdefs"
"github.com/docker/docker/image"
"github.com/docker/docker/pkg/ioutils" "github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/progress"
"github.com/docker/docker/pkg/streamformatter" "github.com/docker/docker/pkg/streamformatter"
"github.com/opencontainers/go-digest" "github.com/docker/docker/pkg/system"
ocispec "github.com/opencontainers/image-spec/specs-go/v1" "github.com/docker/docker/registry"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors" "github.com/pkg/errors"
) )
// Creates an image from Pull or from Import // Creates an image from Pull or from Import
func (ir *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
var ( var (
img = r.Form.Get("fromImage") image = r.Form.Get("fromImage")
repo = r.Form.Get("repo") repo = r.Form.Get("repo")
tag = r.Form.Get("tag") tag = r.Form.Get("tag")
comment = r.Form.Get("message") message = r.Form.Get("message")
progressErr error err error
output = ioutils.NewWriteFlusher(w) output = ioutils.NewWriteFlusher(w)
platform *ocispec.Platform platform *specs.Platform
) )
defer output.Close() defer output.Close()
@ -53,16 +44,20 @@ func (ir *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrit
version := httputils.VersionFromContext(ctx) version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, "1.32") { if versions.GreaterThanOrEqualTo(version, "1.32") {
if p := r.FormValue("platform"); p != "" { apiPlatform := r.FormValue("platform")
sp, err := platforms.Parse(p) if apiPlatform != "" {
sp, err := platforms.Parse(apiPlatform)
if err != nil { if err != nil {
return err return err
} }
if err := system.ValidatePlatform(sp); err != nil {
return err
}
platform = &sp platform = &sp
} }
} }
if img != "" { // pull if image != "" { //pull
metaHeaders := map[string][]string{} metaHeaders := map[string][]string{}
for k, v := range r.Header { for k, v := range r.Header {
if strings.HasPrefix(k, "X-Meta-") { if strings.HasPrefix(k, "X-Meta-") {
@ -70,92 +65,39 @@ func (ir *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrit
} }
} }
// Special case: "pull -a" may send an image name with a authEncoded := r.Header.Get("X-Registry-Auth")
// trailing :. This is ugly, but let's not break API authConfig := &types.AuthConfig{}
// compatibility. if authEncoded != "" {
imgName := strings.TrimSuffix(img, ":") authJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
if err := json.NewDecoder(authJSON).Decode(authConfig); err != nil {
ref, err := reference.ParseNormalizedNamed(imgName) // for a pull it is not an error if no auth was given
if err != nil { // to increase compatibility with the existing api it is defaulting to be empty
return errdefs.InvalidParameter(err) authConfig = &types.AuthConfig{}
}
// TODO(thaJeztah) this could use a WithTagOrDigest() utility
if tag != "" {
// The "tag" could actually be a digest.
var dgst digest.Digest
dgst, err = digest.Parse(tag)
if err == nil {
ref, err = reference.WithDigest(reference.TrimNamed(ref), dgst)
} else {
ref, err = reference.WithTag(ref, tag)
}
if err != nil {
return errdefs.InvalidParameter(err)
} }
} }
err = s.backend.PullImage(ctx, image, tag, platform, metaHeaders, authConfig, output)
if err := validateRepoName(ref); err != nil { } else { //import
return errdefs.Forbidden(err)
}
// For a pull it is not an error if no auth was given. Ignore invalid
// AuthConfig to increase compatibility with the existing API.
authConfig, _ := registry.DecodeAuthConfig(r.Header.Get(registry.AuthHeader))
progressErr = ir.backend.PullImage(ctx, ref, platform, metaHeaders, authConfig, output)
} else { // import
src := r.Form.Get("fromSrc") src := r.Form.Get("fromSrc")
// 'err' MUST NOT be defined within this block, we need any error
tagRef, err := httputils.RepoTagReference(repo, tag) // generated from the download to be available to the output
if err != nil { // stream processing below
return errdefs.InvalidParameter(err) os := ""
} if platform != nil {
os = platform.OS
if len(comment) == 0 { }
comment = "Imported from " + src err = s.backend.ImportImage(src, repo, os, tag, message, r.Body, output, r.Form["changes"])
} }
var layerReader io.ReadCloser
defer r.Body.Close()
if src == "-" {
layerReader = r.Body
} else {
if len(strings.Split(src, "://")) == 1 {
src = "http://" + src
}
u, err := url.Parse(src)
if err != nil {
return errdefs.InvalidParameter(err)
}
resp, err := remotecontext.GetWithStatusError(u.String())
if err != nil { if err != nil {
if !output.Flushed() {
return err return err
} }
output.Write(streamformatter.FormatStatus("", "Downloading from %s", u)) output.Write(streamformatter.FormatError(err))
progressOutput := streamformatter.NewJSONProgressOutput(output, true)
layerReader = progress.NewProgressReader(resp.Body, progressOutput, resp.ContentLength, "", "Importing")
defer layerReader.Close()
}
var id image.ID
id, progressErr = ir.backend.ImportImage(ctx, tagRef, platform, comment, layerReader, r.Form["changes"])
if progressErr == nil {
output.Write(streamformatter.FormatStatus("", id.String()))
}
}
if progressErr != nil {
if !output.Flushed() {
return progressErr
}
_, _ = output.Write(streamformatter.FormatError(progressErr))
} }
return nil return nil
} }
func (ir *imageRouter) postImagesPush(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *imageRouter) postImagesPush(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
metaHeaders := map[string][]string{} metaHeaders := map[string][]string{}
for k, v := range r.Header { for k, v := range r.Header {
if strings.HasPrefix(k, "X-Meta-") { if strings.HasPrefix(k, "X-Meta-") {
@ -165,56 +107,41 @@ func (ir *imageRouter) postImagesPush(ctx context.Context, w http.ResponseWriter
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
authConfig := &types.AuthConfig{}
var authConfig *registry.AuthConfig authEncoded := r.Header.Get("X-Registry-Auth")
if authEncoded := r.Header.Get(registry.AuthHeader); authEncoded != "" { if authEncoded != "" {
// the new format is to handle the authConfig as a header. Ignore invalid // the new format is to handle the authConfig as a header
// AuthConfig to increase compatibility with the existing API. authJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
authConfig, _ = registry.DecodeAuthConfig(authEncoded) if err := json.NewDecoder(authJSON).Decode(authConfig); err != nil {
// to increase compatibility to existing api it is defaulting to be empty
authConfig = &types.AuthConfig{}
}
} else { } else {
// the old format is supported for compatibility if there was no authConfig header // the old format is supported for compatibility if there was no authConfig header
var err error if err := json.NewDecoder(r.Body).Decode(authConfig); err != nil {
authConfig, err = registry.DecodeAuthConfigBody(r.Body) return errors.Wrap(errdefs.InvalidParameter(err), "Bad parameters and missing X-Registry-Auth")
if err != nil {
return errors.Wrap(err, "bad parameters and missing X-Registry-Auth")
} }
} }
image := vars["name"]
tag := r.Form.Get("tag")
output := ioutils.NewWriteFlusher(w) output := ioutils.NewWriteFlusher(w)
defer output.Close() defer output.Close()
w.Header().Set("Content-Type", "application/json") w.Header().Set("Content-Type", "application/json")
img := vars["name"] if err := s.backend.PushImage(ctx, image, tag, metaHeaders, authConfig, output); err != nil {
tag := r.Form.Get("tag")
var ref reference.Named
// Tag is empty only in case PushOptions.All is true.
if tag != "" {
r, err := httputils.RepoTagReference(img, tag)
if err != nil {
return errdefs.InvalidParameter(err)
}
ref = r
} else {
r, err := reference.ParseNormalizedNamed(img)
if err != nil {
return errdefs.InvalidParameter(err)
}
ref = r
}
if err := ir.backend.PushImage(ctx, ref, metaHeaders, authConfig, output); err != nil {
if !output.Flushed() { if !output.Flushed() {
return err return err
} }
_, _ = output.Write(streamformatter.FormatError(err)) output.Write(streamformatter.FormatError(err))
} }
return nil return nil
} }
func (ir *imageRouter) getImagesGet(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *imageRouter) getImagesGet(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
@ -230,16 +157,16 @@ func (ir *imageRouter) getImagesGet(ctx context.Context, w http.ResponseWriter,
names = r.Form["names"] names = r.Form["names"]
} }
if err := ir.backend.ExportImage(ctx, names, output); err != nil { if err := s.backend.ExportImage(names, output); err != nil {
if !output.Flushed() { if !output.Flushed() {
return err return err
} }
_, _ = output.Write(streamformatter.FormatError(err)) output.Write(streamformatter.FormatError(err))
} }
return nil return nil
} }
func (ir *imageRouter) postImagesLoad(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *imageRouter) postImagesLoad(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
@ -249,8 +176,8 @@ func (ir *imageRouter) postImagesLoad(ctx context.Context, w http.ResponseWriter
output := ioutils.NewWriteFlusher(w) output := ioutils.NewWriteFlusher(w)
defer output.Close() defer output.Close()
if err := ir.backend.LoadImage(ctx, r.Body, output, quiet); err != nil { if err := s.backend.LoadImage(r.Body, output, quiet); err != nil {
_, _ = output.Write(streamformatter.FormatError(err)) output.Write(streamformatter.FormatError(err))
} }
return nil return nil
} }
@ -263,7 +190,7 @@ func (missingImageError) Error() string {
func (missingImageError) InvalidParameter() {} func (missingImageError) InvalidParameter() {}
func (ir *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
@ -277,7 +204,7 @@ func (ir *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter,
force := httputils.BoolValue(r, "force") force := httputils.BoolValue(r, "force")
prune := !httputils.BoolValue(r, "noprune") prune := !httputils.BoolValue(r, "noprune")
list, err := ir.backend.ImageDelete(ctx, name, force, prune) list, err := s.backend.ImageDelete(name, force, prune)
if err != nil { if err != nil {
return err return err
} }
@ -285,103 +212,16 @@ func (ir *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter,
return httputils.WriteJSON(w, http.StatusOK, list) return httputils.WriteJSON(w, http.StatusOK, list)
} }
func (ir *imageRouter) getImagesByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *imageRouter) getImagesByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
img, err := ir.backend.GetImage(ctx, vars["name"], backend.GetImageOpts{Details: true}) imageInspect, err := s.backend.LookupImage(vars["name"])
if err != nil { if err != nil {
return err return err
} }
imageInspect, err := ir.toImageInspect(img)
if err != nil {
return err
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.44") {
imageInspect.VirtualSize = imageInspect.Size //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.44.
if imageInspect.Created == "" {
// backwards compatibility for Created not existing returning "0001-01-01T00:00:00Z"
// https://github.com/moby/moby/issues/47368
imageInspect.Created = time.Time{}.Format(time.RFC3339Nano)
}
}
if versions.GreaterThanOrEqualTo(version, "1.45") {
imageInspect.Container = "" //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.45.
imageInspect.ContainerConfig = nil //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.45.
}
return httputils.WriteJSON(w, http.StatusOK, imageInspect) return httputils.WriteJSON(w, http.StatusOK, imageInspect)
} }
func (ir *imageRouter) toImageInspect(img *image.Image) (*types.ImageInspect, error) { func (s *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var repoTags, repoDigests []string
for _, ref := range img.Details.References {
switch ref.(type) {
case reference.NamedTagged:
repoTags = append(repoTags, reference.FamiliarString(ref))
case reference.Canonical:
repoDigests = append(repoDigests, reference.FamiliarString(ref))
}
}
comment := img.Comment
if len(comment) == 0 && len(img.History) > 0 {
comment = img.History[len(img.History)-1].Comment
}
// Make sure we output empty arrays instead of nil.
if repoTags == nil {
repoTags = []string{}
}
if repoDigests == nil {
repoDigests = []string{}
}
var created string
if img.Created != nil {
created = img.Created.Format(time.RFC3339Nano)
}
return &types.ImageInspect{
ID: img.ID().String(),
RepoTags: repoTags,
RepoDigests: repoDigests,
Parent: img.Parent.String(),
Comment: comment,
Created: created,
Container: img.Container, //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.45.
ContainerConfig: &img.ContainerConfig, //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.45.
DockerVersion: img.DockerVersion,
Author: img.Author,
Config: img.Config,
Architecture: img.Architecture,
Variant: img.Variant,
Os: img.OperatingSystem(),
OsVersion: img.OSVersion,
Size: img.Details.Size,
GraphDriver: types.GraphDriverData{
Name: img.Details.Driver,
Data: img.Details.Metadata,
},
RootFS: rootFSToAPIType(img.RootFS),
Metadata: imagetypes.Metadata{
LastTagTime: img.Details.LastUpdated,
},
}, nil
}
func rootFSToAPIType(rootfs *image.RootFS) types.RootFS {
var layers []string
for _, l := range rootfs.DiffIDs {
layers = append(layers, l.String())
}
return types.RootFS{
Type: rootfs.Type,
Layers: layers,
}
}
func (ir *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
@ -391,56 +231,23 @@ func (ir *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter,
return err return err
} }
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.41") {
// NOTE: filter is a shell glob string applied to repository names.
filterParam := r.Form.Get("filter") filterParam := r.Form.Get("filter")
// FIXME(vdemeester) This has been deprecated in 1.13, and is target for removal for v17.12
if filterParam != "" { if filterParam != "" {
imageFilters.Add("reference", filterParam) imageFilters.Add("reference", filterParam)
} }
}
var sharedSize bool images, err := s.backend.Images(imageFilters, httputils.BoolValue(r, "all"), false)
if versions.GreaterThanOrEqualTo(version, "1.42") {
// NOTE: Support for the "shared-size" parameter was added in API 1.42.
sharedSize = httputils.BoolValue(r, "shared-size")
}
images, err := ir.backend.Images(ctx, imagetypes.ListOptions{
All: httputils.BoolValue(r, "all"),
Filters: imageFilters,
SharedSize: sharedSize,
})
if err != nil { if err != nil {
return err return err
} }
useNone := versions.LessThan(version, "1.43")
withVirtualSize := versions.LessThan(version, "1.44")
for _, img := range images {
if useNone {
if len(img.RepoTags) == 0 && len(img.RepoDigests) == 0 {
img.RepoTags = append(img.RepoTags, "<none>:<none>")
img.RepoDigests = append(img.RepoDigests, "<none>@<none>")
}
} else {
if img.RepoTags == nil {
img.RepoTags = []string{}
}
if img.RepoDigests == nil {
img.RepoDigests = []string{}
}
}
if withVirtualSize {
img.VirtualSize = img.Size //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.44.
}
}
return httputils.WriteJSON(w, http.StatusOK, images) return httputils.WriteJSON(w, http.StatusOK, images)
} }
func (ir *imageRouter) getImagesHistory(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *imageRouter) getImagesHistory(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
history, err := ir.backend.ImageHistory(ctx, vars["name"]) name := vars["name"]
history, err := s.backend.ImageHistory(name)
if err != nil { if err != nil {
return err return err
} }
@ -448,71 +255,56 @@ func (ir *imageRouter) getImagesHistory(ctx context.Context, w http.ResponseWrit
return httputils.WriteJSON(w, http.StatusOK, history) return httputils.WriteJSON(w, http.StatusOK, history)
} }
func (ir *imageRouter) postImagesTag(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *imageRouter) postImagesTag(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
if _, err := s.backend.TagImage(vars["name"], r.Form.Get("repo"), r.Form.Get("tag")); err != nil {
ref, err := httputils.RepoTagReference(r.Form.Get("repo"), r.Form.Get("tag"))
if ref == nil || err != nil {
return errdefs.InvalidParameter(err)
}
refName := reference.FamiliarName(ref)
if refName == string(digest.Canonical) {
return errdefs.InvalidParameter(errors.New("refusing to create an ambiguous tag using digest algorithm as name"))
}
img, err := ir.backend.GetImage(ctx, vars["name"], backend.GetImageOpts{})
if err != nil {
return errdefs.NotFound(err)
}
if err := ir.backend.TagImage(ctx, img.ID(), ref); err != nil {
return err return err
} }
w.WriteHeader(http.StatusCreated) w.WriteHeader(http.StatusCreated)
return nil return nil
} }
func (ir *imageRouter) getImagesSearch(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *imageRouter) getImagesSearch(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
var (
config *types.AuthConfig
authEncoded = r.Header.Get("X-Registry-Auth")
headers = map[string][]string{}
)
var limit int if authEncoded != "" {
if r.Form.Get("limit") != "" { authJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
var err error if err := json.NewDecoder(authJSON).Decode(&config); err != nil {
limit, err = strconv.Atoi(r.Form.Get("limit")) // for a search it is not an error if no auth was given
if err != nil || limit < 0 { // to increase compatibility with the existing api it is defaulting to be empty
return errdefs.InvalidParameter(errors.Wrap(err, "invalid limit specified")) config = &types.AuthConfig{}
} }
} }
searchFilters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return err
}
// For a search it is not an error if no auth was given. Ignore invalid
// AuthConfig to increase compatibility with the existing API.
authConfig, _ := registry.DecodeAuthConfig(r.Header.Get(registry.AuthHeader))
headers := http.Header{}
for k, v := range r.Header { for k, v := range r.Header {
k = http.CanonicalHeaderKey(k)
if strings.HasPrefix(k, "X-Meta-") { if strings.HasPrefix(k, "X-Meta-") {
headers[k] = v headers[k] = v
} }
} }
headers.Set("User-Agent", dockerversion.DockerUserAgent(ctx)) limit := registry.DefaultSearchLimit
res, err := ir.searcher.Search(ctx, searchFilters, r.Form.Get("term"), limit, authConfig, headers) if r.Form.Get("limit") != "" {
limitValue, err := strconv.Atoi(r.Form.Get("limit"))
if err != nil { if err != nil {
return err return err
} }
return httputils.WriteJSON(w, http.StatusOK, res) limit = limitValue
}
query, err := s.backend.SearchRegistryForImages(ctx, r.Form.Get("filters"), r.Form.Get("term"), limit, config, headers)
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, query.Results)
} }
func (ir *imageRouter) postImagesPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *imageRouter) postImagesPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
@ -522,18 +314,9 @@ func (ir *imageRouter) postImagesPrune(ctx context.Context, w http.ResponseWrite
return err return err
} }
pruneReport, err := ir.backend.ImagesPrune(ctx, pruneFilters) pruneReport, err := s.backend.ImagesPrune(ctx, pruneFilters)
if err != nil { if err != nil {
return err return err
} }
return httputils.WriteJSON(w, http.StatusOK, pruneReport) return httputils.WriteJSON(w, http.StatusOK, pruneReport)
} }
// validateRepoName validates the name of a repository.
func validateRepoName(name reference.Named) error {
familiarName := reference.FamiliarName(name)
if familiarName == api.NoBaseImageSpecifier {
return fmt.Errorf("'%s' is a reserved name", familiarName)
}
return nil
}

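The newer side of the image-create handler above parses "fromImage" and "tag" into a typed reference, tolerating a trailing ":" from "pull -a" and accepting a digest in place of a tag. A hedged sketch of that parsing using the public reference and go-digest modules; the helper name is made up for illustration and this is not the daemon's exact code path:

package main

import (
	"fmt"
	"strings"

	"github.com/distribution/reference"
	"github.com/opencontainers/go-digest"
)

// parsePullReference normalizes the image name and attaches either a digest
// or a tag, mirroring the handling visible in the diff above.
func parsePullReference(fromImage, tag string) (reference.Named, error) {
	ref, err := reference.ParseNormalizedNamed(strings.TrimSuffix(fromImage, ":"))
	if err != nil {
		return nil, err
	}
	if tag == "" {
		return ref, nil
	}
	if dgst, err := digest.Parse(tag); err == nil {
		// The "tag" form value may actually carry a digest.
		return reference.WithDigest(reference.TrimNamed(ref), dgst)
	}
	return reference.WithTag(ref, tag)
}

func main() {
	ref, err := parsePullReference("alpine", "3.19")
	fmt.Println(ref, err) // docker.io/library/alpine:3.19 <nil>
}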
@@ -1,8 +1,6 @@
 package router // import "github.com/docker/docker/api/server/router"
 import (
-	"net/http"
 	"github.com/docker/docker/api/server/httputils"
 )
@@ -44,30 +42,30 @@ func NewRoute(method, path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
 // NewGetRoute initializes a new route with the http method GET.
 func NewGetRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
-	return NewRoute(http.MethodGet, path, handler, opts...)
+	return NewRoute("GET", path, handler, opts...)
 }
 // NewPostRoute initializes a new route with the http method POST.
 func NewPostRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
-	return NewRoute(http.MethodPost, path, handler, opts...)
+	return NewRoute("POST", path, handler, opts...)
 }
 // NewPutRoute initializes a new route with the http method PUT.
 func NewPutRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
-	return NewRoute(http.MethodPut, path, handler, opts...)
+	return NewRoute("PUT", path, handler, opts...)
 }
 // NewDeleteRoute initializes a new route with the http method DELETE.
 func NewDeleteRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
-	return NewRoute(http.MethodDelete, path, handler, opts...)
+	return NewRoute("DELETE", path, handler, opts...)
 }
 // NewOptionsRoute initializes a new route with the http method OPTIONS.
 func NewOptionsRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
-	return NewRoute(http.MethodOptions, path, handler, opts...)
+	return NewRoute("OPTIONS", path, handler, opts...)
 }
 // NewHeadRoute initializes a new route with the http method HEAD.
 func NewHeadRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
-	return NewRoute(http.MethodHead, path, handler, opts...)
+	return NewRoute("HEAD", path, handler, opts...)
 }

@@ -4,15 +4,16 @@ import (
 	"context"
 	"github.com/docker/docker/api/types"
-	"github.com/docker/docker/api/types/backend"
 	"github.com/docker/docker/api/types/filters"
 	"github.com/docker/docker/api/types/network"
+	"github.com/docker/libnetwork"
 )
 // Backend is all the methods that need to be implemented
 // to provide network specific functionality.
 type Backend interface {
-	GetNetworks(filters.Args, backend.NetworkListConfig) ([]types.NetworkResource, error)
+	FindNetwork(idName string) (libnetwork.Network, error)
+	GetNetworks(filters.Args, types.NetworkListConfig) ([]types.NetworkResource, error)
 	CreateNetwork(nc types.NetworkCreateRequest) (*types.NetworkCreateResponse, error)
 	ConnectContainerToNetwork(containerName, networkName string, endpointConfig *network.EndpointSettings) error
 	DisconnectContainerFromNetwork(containerName string, networkName string, force bool) error

@@ -0,0 +1 @@
+package network // import "github.com/docker/docker/api/server/router/network"

@ -2,19 +2,20 @@ package network // import "github.com/docker/docker/api/server/router/network"
import ( import (
"context" "context"
"encoding/json"
"io"
"net/http" "net/http"
"strconv" "strconv"
"strings" "strings"
"github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/network" "github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/versions" "github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs" "github.com/docker/docker/errdefs"
"github.com/docker/docker/libnetwork" "github.com/docker/libnetwork"
"github.com/docker/docker/libnetwork/scope" netconst "github.com/docker/libnetwork/datastore"
"github.com/pkg/errors" "github.com/pkg/errors"
) )
@ -40,7 +41,7 @@ func (n *networkRouter) getNetworksList(ctx context.Context, w http.ResponseWrit
// Combine the network list returned by Docker daemon if it is not already // Combine the network list returned by Docker daemon if it is not already
// returned by the cluster manager // returned by the cluster manager
localNetworks, err := n.backend.GetNetworks(filter, backend.NetworkListConfig{Detailed: versions.LessThan(httputils.VersionFromContext(ctx), "1.28")}) localNetworks, err := n.backend.GetNetworks(filter, types.NetworkListConfig{Detailed: versions.LessThan(httputils.VersionFromContext(ctx), "1.28")})
if err != nil { if err != nil {
return err return err
} }
@ -84,6 +85,10 @@ func (e ambigousResultsError) Error() string {
func (ambigousResultsError) InvalidParameter() {} func (ambigousResultsError) InvalidParameter() {}
func nameConflict(name string) error {
return errdefs.Conflict(libnetwork.NetworkNameError(name))
}
func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
@ -99,7 +104,7 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r
return errors.Wrapf(invalidRequestError{err}, "invalid value for verbose: %s", v) return errors.Wrapf(invalidRequestError{err}, "invalid value for verbose: %s", v)
} }
} }
networkScope := r.URL.Query().Get("scope") scope := r.URL.Query().Get("scope")
// In case multiple networks have duplicate names, return error. // In case multiple networks have duplicate names, return error.
// TODO (yongtang): should we wrap with version here for backward compatibility? // TODO (yongtang): should we wrap with version here for backward compatibility?
@ -115,23 +120,23 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r
// TODO(@cpuguy83): All this logic for figuring out which network to return does not belong here // TODO(@cpuguy83): All this logic for figuring out which network to return does not belong here
// Instead there should be a backend function to just get one network. // Instead there should be a backend function to just get one network.
filter := filters.NewArgs(filters.Arg("idOrName", term)) filter := filters.NewArgs(filters.Arg("idOrName", term))
if networkScope != "" { if scope != "" {
filter.Add("scope", networkScope) filter.Add("scope", scope)
} }
networks, _ := n.backend.GetNetworks(filter, backend.NetworkListConfig{Detailed: true, Verbose: verbose}) nw, _ := n.backend.GetNetworks(filter, types.NetworkListConfig{Detailed: true, Verbose: verbose})
for _, nw := range networks { for _, network := range nw {
if nw.ID == term { if network.ID == term {
return httputils.WriteJSON(w, http.StatusOK, nw) return httputils.WriteJSON(w, http.StatusOK, network)
} }
if nw.Name == term { if network.Name == term {
// No need to check the ID collision here as we are still in // No need to check the ID collision here as we are still in
// local scope and the network ID is unique in this scope. // local scope and the network ID is unique in this scope.
listByFullName[nw.ID] = nw listByFullName[network.ID] = network
} }
if strings.HasPrefix(nw.ID, term) { if strings.HasPrefix(network.ID, term) {
// No need to check the ID collision here as we are still in // No need to check the ID collision here as we are still in
// local scope and the network ID is unique in this scope. // local scope and the network ID is unique in this scope.
listByPartialID[nw.ID] = nw listByPartialID[network.ID] = network
} }
} }
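
The loop above resolves the user-supplied term in three passes: an exact ID match returns immediately, while exact-name and ID-prefix matches are collected into separate maps so ambiguity can be reported afterwards. Below is a minimal, stdlib-only sketch of that matching step; networkSummary and the sample data are hypothetical stand-ins, not the daemon's own types.

package main

import (
	"fmt"
	"strings"
)

// networkSummary is a stand-in for the daemon's network resource type.
type networkSummary struct {
	ID   string
	Name string
}

// matchTerm mirrors the router's strategy: an exact ID match wins outright,
// while exact-name and ID-prefix matches are collected for later
// disambiguation.
func matchTerm(term string, networks []networkSummary) (exact *networkSummary, byName, byPrefix map[string]networkSummary) {
	byName = map[string]networkSummary{}
	byPrefix = map[string]networkSummary{}
	for _, nw := range networks {
		if nw.ID == term {
			n := nw
			return &n, byName, byPrefix
		}
		if nw.Name == term {
			byName[nw.ID] = nw
		}
		if strings.HasPrefix(nw.ID, term) {
			byPrefix[nw.ID] = nw
		}
	}
	return nil, byName, byPrefix
}

func main() {
	nws := []networkSummary{{ID: "abc123", Name: "web"}, {ID: "abd456", Name: "db"}}
	exact, byName, byPrefix := matchTerm("ab", nws)
	fmt.Println(exact, len(byName), len(byPrefix)) // <nil> 0 2 -> ambiguous prefix
}
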
@ -141,7 +146,7 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r
// or if the get network was passed with a network name and scope as swarm // or if the get network was passed with a network name and scope as swarm
// return the network. Skipped using isMatchingScope because it is true if the scope // return the network. Skipped using isMatchingScope because it is true if the scope
// is not set, which would be the case if the client API is v1.30 // is not set, which would be the case if the client API is v1.30
if strings.HasPrefix(nwk.ID, term) || networkScope == scope.Swarm { if strings.HasPrefix(nwk.ID, term) || (netconst.SwarmScope == scope) {
// If we have a previous match "backend", return it, we need verbose when enabled // If we have a previous match "backend", return it, we need verbose when enabled
// ex: overlay/partial_ID or name/swarm_scope // ex: overlay/partial_ID or name/swarm_scope
if nwv, ok := listByPartialID[nwk.ID]; ok { if nwv, ok := listByPartialID[nwk.ID]; ok {
@ -153,25 +158,25 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r
} }
} }
networks, _ = n.cluster.GetNetworks(filter) nr, _ := n.cluster.GetNetworks(filter)
for _, nw := range networks { for _, network := range nr {
if nw.ID == term { if network.ID == term {
return httputils.WriteJSON(w, http.StatusOK, nw) return httputils.WriteJSON(w, http.StatusOK, network)
} }
if nw.Name == term { if network.Name == term {
// Check the ID collision as we are in swarm scope here, and // Check the ID collision as we are in swarm scope here, and
// the map (of the listByFullName) may have already had a // the map (of the listByFullName) may have already had a
// network with the same ID (from local scope previously) // network with the same ID (from local scope previously)
if _, ok := listByFullName[nw.ID]; !ok { if _, ok := listByFullName[network.ID]; !ok {
listByFullName[nw.ID] = nw listByFullName[network.ID] = network
} }
} }
if strings.HasPrefix(nw.ID, term) { if strings.HasPrefix(network.ID, term) {
// Check the ID collision as we are in swarm scope here, and // Check the ID collision as we are in swarm scope here, and
// the map (of the listByPartialID) may have already had a // the map (of the listByPartialID) may have already had a
// network with the same ID (from local scope previously) // network with the same ID (from local scope previously)
if _, ok := listByPartialID[nw.ID]; !ok { if _, ok := listByPartialID[network.ID]; !ok {
listByPartialID[nw.ID] = nw listByPartialID[network.ID] = network
} }
} }
} }
@ -200,25 +205,39 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r
} }
func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var create types.NetworkCreateRequest
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
var create types.NetworkCreateRequest if err := httputils.CheckForJSON(r); err != nil {
if err := httputils.ReadJSON(r, &create); err != nil {
return err return err
} }
if nws, err := n.cluster.GetNetworksByName(create.Name); err == nil && len(nws) > 0 { if err := json.NewDecoder(r.Body).Decode(&create); err != nil {
return libnetwork.NetworkNameError(create.Name) if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if nws, err := n.cluster.GetNetworksByName(create.Name); err == nil && len(nws) > 0 {
return nameConflict(create.Name)
} }
// For a Swarm-scoped network, this call to backend.CreateNetwork is used to
// validate the configuration. The network will not be created but, if the
// configuration is valid, ManagerRedirectError will be returned and handled
// below.
nw, err := n.backend.CreateNetwork(create) nw, err := n.backend.CreateNetwork(create)
if err != nil { if err != nil {
var warning string
if _, ok := err.(libnetwork.NetworkNameError); ok {
// check if the user defined CheckDuplicate; if set to true, return an error,
// otherwise prepare a warning message
if create.CheckDuplicate {
return nameConflict(create.Name)
}
warning = libnetwork.NetworkNameError(create.Name).Error()
}
if _, ok := err.(libnetwork.ManagerRedirectError); !ok { if _, ok := err.(libnetwork.ManagerRedirectError); !ok {
return err return err
} }
@ -228,6 +247,7 @@ func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWr
} }
nw = &types.NetworkCreateResponse{ nw = &types.NetworkCreateResponse{
ID: id, ID: id,
Warning: warning,
} }
} }
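
One side of this hunk treats a duplicate network name as a hard conflict only when the caller asked for the check (CheckDuplicate), and otherwise lets the create proceed with a warning attached to the response. Below is a rough, stdlib-only sketch of that decision; createRequest, createResponse, and nameExists are illustrative stand-ins rather than the API's types.

package main

import (
	"errors"
	"fmt"
)

// createRequest and createResponse are illustrative stand-ins, not the API types.
type createRequest struct {
	Name           string
	CheckDuplicate bool
}

type createResponse struct {
	ID      string
	Warning string
}

var errNameTaken = errors.New("network name already in use")

// create decides between failing hard and attaching a warning when the name
// collides, mirroring the router's handling of a name conflict.
func create(req createRequest, nameExists func(string) bool) (*createResponse, error) {
	if nameExists(req.Name) {
		if req.CheckDuplicate {
			return nil, errNameTaken
		}
		return &createResponse{ID: "new-id", Warning: errNameTaken.Error()}, nil
	}
	return &createResponse{ID: "new-id"}, nil
}

func main() {
	exists := func(name string) bool { return name == "web" }
	resp, err := create(createRequest{Name: "web"}, exists)
	fmt.Println(resp.Warning, err) // warning set, no error
}
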
@ -235,15 +255,22 @@ func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWr
} }
func (n *networkRouter) postNetworkConnect(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (n *networkRouter) postNetworkConnect(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var connect types.NetworkConnect
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
var connect types.NetworkConnect if err := httputils.CheckForJSON(r); err != nil {
if err := httputils.ReadJSON(r, &connect); err != nil {
return err return err
} }
if err := json.NewDecoder(r.Body).Decode(&connect); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
// Unlike other operations, we do not check the ambiguity of the name/ID here. // Unlike other operations, we do not check the ambiguity of the name/ID here.
// The reason is that, in the case of an attachable network in swarm scope, the actual local network // The reason is that, in the case of an attachable network in swarm scope, the actual local network
// may not be available at the time. At the same time, inside daemon `ConnectContainerToNetwork` // may not be available at the time. At the same time, inside daemon `ConnectContainerToNetwork`
@ -252,15 +279,22 @@ func (n *networkRouter) postNetworkConnect(ctx context.Context, w http.ResponseW
} }
func (n *networkRouter) postNetworkDisconnect(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (n *networkRouter) postNetworkDisconnect(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var disconnect types.NetworkDisconnect
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
var disconnect types.NetworkDisconnect if err := httputils.CheckForJSON(r); err != nil {
if err := httputils.ReadJSON(r, &disconnect); err != nil {
return err return err
} }
if err := json.NewDecoder(r.Body).Decode(&disconnect); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
return n.backend.DisconnectContainerFromNetwork(disconnect.Container, vars["id"], disconnect.Force) return n.backend.DisconnectContainerFromNetwork(disconnect.Container, vars["id"], disconnect.Force)
} }
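
The connect and disconnect handlers (and several swarm handlers further down) share one body-decoding pattern: an empty body (io.EOF) and any other JSON decode failure are both reported as an invalid parameter rather than a server error. Below is a stdlib-only sketch of that pattern, assuming a local invalidParameter wrapper in place of errdefs.InvalidParameter.

package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"strings"
)

// invalidParameter is a stand-in for errdefs.InvalidParameter; it just marks
// the error so an HTTP layer could map it to a 400 response.
type invalidParameter struct{ error }

func (invalidParameter) InvalidParameter() {}

// decodeBody decodes a JSON request body into v, treating an empty body and
// malformed JSON as client errors.
func decodeBody(body io.Reader, v interface{}) error {
	if err := json.NewDecoder(body).Decode(v); err != nil {
		if err == io.EOF {
			return invalidParameter{errors.New("got EOF while reading request body")}
		}
		return invalidParameter{err}
	}
	return nil
}

func main() {
	var req struct {
		Container string
		Force     bool
	}
	err := decodeBody(strings.NewReader(""), &req) // empty body
	fmt.Println(err)                               // got EOF while reading request body
}
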
@ -316,42 +350,42 @@ func (n *networkRouter) findUniqueNetwork(term string) (types.NetworkResource, e
listByPartialID := map[string]types.NetworkResource{} listByPartialID := map[string]types.NetworkResource{}
filter := filters.NewArgs(filters.Arg("idOrName", term)) filter := filters.NewArgs(filters.Arg("idOrName", term))
networks, _ := n.backend.GetNetworks(filter, backend.NetworkListConfig{Detailed: true}) nw, _ := n.backend.GetNetworks(filter, types.NetworkListConfig{Detailed: true})
for _, nw := range networks { for _, network := range nw {
if nw.ID == term { if network.ID == term {
return nw, nil return network, nil
} }
if nw.Name == term && !nw.Ingress { if network.Name == term && !network.Ingress {
// No need to check the ID collision here as we are still in // No need to check the ID collision here as we are still in
// local scope and the network ID is unique in this scope. // local scope and the network ID is unique in this scope.
listByFullName[nw.ID] = nw listByFullName[network.ID] = network
} }
if strings.HasPrefix(nw.ID, term) { if strings.HasPrefix(network.ID, term) {
// No need to check the ID collision here as we are still in // No need to check the ID collision here as we are still in
// local scope and the network ID is unique in this scope. // local scope and the network ID is unique in this scope.
listByPartialID[nw.ID] = nw listByPartialID[network.ID] = network
} }
} }
networks, _ = n.cluster.GetNetworks(filter) nr, _ := n.cluster.GetNetworks(filter)
for _, nw := range networks { for _, network := range nr {
if nw.ID == term { if network.ID == term {
return nw, nil return network, nil
} }
if nw.Name == term { if network.Name == term {
// Check the ID collision as we are in swarm scope here, and // Check the ID collision as we are in swarm scope here, and
// the map (of the listByFullName) may have already had a // the map (of the listByFullName) may have already had a
// network with the same ID (from local scope previously) // network with the same ID (from local scope previously)
if _, ok := listByFullName[nw.ID]; !ok { if _, ok := listByFullName[network.ID]; !ok {
listByFullName[nw.ID] = nw listByFullName[network.ID] = network
} }
} }
if strings.HasPrefix(nw.ID, term) { if strings.HasPrefix(network.ID, term) {
// Check the ID collision as we are in swarm scope here, and // Check the ID collision as we are in swarm scope here, and
// the map (of the listByPartialID) may have already had a // the map (of the listByPartialID) may have already had a
// network with the same ID (from local scope previously) // network with the same ID (from local scope previously)
if _, ok := listByPartialID[nw.ID]; !ok { if _, ok := listByPartialID[network.ID]; !ok {
listByPartialID[nw.ID] = nw listByPartialID[network.ID] = network
} }
} }
} }

View file

@ -5,25 +5,23 @@ import (
"io" "io"
"net/http" "net/http"
"github.com/distribution/reference" "github.com/docker/distribution/reference"
"github.com/docker/docker/api/types" enginetypes "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/plugin" "github.com/docker/docker/plugin"
) )
// Backend for Plugin // Backend for Plugin
type Backend interface { type Backend interface {
Disable(name string, config *backend.PluginDisableConfig) error Disable(name string, config *enginetypes.PluginDisableConfig) error
Enable(name string, config *backend.PluginEnableConfig) error Enable(name string, config *enginetypes.PluginEnableConfig) error
List(filters.Args) ([]types.Plugin, error) List(filters.Args) ([]enginetypes.Plugin, error)
Inspect(name string) (*types.Plugin, error) Inspect(name string) (*enginetypes.Plugin, error)
Remove(name string, config *backend.PluginRmConfig) error Remove(name string, config *enginetypes.PluginRmConfig) error
Set(name string, args []string) error Set(name string, args []string) error
Privileges(ctx context.Context, ref reference.Named, metaHeaders http.Header, authConfig *registry.AuthConfig) (types.PluginPrivileges, error) Privileges(ctx context.Context, ref reference.Named, metaHeaders http.Header, authConfig *enginetypes.AuthConfig) (enginetypes.PluginPrivileges, error)
Pull(ctx context.Context, ref reference.Named, name string, metaHeaders http.Header, authConfig *registry.AuthConfig, privileges types.PluginPrivileges, outStream io.Writer, opts ...plugin.CreateOpt) error Pull(ctx context.Context, ref reference.Named, name string, metaHeaders http.Header, authConfig *enginetypes.AuthConfig, privileges enginetypes.PluginPrivileges, outStream io.Writer, opts ...plugin.CreateOpt) error
Push(ctx context.Context, name string, metaHeaders http.Header, authConfig *registry.AuthConfig, outStream io.Writer) error Push(ctx context.Context, name string, metaHeaders http.Header, authConfig *enginetypes.AuthConfig, outStream io.Writer) error
Upgrade(ctx context.Context, ref reference.Named, name string, metaHeaders http.Header, authConfig *registry.AuthConfig, privileges types.PluginPrivileges, outStream io.Writer) error Upgrade(ctx context.Context, ref reference.Named, name string, metaHeaders http.Header, authConfig *enginetypes.AuthConfig, privileges enginetypes.PluginPrivileges, outStream io.Writer) error
CreateFromContext(ctx context.Context, tarCtx io.ReadCloser, options *types.PluginCreateOptions) error CreateFromContext(ctx context.Context, tarCtx io.ReadCloser, options *enginetypes.PluginCreateOptions) error
} }

View file

@ -2,22 +2,25 @@ package plugin // import "github.com/docker/docker/api/server/router/plugin"
import ( import (
"context" "context"
"encoding/base64"
"encoding/json"
"io"
"net/http" "net/http"
"strconv" "strconv"
"strings" "strings"
"github.com/distribution/reference" "github.com/docker/distribution/reference"
"github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/registry" "github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/ioutils" "github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/streamformatter" "github.com/docker/docker/pkg/streamformatter"
"github.com/pkg/errors" "github.com/pkg/errors"
) )
func parseHeaders(headers http.Header) (map[string][]string, *registry.AuthConfig) { func parseHeaders(headers http.Header) (map[string][]string, *types.AuthConfig) {
metaHeaders := map[string][]string{} metaHeaders := map[string][]string{}
for k, v := range headers { for k, v := range headers {
if strings.HasPrefix(k, "X-Meta-") { if strings.HasPrefix(k, "X-Meta-") {
@ -25,8 +28,16 @@ func parseHeaders(headers http.Header) (map[string][]string, *registry.AuthConfi
} }
} }
// Ignore invalid AuthConfig to increase compatibility with the existing API. // Get X-Registry-Auth
authConfig, _ := registry.DecodeAuthConfig(headers.Get(registry.AuthHeader)) authEncoded := headers.Get("X-Registry-Auth")
authConfig := &types.AuthConfig{}
if authEncoded != "" {
authJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
if err := json.NewDecoder(authJSON).Decode(authConfig); err != nil {
authConfig = &types.AuthConfig{}
}
}
return metaHeaders, authConfig return metaHeaders, authConfig
} }
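
One side of parseHeaders reads registry credentials from the X-Registry-Auth header: the value is URL-safe base64 wrapping a JSON document, and a malformed value falls back to an empty config rather than failing the request. Below is a hedged stdlib sketch of that decoding; authConfig here is a reduced stand-in for the API's registry auth type.

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

// authConfig is a reduced stand-in for the API's registry auth structure.
type authConfig struct {
	Username string `json:"username"`
	Password string `json:"password"`
}

// authFromHeader decodes X-Registry-Auth (URL-safe base64 of JSON); invalid
// values yield an empty config instead of an error, matching the lenient
// behaviour shown above.
func authFromHeader(h http.Header) *authConfig {
	cfg := &authConfig{}
	encoded := h.Get("X-Registry-Auth")
	if encoded == "" {
		return cfg
	}
	dec := base64.NewDecoder(base64.URLEncoding, strings.NewReader(encoded))
	if err := json.NewDecoder(dec).Decode(cfg); err != nil {
		return &authConfig{}
	}
	return cfg
}

func main() {
	h := http.Header{}
	h.Set("X-Registry-Auth", base64.URLEncoding.EncodeToString([]byte(`{"username":"jane"}`)))
	fmt.Println(authFromHeader(h).Username) // jane
}
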
@ -85,8 +96,12 @@ func (pr *pluginRouter) upgradePlugin(ctx context.Context, w http.ResponseWriter
} }
var privileges types.PluginPrivileges var privileges types.PluginPrivileges
if err := httputils.ReadJSON(r, &privileges); err != nil { dec := json.NewDecoder(r.Body)
return err if err := dec.Decode(&privileges); err != nil {
return errors.Wrap(err, "failed to parse privileges")
}
if dec.More() {
return errors.New("invalid privileges")
} }
metaHeaders, authConfig := parseHeaders(r.Header) metaHeaders, authConfig := parseHeaders(r.Header)
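
One side of this hunk decodes the privileges payload with a json.Decoder and then calls More(): if anything follows the first JSON value, the request is rejected, which a plain Decode would silently accept. Below is a small stdlib illustration of that check.

package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"strings"
)

// decodeSingle decodes exactly one JSON value from r and fails if trailing
// content remains, mirroring the Decode + More() check above.
func decodeSingle(r io.Reader, v interface{}) error {
	dec := json.NewDecoder(r)
	if err := dec.Decode(v); err != nil {
		return fmt.Errorf("failed to parse payload: %w", err)
	}
	if dec.More() {
		return errors.New("invalid payload: unexpected trailing content")
	}
	return nil
}

func main() {
	var privileges []map[string]interface{}
	err1 := decodeSingle(strings.NewReader(`[{"name":"network"}]`), &privileges)
	err2 := decodeSingle(strings.NewReader(`[] {"extra": true}`), &privileges)
	fmt.Println(err1, err2) // <nil> invalid payload: unexpected trailing content
}
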
@ -108,7 +123,7 @@ func (pr *pluginRouter) upgradePlugin(ctx context.Context, w http.ResponseWriter
if !output.Flushed() { if !output.Flushed() {
return err return err
} }
_, _ = output.Write(streamformatter.FormatError(err)) output.Write(streamformatter.FormatError(err))
} }
return nil return nil
@ -120,8 +135,12 @@ func (pr *pluginRouter) pullPlugin(ctx context.Context, w http.ResponseWriter, r
} }
var privileges types.PluginPrivileges var privileges types.PluginPrivileges
if err := httputils.ReadJSON(r, &privileges); err != nil { dec := json.NewDecoder(r.Body)
return err if err := dec.Decode(&privileges); err != nil {
return errors.Wrap(err, "failed to parse privileges")
}
if dec.More() {
return errors.New("invalid privileges")
} }
metaHeaders, authConfig := parseHeaders(r.Header) metaHeaders, authConfig := parseHeaders(r.Header)
@ -143,7 +162,7 @@ func (pr *pluginRouter) pullPlugin(ctx context.Context, w http.ResponseWriter, r
if !output.Flushed() { if !output.Flushed() {
return err return err
} }
_, _ = output.Write(streamformatter.FormatError(err)) output.Write(streamformatter.FormatError(err))
} }
return nil return nil
@ -187,13 +206,12 @@ func (pr *pluginRouter) createPlugin(ctx context.Context, w http.ResponseWriter,
} }
options := &types.PluginCreateOptions{ options := &types.PluginCreateOptions{
RepoName: r.FormValue("name"), RepoName: r.FormValue("name")}
}
if err := pr.backend.CreateFromContext(ctx, r.Body, options); err != nil { if err := pr.backend.CreateFromContext(ctx, r.Body, options); err != nil {
return err return err
} }
// TODO: send progress bar //TODO: send progress bar
w.WriteHeader(http.StatusNoContent) w.WriteHeader(http.StatusNoContent)
return nil return nil
} }
@ -208,7 +226,7 @@ func (pr *pluginRouter) enablePlugin(ctx context.Context, w http.ResponseWriter,
if err != nil { if err != nil {
return err return err
} }
config := &backend.PluginEnableConfig{Timeout: timeout} config := &types.PluginEnableConfig{Timeout: timeout}
return pr.backend.Enable(name, config) return pr.backend.Enable(name, config)
} }
@ -219,7 +237,7 @@ func (pr *pluginRouter) disablePlugin(ctx context.Context, w http.ResponseWriter
} }
name := vars["name"] name := vars["name"]
config := &backend.PluginDisableConfig{ config := &types.PluginDisableConfig{
ForceDisable: httputils.BoolValue(r, "force"), ForceDisable: httputils.BoolValue(r, "force"),
} }
@ -232,7 +250,7 @@ func (pr *pluginRouter) removePlugin(ctx context.Context, w http.ResponseWriter,
} }
name := vars["name"] name := vars["name"]
config := &backend.PluginRmConfig{ config := &types.PluginRmConfig{
ForceRemove: httputils.BoolValue(r, "force"), ForceRemove: httputils.BoolValue(r, "force"),
} }
return pr.backend.Remove(name, config) return pr.backend.Remove(name, config)
@ -252,15 +270,18 @@ func (pr *pluginRouter) pushPlugin(ctx context.Context, w http.ResponseWriter, r
if !output.Flushed() { if !output.Flushed() {
return err return err
} }
_, _ = output.Write(streamformatter.FormatError(err)) output.Write(streamformatter.FormatError(err))
} }
return nil return nil
} }
func (pr *pluginRouter) setPlugin(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (pr *pluginRouter) setPlugin(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var args []string var args []string
if err := httputils.ReadJSON(r, &args); err != nil { if err := json.NewDecoder(r.Body).Decode(&args); err != nil {
return err if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
} }
if err := pr.backend.Set(vars["name"], args); err != nil { if err := pr.backend.Set(vars["name"], args); err != nil {
return err return err

View file

@ -3,41 +3,46 @@ package swarm // import "github.com/docker/docker/api/server/router/swarm"
import ( import (
"context" "context"
"github.com/docker/docker/api/types" basictypes "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend" "github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container" types "github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/api/types/swarm"
) )
// Backend abstracts a swarm manager. // Backend abstracts a swarm manager.
type Backend interface { type Backend interface {
Init(req swarm.InitRequest) (string, error) Init(req types.InitRequest) (string, error)
Join(req swarm.JoinRequest) error Join(req types.JoinRequest) error
Leave(ctx context.Context, force bool) error Leave(force bool) error
Inspect() (swarm.Swarm, error) Inspect() (types.Swarm, error)
Update(uint64, swarm.Spec, swarm.UpdateFlags) error Update(uint64, types.Spec, types.UpdateFlags) error
GetUnlockKey() (string, error) GetUnlockKey() (string, error)
UnlockSwarm(req swarm.UnlockRequest) error UnlockSwarm(req types.UnlockRequest) error
GetServices(types.ServiceListOptions) ([]swarm.Service, error)
GetService(idOrName string, insertDefaults bool) (swarm.Service, error) GetServices(basictypes.ServiceListOptions) ([]types.Service, error)
CreateService(swarm.ServiceSpec, string, bool) (*swarm.ServiceCreateResponse, error) GetService(idOrName string, insertDefaults bool) (types.Service, error)
UpdateService(string, uint64, swarm.ServiceSpec, types.ServiceUpdateOptions, bool) (*swarm.ServiceUpdateResponse, error) CreateService(types.ServiceSpec, string, bool) (*basictypes.ServiceCreateResponse, error)
UpdateService(string, uint64, types.ServiceSpec, basictypes.ServiceUpdateOptions, bool) (*basictypes.ServiceUpdateResponse, error)
RemoveService(string) error RemoveService(string) error
ServiceLogs(context.Context, *backend.LogSelector, *container.LogsOptions) (<-chan *backend.LogMessage, error)
GetNodes(types.NodeListOptions) ([]swarm.Node, error) ServiceLogs(context.Context, *backend.LogSelector, *basictypes.ContainerLogsOptions) (<-chan *backend.LogMessage, error)
GetNode(string) (swarm.Node, error)
UpdateNode(string, uint64, swarm.NodeSpec) error GetNodes(basictypes.NodeListOptions) ([]types.Node, error)
GetNode(string) (types.Node, error)
UpdateNode(string, uint64, types.NodeSpec) error
RemoveNode(string, bool) error RemoveNode(string, bool) error
GetTasks(types.TaskListOptions) ([]swarm.Task, error)
GetTask(string) (swarm.Task, error) GetTasks(basictypes.TaskListOptions) ([]types.Task, error)
GetSecrets(opts types.SecretListOptions) ([]swarm.Secret, error) GetTask(string) (types.Task, error)
CreateSecret(s swarm.SecretSpec) (string, error)
GetSecrets(opts basictypes.SecretListOptions) ([]types.Secret, error)
CreateSecret(s types.SecretSpec) (string, error)
RemoveSecret(idOrName string) error RemoveSecret(idOrName string) error
GetSecret(id string) (swarm.Secret, error) GetSecret(id string) (types.Secret, error)
UpdateSecret(idOrName string, version uint64, spec swarm.SecretSpec) error UpdateSecret(idOrName string, version uint64, spec types.SecretSpec) error
GetConfigs(opts types.ConfigListOptions) ([]swarm.Config, error)
CreateConfig(s swarm.ConfigSpec) (string, error) GetConfigs(opts basictypes.ConfigListOptions) ([]types.Config, error)
CreateConfig(s types.ConfigSpec) (string, error)
RemoveConfig(id string) error RemoveConfig(id string) error
GetConfig(id string) (swarm.Config, error) GetConfig(id string) (types.Config, error)
UpdateConfig(idOrName string, version uint64, spec swarm.ConfigSpec) error UpdateConfig(idOrName string, version uint64, spec types.ConfigSpec) error
} }

View file

@ -2,26 +2,30 @@ package swarm // import "github.com/docker/docker/api/server/router/swarm"
import ( import (
"context" "context"
"encoding/json"
"fmt" "fmt"
"io"
"net/http" "net/http"
"strconv" "strconv"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/httputils"
basictypes "github.com/docker/docker/api/types" basictypes "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend" "github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/registry"
types "github.com/docker/docker/api/types/swarm" types "github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/api/types/versions" "github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs" "github.com/docker/docker/errdefs"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/sirupsen/logrus"
) )
func (sr *swarmRouter) initCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) initCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var req types.InitRequest var req types.InitRequest
if err := httputils.ReadJSON(r, &req); err != nil { if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
return err if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
} }
version := httputils.VersionFromContext(ctx) version := httputils.VersionFromContext(ctx)
@ -36,7 +40,7 @@ func (sr *swarmRouter) initCluster(ctx context.Context, w http.ResponseWriter, r
} }
nodeID, err := sr.backend.Init(req) nodeID, err := sr.backend.Init(req)
if err != nil { if err != nil {
log.G(ctx).WithContext(ctx).WithError(err).Debug("Error initializing swarm") logrus.Errorf("Error initializing swarm: %v", err)
return err return err
} }
return httputils.WriteJSON(w, http.StatusOK, nodeID) return httputils.WriteJSON(w, http.StatusOK, nodeID)
@ -44,8 +48,11 @@ func (sr *swarmRouter) initCluster(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) joinCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) joinCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var req types.JoinRequest var req types.JoinRequest
if err := httputils.ReadJSON(r, &req); err != nil { if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
return err if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
} }
return sr.backend.Join(req) return sr.backend.Join(req)
} }
@ -56,13 +63,13 @@ func (sr *swarmRouter) leaveCluster(ctx context.Context, w http.ResponseWriter,
} }
force := httputils.BoolValue(r, "force") force := httputils.BoolValue(r, "force")
return sr.backend.Leave(ctx, force) return sr.backend.Leave(force)
} }
func (sr *swarmRouter) inspectCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) inspectCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
swarm, err := sr.backend.Inspect() swarm, err := sr.backend.Inspect()
if err != nil { if err != nil {
log.G(ctx).WithContext(ctx).WithError(err).Debug("Error getting swarm") logrus.Errorf("Error getting swarm: %v", err)
return err return err
} }
@ -71,8 +78,11 @@ func (sr *swarmRouter) inspectCluster(ctx context.Context, w http.ResponseWriter
func (sr *swarmRouter) updateCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) updateCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var swarm types.Spec var swarm types.Spec
if err := httputils.ReadJSON(r, &swarm); err != nil { if err := json.NewDecoder(r.Body).Decode(&swarm); err != nil {
return err if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
} }
rawVersion := r.URL.Query().Get("version") rawVersion := r.URL.Query().Get("version")
@ -114,7 +124,7 @@ func (sr *swarmRouter) updateCluster(ctx context.Context, w http.ResponseWriter,
} }
if err := sr.backend.Update(version, swarm, flags); err != nil { if err := sr.backend.Update(version, swarm, flags); err != nil {
log.G(ctx).WithContext(ctx).WithError(err).Debug("Error configuring swarm") logrus.Errorf("Error configuring swarm: %v", err)
return err return err
} }
return nil return nil
@ -122,12 +132,15 @@ func (sr *swarmRouter) updateCluster(ctx context.Context, w http.ResponseWriter,
func (sr *swarmRouter) unlockCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) unlockCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var req types.UnlockRequest var req types.UnlockRequest
if err := httputils.ReadJSON(r, &req); err != nil { if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
return err if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
} }
if err := sr.backend.UnlockSwarm(req); err != nil { if err := sr.backend.UnlockSwarm(req); err != nil {
log.G(ctx).WithContext(ctx).WithError(err).Debug("Error unlocking swarm") logrus.Errorf("Error unlocking swarm: %v", err)
return err return err
} }
return nil return nil
@ -136,7 +149,7 @@ func (sr *swarmRouter) unlockCluster(ctx context.Context, w http.ResponseWriter,
func (sr *swarmRouter) getUnlockKey(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) getUnlockKey(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
unlockKey, err := sr.backend.GetUnlockKey() unlockKey, err := sr.backend.GetUnlockKey()
if err != nil { if err != nil {
log.G(ctx).WithContext(ctx).WithError(err).Debug("Error retrieving swarm unlock key") logrus.WithError(err).Errorf("Error retrieving swarm unlock key")
return err return err
} }
@ -151,24 +164,12 @@ func (sr *swarmRouter) getServices(ctx context.Context, w http.ResponseWriter, r
} }
filter, err := filters.FromJSON(r.Form.Get("filters")) filter, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil { if err != nil {
return err return errdefs.InvalidParameter(err)
} }
// the status query parameter is only supported in API versions >= 1.41. If services, err := sr.backend.GetServices(basictypes.ServiceListOptions{Filters: filter})
// the client is using a lesser version, ignore the parameter.
cliVersion := httputils.VersionFromContext(ctx)
var status bool
if value := r.URL.Query().Get("status"); value != "" && !versions.LessThan(cliVersion, "1.41") {
var err error
status, err = strconv.ParseBool(value)
if err != nil { if err != nil {
return errors.Wrapf(errdefs.InvalidParameter(err), "invalid value for status: %s", value) logrus.Errorf("Error getting services: %v", err)
}
}
services, err := sr.backend.GetServices(basictypes.ServiceListOptions{Filters: filter, Status: status})
if err != nil {
log.G(ctx).WithContext(ctx).WithError(err).Debug("Error getting services")
return err return err
} }
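
The handler above honours the status query parameter only when the negotiated API version is at least 1.41; older clients have it silently ignored. Below is a stdlib sketch of that gating; in the real handler the supportsStatus flag would come from a version comparison such as !versions.LessThan(apiVersion, "1.41").

package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// statusFromQuery parses the "status" query parameter, but only when the
// negotiated API version supports it; for older clients it is ignored, as in
// the handler above.
func statusFromQuery(q url.Values, supportsStatus bool) (bool, error) {
	value := q.Get("status")
	if value == "" || !supportsStatus {
		return false, nil
	}
	status, err := strconv.ParseBool(value)
	if err != nil {
		return false, fmt.Errorf("invalid value for status: %s", value)
	}
	return status, nil
}

func main() {
	q := url.Values{"status": []string{"true"}}
	newClient, _ := statusFromQuery(q, true)
	oldClient, _ := statusFromQuery(q, false)
	fmt.Println(newClient, oldClient) // true false
}
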
@ -177,27 +178,18 @@ func (sr *swarmRouter) getServices(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) getService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) getService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var insertDefaults bool var insertDefaults bool
if value := r.URL.Query().Get("insertDefaults"); value != "" { if value := r.URL.Query().Get("insertDefaults"); value != "" {
var err error var err error
insertDefaults, err = strconv.ParseBool(value) insertDefaults, err = strconv.ParseBool(value)
if err != nil { if err != nil {
err := fmt.Errorf("invalid value for insertDefaults: %s", value)
return errors.Wrapf(errdefs.InvalidParameter(err), "invalid value for insertDefaults: %s", value) return errors.Wrapf(errdefs.InvalidParameter(err), "invalid value for insertDefaults: %s", value)
} }
} }
// you may note that there is no code here to handle the "status" query
// parameter, as in getServices. the Status field is not supported when
// retrieving an individual service because the Backend API changes
// required to accommodate it would be too disruptive, and because that
// field is so rarely needed as part of an individual service inspection.
service, err := sr.backend.GetService(vars["id"], insertDefaults) service, err := sr.backend.GetService(vars["id"], insertDefaults)
if err != nil { if err != nil {
log.G(ctx).WithContext(ctx).WithFields(log.Fields{ logrus.Errorf("Error getting service %s: %v", vars["id"], err)
"error": err,
"service-id": vars["id"],
}).Debug("Error getting service")
return err return err
} }
@ -206,38 +198,27 @@ func (sr *swarmRouter) getService(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) createService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) createService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var service types.ServiceSpec var service types.ServiceSpec
if err := httputils.ReadJSON(r, &service); err != nil { if err := json.NewDecoder(r.Body).Decode(&service); err != nil {
return err if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
} }
// TODO(thaJeztah): remove logentries check and migration code in release v26.0.0. return errdefs.InvalidParameter(err)
if service.TaskTemplate.LogDriver != nil && service.TaskTemplate.LogDriver.Name == "logentries" {
return errdefs.InvalidParameter(errors.New("the logentries logging driver has been deprecated and removed"))
} }
// Get returns "" if the header does not exist // Get returns "" if the header does not exist
encodedAuth := r.Header.Get(registry.AuthHeader) encodedAuth := r.Header.Get("X-Registry-Auth")
cliVersion := r.Header.Get("version")
queryRegistry := false queryRegistry := false
if v := httputils.VersionFromContext(ctx); v != "" { if cliVersion != "" {
if versions.LessThan(v, "1.30") { if versions.LessThan(cliVersion, "1.30") {
queryRegistry = true queryRegistry = true
} }
adjustForAPIVersion(v, &service) adjustForAPIVersion(cliVersion, &service)
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.44") {
if service.TaskTemplate.ContainerSpec != nil && service.TaskTemplate.ContainerSpec.Healthcheck != nil {
// StartInterval was added in API 1.44
service.TaskTemplate.ContainerSpec.Healthcheck.StartInterval = 0
}
} }
resp, err := sr.backend.CreateService(service, encodedAuth, queryRegistry) resp, err := sr.backend.CreateService(service, encodedAuth, queryRegistry)
if err != nil { if err != nil {
log.G(ctx).WithFields(log.Fields{ logrus.Errorf("Error creating service %s: %v", service.Name, err)
"error": err,
"service-name": service.Name,
}).Debug("Error creating service")
return err return err
} }
@ -246,12 +227,11 @@ func (sr *swarmRouter) createService(ctx context.Context, w http.ResponseWriter,
func (sr *swarmRouter) updateService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) updateService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var service types.ServiceSpec var service types.ServiceSpec
if err := httputils.ReadJSON(r, &service); err != nil { if err := json.NewDecoder(r.Body).Decode(&service); err != nil {
return err if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
} }
// TODO(thaJeztah): remove logentries check and migration code in release v26.0.0. return errdefs.InvalidParameter(err)
if service.TaskTemplate.LogDriver != nil && service.TaskTemplate.LogDriver.Name == "logentries" {
return errdefs.InvalidParameter(errors.New("the logentries logging driver has been deprecated and removed"))
} }
rawVersion := r.URL.Query().Get("version") rawVersion := r.URL.Query().Get("version")
@ -264,23 +244,21 @@ func (sr *swarmRouter) updateService(ctx context.Context, w http.ResponseWriter,
var flags basictypes.ServiceUpdateOptions var flags basictypes.ServiceUpdateOptions
// Get returns "" if the header does not exist // Get returns "" if the header does not exist
flags.EncodedRegistryAuth = r.Header.Get(registry.AuthHeader) flags.EncodedRegistryAuth = r.Header.Get("X-Registry-Auth")
flags.RegistryAuthFrom = r.URL.Query().Get("registryAuthFrom") flags.RegistryAuthFrom = r.URL.Query().Get("registryAuthFrom")
flags.Rollback = r.URL.Query().Get("rollback") flags.Rollback = r.URL.Query().Get("rollback")
cliVersion := r.Header.Get("version")
queryRegistry := false queryRegistry := false
if v := httputils.VersionFromContext(ctx); v != "" { if cliVersion != "" {
if versions.LessThan(v, "1.30") { if versions.LessThan(cliVersion, "1.30") {
queryRegistry = true queryRegistry = true
} }
adjustForAPIVersion(v, &service) adjustForAPIVersion(cliVersion, &service)
} }
resp, err := sr.backend.UpdateService(vars["id"], version, service, flags, queryRegistry) resp, err := sr.backend.UpdateService(vars["id"], version, service, flags, queryRegistry)
if err != nil { if err != nil {
log.G(ctx).WithContext(ctx).WithFields(log.Fields{ logrus.Errorf("Error updating service %s: %v", vars["id"], err)
"error": err,
"service-id": vars["id"],
}).Debug("Error updating service")
return err return err
} }
return httputils.WriteJSON(w, http.StatusOK, resp) return httputils.WriteJSON(w, http.StatusOK, resp)
@ -288,10 +266,7 @@ func (sr *swarmRouter) updateService(ctx context.Context, w http.ResponseWriter,
func (sr *swarmRouter) removeService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) removeService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := sr.backend.RemoveService(vars["id"]); err != nil { if err := sr.backend.RemoveService(vars["id"]); err != nil {
log.G(ctx).WithContext(ctx).WithFields(log.Fields{ logrus.Errorf("Error removing service %s: %v", vars["id"], err)
"error": err,
"service-id": vars["id"],
}).Debug("Error removing service")
return err return err
} }
return nil return nil
@ -332,7 +307,7 @@ func (sr *swarmRouter) getNodes(ctx context.Context, w http.ResponseWriter, r *h
nodes, err := sr.backend.GetNodes(basictypes.NodeListOptions{Filters: filter}) nodes, err := sr.backend.GetNodes(basictypes.NodeListOptions{Filters: filter})
if err != nil { if err != nil {
log.G(ctx).WithContext(ctx).WithError(err).Debug("Error getting nodes") logrus.Errorf("Error getting nodes: %v", err)
return err return err
} }
@ -342,10 +317,7 @@ func (sr *swarmRouter) getNodes(ctx context.Context, w http.ResponseWriter, r *h
func (sr *swarmRouter) getNode(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) getNode(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
node, err := sr.backend.GetNode(vars["id"]) node, err := sr.backend.GetNode(vars["id"])
if err != nil { if err != nil {
log.G(ctx).WithContext(ctx).WithFields(log.Fields{ logrus.Errorf("Error getting node %s: %v", vars["id"], err)
"error": err,
"node-id": vars["id"],
}).Debug("Error getting node")
return err return err
} }
@ -354,8 +326,11 @@ func (sr *swarmRouter) getNode(ctx context.Context, w http.ResponseWriter, r *ht
func (sr *swarmRouter) updateNode(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) updateNode(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var node types.NodeSpec var node types.NodeSpec
if err := httputils.ReadJSON(r, &node); err != nil { if err := json.NewDecoder(r.Body).Decode(&node); err != nil {
return err if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
} }
rawVersion := r.URL.Query().Get("version") rawVersion := r.URL.Query().Get("version")
@ -366,10 +341,7 @@ func (sr *swarmRouter) updateNode(ctx context.Context, w http.ResponseWriter, r
} }
if err := sr.backend.UpdateNode(vars["id"], version, node); err != nil { if err := sr.backend.UpdateNode(vars["id"], version, node); err != nil {
log.G(ctx).WithContext(ctx).WithFields(log.Fields{ logrus.Errorf("Error updating node %s: %v", vars["id"], err)
"error": err,
"node-id": vars["id"],
}).Debug("Error updating node")
return err return err
} }
return nil return nil
@ -383,10 +355,7 @@ func (sr *swarmRouter) removeNode(ctx context.Context, w http.ResponseWriter, r
force := httputils.BoolValue(r, "force") force := httputils.BoolValue(r, "force")
if err := sr.backend.RemoveNode(vars["id"], force); err != nil { if err := sr.backend.RemoveNode(vars["id"], force); err != nil {
log.G(ctx).WithContext(ctx).WithFields(log.Fields{ logrus.Errorf("Error removing node %s: %v", vars["id"], err)
"error": err,
"node-id": vars["id"],
}).Debug("Error removing node")
return err return err
} }
return nil return nil
@ -403,7 +372,7 @@ func (sr *swarmRouter) getTasks(ctx context.Context, w http.ResponseWriter, r *h
tasks, err := sr.backend.GetTasks(basictypes.TaskListOptions{Filters: filter}) tasks, err := sr.backend.GetTasks(basictypes.TaskListOptions{Filters: filter})
if err != nil { if err != nil {
log.G(ctx).WithContext(ctx).WithError(err).Debug("Error getting tasks") logrus.Errorf("Error getting tasks: %v", err)
return err return err
} }
@ -413,10 +382,7 @@ func (sr *swarmRouter) getTasks(ctx context.Context, w http.ResponseWriter, r *h
func (sr *swarmRouter) getTask(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) getTask(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
task, err := sr.backend.GetTask(vars["id"]) task, err := sr.backend.GetTask(vars["id"])
if err != nil { if err != nil {
log.G(ctx).WithContext(ctx).WithFields(log.Fields{ logrus.Errorf("Error getting task %s: %v", vars["id"], err)
"error": err,
"task-id": vars["id"],
}).Debug("Error getting task")
return err return err
} }
@ -442,8 +408,11 @@ func (sr *swarmRouter) getSecrets(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) createSecret(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) createSecret(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var secret types.SecretSpec var secret types.SecretSpec
if err := httputils.ReadJSON(r, &secret); err != nil { if err := json.NewDecoder(r.Body).Decode(&secret); err != nil {
return err if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
} }
version := httputils.VersionFromContext(ctx) version := httputils.VersionFromContext(ctx)
if secret.Templating != nil && versions.LessThan(version, "1.37") { if secret.Templating != nil && versions.LessThan(version, "1.37") {
@ -480,8 +449,11 @@ func (sr *swarmRouter) getSecret(ctx context.Context, w http.ResponseWriter, r *
func (sr *swarmRouter) updateSecret(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) updateSecret(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var secret types.SecretSpec var secret types.SecretSpec
if err := httputils.ReadJSON(r, &secret); err != nil { if err := json.NewDecoder(r.Body).Decode(&secret); err != nil {
return err if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
} }
rawVersion := r.URL.Query().Get("version") rawVersion := r.URL.Query().Get("version")
@ -513,8 +485,11 @@ func (sr *swarmRouter) getConfigs(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) createConfig(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) createConfig(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var config types.ConfigSpec var config types.ConfigSpec
if err := httputils.ReadJSON(r, &config); err != nil { if err := json.NewDecoder(r.Body).Decode(&config); err != nil {
return err if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
} }
version := httputils.VersionFromContext(ctx) version := httputils.VersionFromContext(ctx)
@ -552,8 +527,11 @@ func (sr *swarmRouter) getConfig(ctx context.Context, w http.ResponseWriter, r *
func (sr *swarmRouter) updateConfig(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (sr *swarmRouter) updateConfig(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var config types.ConfigSpec var config types.ConfigSpec
if err := httputils.ReadJSON(r, &config); err != nil { if err := json.NewDecoder(r.Body).Decode(&config); err != nil {
return err if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
} }
rawVersion := r.URL.Query().Get("version") rawVersion := r.URL.Query().Get("version")

View file

@ -3,19 +3,19 @@ package swarm // import "github.com/docker/docker/api/server/router/swarm"
import ( import (
"context" "context"
"fmt" "fmt"
"io"
"net/http" "net/http"
"github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/httputils"
basictypes "github.com/docker/docker/api/types" basictypes "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend" "github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/swarm" "github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/api/types/versions" "github.com/docker/docker/api/types/versions"
) )
// swarmLogs takes an http response, request, and selector, and writes the logs // swarmLogs takes an http response, request, and selector, and writes the logs
// specified by the selector to the response // specified by the selector to the response
func (sr *swarmRouter) swarmLogs(ctx context.Context, w http.ResponseWriter, r *http.Request, selector *backend.LogSelector) error { func (sr *swarmRouter) swarmLogs(ctx context.Context, w io.Writer, r *http.Request, selector *backend.LogSelector) error {
// Args are validated before the stream starts because when it starts we're // Args are validated before the stream starts because when it starts we're
// sending HTTP 200 by writing an empty chunk of data to tell the client that // sending HTTP 200 by writing an empty chunk of data to tell the client that
// daemon is going to stream. By sending this initial HTTP 200 we can't report // daemon is going to stream. By sending this initial HTTP 200 we can't report
@ -26,9 +26,9 @@ func (sr *swarmRouter) swarmLogs(ctx context.Context, w http.ResponseWriter, r *
return fmt.Errorf("Bad parameters: you must choose at least one stream") return fmt.Errorf("Bad parameters: you must choose at least one stream")
} }
// there is probably a neater way to manufacture the LogsOptions // there is probably a neater way to manufacture the ContainerLogsOptions
// struct, probably in the caller, to eliminate the dependency on net/http // struct, probably in the caller, to eliminate the dependency on net/http
logsConfig := &container.LogsOptions{ logsConfig := &basictypes.ContainerLogsOptions{
Follow: httputils.BoolValue(r, "follow"), Follow: httputils.BoolValue(r, "follow"),
Timestamps: httputils.BoolValue(r, "timestamps"), Timestamps: httputils.BoolValue(r, "timestamps"),
Since: r.Form.Get("since"), Since: r.Form.Get("since"),
@ -63,11 +63,6 @@ func (sr *swarmRouter) swarmLogs(ctx context.Context, w http.ResponseWriter, r *
return err return err
} }
contentType := basictypes.MediaTypeRawStream
if !tty && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
contentType = basictypes.MediaTypeMultiplexedStream
}
w.Header().Set("Content-Type", contentType)
httputils.WriteLogStream(ctx, w, msgs, logsConfig, !tty) httputils.WriteLogStream(ctx, w, msgs, logsConfig, !tty)
return nil return nil
} }
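
One side of swarmLogs sets the response Content-Type before streaming: a raw stream for TTY sessions, and a multiplexed stream type for non-TTY sessions once the client speaks API 1.42 or later. Below is a small sketch of that selection; the media type strings mirror the API constants, and the version check is reduced to a boolean.

package main

import "fmt"

const (
	mediaTypeRawStream         = "application/vnd.docker.raw-stream"
	mediaTypeMultiplexedStream = "application/vnd.docker.multiplexed-stream"
)

// logContentType picks the Content-Type for a log stream: TTY output is a raw
// byte stream, while non-TTY output is stdcopy-multiplexed and advertised as
// such to clients new enough (API >= 1.42) to understand the media type.
func logContentType(tty, clientSupportsMultiplexed bool) string {
	if !tty && clientSupportsMultiplexed {
		return mediaTypeMultiplexedStream
	}
	return mediaTypeRawStream
}

func main() {
	fmt.Println(logContentType(false, true))  // multiplexed
	fmt.Println(logContentType(true, true))   // raw
	fmt.Println(logContentType(false, false)) // raw (older client)
}
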
@ -100,32 +95,4 @@ func adjustForAPIVersion(cliVersion string, service *swarm.ServiceSpec) {
service.TaskTemplate.Placement.MaxReplicas = 0 service.TaskTemplate.Placement.MaxReplicas = 0
} }
} }
if versions.LessThan(cliVersion, "1.41") {
if service.TaskTemplate.ContainerSpec != nil {
// Capabilities and Ulimits for docker swarm services weren't
// supported before API version 1.41
service.TaskTemplate.ContainerSpec.CapabilityAdd = nil
service.TaskTemplate.ContainerSpec.CapabilityDrop = nil
service.TaskTemplate.ContainerSpec.Ulimits = nil
}
if service.TaskTemplate.Resources != nil && service.TaskTemplate.Resources.Limits != nil {
// Limits.Pids not supported before API version 1.41
service.TaskTemplate.Resources.Limits.Pids = 0
}
// jobs were only introduced in API version 1.41. Nil out both Job
// modes; if the service is one of these modes and subsequently has no
// mode, then something down the pipe will throw an error.
service.Mode.ReplicatedJob = nil
service.Mode.GlobalJob = nil
}
if versions.LessThan(cliVersion, "1.44") {
// seccomp, apparmor, and no_new_privs were added in 1.44.
if service.TaskTemplate.ContainerSpec != nil && service.TaskTemplate.ContainerSpec.Privileges != nil {
service.TaskTemplate.ContainerSpec.Privileges.Seccomp = nil
service.TaskTemplate.ContainerSpec.Privileges.AppArmor = nil
service.TaskTemplate.ContainerSpec.Privileges.NoNewPrivileges = false
}
}
} }
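
adjustForAPIVersion clears spec fields that the negotiated API version predates, so older clients cannot accidentally set capabilities their version never had. Below is a reduced, stdlib-only sketch of the same pattern; serviceSpec and the version cut-offs are illustrative stand-ins for the swarm types and versions.LessThan checks used in the handler.

package main

import "fmt"

// serviceSpec is a stand-in for swarm.ServiceSpec with just enough fields to
// show the stripping pattern.
type serviceSpec struct {
	Sysctls     map[string]string // added in API 1.40
	MaxReplicas uint64            // added in API 1.40
	PidsLimit   int64             // added in API 1.41
}

// adjustForAPIVersion zeroes out fields introduced after the client's API
// version, mirroring the handler's behaviour for swarm service specs.
func adjustForAPIVersion(lessThan func(v string) bool, spec *serviceSpec) {
	if lessThan("1.40") {
		spec.Sysctls = nil
		spec.MaxReplicas = 0
	}
	if lessThan("1.41") {
		spec.PidsLimit = 0
	}
}

func main() {
	spec := &serviceSpec{Sysctls: map[string]string{"foo": "bar"}, MaxReplicas: 2, PidsLimit: 300}
	// Simulate a client negotiated at API 1.40: it is not below 1.40, but it
	// is below 1.41, so only PidsLimit is stripped.
	lessThan := func(v string) bool { return v == "1.41" }
	adjustForAPIVersion(lessThan, spec)
	fmt.Println(spec.Sysctls, spec.MaxReplicas, spec.PidsLimit) // map[foo:bar] 2 0
}
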

View file

@ -5,11 +5,12 @@ import (
"testing" "testing"
"github.com/docker/docker/api/types/swarm" "github.com/docker/docker/api/types/swarm"
"github.com/docker/go-units"
) )
func TestAdjustForAPIVersion(t *testing.T) { func TestAdjustForAPIVersion(t *testing.T) {
expectedSysctls := map[string]string{"foo": "bar"} var (
expectedSysctls = map[string]string{"foo": "bar"}
)
// testing the negative -- does this leave everything else alone? -- is // testing the negative -- does this leave everything else alone? -- is
// prohibitively time-consuming to write, because it would need an object // prohibitively time-consuming to write, because it would need an object
// with literally every field filled in. // with literally every field filled in.
@ -38,40 +39,21 @@ func TestAdjustForAPIVersion(t *testing.T) {
ConfigName: "configRuntime", ConfigName: "configRuntime",
}, },
}, },
Ulimits: []*units.Ulimit{
{
Name: "nofile",
Soft: 100,
Hard: 200,
},
},
}, },
Placement: &swarm.Placement{ Placement: &swarm.Placement{
MaxReplicas: 222, MaxReplicas: 222,
}, },
Resources: &swarm.ResourceRequirements{
Limits: &swarm.Limit{
Pids: 300,
},
},
}, },
} }
// first, does calling this with a later version correctly NOT strip // first, does calling this with a later version correctly NOT strip
// fields? do the later version first, so we can reuse this spec in the // fields? do the later version first, so we can reuse this spec in the
// next test. // next test.
adjustForAPIVersion("1.41", spec) adjustForAPIVersion("1.40", spec)
if !reflect.DeepEqual(spec.TaskTemplate.ContainerSpec.Sysctls, expectedSysctls) { if !reflect.DeepEqual(spec.TaskTemplate.ContainerSpec.Sysctls, expectedSysctls) {
t.Error("Sysctls was stripped from spec") t.Error("Sysctls was stripped from spec")
} }
if spec.TaskTemplate.Resources.Limits.Pids == 0 {
t.Error("PidsLimit was stripped from spec")
}
if spec.TaskTemplate.Resources.Limits.Pids != 300 {
t.Error("PidsLimit did not preserve the value from spec")
}
if spec.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config != "someconfig" { if spec.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config != "someconfig" {
t.Error("CredentialSpec.Config field was stripped from spec") t.Error("CredentialSpec.Config field was stripped from spec")
} }
@ -84,20 +66,12 @@ func TestAdjustForAPIVersion(t *testing.T) {
t.Error("MaxReplicas was stripped from spec") t.Error("MaxReplicas was stripped from spec")
} }
if len(spec.TaskTemplate.ContainerSpec.Ulimits) == 0 {
t.Error("Ulimits were stripped from spec")
}
// next, does calling this with an earlier version correctly strip fields? // next, does calling this with an earlier version correctly strip fields?
adjustForAPIVersion("1.29", spec) adjustForAPIVersion("1.29", spec)
if spec.TaskTemplate.ContainerSpec.Sysctls != nil { if spec.TaskTemplate.ContainerSpec.Sysctls != nil {
t.Error("Sysctls was not stripped from spec") t.Error("Sysctls was not stripped from spec")
} }
if spec.TaskTemplate.Resources.Limits.Pids != 0 {
t.Error("PidsLimit was not stripped from spec")
}
if spec.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config != "" { if spec.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config != "" {
t.Error("CredentialSpec.Config field was not stripped from spec") t.Error("CredentialSpec.Config field was not stripped from spec")
} }
@ -110,7 +84,4 @@ func TestAdjustForAPIVersion(t *testing.T) {
t.Error("MaxReplicas was not stripped from spec") t.Error("MaxReplicas was not stripped from spec")
} }
if len(spec.TaskTemplate.ContainerSpec.Ulimits) != 0 {
t.Error("Ulimits were not stripped from spec")
}
} }

View file

@ -7,41 +7,22 @@ import (
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/events" "github.com/docker/docker/api/types/events"
"github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/swarm" "github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/api/types/system"
) )
// DiskUsageOptions holds parameters for system disk usage query.
type DiskUsageOptions struct {
// Containers controls whether container disk usage should be computed.
Containers bool
// Images controls whether image disk usage should be computed.
Images bool
// Volumes controls whether volume disk usage should be computed.
Volumes bool
}
// Backend is the methods that need to be implemented to provide // Backend is the methods that need to be implemented to provide
// system specific functionality. // system specific functionality.
type Backend interface { type Backend interface {
SystemInfo(context.Context) (*system.Info, error) SystemInfo() (*types.Info, error)
SystemVersion(context.Context) (types.Version, error) SystemVersion() types.Version
SystemDiskUsage(ctx context.Context, opts DiskUsageOptions) (*types.DiskUsage, error) SystemDiskUsage(ctx context.Context) (*types.DiskUsage, error)
SubscribeToEvents(since, until time.Time, ef filters.Args) ([]events.Message, chan interface{}) SubscribeToEvents(since, until time.Time, ef filters.Args) ([]events.Message, chan interface{})
UnsubscribeFromEvents(chan interface{}) UnsubscribeFromEvents(chan interface{})
AuthenticateToRegistry(ctx context.Context, authConfig *registry.AuthConfig) (string, string, error) AuthenticateToRegistry(ctx context.Context, authConfig *types.AuthConfig) (string, string, error)
} }
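A minimal sketch (not part of this diff) of how a caller of the reworked Backend above can limit the computation to a single category via DiskUsageOptions; the helper name is illustrative and the code is assumed to live in the same package as the interface:

func imageDiskUsage(ctx context.Context, b Backend) (int64, error) {
	// Only image usage is requested; containers and volumes are skipped.
	du, err := b.SystemDiskUsage(ctx, DiskUsageOptions{Images: true})
	if err != nil {
		return 0, err
	}
	return du.LayersSize, nil
}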
// ClusterBackend is all the methods that need to be implemented // ClusterBackend is all the methods that need to be implemented
// to provide cluster system specific functionality. // to provide cluster system specific functionality.
type ClusterBackend interface { type ClusterBackend interface {
Info(context.Context) swarm.Info Info() swarm.Info
}
// StatusProvider provides methods to get the swarm status of the current node.
type StatusProvider interface {
Status() string
} }


@@ -1,13 +1,9 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.19
package system // import "github.com/docker/docker/api/server/router/system" package system // import "github.com/docker/docker/api/server/router/system"
import ( import (
"github.com/docker/docker/api/server/router" "github.com/docker/docker/api/server/router"
"github.com/docker/docker/api/types/system" "github.com/docker/docker/builder/builder-next"
buildkit "github.com/docker/docker/builder/builder-next" "github.com/docker/docker/builder/fscache"
"resenje.org/singleflight"
) )
// systemRouter provides information about the Docker system overall. // systemRouter provides information about the Docker system overall.
@@ -16,20 +12,17 @@ type systemRouter struct {
backend Backend backend Backend
cluster ClusterBackend cluster ClusterBackend
routes []router.Route routes []router.Route
fscache *fscache.FSCache // legacy
builder *buildkit.Builder builder *buildkit.Builder
features func() map[string]bool features *map[string]bool
// collectSystemInfo is a single-flight for the /info endpoint,
// unique per API version (as different API versions may return
// a different API response).
collectSystemInfo singleflight.Group[string, *system.Info]
} }
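For context, the collectSystemInfo field uses the generic single-flight group from resenje.org/singleflight, whose Do method (as used in the getInfo hunk further down: value, shared flag, error) runs the callback once per key while concurrent callers wait for the shared result. A standalone sketch with illustrative names, not code from this diff:

package main

import (
	"context"
	"fmt"
	"sync"

	"resenje.org/singleflight"
)

func main() {
	var group singleflight.Group[string, string]
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// All callers using the key "v1.43" share one execution of the callback.
			v, shared, _ := group.Do(context.Background(), "v1.43", func(ctx context.Context) (string, error) {
				return "expensive system info", nil
			})
			fmt.Println(v, shared)
		}()
	}
	wg.Wait()
}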
// NewRouter initializes a new system router // NewRouter initializes a new system router
func NewRouter(b Backend, c ClusterBackend, builder *buildkit.Builder, features func() map[string]bool) router.Router { func NewRouter(b Backend, c ClusterBackend, fscache *fscache.FSCache, builder *buildkit.Builder, features *map[string]bool) router.Router {
r := &systemRouter{ r := &systemRouter{
backend: b, backend: b,
cluster: c, cluster: c,
fscache: fscache,
builder: builder, builder: builder,
features: features, features: features,
} }


@@ -7,19 +7,17 @@ import (
"net/http" "net/http"
"time" "time"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/router/build" "github.com/docker/docker/api/server/router/build"
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/events" "github.com/docker/docker/api/types/events"
"github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/registry" "github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/api/types/system"
timetypes "github.com/docker/docker/api/types/time" timetypes "github.com/docker/docker/api/types/time"
"github.com/docker/docker/api/types/versions" "github.com/docker/docker/api/types/versions"
"github.com/docker/docker/pkg/ioutils" "github.com/docker/docker/pkg/ioutils"
"github.com/pkg/errors" pkgerrors "github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup" "golang.org/x/sync/errgroup"
) )
@@ -32,13 +30,10 @@ func (s *systemRouter) pingHandler(ctx context.Context, w http.ResponseWriter, r
w.Header().Add("Cache-Control", "no-cache, no-store, must-revalidate") w.Header().Add("Cache-Control", "no-cache, no-store, must-revalidate")
w.Header().Add("Pragma", "no-cache") w.Header().Add("Pragma", "no-cache")
builderVersion := build.BuilderVersion(s.features()) builderVersion := build.BuilderVersion(*s.features)
if bv := builderVersion; bv != "" { if bv := builderVersion; bv != "" {
w.Header().Set("Builder-Version", string(bv)) w.Header().Set("Builder-Version", string(bv))
} }
w.Header().Set("Swarm", s.swarmStatus())
if r.Method == http.MethodHead { if r.Method == http.MethodHead {
w.Header().Set("Content-Type", "text/plain; charset=utf-8") w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.Header().Set("Content-Length", "0") w.Header().Set("Content-Length", "0")
@@ -48,42 +43,38 @@ func (s *systemRouter) pingHandler(ctx context.Context, w http.ResponseWriter, r
return err return err
} }
func (s *systemRouter) swarmStatus() string {
if s.cluster != nil {
if p, ok := s.cluster.(StatusProvider); ok {
return p.Status()
}
}
return string(swarm.LocalNodeStateInactive)
}
func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
version := httputils.VersionFromContext(ctx) info, err := s.backend.SystemInfo()
info, _, _ := s.collectSystemInfo.Do(ctx, version, func(ctx context.Context) (*system.Info, error) {
info, err := s.backend.SystemInfo(ctx)
if err != nil { if err != nil {
return nil, err return err
} }
if s.cluster != nil { if s.cluster != nil {
info.Swarm = s.cluster.Info(ctx) info.Swarm = s.cluster.Info()
info.Warnings = append(info.Warnings, info.Swarm.Warnings...) info.Warnings = append(info.Warnings, info.Swarm.Warnings...)
} }
if versions.LessThan(version, "1.25") { if versions.LessThan(httputils.VersionFromContext(ctx), "1.25") {
// TODO: handle this conversion in engine-api // TODO: handle this conversion in engine-api
kvSecOpts, err := system.DecodeSecurityOptions(info.SecurityOptions) type oldInfo struct {
*types.Info
ExecutionDriver string
}
old := &oldInfo{
Info: info,
ExecutionDriver: "<not supported>",
}
nameOnlySecurityOptions := []string{}
kvSecOpts, err := types.DecodeSecurityOptions(old.SecurityOptions)
if err != nil { if err != nil {
info.Warnings = append(info.Warnings, err.Error()) return err
} }
var nameOnly []string for _, s := range kvSecOpts {
for _, so := range kvSecOpts { nameOnlySecurityOptions = append(nameOnlySecurityOptions, s.Name)
nameOnly = append(nameOnly, so.Name)
} }
info.SecurityOptions = nameOnly old.SecurityOptions = nameOnlySecurityOptions
info.ExecutionDriver = "<not supported>" //nolint:staticcheck // ignore SA1019 (ExecutionDriver is deprecated) return httputils.WriteJSON(w, http.StatusOK, old)
} }
if versions.LessThan(version, "1.39") { if versions.LessThan(httputils.VersionFromContext(ctx), "1.39") {
if info.KernelVersion == "" { if info.KernelVersion == "" {
info.KernelVersion = "<unknown>" info.KernelVersion = "<unknown>"
} }
@@ -91,124 +82,56 @@ func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *ht
info.OperatingSystem = "<unknown>" info.OperatingSystem = "<unknown>"
} }
} }
if versions.LessThan(version, "1.44") {
for k, rt := range info.Runtimes {
// Status field introduced in API v1.44.
info.Runtimes[k] = system.RuntimeWithStatus{Runtime: rt.Runtime}
}
}
if versions.GreaterThanOrEqualTo(version, "1.42") {
info.KernelMemory = false
}
return info, nil
})
return httputils.WriteJSON(w, http.StatusOK, info) return httputils.WriteJSON(w, http.StatusOK, info)
} }
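A hedged sketch of the name-only reduction the handler applies for API versions below 1.25, using the system package already imported in this file; the input strings are illustrative values in the "name=..." format, not data taken from this diff:

opts := []string{"name=apparmor", "name=seccomp,profile=default"}
kvSecOpts, err := system.DecodeSecurityOptions(opts)
if err != nil {
	// in the handler above, a decode error is only appended to info.Warnings
}
var nameOnly []string
for _, so := range kvSecOpts {
	nameOnly = append(nameOnly, so.Name) // "apparmor", "seccomp"
}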
func (s *systemRouter) getVersion(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *systemRouter) getVersion(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
info, err := s.backend.SystemVersion(ctx) info := s.backend.SystemVersion()
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, info) return httputils.WriteJSON(w, http.StatusOK, info)
} }
func (s *systemRouter) getDiskUsage(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *systemRouter) getDiskUsage(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
version := httputils.VersionFromContext(ctx)
var getContainers, getImages, getVolumes, getBuildCache bool
typeStrs, ok := r.Form["type"]
if versions.LessThan(version, "1.42") || !ok {
getContainers, getImages, getVolumes, getBuildCache = true, true, true, s.builder != nil
} else {
for _, typ := range typeStrs {
switch types.DiskUsageObject(typ) {
case types.ContainerObject:
getContainers = true
case types.ImageObject:
getImages = true
case types.VolumeObject:
getVolumes = true
case types.BuildCacheObject:
getBuildCache = true
default:
return invalidRequestError{Err: fmt.Errorf("unknown object type: %s", typ)}
}
}
}
eg, ctx := errgroup.WithContext(ctx) eg, ctx := errgroup.WithContext(ctx)
var systemDiskUsage *types.DiskUsage var du *types.DiskUsage
if getContainers || getImages || getVolumes {
eg.Go(func() error { eg.Go(func() error {
var err error var err error
systemDiskUsage, err = s.backend.SystemDiskUsage(ctx, DiskUsageOptions{ du, err = s.backend.SystemDiskUsage(ctx)
Containers: getContainers,
Images: getImages,
Volumes: getVolumes,
})
return err return err
}) })
var builderSize int64 // legacy
eg.Go(func() error {
var err error
builderSize, err = s.fscache.DiskUsage(ctx)
if err != nil {
return pkgerrors.Wrap(err, "error getting fscache build cache usage")
} }
return nil
})
var buildCache []*types.BuildCache var buildCache []*types.BuildCache
if getBuildCache {
eg.Go(func() error { eg.Go(func() error {
var err error var err error
buildCache, err = s.builder.DiskUsage(ctx) buildCache, err = s.builder.DiskUsage(ctx)
if err != nil { if err != nil {
return errors.Wrap(err, "error getting build cache usage") return pkgerrors.Wrap(err, "error getting build cache usage")
}
if buildCache == nil {
// Ensure an empty `BuildCache` field is represented as an empty JSON array (`[]`)
// instead of `null`, to be consistent with `Images`, `Containers`, etc.
buildCache = []*types.BuildCache{}
} }
return nil return nil
}) })
}
if err := eg.Wait(); err != nil { if err := eg.Wait(); err != nil {
return err return err
} }
var builderSize int64
if versions.LessThan(version, "1.42") {
for _, b := range buildCache { for _, b := range buildCache {
builderSize += b.Size builderSize += b.Size
// Parents field was added in API 1.42 to replace the Parent field.
b.Parents = nil
}
}
if versions.GreaterThanOrEqualTo(version, "1.42") {
for _, b := range buildCache {
// Parent field is deprecated in API v1.42 and up, as it is deprecated
// in BuildKit. Empty the field to omit it in the API response.
b.Parent = "" //nolint:staticcheck // ignore SA1019 (Parent field is deprecated)
}
}
if versions.LessThan(version, "1.44") {
for _, b := range systemDiskUsage.Images {
b.VirtualSize = b.Size //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.44.
}
} }
du := types.DiskUsage{ du.BuilderSize = builderSize
BuildCache: buildCache, du.BuildCache = buildCache
BuilderSize: builderSize,
}
if systemDiskUsage != nil {
du.LayersSize = systemDiskUsage.LayersSize
du.Images = systemDiskUsage.Images
du.Containers = systemDiskUsage.Containers
du.Volumes = systemDiskUsage.Volumes
}
return httputils.WriteJSON(w, http.StatusOK, du) return httputils.WriteJSON(w, http.StatusOK, du)
} }
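As a usage note (inferred from the handler above, not shown in the diff): clients on API v1.42 and newer can pass repeated type query parameters to /system/df to restrict what is computed; older clients, or requests without the parameter, keep the previous compute-everything behaviour. A small standalone example of building such a query with net/url:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	q := url.Values{}
	q.Add("type", "container")
	q.Add("type", "volume")
	// Image and build-cache usage will not be computed for this request.
	fmt.Println("/v1.42/system/df?" + q.Encode())
}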
@@ -251,9 +174,7 @@ func (s *systemRouter) getEvents(ctx context.Context, w http.ResponseWriter, r *
if !onlyPastEvents { if !onlyPastEvents {
dur := until.Sub(now) dur := until.Sub(now)
timer := time.NewTimer(dur) timeout = time.After(dur)
defer timer.Stop()
timeout = timer.C
} }
} }
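The switch from time.After to an explicit timer above follows the usual Go pattern: time.After cannot be stopped, so its underlying timer lingers until it fires even if the handler returns early, whereas a named timer can be released with Stop. A sketch of the pattern, assuming dur, ctx, and an early-return path in scope as in the surrounding handler:

timer := time.NewTimer(dur)
defer timer.Stop() // releases the timer if we return before it fires
select {
case <-timer.C:
	// deadline reached
case <-ctx.Done():
	// caller went away; the deferred Stop prevents the timer from lingering
}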
@@ -287,7 +208,7 @@ func (s *systemRouter) getEvents(ctx context.Context, w http.ResponseWriter, r *
case ev := <-l: case ev := <-l:
jev, ok := ev.(events.Message) jev, ok := ev.(events.Message)
if !ok { if !ok {
log.G(ctx).Warnf("unexpected event message: %q", ev) logrus.Warnf("unexpected event message: %q", ev)
continue continue
} }
if err := enc.Encode(jev); err != nil { if err := enc.Encode(jev); err != nil {
@@ -296,14 +217,14 @@ func (s *systemRouter) getEvents(ctx context.Context, w http.ResponseWriter, r *
case <-timeout: case <-timeout:
return nil return nil
case <-ctx.Done(): case <-ctx.Done():
log.G(ctx).Debug("Client context cancelled, stop sending events") logrus.Debug("Client context cancelled, stop sending events")
return nil return nil
} }
} }
} }
func (s *systemRouter) postAuth(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (s *systemRouter) postAuth(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var config *registry.AuthConfig var config *types.AuthConfig
err := json.NewDecoder(r.Body).Decode(&config) err := json.NewDecoder(r.Body).Decode(&config)
r.Body.Close() r.Body.Close()
if err != nil { if err != nil {


@@ -7,28 +7,14 @@ import (
// TODO return types need to be refactored into pkg // TODO return types need to be refactored into pkg
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/volume"
) )
// Backend is the methods that need to be implemented to provide // Backend is the methods that need to be implemented to provide
// volume specific functionality // volume specific functionality
type Backend interface { type Backend interface {
List(ctx context.Context, filter filters.Args) ([]*volume.Volume, []string, error) List(ctx context.Context, filter filters.Args) ([]*types.Volume, []string, error)
Get(ctx context.Context, name string, opts ...opts.GetOption) (*volume.Volume, error) Get(ctx context.Context, name string, opts ...opts.GetOption) (*types.Volume, error)
Create(ctx context.Context, name, driverName string, opts ...opts.CreateOption) (*volume.Volume, error) Create(ctx context.Context, name, driverName string, opts ...opts.CreateOption) (*types.Volume, error)
Remove(ctx context.Context, name string, opts ...opts.RemoveOption) error Remove(ctx context.Context, name string, opts ...opts.RemoveOption) error
Prune(ctx context.Context, pruneFilters filters.Args) (*types.VolumesPruneReport, error) Prune(ctx context.Context, pruneFilters filters.Args) (*types.VolumesPruneReport, error)
} }
// ClusterBackend is the backend used for Swarm Cluster Volumes. Regular
// volumes go through the volume service, but to avoid a cross-dependency
// between the cluster package and the volume package, we simply provide two
// backends here.
type ClusterBackend interface {
GetVolume(nameOrID string) (volume.Volume, error)
GetVolumes(options volume.ListOptions) ([]*volume.Volume, error)
CreateVolume(volume volume.CreateOptions) (*volume.Volume, error)
RemoveVolume(nameOrID string, force bool) error
UpdateVolume(nameOrID string, version uint64, volume volume.UpdateOptions) error
IsManager() bool
}


@@ -5,15 +5,13 @@ import "github.com/docker/docker/api/server/router"
// volumeRouter is a router to talk with the volumes controller // volumeRouter is a router to talk with the volumes controller
type volumeRouter struct { type volumeRouter struct {
backend Backend backend Backend
cluster ClusterBackend
routes []router.Route routes []router.Route
} }
// NewRouter initializes a new volume router // NewRouter initializes a new volume router
func NewRouter(b Backend, cb ClusterBackend) router.Router { func NewRouter(b Backend) router.Router {
r := &volumeRouter{ r := &volumeRouter{
backend: b, backend: b,
cluster: cb,
} }
r.initRoutes() r.initRoutes()
return r return r
@@ -32,8 +30,6 @@ func (r *volumeRouter) initRoutes() {
// POST // POST
router.NewPostRoute("/volumes/create", r.postVolumesCreate), router.NewPostRoute("/volumes/create", r.postVolumesCreate),
router.NewPostRoute("/volumes/prune", r.postVolumesPrune), router.NewPostRoute("/volumes/prune", r.postVolumesPrune),
// PUT
router.NewPutRoute("/volumes/{name:.*}", r.putVolumesUpdate),
// DELETE // DELETE
router.NewDeleteRoute("/volumes/{name:.*}", r.deleteVolumes), router.NewDeleteRoute("/volumes/{name:.*}", r.deleteVolumes),
} }


@@ -2,26 +2,18 @@ package volume // import "github.com/docker/docker/api/server/router/volume"
import ( import (
"context" "context"
"fmt" "encoding/json"
"io"
"net/http" "net/http"
"strconv"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/filters" "github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/versions" volumetypes "github.com/docker/docker/api/types/volume"
"github.com/docker/docker/api/types/volume"
"github.com/docker/docker/errdefs" "github.com/docker/docker/errdefs"
"github.com/docker/docker/volume/service/opts" "github.com/docker/docker/volume/service/opts"
"github.com/pkg/errors" "github.com/pkg/errors"
) )
const (
// clusterVolumesVersion defines the API version in which swarm cluster volume
// functionality was introduced. It avoids the use of magic numbers.
clusterVolumesVersion = "1.42"
)
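The constant above feeds the gating pattern repeated throughout this file (sketch only, assuming the handler's ctx and the v *volumeRouter receiver are in scope): cluster-volume code paths run only on API 1.42 and newer, and only when this node is a swarm manager.

version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, clusterVolumesVersion) && v.cluster.IsManager() {
	// cluster-volume handling goes here
}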
func (v *volumeRouter) getVolumesList(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (v *volumeRouter) getVolumesList(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
@@ -29,62 +21,25 @@ func (v *volumeRouter) getVolumesList(ctx context.Context, w http.ResponseWriter
filters, err := filters.FromJSON(r.Form.Get("filters")) filters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil { if err != nil {
return errors.Wrap(err, "error reading volume filters") return errdefs.InvalidParameter(errors.Wrap(err, "error reading volume filters"))
} }
volumes, warnings, err := v.backend.List(ctx, filters) volumes, warnings, err := v.backend.List(ctx, filters)
if err != nil { if err != nil {
return err return err
} }
return httputils.WriteJSON(w, http.StatusOK, &volumetypes.VolumeListOKBody{Volumes: volumes, Warnings: warnings})
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, clusterVolumesVersion) && v.cluster.IsManager() {
clusterVolumes, swarmErr := v.cluster.GetVolumes(volume.ListOptions{Filters: filters})
if swarmErr != nil {
// if there is a swarm error, we may not want to error out right
// away. the local list probably worked. instead, let's do what we
// do if there's a bad driver while trying to list: add the error
// to the warnings. don't do this if swarm is not initialized.
warnings = append(warnings, swarmErr.Error())
}
// add the cluster volumes to the return
volumes = append(volumes, clusterVolumes...)
}
return httputils.WriteJSON(w, http.StatusOK, &volume.ListResponse{Volumes: volumes, Warnings: warnings})
} }
func (v *volumeRouter) getVolumeByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (v *volumeRouter) getVolumeByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil { if err := httputils.ParseForm(r); err != nil {
return err return err
} }
version := httputils.VersionFromContext(ctx)
// re: volume name duplication volume, err := v.backend.Get(ctx, vars["name"], opts.WithGetResolveStatus)
//
// we prefer to get volumes locally before attempting to get them from the
// cluster. Local volumes can only be looked up by name, but cluster
// volumes can also be looked up by ID.
vol, err := v.backend.Get(ctx, vars["name"], opts.WithGetResolveStatus)
// if the volume is not found in the regular volume backend, and the client
// is using an API version greater than 1.42 (when cluster volumes were
// introduced), then check if Swarm has the volume.
if errdefs.IsNotFound(err) && versions.GreaterThanOrEqualTo(version, clusterVolumesVersion) && v.cluster.IsManager() {
swarmVol, err := v.cluster.GetVolume(vars["name"])
// if swarm returns an error and that error indicates that swarm is not
// initialized, return original NotFound error. Otherwise, we'd return
// a weird swarm unavailable error on non-swarm engines.
if err != nil { if err != nil {
return err return err
} }
vol = &swarmVol return httputils.WriteJSON(w, http.StatusOK, volume)
} else if err != nil {
// otherwise, if this isn't NotFound, or this isn't a high enough version,
// just return the error by itself.
return err
}
return httputils.WriteJSON(w, http.StatusOK, vol)
} }
func (v *volumeRouter) postVolumesCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (v *volumeRouter) postVolumesCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -92,65 +47,23 @@ func (v *volumeRouter) postVolumesCreate(ctx context.Context, w http.ResponseWri
return err return err
} }
var req volume.CreateOptions if err := httputils.CheckForJSON(r); err != nil {
if err := httputils.ReadJSON(r, &req); err != nil {
return err return err
} }
var ( var req volumetypes.VolumeCreateBody
vol *volume.Volume if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
err error if err == io.EOF {
version = httputils.VersionFromContext(ctx) return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
)
// if the ClusterVolumeSpec is filled in, then this is a cluster volume
// and is created through the swarm cluster volume backend.
//
// re: volume name duplication
//
// As it happens, there is no good way to prevent duplication of a volume
// name between local and cluster volumes. This is because Swarm volumes
// can be created from any manager node, bypassing most of the protections
// we could put into the engine side.
//
// Instead, we will allow creating a volume with a duplicate name, which
// should not break anything.
if req.ClusterVolumeSpec != nil && versions.GreaterThanOrEqualTo(version, clusterVolumesVersion) {
log.G(ctx).Debug("using cluster volume")
vol, err = v.cluster.CreateVolume(req)
} else {
log.G(ctx).Debug("using regular volume")
vol, err = v.backend.Create(ctx, req.Name, req.Driver, opts.WithCreateOptions(req.DriverOpts), opts.WithCreateLabels(req.Labels))
} }
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusCreated, vol)
}
func (v *volumeRouter) putVolumesUpdate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if !v.cluster.IsManager() {
return errdefs.Unavailable(errors.New("volume update only valid for cluster volumes, but swarm is unavailable"))
}
if err := httputils.ParseForm(r); err != nil {
return err
}
rawVersion := r.URL.Query().Get("version")
version, err := strconv.ParseUint(rawVersion, 10, 64)
if err != nil {
err = fmt.Errorf("invalid swarm object version '%s': %v", rawVersion, err)
return errdefs.InvalidParameter(err) return errdefs.InvalidParameter(err)
} }
var req volume.UpdateOptions volume, err := v.backend.Create(ctx, req.Name, req.Driver, opts.WithCreateOptions(req.DriverOpts), opts.WithCreateLabels(req.Labels))
if err := httputils.ReadJSON(r, &req); err != nil { if err != nil {
return err return err
} }
return httputils.WriteJSON(w, http.StatusCreated, volume)
return v.cluster.UpdateVolume(vars["name"], version, req)
} }
func (v *volumeRouter) deleteVolumes(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error { func (v *volumeRouter) deleteVolumes(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -158,28 +71,7 @@ func (v *volumeRouter) deleteVolumes(ctx context.Context, w http.ResponseWriter,
return err return err
} }
force := httputils.BoolValue(r, "force") force := httputils.BoolValue(r, "force")
if err := v.backend.Remove(ctx, vars["name"], opts.WithPurgeOnError(force)); err != nil {
// First we try deleting the local volume. The volume may not be found as a
// local volume, but could be a cluster volume, so we ignore "not found"
// errors at this stage. Note that no "not found" error is produced if
// "force" is enabled.
err := v.backend.Remove(ctx, vars["name"], opts.WithPurgeOnError(force))
if err != nil && !errdefs.IsNotFound(err) {
return err
}
// If no volume was found, the volume may be a cluster volume. If force
// is enabled, the volume backend won't return an error for non-existing
// volumes, so we don't know if removal succeeded (or if the volume existed at all).
// In that case we always try to delete cluster volumes as well.
if errdefs.IsNotFound(err) || force {
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, clusterVolumesVersion) && v.cluster.IsManager() {
err = v.cluster.RemoveVolume(vars["name"], force)
}
}
if err != nil {
return err return err
} }
w.WriteHeader(http.StatusNoContent) w.WriteHeader(http.StatusNoContent)
@@ -196,12 +88,6 @@ func (v *volumeRouter) postVolumesPrune(ctx context.Context, w http.ResponseWrit
return err return err
} }
// API version 1.42 changes behavior where prune should only prune anonymous volumes.
// To keep older API behavior working, we need to add this filter option to consider all (local) volumes for pruning, not just anonymous ones.
if versions.LessThan(httputils.VersionFromContext(ctx), "1.42") {
pruneFilters.Add("all", "true")
}
pruneReport, err := v.backend.Prune(ctx, pruneFilters) pruneReport, err := v.backend.Prune(ctx, pruneFilters)
if err != nil { if err != nil {
return err return err


@@ -1,760 +0,0 @@
package volume
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http/httptest"
"testing"
"gotest.tools/v3/assert"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/volume"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/volume/service/opts"
)
func callGetVolume(v *volumeRouter, name string) (*httptest.ResponseRecorder, error) {
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
vars := map[string]string{"name": name}
req := httptest.NewRequest("GET", fmt.Sprintf("/volumes/%s", name), nil)
resp := httptest.NewRecorder()
err := v.getVolumeByName(ctx, resp, req, vars)
return resp, err
}
func callListVolumes(v *volumeRouter) (*httptest.ResponseRecorder, error) {
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
vars := map[string]string{}
req := httptest.NewRequest("GET", "/volumes", nil)
resp := httptest.NewRecorder()
err := v.getVolumesList(ctx, resp, req, vars)
return resp, err
}
func TestGetVolumeByNameNotFoundNoSwarm(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{},
cluster: &fakeClusterBackend{},
}
_, err := callGetVolume(v, "notReal")
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
func TestGetVolumeByNameNotFoundNotManager(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{},
cluster: &fakeClusterBackend{swarm: true},
}
_, err := callGetVolume(v, "notReal")
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
func TestGetVolumeByNameNotFound(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{},
cluster: &fakeClusterBackend{swarm: true, manager: true},
}
_, err := callGetVolume(v, "notReal")
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
func TestGetVolumeByNameFoundRegular(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"volume1": {
Name: "volume1",
},
},
},
cluster: &fakeClusterBackend{swarm: true, manager: true},
}
_, err := callGetVolume(v, "volume1")
assert.NilError(t, err)
}
func TestGetVolumeByNameFoundSwarm(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{},
cluster: &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{
"volume1": {
Name: "volume1",
},
},
},
}
_, err := callGetVolume(v, "volume1")
assert.NilError(t, err)
}
func TestListVolumes(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"v1": {Name: "v1"},
"v2": {Name: "v2"},
},
},
cluster: &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{
"v3": {Name: "v3"},
"v4": {Name: "v4"},
},
},
}
resp, err := callListVolumes(v)
assert.NilError(t, err)
d := json.NewDecoder(resp.Result().Body)
respVols := volume.ListResponse{}
assert.NilError(t, d.Decode(&respVols))
assert.Assert(t, respVols.Volumes != nil)
assert.Equal(t, len(respVols.Volumes), 4, "volumes %v", respVols.Volumes)
}
func TestListVolumesNoSwarm(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"v1": {Name: "v1"},
"v2": {Name: "v2"},
},
},
cluster: &fakeClusterBackend{},
}
_, err := callListVolumes(v)
assert.NilError(t, err)
}
func TestListVolumesNoManager(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"v1": {Name: "v1"},
"v2": {Name: "v2"},
},
},
cluster: &fakeClusterBackend{swarm: true},
}
resp, err := callListVolumes(v)
assert.NilError(t, err)
d := json.NewDecoder(resp.Result().Body)
respVols := volume.ListResponse{}
assert.NilError(t, d.Decode(&respVols))
assert.Equal(t, len(respVols.Volumes), 2)
assert.Equal(t, len(respVols.Warnings), 0)
}
func TestCreateRegularVolume(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeCreate := volume.CreateOptions{
Name: "vol1",
Driver: "foodriver",
}
buf := bytes.Buffer{}
e := json.NewEncoder(&buf)
e.Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.NilError(t, err)
respVolume := volume.Volume{}
assert.NilError(t, json.NewDecoder(resp.Result().Body).Decode(&respVolume))
assert.Equal(t, respVolume.Name, "vol1")
assert.Equal(t, respVolume.Driver, "foodriver")
assert.Equal(t, 1, len(b.volumes))
assert.Equal(t, 0, len(c.volumes))
}
func TestCreateSwarmVolumeNoSwarm(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeCreate := volume.CreateOptions{
ClusterVolumeSpec: &volume.ClusterVolumeSpec{},
Name: "volCluster",
Driver: "someCSI",
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsUnavailable(err))
}
func TestCreateSwarmVolumeNotManager(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{swarm: true}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeCreate := volume.CreateOptions{
ClusterVolumeSpec: &volume.ClusterVolumeSpec{},
Name: "volCluster",
Driver: "someCSI",
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsUnavailable(err))
}
func TestCreateVolumeCluster(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeCreate := volume.CreateOptions{
ClusterVolumeSpec: &volume.ClusterVolumeSpec{},
Name: "volCluster",
Driver: "someCSI",
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.NilError(t, err)
respVolume := volume.Volume{}
assert.NilError(t, json.NewDecoder(resp.Result().Body).Decode(&respVolume))
assert.Equal(t, respVolume.Name, "volCluster")
assert.Equal(t, respVolume.Driver, "someCSI")
assert.Equal(t, 0, len(b.volumes))
assert.Equal(t, 1, len(c.volumes))
}
func TestUpdateVolume(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{
"vol1": {
Name: "vo1",
ClusterVolume: &volume.ClusterVolume{
ID: "vol1",
},
},
},
}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeUpdate := volume.UpdateOptions{
Spec: &volume.ClusterVolumeSpec{},
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeUpdate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/vol1/update?version=0", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.putVolumesUpdate(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, c.volumes["vol1"].ClusterVolume.Meta.Version.Index, uint64(1))
}
func TestUpdateVolumeNoSwarm(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeUpdate := volume.UpdateOptions{
Spec: &volume.ClusterVolumeSpec{},
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeUpdate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/vol1/update?version=0", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.putVolumesUpdate(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsUnavailable(err))
}
func TestUpdateVolumeNotFound(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{},
}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeUpdate := volume.UpdateOptions{
Spec: &volume.ClusterVolumeSpec{},
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeUpdate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/vol1/update?version=0", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.putVolumesUpdate(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
func TestVolumeRemove(t *testing.T) {
b := &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"vol1": {
Name: "vol1",
},
},
}
c := &fakeClusterBackend{swarm: true, manager: true}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, len(b.volumes), 0)
}
func TestVolumeRemoveSwarm(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{
"vol1": {
Name: "vol1",
ClusterVolume: &volume.ClusterVolume{},
},
},
}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, len(c.volumes), 0)
}
func TestVolumeRemoveNotFoundNoSwarm(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err), err.Error())
}
func TestVolumeRemoveNotFoundNoManager(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{swarm: true}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
func TestVolumeRemoveFoundNoSwarm(t *testing.T) {
b := &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"vol1": {
Name: "vol1",
},
},
}
c := &fakeClusterBackend{}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, len(b.volumes), 0)
}
func TestVolumeRemoveNoSwarmInUse(t *testing.T) {
b := &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"inuse": {
Name: "inuse",
},
},
}
c := &fakeClusterBackend{}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/inuse", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "inuse"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsConflict(err))
}
func TestVolumeRemoveSwarmForce(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{
"vol1": {
Name: "vol1",
ClusterVolume: &volume.ClusterVolume{},
Options: map[string]string{"mustforce": "yes"},
},
},
}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsConflict(err))
ctx = context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req = httptest.NewRequest("DELETE", "/volumes/vol1?force=1", nil)
resp = httptest.NewRecorder()
err = v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, len(b.volumes), 0)
assert.Equal(t, len(c.volumes), 0)
}
type fakeVolumeBackend struct {
volumes map[string]*volume.Volume
}
func (b *fakeVolumeBackend) List(_ context.Context, _ filters.Args) ([]*volume.Volume, []string, error) {
volumes := []*volume.Volume{}
for _, v := range b.volumes {
volumes = append(volumes, v)
}
return volumes, nil, nil
}
func (b *fakeVolumeBackend) Get(_ context.Context, name string, _ ...opts.GetOption) (*volume.Volume, error) {
if v, ok := b.volumes[name]; ok {
return v, nil
}
return nil, errdefs.NotFound(fmt.Errorf("volume %s not found", name))
}
func (b *fakeVolumeBackend) Create(_ context.Context, name, driverName string, _ ...opts.CreateOption) (*volume.Volume, error) {
if _, ok := b.volumes[name]; ok {
// TODO(dperny): return appropriate error type
return nil, fmt.Errorf("already exists")
}
v := &volume.Volume{
Name: name,
Driver: driverName,
}
if b.volumes == nil {
b.volumes = map[string]*volume.Volume{
name: v,
}
} else {
b.volumes[name] = v
}
return v, nil
}
func (b *fakeVolumeBackend) Remove(_ context.Context, name string, o ...opts.RemoveOption) error {
removeOpts := &opts.RemoveConfig{}
for _, opt := range o {
opt(removeOpts)
}
if v, ok := b.volumes[name]; !ok {
if !removeOpts.PurgeOnError {
return errdefs.NotFound(fmt.Errorf("volume %s not found", name))
}
} else if v.Name == "inuse" {
return errdefs.Conflict(fmt.Errorf("volume in use"))
}
delete(b.volumes, name)
return nil
}
func (b *fakeVolumeBackend) Prune(_ context.Context, _ filters.Args) (*types.VolumesPruneReport, error) {
return nil, nil
}
type fakeClusterBackend struct {
swarm bool
manager bool
idCount int
volumes map[string]*volume.Volume
}
func (c *fakeClusterBackend) checkSwarm() error {
if !c.swarm {
return errdefs.Unavailable(fmt.Errorf("this node is not a swarm manager. Use \"docker swarm init\" or \"docker swarm join\" to connect this node to swarm and try again"))
} else if !c.manager {
return errdefs.Unavailable(fmt.Errorf("this node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager"))
}
return nil
}
func (c *fakeClusterBackend) IsManager() bool {
return c.swarm && c.manager
}
func (c *fakeClusterBackend) GetVolume(nameOrID string) (volume.Volume, error) {
if err := c.checkSwarm(); err != nil {
return volume.Volume{}, err
}
if v, ok := c.volumes[nameOrID]; ok {
return *v, nil
}
return volume.Volume{}, errdefs.NotFound(fmt.Errorf("volume %s not found", nameOrID))
}
func (c *fakeClusterBackend) GetVolumes(options volume.ListOptions) ([]*volume.Volume, error) {
if err := c.checkSwarm(); err != nil {
return nil, err
}
volumes := []*volume.Volume{}
for _, v := range c.volumes {
volumes = append(volumes, v)
}
return volumes, nil
}
func (c *fakeClusterBackend) CreateVolume(volumeCreate volume.CreateOptions) (*volume.Volume, error) {
if err := c.checkSwarm(); err != nil {
return nil, err
}
if _, ok := c.volumes[volumeCreate.Name]; ok {
// TODO(dperny): return appropriate already exists error
return nil, fmt.Errorf("already exists")
}
v := &volume.Volume{
Name: volumeCreate.Name,
Driver: volumeCreate.Driver,
Labels: volumeCreate.Labels,
Options: volumeCreate.DriverOpts,
Scope: "global",
}
v.ClusterVolume = &volume.ClusterVolume{
ID: fmt.Sprintf("cluster_%d", c.idCount),
Spec: *volumeCreate.ClusterVolumeSpec,
}
c.idCount = c.idCount + 1
if c.volumes == nil {
c.volumes = map[string]*volume.Volume{
v.Name: v,
}
} else {
c.volumes[v.Name] = v
}
return v, nil
}
func (c *fakeClusterBackend) RemoveVolume(nameOrID string, force bool) error {
if err := c.checkSwarm(); err != nil {
return err
}
v, ok := c.volumes[nameOrID]
if !ok {
return errdefs.NotFound(fmt.Errorf("volume %s not found", nameOrID))
}
if _, mustforce := v.Options["mustforce"]; mustforce && !force {
return errdefs.Conflict(fmt.Errorf("volume %s must be force removed", nameOrID))
}
delete(c.volumes, nameOrID)
return nil
}
func (c *fakeClusterBackend) UpdateVolume(nameOrID string, version uint64, _ volume.UpdateOptions) error {
if err := c.checkSwarm(); err != nil {
return err
}
if v, ok := c.volumes[nameOrID]; ok {
if v.ClusterVolume.Meta.Version.Index != version {
return fmt.Errorf("wrong version")
}
v.ClusterVolume.Meta.Version.Index = v.ClusterVolume.Meta.Version.Index + 1
// for testing, we don't actually need to change anything about the
// volume object. let's just increment the version so we can see the
// call happened.
} else {
return errdefs.NotFound(fmt.Errorf("volume %q not found", nameOrID))
}
return nil
}


@@ -0,0 +1,30 @@
package server // import "github.com/docker/docker/api/server"
import (
"net/http"
"sync"
"github.com/gorilla/mux"
)
// routerSwapper is an http.Handler that allows you to swap
// mux routers.
type routerSwapper struct {
mu sync.Mutex
router *mux.Router
}
// Swap changes the old router with the new one.
func (rs *routerSwapper) Swap(newRouter *mux.Router) {
rs.mu.Lock()
rs.router = newRouter
rs.mu.Unlock()
}
// ServeHTTP makes routerSwapper implement the http.Handler interface.
func (rs *routerSwapper) ServeHTTP(w http.ResponseWriter, r *http.Request) {
rs.mu.Lock()
router := rs.router
rs.mu.Unlock()
router.ServeHTTP(w, r)
}
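A hypothetical in-package usage of routerSwapper (addresses and routes are illustrative, not from this diff): the swapper is installed once as the http.Handler, and a freshly built mux can later be swapped in while requests keep flowing.

rs := &routerSwapper{router: mux.NewRouter()}
srv := &http.Server{Addr: ":2375", Handler: rs}
go srv.ListenAndServe()

// Later, rebuild the routes and swap them in atomically.
newMux := mux.NewRouter()
newMux.HandleFunc("/_ping", func(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("OK"))
})
rs.Swap(newMux) // subsequent requests are served by newMux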


@@ -2,37 +2,124 @@ package server // import "github.com/docker/docker/api/server"
import ( import (
"context" "context"
"crypto/tls"
"net"
"net/http" "net/http"
"strings"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils" "github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/middleware" "github.com/docker/docker/api/server/middleware"
"github.com/docker/docker/api/server/router" "github.com/docker/docker/api/server/router"
"github.com/docker/docker/api/server/router/debug" "github.com/docker/docker/api/server/router/debug"
"github.com/docker/docker/api/types"
"github.com/docker/docker/dockerversion" "github.com/docker/docker/dockerversion"
"github.com/docker/docker/errdefs"
"github.com/gorilla/mux" "github.com/gorilla/mux"
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp" "github.com/sirupsen/logrus"
) )
// versionMatcher defines a variable matcher to be parsed by the router // versionMatcher defines a variable matcher to be parsed by the router
// when a request is about to be served. // when a request is about to be served.
const versionMatcher = "/v{version:[0-9.]+}" const versionMatcher = "/v{version:[0-9.]+}"
// Config provides the configuration for the API server
type Config struct {
Logging bool
CorsHeaders string
Version string
SocketGroup string
TLSConfig *tls.Config
}
// Server contains instance details for the server // Server contains instance details for the server
type Server struct { type Server struct {
cfg *Config
servers []*HTTPServer
routers []router.Router
routerSwapper *routerSwapper
middlewares []middleware.Middleware middlewares []middleware.Middleware
} }
// New returns a new instance of the server based on the specified configuration.
// It allocates resources that will be needed to serve the API (ports, unix sockets).
func New(cfg *Config) *Server {
return &Server{
cfg: cfg,
}
}
// UseMiddleware appends a new middleware to the request chain. // UseMiddleware appends a new middleware to the request chain.
// This needs to be called before the API routes are configured. // This needs to be called before the API routes are configured.
func (s *Server) UseMiddleware(m middleware.Middleware) { func (s *Server) UseMiddleware(m middleware.Middleware) {
s.middlewares = append(s.middlewares, m) s.middlewares = append(s.middlewares, m)
} }
func (s *Server) makeHTTPHandler(handler httputils.APIFunc, operation string) http.HandlerFunc { // Accept sets a listener the server accepts connections into.
return otelhttp.NewHandler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { func (s *Server) Accept(addr string, listeners ...net.Listener) {
for _, listener := range listeners {
httpServer := &HTTPServer{
srv: &http.Server{
Addr: addr,
},
l: listener,
}
s.servers = append(s.servers, httpServer)
}
}
// Close closes servers and thus stop receiving requests
func (s *Server) Close() {
for _, srv := range s.servers {
if err := srv.Close(); err != nil {
logrus.Error(err)
}
}
}
// serveAPI loops through all initialized servers and spawns a goroutine
// running Serve for each. It also installs the router built by createMux() as the handler.
func (s *Server) serveAPI() error {
var chErrors = make(chan error, len(s.servers))
for _, srv := range s.servers {
srv.srv.Handler = s.routerSwapper
go func(srv *HTTPServer) {
var err error
logrus.Infof("API listen on %s", srv.l.Addr())
if err = srv.Serve(); err != nil && strings.Contains(err.Error(), "use of closed network connection") {
err = nil
}
chErrors <- err
}(srv)
}
for range s.servers {
err := <-chErrors
if err != nil {
return err
}
}
return nil
}
// HTTPServer contains an instance of an HTTP server and its listener.
// srv *http.Server contains the configuration used to create the HTTP server and a mux router with all API endpoints.
// l net.Listener is a TCP or socket listener that dispatches incoming requests to the router.
type HTTPServer struct {
srv *http.Server
l net.Listener
}
// Serve starts listening for inbound requests.
func (s *HTTPServer) Serve() error {
return s.srv.Serve(s.l)
}
// Close closes the HTTPServer from listening for the inbound requests.
func (s *HTTPServer) Close() error {
return s.l.Close()
}
func (s *Server) makeHTTPHandler(handler httputils.APIFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
// Define the context that we'll pass around to share info // Define the context that we'll pass around to share info
// like the docker-request-id. // like the docker-request-id.
// //
@@ -44,7 +131,6 @@ func (s *Server) makeHTTPHandler(handler httputils.APIFunc, operation string) ht
// use intermediate variable to prevent "should not use basic type // use intermediate variable to prevent "should not use basic type
// string as key in context.WithValue" golint errors // string as key in context.WithValue" golint errors
ctx := context.WithValue(r.Context(), dockerversion.UAStringKey{}, r.Header.Get("User-Agent")) ctx := context.WithValue(r.Context(), dockerversion.UAStringKey{}, r.Header.Get("User-Agent"))
r = r.WithContext(ctx) r = r.WithContext(ctx)
handlerFunc := s.handlerWithGlobalMiddlewares(handler) handlerFunc := s.handlerWithGlobalMiddlewares(handler)
@@ -54,47 +140,72 @@ func (s *Server) makeHTTPHandler(handler httputils.APIFunc, operation string) ht
} }
if err := handlerFunc(ctx, w, r, vars); err != nil { if err := handlerFunc(ctx, w, r, vars); err != nil {
statusCode := httpstatus.FromError(err) statusCode := errdefs.GetHTTPErrorStatusCode(err)
if statusCode >= 500 { if statusCode >= 500 {
log.G(ctx).Errorf("Handler for %s %s returned error: %v", r.Method, r.URL.Path, err) logrus.Errorf("Handler for %s %s returned error: %v", r.Method, r.URL.Path, err)
}
httputils.MakeErrorHandler(err)(w, r)
} }
_ = httputils.WriteJSON(w, statusCode, &types.ErrorResponse{
Message: err.Error(),
})
} }
}), operation).ServeHTTP
} }
// CreateMux returns a new mux with all the routers registered. // InitRouter initializes the list of routers for the server.
func (s *Server) CreateMux(routers ...router.Router) *mux.Router { // This method also enables the Go profiler.
func (s *Server) InitRouter(routers ...router.Router) {
s.routers = append(s.routers, routers...)
m := s.createMux()
s.routerSwapper = &routerSwapper{
router: m,
}
}
type pageNotFoundError struct{}
func (pageNotFoundError) Error() string {
return "page not found"
}
func (pageNotFoundError) NotFound() {}
// createMux initializes the main router the server uses.
func (s *Server) createMux() *mux.Router {
m := mux.NewRouter() m := mux.NewRouter()
log.G(context.TODO()).Debug("Registering routers") logrus.Debug("Registering routers")
for _, apiRouter := range routers { for _, apiRouter := range s.routers {
for _, r := range apiRouter.Routes() { for _, r := range apiRouter.Routes() {
f := s.makeHTTPHandler(r.Handler(), r.Method()+" "+r.Path()) f := s.makeHTTPHandler(r.Handler())
log.G(context.TODO()).Debugf("Registering %s, %s", r.Method(), r.Path()) logrus.Debugf("Registering %s, %s", r.Method(), r.Path())
m.Path(versionMatcher + r.Path()).Methods(r.Method()).Handler(f) m.Path(versionMatcher + r.Path()).Methods(r.Method()).Handler(f)
m.Path(r.Path()).Methods(r.Method()).Handler(f) m.Path(r.Path()).Methods(r.Method()).Handler(f)
} }
} }
debugRouter := debug.NewRouter() debugRouter := debug.NewRouter()
s.routers = append(s.routers, debugRouter)
for _, r := range debugRouter.Routes() { for _, r := range debugRouter.Routes() {
f := s.makeHTTPHandler(r.Handler(), r.Method()+" "+r.Path()) f := s.makeHTTPHandler(r.Handler())
m.Path("/debug" + r.Path()).Handler(f) m.Path("/debug" + r.Path()).Handler(f)
} }
notFoundHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { notFoundHandler := httputils.MakeErrorHandler(pageNotFoundError{})
_ = httputils.WriteJSON(w, http.StatusNotFound, &types.ErrorResponse{
Message: "page not found",
})
})
m.HandleFunc(versionMatcher+"/{path:.*}", notFoundHandler) m.HandleFunc(versionMatcher+"/{path:.*}", notFoundHandler)
m.NotFoundHandler = notFoundHandler m.NotFoundHandler = notFoundHandler
m.MethodNotAllowedHandler = notFoundHandler m.MethodNotAllowedHandler = notFoundHandler
return m return m
} }
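Because createMux registers every route both with and without the version prefix, each endpoint is reachable at a versioned and an unversioned path. A small illustration with gorilla/mux (the handler and path are placeholders, not from this diff):

m := mux.NewRouter()
h := func(w http.ResponseWriter, r *http.Request) { w.WriteHeader(http.StatusOK) }
m.Path("/v{version:[0-9.]+}" + "/containers/json").Methods(http.MethodGet).HandlerFunc(h)
m.Path("/containers/json").Methods(http.MethodGet).HandlerFunc(h)
// Both GET /v1.43/containers/json and GET /containers/json match the same handler.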
// Wait blocks the server goroutine until it exits.
// It sends an error message if there is any error during
// the API execution.
func (s *Server) Wait(waitChan chan error) {
if err := s.serveAPI(); err != nil {
logrus.Errorf("ServeAPI error: %v", err)
waitChan <- err
return
}
waitChan <- nil
}
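Taken together, the server shown on the older side of this diff was typically wired up along these lines (a hedged sketch; the address, Config values, and empty router list are placeholders, not taken from the diff):

apiServer := New(&Config{Version: "1.40"})

l, err := net.Listen("tcp", "127.0.0.1:2375")
if err != nil {
	log.Fatal(err)
}
apiServer.Accept("127.0.0.1:2375", l)
apiServer.InitRouter( /* system, volume, ... routers */ )

waitChan := make(chan error)
go apiServer.Wait(waitChan)
if err := <-waitChan; err != nil {
	log.Fatal(err)
}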


@@ -13,15 +13,16 @@ import (
) )
func TestMiddlewares(t *testing.T) { func TestMiddlewares(t *testing.T) {
srv := &Server{} cfg := &Config{
Version: "0.1omega2",
m, err := middleware.NewVersionMiddleware("0.1omega2", api.DefaultVersion, api.MinSupportedAPIVersion) }
if err != nil { srv := &Server{
t.Fatal(err) cfg: cfg,
} }
srv.UseMiddleware(*m)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil) srv.UseMiddleware(middleware.NewVersionMiddleware("0.1omega2", api.DefaultVersion, api.MinVersion))
req, _ := http.NewRequest("GET", "/containers/json", nil)
resp := httptest.NewRecorder() resp := httptest.NewRecorder()
ctx := context.Background() ctx := context.Background()

File diff suppressed because it is too large


@@ -1,7 +1,8 @@
package {{ .Package }} // import "github.com/docker/docker/api/types/{{ .Package }}" package {{ .Package }} // import "github.com/docker/docker/api/types/{{ .Package }}"
// ---------------------------------------------------------------------------- // ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT. // DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
// //
// See hack/generate-swagger-api.sh // See hack/generate-swagger-api.sh
// ---------------------------------------------------------------------------- // ----------------------------------------------------------------------------

Some files were not shown because too many files have changed in this diff.