Before this change, the client would report itself as containerd, using the
containerd version from the containerd Go module:
time="2023-06-01T09:43:21.907359755Z" level=info msg="listening on [::]:5000" go.version=go1.19.9 instance.id=67b89d83-eac0-4f85-b36b-b1b18e80bde1 service=registry version=2.8.2
...
172.18.0.1 - - [01/Jun/2023:09:43:33 +0000] "HEAD /v2/multifoo/blobs/sha256:cb269d7c0c1ca22fb5a70342c3ed2196c57a825f94b3f0e5ce3aa8c55baee829 HTTP/1.1" 404 157 "" "containerd/1.6.21+unknown"
With this patch, the User-Agent contains the docker daemon's information:
time="2023-06-01T11:27:07.959822887Z" level=info msg="listening on [::]:5000" go.version=go1.19.9 instance.id=53590f34-096a-4fd1-9c58-d3b8eb7e5092 service=registry version=2.8.2
...
172.18.0.1 - - [01/Jun/2023:11:27:20 +0000] "HEAD /v2/multifoo/blobs/sha256:c7ec7661263e5e597156f2281d97b160b91af56fa1fd2cc045061c7adac4babd HTTP/1.1" 404 157 "" "docker/dev go/go1.20.4 git-commit/8d67d0c1a8 kernel/5.15.49-linuxkit-pr os/linux arch/arm64 UpstreamClient(Docker-Client/24.0.2 \\(linux\\))"
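For illustration, here is a minimal Go sketch of how such a composite
User-Agent header could be assembled and attached to a registry request.
This is not the daemon's actual implementation; buildUserAgent and all of
its inputs are hypothetical:

    package main

    import (
        "fmt"
        "net/http"
        "runtime"
    )

    // buildUserAgent assembles a daemon-style User-Agent string, appending
    // the originating client in an UpstreamClient(...) suffix.
    func buildUserAgent(version, commit, kernel, upstream string) string {
        return fmt.Sprintf("docker/%s go/%s git-commit/%s kernel/%s os/%s arch/%s UpstreamClient(%s)",
            version, runtime.Version(), commit, kernel, runtime.GOOS, runtime.GOARCH, upstream)
    }

    func main() {
        req, _ := http.NewRequest(http.MethodHead, "http://localhost:5000/v2/", nil)
        req.Header.Set("User-Agent", buildUserAgent("dev", "8d67d0c1a8", "5.15.49", "Docker-Client/24.0.2 (linux)"))
        fmt.Println(req.Header.Get("User-Agent"))
    }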
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 66137ae429)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Resolver.setupIPTable() checks whether it needs to flush or create the
user chains used for NATing container DNS requests by testing for the
existence of the rules which jump to said user chains. Unfortunately it
does so using the IPTable.RawCombinedOutputNative() method, which
returns a non-nil error if the iptables command returns any output even
if the command exits with a zero status code. While that is fine with
iptables-legacy as it prints no output if the rule exists, iptables-nft
v1.8.7 prints some information about the rule. Consequently,
Resolver.setupIPTable() would incorrectly think that the rule does not
exist during container restore and attempt to create it. This happened
to work by coincidence before 8f5a9a741b
because the failure to create the already-existing table would be
ignored and the new NAT rules would be inserted before the stale rules
left in the table from when the container was last started/restored. Now
that failing to create the table is treated as a fatal error, the
incompatibility with iptables-nft is no longer hidden.
Switch to using IPTable.ExistsNative() to test for the existence of the
jump rules as it correctly only checks the iptables command's exit
status without regard for whether it outputs anything.
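As a rough sketch of the distinction, using os/exec directly rather than
libnetwork's iptables wrapper (the jump rule below is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // "iptables -C" reports rule existence via its exit status. With
        // iptables-nft v1.8.7 the command may also print informational output
        // even when the rule exists and it exits 0, so treating any output as
        // failure (as RawCombinedOutputNative effectively does) misfires.
        cmd := exec.Command("iptables", "-t", "nat", "-C", "OUTPUT",
            "-d", "127.0.0.11", "-j", "DOCKER_OUTPUT") // hypothetical jump rule
        out, err := cmd.CombinedOutput()

        exists := err == nil // exit status only, the ExistsNative approach
        fmt.Printf("exists=%v (output %q plays no part in the check)\n", exists, out)
    }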
Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 1178319313)
Signed-off-by: Cory Snider <csnider@mirantis.com>
The method to restore a network namespace takes a collection of
interfaces to restore with the options to apply. The interface names are
structured data, tuples of (SrcName, DstPrefix), but for whatever reason
they are passed into Restore() serialized to strings. A refactor,
f0be4d126d, accidentally broke the
serialization by dropping the delimiter. Rather than fix the
serialization and leave the time-bomb for someone else to trip over,
pass the interface names as structured data.
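A minimal sketch of the idea; the type and function names are hypothetical
and not libnetwork's actual API:

    package main

    import "fmt"

    // IfaceName carries the (SrcName, DstPrefix) tuple as structured data,
    // so no string delimiter can silently go missing in a refactor.
    type IfaceName struct {
        SrcName   string // interface name in the source namespace
        DstPrefix string // prefix for the renamed interface in the sandbox
    }

    // restore stands in for a Restore() that accepts structured names
    // instead of "SrcName+DstPrefix" strings.
    func restore(ifaces map[IfaceName][]string) {
        for name, opts := range ifaces {
            fmt.Printf("restore %s as %s* (%d options)\n", name.SrcName, name.DstPrefix, len(opts))
        }
    }

    func main() {
        restore(map[IfaceName][]string{
            {SrcName: "vethd34db33f", DstPrefix: "eth"}: {"up"},
        })
    }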
Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 50eb2d2782)
Signed-off-by: Cory Snider <csnider@mirantis.com>
In cases where an exec start failed, the exec process will be nil even
though the channel signaling that the exec started was closed.
Ideally ExecConfig would get a nice refactor to handle this case better
(i.e. it's not started, so don't close that channel).
This is a minimal fix to prevent the nil pointer dereference. Luckily this
would only be triggered by a client, and only the http request goroutine
gets the panic (the http lib recovers it).
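A sketch of the shape of the guard, with ExecConfig simplified to a
hypothetical stand-in:

    package main

    import (
        "errors"
        "fmt"
    )

    // execConfig is a simplified stand-in for the daemon's ExecConfig.
    type execConfig struct {
        Started chan struct{}      // closed once starting has been attempted
        Process *struct{ Pid int } // nil if the start failed
    }

    // resize guards against the start-failed case where Started is closed
    // but Process was never assigned, which previously caused a panic.
    func resize(ec *execConfig) error {
        <-ec.Started
        if ec.Process == nil {
            return errors.New("exec process is not started")
        }
        fmt.Println("resizing exec process", ec.Process.Pid)
        return nil
    }

    func main() {
        ec := &execConfig{Started: make(chan struct{})}
        close(ec.Started) // start failed, so Process stays nil
        fmt.Println(resize(ec))
    }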
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 487ea81316)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
While the VXLAN interface and the iptables rules to mark outgoing VXLAN
packets for encryption are configured to use the Swarm data path port,
the XFRM policies for actually applying the encryption are hardcoded to
match packets with destination port 4789/udp. Consequently, encrypted
overlay networks do not pass traffic when the Swarm is configured with
any other data path port: encryption is not applied to the outgoing
VXLAN packets and the destination host drops the received cleartext
packets. Use the configured data path port instead of hardcoding port
4789 in the XFRM policies.
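As a rough, Linux-only illustration using the vishvananda/netlink package
(the policy is heavily simplified and not the overlay driver's actual
setup):

    package main

    import (
        "fmt"
        "syscall"

        "github.com/vishvananda/netlink"
    )

    // xfrmOutPolicy selects outbound VXLAN traffic by the configured data
    // path port instead of a hardcoded 4789.
    func xfrmOutPolicy(vxlanPort uint32) *netlink.XfrmPolicy {
        return &netlink.XfrmPolicy{
            Dir:     netlink.XFRM_DIR_OUT,
            Proto:   syscall.IPPROTO_UDP,
            DstPort: int(vxlanPort), // previously always 4789
        }
    }

    func main() {
        fmt.Printf("%+v\n", xfrmOutPolicy(4789))
    }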
Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 9a692a3802)
Signed-off-by: Cory Snider <csnider@mirantis.com>
This type (as well as TarsumBackup) was used for the experimental --stream
support for the classic builder. This feature was removed in commit
6ca3ec88ae, which also removed uses of
the CachableSource type.
As far as I could find, there are no external consumers of these types,
but let's deprecate them to give potential users a heads-up that they
will be removed.
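The usual Go convention for this is a `Deprecated:` comment paragraph,
which linters and editors surface to consumers. Roughly (package name and
wording illustrative):

    package remotecontext // illustrative location

    // CachableSource is a remote context source with a persistent cache.
    //
    // Deprecated: this type was only used by the experimental --stream
    // support for the classic builder, which has been removed, and will be
    // removed in the next release.
    type CachableSource struct{}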
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 37d4b0bee9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
It turns out that the unnecessary serialization removed in
b75246202a happened to work around a bug
in containerd. When many exec processes are started concurrently in the
same containerd task, it takes seconds to minutes for them all to start.
Add the workaround back in, only deliberately this time.
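A sketch of what deliberate serialization can look like, using a
package-level mutex; the names are hypothetical and this is not the actual
libcontainerd change:

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // execStartMu deliberately serializes exec starts: starting many exec
    // processes concurrently in one containerd task can take seconds to
    // minutes, so trade parallelism for predictable start latency.
    var execStartMu sync.Mutex

    func startExec(id string) {
        execStartMu.Lock()
        defer execStartMu.Unlock()
        time.Sleep(10 * time.Millisecond) // stand-in for the containerd exec start
        fmt.Println("started exec", id)
    }

    func main() {
        var wg sync.WaitGroup
        for _, id := range []string{"a", "b", "c"} {
            wg.Add(1)
            go func(id string) { defer wg.Done(); startExec(id) }(id)
        }
        wg.Wait()
    }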
Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit fb7ec1555c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Feature flags are one of the configuration items which can be reloaded
without restarting the daemon. Whether the daemon uses the containerd
snapshotter service or the legacy graph drivers is controlled by a
feature flag. However, much of the code which checks the snapshotter
feature flag assumes that the flag cannot change at runtime. Make it so
that the snapshotter setting can only be changed by restarting the
daemon, even if the flag state changes after a live configuration
reload.
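A sketch of the pattern: capture the flag once at daemon construction and
leave it alone on reload. The field and flag handling are simplified and
hypothetical:

    package main

    import "fmt"

    // daemon pins the snapshotter decision at construction time.
    type daemon struct {
        usesSnapshotter bool // decided once, at daemon start
    }

    func newDaemon(features map[string]bool) *daemon {
        return &daemon{usesSnapshotter: features["containerd-snapshotter"]}
    }

    // Reload deliberately leaves usesSnapshotter untouched even if the
    // feature flag flipped: switching between the snapshotter service and
    // the graph drivers requires a daemon restart.
    func (d *daemon) Reload(features map[string]bool) {
        fmt.Println("reloaded; snapshotter still", d.usesSnapshotter)
    }

    func main() {
        d := newDaemon(map[string]bool{"containerd-snapshotter": true})
        d.Reload(map[string]bool{"containerd-snapshotter": false})
    }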
Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 9b9c5242eb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 6506579e18)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit a93aadc2e6)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 3a31f81838)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
To allow skipping integration tests that don't apply to the
containerd snapshotter.
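For example, with gotest.tools' skip helper; usingSnapshotter() here is a
hypothetical stand-in for the environment check this change enables:

    package integration

    import (
        "testing"

        "gotest.tools/v3/skip"
    )

    // usingSnapshotter is a hypothetical stand-in for the real check.
    func usingSnapshotter() bool { return false }

    func TestGraphDriverOnly(t *testing.T) {
        skip.If(t, usingSnapshotter(), "not applicable with the containerd snapshotter")
        // ... graphdriver-specific assertions ...
    }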
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 4373547857)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Starting with go1.19, the Go runtime on Windows now supports the `netgo`
build-flag to use a native Go DNS resolver. Prior to that version, the build-flag
only had an effect on non-Windows platforms. When using the `netgo` build-flag,
the Windows host resolver is not used, and as a result, custom entries in
`etc/hosts` are ignored, which is a change in behavior from binaries compiled
with older versions of the Go runtime.
From the go1.19 release notes: https://go.dev/doc/go1.19#net
> Resolver.PreferGo is now implemented on Windows and Plan 9. It previously
> only worked on Unix platforms. Combined with Dialer.Resolver and Resolver.Dial,
> it's now possible to write portable programs and be in control of all DNS name
> lookups when dialing.
>
> The net package now has initial support for the netgo build tag on Windows.
> When used, the package uses the Go DNS client (as used by Resolver.PreferGo)
> instead of asking Windows for DNS results. The upstream DNS server it discovers
> from Windows may not yet be correct with complex system network configurations,
> however.
Our Windows binaries are compiled with the "static" (`make/binary-daemon`)
script, which has the `netgo` option set by default. This patch unsets the
`netgo` option when cross-compiling for Windows.
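For context, the Go-native resolver that the `netgo` tag selects is the
same one net.Resolver uses when PreferGo is set; a minimal example:

    package main

    import (
        "context"
        "fmt"
        "net"
    )

    func main() {
        // PreferGo selects the pure-Go DNS client that netgo-built binaries
        // use; per the notes above, on Windows this bypasses the host
        // resolver and therefore any custom `etc/hosts` entries.
        r := &net.Resolver{PreferGo: true}
        addrs, err := r.LookupHost(context.Background(), "example.com")
        fmt.Println(addrs, err)
    }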
Co-authored-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
(cherry picked from commit 53d1b12bc0)
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
Docker with containerd integration emits an "Exists" progress action when
a layer of the currently pulled image already exists. This differs from
non-c8d Docker, which emits "Already exists".
This makes both implementations consistent by emitting the
backwards-compatible "Already exists" action.
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit a7bc65fbd8)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
TestProxyNXDOMAIN has proven to be susceptible to failing as a
consequence of unlocked threads being set to the wrong network
namespace. As the failure mode looks a lot like a bug in the test
itself, it seems prudent to add a check for mismatched namespaces to the
test so we will know for next time that the root cause lies elsewhere.
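A Linux-only sketch of such a guard using the vishvananda/netns package;
in the real test the failure path would be t.Fatal, and the initial handle
would be captured at test start:

    package main

    import (
        "fmt"
        "runtime"

        "github.com/vishvananda/netns"
    )

    func main() {
        runtime.LockOSThread()
        defer runtime.UnlockOSThread()

        initial, _ := netns.Get() // captured when the test begins
        defer initial.Close()

        // ... test body ...

        current, _ := netns.Get()
        defer current.Close()
        if !current.Equal(initial) {
            fmt.Println("thread ended up in the wrong network namespace")
        }
    }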
Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 871cf72363)
Signed-off-by: Cory Snider <csnider@mirantis.com>
osl.setIPv6 mistakenly captured the calling goroutine's thread's network
namespace instead of the network namespace of the thread getting its
namespace temporarily changed. As this function appears to only be
called from contexts in the process's initial network namespace, this
mistake would be of little consequence at runtime. The libnetwork unit
tests, on the other hand, unshare network namespaces so as not to
interfere with each other or the host's network namespace. But due to
this bug, the isolation backfires and the network namespace of
goroutines used by a test which are expected to be in the initial
network namespace can randomly become the isolated network namespace of
some other test. Symptoms include a loopback network server running in
one goroutine inexplicably and randomly becoming unreachable by a
client in another goroutine.
Fix this by capturing the original network namespace from the thread to be
tampered with, after locking the goroutine to that thread.
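The fix's ordering, in sketch form using the vishvananda/netns package
(names hypothetical, error handling trimmed):

    package nsutil

    import (
        "runtime"

        "github.com/vishvananda/netns"
    )

    // withNetns runs fn inside the target namespace and then restores the
    // thread's own original namespace. The original must be captured AFTER
    // LockOSThread so it belongs to the thread actually being tampered with,
    // not to whichever thread the goroutine happened to be on before.
    func withNetns(target netns.NsHandle, fn func() error) error {
        runtime.LockOSThread()
        defer runtime.UnlockOSThread()

        orig, err := netns.Get() // this thread's namespace, post-lock
        if err != nil {
            return err
        }
        defer orig.Close()

        if err := netns.Set(target); err != nil {
            return err
        }
        defer netns.Set(orig) // restore before unlocking the thread

        return fn()
    }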
Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 6d79864135)
Signed-off-by: Cory Snider <csnider@mirantis.com>
Swapping out the global logger on the fly is causing tests to flake out
by logging to a test's log output after the test function has returned.
Refactor Resolver to use a dependency-injected logger and the resolver
unit tests to inject a private logger instance into the Resolver under
test.
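A sketch of the dependency-injection pattern, with the Resolver trimmed to
a stand-in and logrus used as the logger, as in the daemon:

    package resolver

    import "github.com/sirupsen/logrus"

    // Resolver is trimmed down to the logging concern.
    type Resolver struct {
        logger *logrus.Logger
    }

    // log returns the injected logger, falling back to the global one, so
    // unit tests can inject a private instance while production callers are
    // unaffected.
    func (r *Resolver) log() *logrus.Logger {
        if r.logger == nil {
            return logrus.StandardLogger()
        }
        return r.logger
    }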
Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit d4f3858a40)
Signed-off-by: Cory Snider <csnider@mirantis.com>
tstwriter mocks the server-side connection between the resolver and the
container, not the resolver and the external DNS server, so returning
the external DNS server's address as w.LocalAddr() is technically
incorrect and misleading. Only the protocols need to match as the
resolver uses the client's choice of protocol to determine which
protocol to use when forwarding the query to the external DNS server.
While this change has no material impact on the tests, it makes the
tests slightly more comprehensible for the next person.
Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 0cc6e445d7)
Signed-off-by: Cory Snider <csnider@mirantis.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 34964c2454)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Our resolver is just a forwarder for external DNS, so it should act like
one. Unless it's a server failure or refusal, take the response at face
value and forward it along to the client. RFC 8020 is only applicable to
caching recursive name servers and our resolver is neither caching nor
recursive.
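In terms of the miekg/dns types, the decision boils down to inspecting the
response code; schematically (forwardable is a hypothetical helper):

    package main

    import (
        "fmt"

        "github.com/miekg/dns"
    )

    // forwardable reports whether an external DNS response should be relayed
    // to the client as-is. Only server failures and refusals are retried
    // against the next server; everything else, NXDOMAIN included, is taken
    // at face value.
    func forwardable(resp *dns.Msg) bool {
        switch resp.Rcode {
        case dns.RcodeServerFailure, dns.RcodeRefused:
            return false
        default:
            return true
        }
    }

    func main() {
        resp := new(dns.Msg)
        resp.Rcode = dns.RcodeNameError // NXDOMAIN
        fmt.Println(forwardable(resp))  // true: forward it along
    }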
Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 41356227f2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>