Acquire the mutex in the help handler to synchronize access to the
handlers map. The issue is trivial (a panic in the request handler if
the node joins a swarm at just the right time, which would only result
in an HTTP 500 response), but the race condition is just as trivial to fix.
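For illustration, the usual Go shape of the fix (a sketch; the actual
field and method names in the diagnostic server may differ):

```go
package diagnostic

import (
	"fmt"
	"net/http"
	"sync"
)

// Sketch: readers of the handlers map take the same mutex that writers take.
type Server struct {
	mu       sync.Mutex
	handlers map[string]http.Handler
}

func (s *Server) help(w http.ResponseWriter, r *http.Request) {
	s.mu.Lock()
	defer s.mu.Unlock()
	for path := range s.handlers { // safe: no concurrent map read/write
		fmt.Fprintln(w, path)
	}
}
```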
Signed-off-by: Cory Snider <csnider@mirantis.com>
We don't need C-style callback functions which accept a void* context
parameter: Go has closures. Drop the unnecessary httpHandlerCustom type
and refactor the diagnostic server handler functions into closures which
capture whatever context they need implicitly.
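Roughly the shape of the refactor, with hypothetical names:

```go
package diagnostic

import (
	"fmt"
	"net/http"
)

// Illustrative sketch only. Instead of a handler type carrying an opaque
// ctx value, the handler is a closure that captures its dependencies
// directly.
type entryTable struct{ entries int }

func (t *entryTable) count() int { return t.entries }

func statsHandler(table *entryTable) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// table is captured by the closure; no void*-style context needed.
		fmt.Fprintf(w, "entries: %d\n", table.count())
	}
}
```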
If the node leaves and rejoins a swarm, the cluster agent and its
associated NetworkDB are discarded and replaced with new instances. Upon
rejoin, the agent registers its NetworkDB instance with the diagnostic
server. These handlers would all conflict with the handlers registered
by the previous NetworkDB instance. Attempting to register a second
handler on an http.ServeMux with the same pattern will panic, which the
diagnostic server would historically deal with by ignoring the duplicate
handler registration. Consequently, the first NetworkDB instance to be
registered would "stick" to the diagnostic server for the lifetime of
the process, even after it is replaced with another instance. Improve
duplicate-handler registration such that the most recently-registered
handler for a pattern is used for all subsequent requests.
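One way to get last-registration-wins semantics is to dispatch through a
mutex-guarded map rather than registering each pattern directly on the
http.ServeMux. A sketch of that idea (not necessarily the exact shape of
this change):

```go
package diagnostic

import (
	"net/http"
	"sync"
)

// Sketch: a tiny mux where re-registering a pattern simply replaces the old
// handler, so a rejoined NetworkDB instance takes over its diagnostic URLs.
type mux struct {
	mu       sync.Mutex
	handlers map[string]http.Handler
}

func (m *mux) Handle(pattern string, h http.Handler) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.handlers == nil {
		m.handlers = make(map[string]http.Handler)
	}
	m.handlers[pattern] = h // last registration wins; no panic on duplicates
}

func (m *mux) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	m.mu.Lock()
	h, ok := m.handlers[r.URL.Path]
	m.mu.Unlock()
	if !ok {
		http.NotFound(w, r)
		return
	}
	h.ServeHTTP(w, r)
}
```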
Signed-off-by: Cory Snider <csnider@mirantis.com>
This is purely cosmetic - if a non-default MTU is configured, the bridge
will have the default MTU=1500 until a container's 'veth' is connected
and an MTU is set on the veth. That's disconcerting - it looks like the
config has been ignored - so set the bridge's MTU explicitly.
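Roughly what "set explicitly" amounts to, sketched with the
github.com/vishvananda/netlink package libnetwork already uses (the
function name is hypothetical; the real change lives in the bridge
driver's setup path):

```go
package bridge

import "github.com/vishvananda/netlink"

// setBridgeMTU is a sketch of the idea: apply the configured MTU to the
// bridge link itself instead of waiting for the first veth to be attached.
func setBridgeMTU(bridgeName string, mtu int) error {
	if mtu == 0 {
		return nil // keep the kernel default (1500)
	}
	link, err := netlink.LinkByName(bridgeName)
	if err != nil {
		return err
	}
	return netlink.LinkSetMTU(link, mtu)
}
```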
Fixes #37937
Signed-off-by: Rob Murray <rob.murray@docker.com>
I was trying to find out why `docker info` was sometimes slow, so I
plumbed a context through to propagate trace data.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
The forwarding database (fdb) of a Linux VXLAN link is restricted to
entries whose destination VXLAN tunnel endpoint (VTEP) address belongs to
a single address family. Which address family is permitted is set when the
link is created and cannot be modified. The overlay network driver
creates VXLAN links such that the kernel only allows fdb entries to be
created with IPv4 destination VTEP addresses. If the Swarm is configured
with IPv6 advertise addresses, creating fdb entries for remote peers
fails with EAFNOSUPPORT (address family not supported by protocol).
Make overlay networks functional over IPv6 transport by configuring the
VXLAN links for IPv6 VTEPs if the local node's advertise address is an
IPv6 address. Make encrypted overlay networks secure over IPv6 transport
by applying the iptables rules via ip6tables when appropriate.
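For illustration, a sketch with the github.com/vishvananda/netlink
package of why the link's configuration matters (names simplified; not
the exact driver code):

```go
package overlay

import (
	"net"
	"syscall"

	"github.com/vishvananda/netlink"
)

// Sketch only: the address family the fdb will accept is fixed by the
// addresses used when the VXLAN link is created, so an IPv6 advertise
// address means the link must be created with an IPv6 local VTEP address.
func createVXLAN(name string, vni int, localVTEP net.IP) error {
	vxlan := &netlink.Vxlan{
		LinkAttrs: netlink.LinkAttrs{Name: name},
		VxlanId:   vni,
		SrcAddr:   localVTEP, // IPv6 here => fdb entries must use IPv6 VTEPs
		Learning:  false,
		Port:      4789,
	}
	return netlink.LinkAdd(vxlan)
}

// Adding a remote peer then uses a VTEP address of the same family; an IPv6
// VTEP on an IPv4-configured link is what fails with EAFNOSUPPORT.
func addPeerFDB(link netlink.Link, peerMAC net.HardwareAddr, remoteVTEP net.IP) error {
	return netlink.NeighAppend(&netlink.Neigh{
		LinkIndex:    link.Attrs().Index,
		Family:       syscall.AF_BRIDGE,
		Flags:        netlink.NTF_SELF,
		State:        netlink.NUD_PERMANENT,
		IP:           remoteVTEP, // must match the family the link was created with
		HardwareAddr: peerMAC,
	})
}
```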
Signed-off-by: Cory Snider <csnider@mirantis.com>
Early return if the iface or its address is nil to make the whole
function slightly easier to read.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
I am finally convinced that, given two netip.Prefix values a and b, the
expression
a.Contains(b.Addr()) || b.Contains(a.Addr())
is functionally equivalent to
a.Overlaps(b)
The (netip.Prefix).Contains method works by masking the address with the
prefix's mask and testing whether the remaining most-significant bits
are equal to the same bits in the prefix. The (netip.Prefix).Overlaps
method works by masking the longer prefix to the length of the shorter
prefix and testing whether the remaining most-significant bits are
equal. This is equivalent to
shorterPrefix.Contains(longerPrefix.Addr()); therefore, applying Contains
symmetrically to two prefixes will always yield the same result as
applying Overlaps to the two prefixes in either order.
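A throwaway sanity check of the equivalence (example prefixes only, not
part of the change):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	a := netip.MustParsePrefix("10.0.0.0/8")
	b := netip.MustParsePrefix("10.1.2.0/24")
	c := netip.MustParsePrefix("192.168.0.0/16")

	// The expression being replaced.
	overlap := func(x, y netip.Prefix) bool {
		return x.Contains(y.Addr()) || y.Contains(x.Addr())
	}

	fmt.Println(overlap(a, b), a.Overlaps(b)) // true true
	fmt.Println(overlap(a, c), a.Overlaps(c)) // false false
}
```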
Signed-off-by: Cory Snider <csnider@mirantis.com>
Add a new `com.docker.network.host_ipv6` bridge option to complement
the existing `com.docker.network.host_ipv4` option. When set to an
IPv6 address, this causes the bridge to insert `SNAT` rules instead of
`MASQUERADE` rules (assuming `ip6tables` is enabled). `SNAT` makes it
possible for users to control the source IP address used for outgoing
connections.
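Illustrative sketch of the resulting POSTROUTING rule (not the actual
driver code; the function and parameter names are hypothetical):

```go
package bridge

import "net/netip"

// natRuleArgs sketches the rule that results: SNAT when
// com.docker.network.host_ipv6 is set to an address, MASQUERADE otherwise.
func natRuleArgs(subnet netip.Prefix, bridgeName string, hostIPv6 netip.Addr) []string {
	args := []string{"-t", "nat", "-A", "POSTROUTING", "-s", subnet.String(), "!", "-o", bridgeName}
	if hostIPv6.IsValid() {
		// Pin the source address of outgoing connections.
		return append(args, "-j", "SNAT", "--to-source", hostIPv6.String())
	}
	return append(args, "-j", "MASQUERADE")
}
```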
Signed-off-by: Richard Hansen <rhansen@rhansen.org>
While there is nothing inherently wrong with goto statements, their use
here is not helping with readability.
Signed-off-by: Cory Snider <csnider@mirantis.com>
ds.cache is never nil so the uncached code paths are unreachable in
practice. And given how many KVObject deep-copy implementations shallow
copy pointers and other reference-typed values, there is the distinct
possibility that disabling the datastore cache could break things.
Signed-off-by: Cory Snider <csnider@mirantis.com>
The datastore cache only uses the reference to its datastore to get a
reference to the backing store. Modify the cache to take the backing
store reference directly so that methods on the datastore can't get
called, as that might result in infinite recursion between datastore and
cache methods.
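The shape of the change, roughly, with hypothetical names:

```go
package datastore

// Sketch: the cache now holds only the narrow backing-store interface, so it
// cannot call back into datastore methods and recurse into the cache again.
type backingStore interface {
	Put(key string, value []byte) error
	Get(key string) ([]byte, error)
}

type cache struct {
	backend backingStore // was: a reference to the datastore, which owns this cache
	objects map[string][]byte
}
```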
Signed-off-by: Cory Snider <csnider@mirantis.com>
Inline the tortured logic for deciding when to skip updating the svc
records to give us a fighting chance at deciphering the logic behind the
logic and spotting logic bugs.
Update the service records synchronously. The only potential for issues
is if this change introduces deadlocks, which should be fixed by
restructuring the mutexes rather than papering over the issue with
sketchy hacks like deferring the operation to a goroutine.
Signed-off-by: Cory Snider <csnider@mirantis.com>
Its only remaining purpose is to elide removing the endpoint from the
service records if it was not previously added. Deleting the service
records is an idempotent operation so it is harmless to delete service
records which do not exist.
Signed-off-by: Cory Snider <csnider@mirantis.com>
The service db entry for each network is deleted by
(*Controller).cleanupServiceDiscovery() when the network is deleted.
There is no need to also eagerly delete it whenever the network's
endpoint count drops to zero.
Signed-off-by: Cory Snider <csnider@mirantis.com>
The logic to rename an endpoint includes code which would synchronize
the renamed service records to peers through the distributed datastore.
It would trigger the remote peers to pick up the rename by touching a
datastore object which remote peers would have subscribed to events on.
The code also asserts that the local peer is subscribed to updates on
the network associated with the endpoint, presumably as a proxy for
asserting that the remote peers would also be subscribed.
https://github.com/moby/libnetwork/pull/712
Libnetwork no longer has support for distributed datastores or
subscribing to datastore object updates, so this logic can be deleted.
Signed-off-by: Cory Snider <csnider@mirantis.com>
The meaning of the (*Controller).isDistributedControl() method is not
immediately clear from the name, and it does not have any doc comment.
It returns true if and only if the controller is neither a manager node
nor an agent node -- that is, if the daemon is _not_ participating in a
Swarm cluster. The method name likely comes from the old abandoned
datastore-as-IPC control plane architecture for libnetwork. Refactor
c.isDistributedControl() -> !c.isSwarmNode()
to make it easier to understand code which consumes the method.
Signed-off-by: Cory Snider <csnider@mirantis.com>
Rename all variables/fields/map keys associated with the
`com.docker.network.host_ipv4` option from `HostIP` to `HostIPv4`.
Rationale:
* This makes the variable/field name consistent with the option
name.
* This makes the code more readable because it is clear that the
variable/field does not hold an IPv6 address. This will hopefully
avoid bugs like <https://github.com/moby/moby/issues/46445> in the
future.
* If IPv6 SNAT support is ever added, the names will be symmetric.
Signed-off-by: Richard Hansen <rhansen@rhansen.org>
Rather than pass an `iptables.IPVersion` value alongside every
`iptRule` parameter, embed the IP version in the `iptRule` struct.
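Roughly, with illustrative field names:

```go
package bridge

import "github.com/docker/docker/libnetwork/iptables"

// Sketch: the rule value carries its own IP version, so call sites no longer
// pass an iptables.IPVersion alongside every iptRule parameter.
type iptRule struct {
	ipv   iptables.IPVersion // IPv4 or IPv6
	table iptables.Table     // e.g. nat or filter
	chain string
	args  []string
}
```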
Signed-off-by: Richard Hansen <rhansen@rhansen.org>
That field was only used to pass `-t nat` for NAT rules. Now `-t
<tableName>` (where `<tableName>` is one of the `iptables.Table`
values) is always passed, eliminating the need for `preArgs`.
Signed-off-by: Richard Hansen <rhansen@rhansen.org>
Pass the entire `*networkConfiguration` struct to
`setupIPTablesInternal` to simplify the function signature and improve
code readability.
Signed-off-by: Richard Hansen <rhansen@rhansen.org>
On modern kernels the legacy 'state' match is an alias for 'ctstate';
however, newer code has preferred ctstate (`-m conntrack --ctstate`) while
older code has preferred the deprecated 'state' name (`-m state --state`).
Prefer the newer name for uniformity in the rules libnetwork creates,
and because some implementations/distributions of the xtables userland
tools may not support the legacy alias.
Signed-off-by: Bjorn Neergaard <bjorn.neergaard@docker.com>
The github.com/containerd/containerd/log package was moved to a separate
module, which will also be used by upcoming (patch) releases of containerd.
This patch moves our own uses of the package to use the new module.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
So far, internal networks were only isolated from the host by iptables
DROP rules. As a consequence, outbound connections from containers would
time out instead of being "rejected" through an immediate ICMP dest/port
unreachable, a TCP RST or a failing `connect` syscall.
This was visible when internal containers were trying to resolve a
domain that doesn't match any container on the same network (be it a truly
"external" domain, or a container that doesn't exist or is dead). In that
case, the embedded resolver would try to forward DNS queries for the
different values of resolv.conf `search` option, making DNS resolution
slow to return an error, with the slowness exacerbated by some libc
implementations.
This change makes the `connect` syscall return ENETUNREACH, and thus
solves the broader issue of failing fast when external connections are
attempted.
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
This change creates a few OTEL spans and plumbs context through the DNS
resolver and DNS backends (i.e. Sandbox and Network). This should help us
better understand how much lock contention impacts performance, and
help debug issues related to DNS queries (we basically have no
visibility into what's happening here right now).
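For illustration, the kind of span this adds, sketched with hypothetical
names (not the exact resolver code):

```go
package libnetwork

import (
	"context"
	"net"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

// DNSBackend stands in for the Sandbox/Network resolution backends; the
// method set here is illustrative, not the exact moby interface.
type DNSBackend interface {
	ResolveName(ctx context.Context, name string) ([]net.IP, bool)
}

// resolveName sketches the idea: start a span per lookup and hand the
// context to the backend, so slow queries and lock waits show up in traces.
func resolveName(ctx context.Context, backend DNSBackend, name string) ([]net.IP, bool) {
	ctx, span := otel.Tracer("libnetwork").Start(ctx, "resolver.ResolveName",
		trace.WithAttributes(attribute.String("dns.name", name)))
	defer span.End()
	return backend.ResolveName(ctx, name)
}
```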
Signed-off-by: Albin Kerouanton <albinker@gmail.com>