4da19e2dca
Conntrack entries are created for UDP flows even when there is nowhere to
route those packets (i.e. no listening socket and no NAT rules to
apply). Moreover, netfilter evaluates iptables NAT rules only when it
creates a new conntrack entry.
When Docker adds NAT rules, netfilter ignores them for any packet
matching a pre-existing conntrack entry. In that case, when dockerd
runs with the userland proxy enabled, packets get routed to the proxy
and the main symptom is a wrong source IP address (as shown by #44688).
If the publishing container runs through Docker Swarm, or in
"standalone" Docker with the userland proxy disabled, the affected
packets are dropped (i.e. routed to nowhere).
Docker therefore needs to flush all conntrack entries for published UDP
ports to make sure NAT rules are correctly applied to all packets.
- Fixes #44688
- Fixes #8795
- Fixes #16720
- Fixes #7540
- Fixes moby/libnetwork#2423
- and probably more.
As a precautionary measure, those conntrack entries are also flushed
when external connectivity is revoked, to prevent them from being
reused when a new sandbox is created (although the kernel should
already prevent such a case).
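The flush described above can be illustrated with a minimal Go sketch. It assumes the github.com/vishvananda/netlink package's conntrack filter API (ConntrackFilter.AddProtocol, ConntrackFilter.AddPort, ConntrackDeleteFilter), whose exact shape depends on the library version; the port value is purely illustrative and this is not the exact code used by the daemon.

package main

import (
    "fmt"
    "log"

    "github.com/vishvananda/netlink"
    "golang.org/x/sys/unix"
)

// flushUDPConntrack deletes conntrack entries whose original destination
// port matches a published UDP port, so that freshly installed NAT rules
// are evaluated for subsequent packets instead of being short-circuited
// by stale entries.
func flushUDPConntrack(port uint16) error {
    fltr := &netlink.ConntrackFilter{}
    // 17 is the IP protocol number for UDP.
    if err := fltr.AddProtocol(17); err != nil {
        return err
    }
    // Match the destination port of the flow's original direction,
    // i.e. the published port.
    if err := fltr.AddPort(netlink.ConntrackOrigDstPort, port); err != nil {
        return err
    }
    n, err := netlink.ConntrackDeleteFilter(netlink.ConntrackTable, unix.AF_INET, fltr)
    if err != nil {
        return err
    }
    fmt.Printf("deleted %d conntrack entries for UDP port %d\n", n, port)
    return nil
}

func main() {
    // Hypothetical published port; requires CAP_NET_ADMIN to run.
    if err := flushUDPConntrack(5353); err != nil {
        log.Fatalf("flushing conntrack entries: %s", err)
    }
}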
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit
libnetwork - networking for containers
Libnetwork provides a native Go implementation for connecting containers.
The goal of libnetwork is to deliver a robust Container Network Model that provides a consistent programming interface and the required network abstractions for applications.
Design
Please refer to the design for more information.
Using libnetwork
There are many networking solutions available to suit a broad range of use-cases. libnetwork uses a driver / plugin model to support all of these solutions while abstracting the complexity of the driver implementations by exposing a simple and consistent Network Model to users.
package main

import (
    "fmt"
    "log"

    "github.com/docker/docker/pkg/reexec"

    "github.com/docker/docker/libnetwork"
    "github.com/docker/docker/libnetwork/config"
    "github.com/docker/docker/libnetwork/netlabel"
    "github.com/docker/docker/libnetwork/options"
)

func main() {
    if reexec.Init() {
        return
    }

    // Select and configure the network driver
    networkType := "bridge"

    // Create a new controller instance
    driverOptions := options.Generic{}
    genericOption := make(map[string]interface{})
    genericOption[netlabel.GenericData] = driverOptions
    controller, err := libnetwork.New(config.OptionDriverConfig(networkType, genericOption))
    if err != nil {
        log.Fatalf("libnetwork.New: %s", err)
    }

    // Create a network for containers to join.
    // NewNetwork accepts variadic optional arguments that libnetwork and Drivers can use.
    network, err := controller.NewNetwork(networkType, "network1", "")
    if err != nil {
        log.Fatalf("controller.NewNetwork: %s", err)
    }

    // For each new container: allocate IP and interfaces. The returned network
    // settings will be used for container infos (inspect and such), as well as
    // iptables rules for port publishing. This info is contained or accessible
    // from the returned endpoint.
    ep, err := network.CreateEndpoint("Endpoint1")
    if err != nil {
        log.Fatalf("network.CreateEndpoint: %s", err)
    }

    // Create the sandbox for the container.
    // NewSandbox accepts variadic optional arguments which libnetwork can use.
    sbx, err := controller.NewSandbox("container1",
        libnetwork.OptionHostname("test"),
        libnetwork.OptionDomainname("docker.io"))
    if err != nil {
        log.Fatalf("controller.NewSandbox: %s", err)
    }

    // A sandbox can join the endpoint via the join api.
    err = ep.Join(sbx)
    if err != nil {
        log.Fatalf("ep.Join: %s", err)
    }

    // libnetwork client can check the endpoint's operational data via the Info() API
    epInfo, err := ep.DriverInfo()
    if err != nil {
        log.Fatalf("ep.DriverInfo: %s", err)
    }

    macAddress, ok := epInfo[netlabel.MacAddress]
    if !ok {
        log.Fatalf("failed to get mac address from endpoint info")
    }

    fmt.Printf("Joined endpoint %s (%s) to sandbox %s (%s)\n", ep.Name(), macAddress, sbx.ContainerID(), sbx.Key())
}
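Once the container goes away, the same objects can be released in reverse order. The following continuation of the example above is only a sketch: it assumes the Leave, Delete, and Stop methods exposed by libnetwork's Endpoint, Sandbox, Network, and Controller types, which may differ between versions.

    // Teardown sketch (continuation of the example above; the method names
    // are assumptions and may differ between libnetwork versions).
    if err := ep.Leave(sbx); err != nil {
        log.Fatalf("ep.Leave: %s", err)
    }
    if err := ep.Delete(false); err != nil {
        log.Fatalf("ep.Delete: %s", err)
    }
    if err := sbx.Delete(); err != nil {
        log.Fatalf("sbx.Delete: %s", err)
    }
    if err := network.Delete(); err != nil {
        log.Fatalf("network.Delete: %s", err)
    }
    controller.Stop()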
Contributing
Want to hack on libnetwork? Docker's contribution guidelines apply.
Copyright and license
Code and documentation copyright 2015 Docker, Inc. Code released under the Apache 2.0 license. Docs released under Creative Commons.