Since commit 2c4a868f64, Docker doesn't
use the value of net.ipv4.ip_local_port_range when choosing an ephemeral
port. This change reverts to the previous behavior.
Fixes #43054.
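For reference, a minimal sketch of how the kernel's configured range can be
read on Linux (the helper is illustrative, not the code used in the engine):

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // readLocalPortRange parses /proc/sys/net/ipv4/ip_local_port_range, which
    // holds the first and last ephemeral port the kernel hands out,
    // e.g. "32768\t60999".
    func readLocalPortRange() (int, int, error) {
        data, err := os.ReadFile("/proc/sys/net/ipv4/ip_local_port_range")
        if err != nil {
            return 0, 0, err
        }
        fields := strings.Fields(string(data))
        if len(fields) != 2 {
            return 0, 0, fmt.Errorf("unexpected format: %q", data)
        }
        start, err := strconv.Atoi(fields[0])
        if err != nil {
            return 0, 0, err
        }
        end, err := strconv.Atoi(fields[1])
        if err != nil {
            return 0, 0, err
        }
        return start, end, nil
    }

    func main() {
        start, end, err := readLocalPortRange()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Printf("ephemeral port range: %d-%d\n", start, end)
    }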
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
The io/ioutil package has been deprecated in Go 1.16. This commit
replaces the existing io/ioutil functions with their equivalents in the
io and os packages.
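The substitutions are mechanical; a minimal illustrative sketch of the most
common ones (file names are made up):

    package main

    import (
        "fmt"
        "io"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // ioutil.TempDir -> os.MkdirTemp
        dir, err := os.MkdirTemp("", "example")
        if err != nil {
            panic(err)
        }
        defer os.RemoveAll(dir)

        // ioutil.WriteFile -> os.WriteFile
        path := filepath.Join(dir, "hello.txt")
        if err := os.WriteFile(path, []byte("hello"), 0o644); err != nil {
            panic(err)
        }

        // ioutil.ReadFile -> os.ReadFile
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }

        // ioutil.ReadAll -> io.ReadAll
        rest, err := io.ReadAll(strings.NewReader(" world"))
        if err != nil {
            panic(err)
        }

        fmt.Println(string(data) + string(rest))
    }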
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Perhaps the testutils package in the past had an `init()` function to set up
specific things, but it no longer does, so these imports were doing nothing.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
The BurntSushi project is no longer maintained, and the container ecosystem
is moving to use the pelletier/go-toml project instead.
This patch moves libnetwork to use the pelletier/go-toml library, to reduce
our dependency tree and use the same library in all places.
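A minimal sketch of decoding with pelletier/go-toml (the config struct and
TOML keys are illustrative, not libnetwork's actual configuration):

    package main

    import (
        "fmt"

        "github.com/pelletier/go-toml"
    )

    // daemonCfg is a hypothetical config shape used only to illustrate decoding.
    type daemonCfg struct {
        Debug   bool   `toml:"debug"`
        Address string `toml:"address"`
    }

    func main() {
        var cfg daemonCfg
        data := []byte("debug = true\naddress = \"127.0.0.1:2375\"\n")
        if err := toml.Unmarshal(data, &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", cfg)
    }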
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
After moving libnetwork to this repo, we need to update all the import
paths for libnetwork to point to docker/docker/libnetwork instead of
docker/libnetwork.
This change implements that.
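For example, an import of the netlabel subpackage changes like this
(illustrative snippet):

    package main

    import (
        "fmt"

        // Previously: "github.com/docker/libnetwork/netlabel"
        "github.com/docker/docker/libnetwork/netlabel"
    )

    func main() {
        fmt.Println(netlabel.Prefix)
    }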
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Also reduce the allowed port range, as the total number of containers
per host is typically less than 1K.
This change helps in scenarios where there are other services on
the same host that use ephemeral ports in iptables manipulation.
The workflow requires changes in the docker engine
(https://github.com/moby/moby/pull/40055) and this change. It
works as follows:
1. Users can now pass the option --published-port-range="50000-60000"
   to the docker engine as a command-line argument or in daemon.json.
2. The docker engine reads this info and passes it to libnetwork via
   config.go:OptionDynamicPortRange.
3. libnetwork uses this range to allocate dynamic ports from then on.
4. --published-port-range can be changed either via SIGHUP or by
   restarting the docker engine.
5. If --published-port-range is not set by the user, an OS-specific
   default range is used for dynamic port allocation:
   Linux: 49153-60999, Windows: 60000-65000.
6. If --published-port-range is invalid, that is, the given range
   falls outside of the allowed default range, no change takes place;
   libnetwork will continue to use the old/existing port range for
   dynamic port allocation (see the sketch after this list).
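A minimal sketch of the validation described in step 6, assuming the Linux
defaults from step 5 (helper and constant names are illustrative, not the
actual libnetwork code):

    package main

    import "fmt"

    // Defaults from step 5 (Linux).
    const (
        defaultPortRangeStart = 49153
        defaultPortRangeEnd   = 60999
    )

    // validatePortRange applies the rule from step 6: a requested range that
    // falls outside the allowed default range is rejected and the current
    // range is kept unchanged.
    func validatePortRange(reqStart, reqEnd, curStart, curEnd int) (int, int, error) {
        if reqStart >= reqEnd ||
            reqStart < defaultPortRangeStart || reqEnd > defaultPortRangeEnd {
            return curStart, curEnd,
                fmt.Errorf("invalid range %d-%d, keeping %d-%d", reqStart, reqEnd, curStart, curEnd)
        }
        return reqStart, reqEnd, nil
    }

    func main() {
        // 50000-60000 is inside the Linux defaults, so it is accepted.
        fmt.Println(validatePortRange(50000, 60000, defaultPortRangeStart, defaultPortRangeEnd))
        // 50000-70000 exceeds the default upper bound, so the current range is kept.
        fmt.Println(validatePortRange(50000, 70000, defaultPortRangeStart, defaultPortRangeEnd))
    }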
Signed-off-by: Su Wang <su.wang@docker.com>
This is a new feature that allows users to specify which subnetwork a
Docker container should choose from when it creates a bridge network.
This libnetwork commit addresses moby PR 36054.
Signed-off-by: selansen <elango.siva@docker.com>
Avoid negative numbers and also set a lower boundary.
500 will mean a 400 byte minimum payload, which allows
at least a couple of gossip messages to fit.
There is no theoretical limit, because the message is made of
strings, so there is still the possibility of cases where
400 bytes are not enough to fit a single message, but
in that case we should start asking why a node
name needs to be as long as an encyclopedia.
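A minimal sketch of the kind of lower-boundary check described above (names
are illustrative, not the actual networkdb code):

    package main

    import "fmt"

    // minBufferSize is the lower boundary described above: 500 bytes total,
    // leaving roughly 400 bytes of payload for gossip messages.
    const minBufferSize = 500

    // setBufferSize rejects negative or too-small values and returns the size
    // to use for the gossip send buffer.
    func setBufferSize(requested int) (int, error) {
        if requested < minBufferSize {
            return 0, fmt.Errorf("buffer size %d is below the minimum of %d bytes", requested, minBufferSize)
        }
        return requested, nil
    }

    func main() {
        if _, err := setBufferSize(-1); err != nil {
            fmt.Println(err) // negative values are caught by the same check
        }
        size, _ := setBufferSize(1400)
        fmt.Println("using buffer size:", size)
    }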
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
- Introduce the possibility to specify the max buffer length
  in NetworkDB. This allows using the whole MTU limit of
  the interface.
- Add queue stats per network; these can be handy to identify a
  node's throughput per network and to spot imbalances between
  nodes that may point to an MTU misconfiguration.
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
This change cleans up the SetClusterProvider method.
Swarm calls SetClusterProvider to pass libnetwork the pointer
of the provider from which libnetwork can fetch all the information
needed to initialize the internal agent.
The method can be, and is, called multiple times with the same value;
the previous logic erroneously spawned multiple goroutines, making
a race possible between an agentInit and an agentClose.
The new logic disallows this by checking the provider passed and
ensuring that if the provider is already present there is nothing to do,
because there is already an active goroutine ready to process cluster events.
Moreover, a patch on the moby side takes care of cleaning up the cluster event
dispatching, using only one channel to handle all the event types.
This also guarantees in-order event handling, because now all the events are
piped into one single channel.
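A minimal sketch of the new guard (type and field names are illustrative, not
the actual controller code):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // Provider stands in for the cluster provider interface Swarm hands to
    // libnetwork; only its identity matters for this sketch.
    type Provider interface{ Name() string }

    type controller struct {
        mu       sync.Mutex
        provider Provider
    }

    // SetClusterProvider is a no-op when called again with the same provider,
    // so only one agent goroutine is ever spawned for it.
    func (c *controller) SetClusterProvider(p Provider) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if c.provider == p {
            // Already configured: an agent goroutine is active, nothing to do.
            return
        }
        c.provider = p
        go func() {
            // Placeholder for agent initialization and the cluster event loop.
            fmt.Println("agent started for provider", p.Name())
        }()
    }

    type swarm struct{ name string }

    func (s *swarm) Name() string { return s.name }

    func main() {
        c := &controller{}
        p := &swarm{name: "swarm"}
        c.SetClusterProvider(p)
        c.SetClusterProvider(p)            // second call with the same provider is a no-op
        time.Sleep(100 * time.Millisecond) // let the single agent goroutine print
    }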
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
This fix cleans up logrus formatting by removing the `f` suffix from
`logrus.[Error|Warn|Debug|Fatal|Panic|Info]f` calls where no formatting string
is present.
Also fix the import name to use the original project name 'logrus' instead of
'log'.
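For example (illustrative snippet; the import path shown is the current
upstream one):

    package main

    // Previously imported as: log "github.com/sirupsen/logrus"
    import "github.com/sirupsen/logrus"

    func main() {
        // Before: logrus.Errorf("connection closed") -- no format verbs, so
        // the non-formatting variant is the correct call.
        logrus.Error("connection closed")

        // Formatting variants remain where a format string is actually used.
        logrus.Infof("retrying in %d seconds", 5)
    }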
Signed-off-by: Daehyeok Mun <daehyeok@gmail.com>
As part of daemon init, network and ipam drivers are passed a
pluginstore object that implements the plugin/getter interface. Use this
interface's methods in libnetwork to interact with network plugins. This
interface provides the new and improved pluginv2 functionality and falls
back to pluginv1 (legacy) if necessary.
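A rough sketch of the consumption pattern with a simplified, hypothetical
getter interface (the real plugingetter API has more methods and different
signatures):

    package main

    import "fmt"

    // Plugin and PluginGetter are simplified, hypothetical stand-ins for the
    // plugin/getter types handed to libnetwork by the daemon.
    type Plugin interface {
        Name() string
    }

    type PluginGetter interface {
        // Get looks up a plugin by name and capability (e.g. "NetworkDriver"),
        // preferring the managed (v2) plugin store and falling back to
        // legacy (v1) plugin discovery when necessary.
        Get(name, capability string) (Plugin, error)
    }

    // loadNetworkDriver resolves a remote driver through the getter instead
    // of calling the v1 plugin code directly.
    func loadNetworkDriver(pg PluginGetter, name string) (Plugin, error) {
        p, err := pg.Get(name, "NetworkDriver")
        if err != nil {
            return nil, fmt.Errorf("could not load network plugin %s: %w", name, err)
        }
        return p, nil
    }

    func main() {}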
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
libnetwork agent mode is a mode where libnetwork can act as a local
agent for network and discovery plumbing alone, while the state
management is done elsewhere. This completes the support for making
libnetwork and its associated drivers completely independent of a
k/v store (if needed) and able to work purely based on the state information
passed along by some external controller or manager. This does not
mean that libnetwork's support for decentralized state management via a
k/v store is removed.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
This adds a new options configuration routine that the engine
can call in order to configure TLS for libnetwork's KV store.
Signed-off-by: Daniel Hiltgen <daniel.hiltgen@docker.com>
Always-on watching of networks and endpoints can
affect scalability of the cluster beyond a few nodes.
Remove proactive watching and watch only the objects
you are interested in.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
* Integrated the hostdiscovery package with the new Docker Discovery
* Integrated the hostdiscovery package with the libnetwork core
* Removed the libnetwork_discovery tag
* Introduced driver APIs for discovery events
* Moved the overlay driver to make use of the discovery events
* Using the Docker Discovery service
* Changed the integration tests to make use of the new discovery
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Currently the driver configuration is pushed through a separate
API. This makes driver configuration possible at any arbitrary
time, which unnecessarily complicates the driver implementation.
More importantly, the driver does not get access to its
configuration before it does the handshake with libnetwork.
This makes the internal drivers a little different from
external plugins, which can get their configuration before the handshake
with libnetwork.
This PR attempts to fix that mismatch between internal drivers and
external plugins.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
This way we won't vendor test-related functions in docker anymore.
It also moves netns-related functions to a new ns package to be able to
call the ns init function in tests. I think this also helps with the
overall package isolation.
Signed-off-by: David Calavera <david.calavera@gmail.com>
In that commit, AtomicPutCreate takes previous = nil to atomically create keys
that don't exist. We need a create operation that is atomic to prevent races
between multiple libnetworks creating the same object.
Previously, we just created new KVs with an index of 0 and wrote them to the
datastore. Consul accepts this behaviour and interprets an index of 0 as
non-existing, but other data backends do not.
- Add Exists() to the KV interface. SetIndex() should also modify a KV so
  that it exists.
- Call SetIndex() from within the GetObject() method on the DataStore interface.
- This ensures objects have the updated values for exists and index.
- Add SetValue() to the KV interface. This allows implementers to define
  their own method to marshal and unmarshal (as bitseq and allocator have).
- Update existing users of the DataStore (endpoint, network, bitseq,
  allocator, ov_network) to the new interfaces.
- Fix UTs.
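A rough sketch of the KV object interface after these additions (simplified;
the real datastore package carries more methods):

    package datastore

    // KVObject is a simplified sketch of the interface stored objects
    // implement; only the methods discussed above are shown.
    type KVObject interface {
        // Key and Value identify and serialize the object in the store.
        Key() []string
        Value() []byte

        // SetValue lets each implementer (e.g. bitseq, allocator) define its
        // own unmarshalling from the raw bytes read back from the store.
        SetValue([]byte) error

        // Index is the store's version counter for this key. SetIndex also
        // marks the object as existing, so Exists() reflects whether the key
        // is already present rather than relying on index == 0.
        Index() uint64
        SetIndex(uint64)
        Exists() bool
    }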
1. Replaced the --net option for the service UI with the SERVICE.[NETWORK] format
2. Made use of the default network/driver backend support
3. NetworkName and NetworkType from the UI/API can be empty strings,
   and they will be replaced with DefaultNetwork and DefaultDriver
As per the design goals, we wanted to keep libnetwork core free of
handling defaults. Rather, the clients (docker & dnet) must handle the
defaultness of these entities.
Also, since there is no API to get these default values from the
backend, the UI will not handle the default values either. Hence, it falls
under the responsibility of the API layer to handle this specific case.
Signed-off-by: Madhu Venugopal <madhu@docker.com>
The configuration format for the docker runtime is based on daemon flags, and
hence we adjust the libnetwork configuration to accommodate it by moving
the TOML-based configuration to the dnet tool.
Also changed the controller configuration to be done via options.
Signed-off-by: Madhu Venugopal <madhu@docker.com>