libnetwork/etchosts/etchosts_test.go:167:54: empty-lines: extra empty line at the end of a block (revive)
libnetwork/osl/route_linux.go:185:74: empty-lines: extra empty line at the start of a block (revive)
libnetwork/osl/sandbox_linux_test.go:323:36: empty-lines: extra empty line at the start of a block (revive)
libnetwork/bitseq/sequence.go:412:48: empty-lines: extra empty line at the start of a block (revive)
libnetwork/datastore/datastore_test.go:67:46: empty-lines: extra empty line at the end of a block (revive)
libnetwork/datastore/mock_store.go:34:60: empty-lines: extra empty line at the end of a block (revive)
libnetwork/iptables/firewalld.go:202:44: empty-lines: extra empty line at the end of a block (revive)
libnetwork/iptables/firewalld_test.go:76:36: empty-lines: extra empty line at the end of a block (revive)
libnetwork/iptables/iptables.go:256:67: empty-lines: extra empty line at the end of a block (revive)
libnetwork/iptables/iptables.go:303:128: empty-lines: extra empty line at the start of a block (revive)
libnetwork/networkdb/cluster.go:183:72: empty-lines: extra empty line at the end of a block (revive)
libnetwork/ipams/null/null_test.go:44:38: empty-lines: extra empty line at the end of a block (revive)
libnetwork/drivers/macvlan/macvlan_store.go:45:52: empty-lines: extra empty line at the end of a block (revive)
libnetwork/ipam/allocator_test.go:1058:39: empty-lines: extra empty line at the start of a block (revive)
libnetwork/drivers/bridge/port_mapping.go:88:111: empty-lines: extra empty line at the end of a block (revive)
libnetwork/drivers/bridge/link.go:26:90: empty-lines: extra empty line at the end of a block (revive)
libnetwork/drivers/bridge/setup_ipv6_test.go:17:34: empty-lines: extra empty line at the end of a block (revive)
libnetwork/drivers/bridge/setup_ip_tables.go:392:4: empty-lines: extra empty line at the start of a block (revive)
libnetwork/drivers/bridge/bridge.go:804:50: empty-lines: extra empty line at the start of a block (revive)
libnetwork/drivers/overlay/ov_serf.go:183:29: empty-lines: extra empty line at the start of a block (revive)
libnetwork/drivers/overlay/ov_utils.go:81:64: empty-lines: extra empty line at the end of a block (revive)
libnetwork/drivers/overlay/peerdb.go:172:67: empty-lines: extra empty line at the start of a block (revive)
libnetwork/drivers/overlay/peerdb.go:209:67: empty-lines: extra empty line at the start of a block (revive)
libnetwork/drivers/overlay/peerdb.go:344:89: empty-lines: extra empty line at the start of a block (revive)
libnetwork/drivers/overlay/peerdb.go:436:63: empty-lines: extra empty line at the start of a block (revive)
libnetwork/drivers/overlay/overlay.go:183:36: empty-lines: extra empty line at the start of a block (revive)
libnetwork/drivers/overlay/encryption.go:69:28: empty-lines: extra empty line at the end of a block (revive)
libnetwork/drivers/overlay/ov_network.go:563:81: empty-lines: extra empty line at the start of a block (revive)
libnetwork/default_gateway.go:32:43: empty-lines: extra empty line at the start of a block (revive)
libnetwork/errors_test.go:9:40: empty-lines: extra empty line at the start of a block (revive)
libnetwork/service_common.go:184:64: empty-lines: extra empty line at the end of a block (revive)
libnetwork/endpoint.go:161:55: empty-lines: extra empty line at the end of a block (revive)
libnetwork/store.go:320:33: empty-lines: extra empty line at the end of a block (revive)
libnetwork/store_linux_test.go:11:38: empty-lines: extra empty line at the end of a block (revive)
libnetwork/sandbox.go:571:36: empty-lines: extra empty line at the start of a block (revive)
libnetwork/service_common.go:317:246: empty-lines: extra empty line at the start of a block (revive)
libnetwork/endpoint.go:550:17: empty-lines: extra empty line at the end of a block (revive)
libnetwork/sandbox_dns_unix.go:213:106: empty-lines: extra empty line at the start of a block (revive)
libnetwork/controller.go:676:85: empty-lines: extra empty line at the end of a block (revive)
libnetwork/agent.go:876:60: empty-lines: extra empty line at the end of a block (revive)
libnetwork/resolver.go:324:69: empty-lines: extra empty line at the end of a block (revive)
libnetwork/network.go:1153:92: empty-lines: extra empty line at the end of a block (revive)
libnetwork/network.go:1955:67: empty-lines: extra empty line at the start of a block (revive)
libnetwork/network.go:2235:9: empty-lines: extra empty line at the start of a block (revive)
libnetwork/libnetwork_internal_test.go:336:26: empty-lines: extra empty line at the start of a block (revive)
libnetwork/resolver_test.go:76:35: empty-lines: extra empty line at the end of a block (revive)
libnetwork/libnetwork_test.go:303:38: empty-lines: extra empty line at the end of a block (revive)
libnetwork/libnetwork_test.go:985:46: empty-lines: extra empty line at the end of a block (revive)
libnetwork/ipam/allocator_test.go:1263:37: empty-lines: extra empty line at the start of a block (revive)
libnetwork/errors_test.go:9:40: empty-lines: extra empty line at the end of a block (revive)
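For reference, this is the kind of change each finding calls for
(illustrative snippet, not taken from the listed files):

    package example

    // What revive's empty-lines rule flags (shown in a comment):
    //
    //    func before() {
    //
    //        doWork() // extra empty line at the start of the block
    //    }
    //
    // The fix is simply to drop the blank line:
    func after() {
        doWork()
    }

    func doWork() {}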
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
It was unclear what the distinction was between these configuration
structs, so merge them to simplify things.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
After moving libnetwork to this repo, we need to update all the import
paths for libnetwork to point to docker/docker/libnetwork instead of
docker/libnetwork.
This change implements that.
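As an illustration, using netlabel as one representative package
(every libnetwork import changes the same way):

    package example

    import (
        // Before the move the import path was:
        //   "github.com/docker/libnetwork/netlabel"
        // After the move it becomes:
        _ "github.com/docker/docker/libnetwork/netlabel"
    )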
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This PR carries forward https://github.com/moby/libnetwork/pull/2239
and incorporates the suggestions from its review comments to fix the
nil pointer dereference (NPE) and potential ones like it caused by a
nil value returned by ep.Iface()
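A minimal sketch of the defensive pattern, assuming ep.Iface() may
return nil; the types below are illustrative stand-ins, not the real
libnetwork ones:

    package main

    import (
        "errors"
        "fmt"
        "net"
    )

    type iface struct{ addr *net.IPNet }

    func (i *iface) Address() *net.IPNet { return i.addr }

    type endpoint struct{ itf *iface }

    // Iface can return nil, e.g. while the endpoint is being torn down.
    func (e *endpoint) Iface() *iface { return e.itf }

    func endpointAddress(e *endpoint) (*net.IPNet, error) {
        // Guard against the nil interface that caused the reported panics.
        i := e.Iface()
        if i == nil {
            return nil, errors.New("endpoint has no interface")
        }
        return i.Address(), nil
    }

    func main() {
        addr, err := endpointAddress(&endpoint{})
        fmt.Println(addr, err)
    }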
Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
Issue - "index out of range" panic in drivers/overlay/encryption.go:539
due to a mismatch in indices between curKeys and spis due to
case where updateKeys might bail out due to an error and
not update the spis
Fix - Reconfigure keys when there is a key update failure
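A rough sketch of the idea with hypothetical names (the real key
handling lives in drivers/overlay): keep the two slices updated
together, and on a failed update rebuild both from the authoritative
key list so their indices can never diverge:

    package main

    import "fmt"

    type key struct{ value string }
    type spi struct{ forward, reverse int }

    type encrMap struct {
        curKeys []*key
        spis    []*spi
    }

    func (e *encrMap) updateKeys(newKey *key, newSPI *spi) error {
        // A partial failure here previously left curKeys and spis mismatched.
        return fmt.Errorf("simulated update failure")
    }

    // reconfigureKeys rebuilds both slices from the full key list,
    // restoring the index alignment the fast-path update relies on.
    func (e *encrMap) reconfigureKeys(keys []*key, spis []*spi) {
        e.curKeys = append([]*key(nil), keys...)
        e.spis = append([]*spi(nil), spis...)
    }

    func main() {
        e := &encrMap{}
        if err := e.updateKeys(&key{"k1"}, &spi{1, 2}); err != nil {
            // Fall back to a full reconfiguration on any update failure.
            e.reconfigureKeys([]*key{{"k1"}}, []*spi{{1, 2}})
        }
        fmt.Println(len(e.curKeys) == len(e.spis))
    }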
Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
This patch attempts to allow endpoints to complete servicing connections
while being removed from a service. The change adds a flag to the
endpoint.deleteServiceInfoFromCluster() method to indicate whether this
removal should fully remove connectivity through the load balancer
to the endpoint or should just disable directing further connections to
the endpoint. If the flag is 'false', then the load balancer assigns
a weight of 0 to the endpoint but does not remove it as a Linux load
balancing destination. It does remove the endpoint as a Docker load
balancing endpoint but tracks it in a special map of "disabled-but-not-
destroyed" load balancing endpoints. This allows traffic to continue
flowing, at least under Linux. If the flag is 'true', then the code
removes the endpoint entirely as a load balancing destination.
The sandbox.DisableService() method invokes deleteServiceInfoFromCluster()
with the flag set to 'false', while the endpoint.sbLeave() method invokes
it with the flag set to 'true' to complete the removal on endpoint
finalization. Renaming the endpoint invokes deleteServiceInfoFromCluster()
with the flag set to 'true' because renaming attempts to completely
remove and then re-add each endpoint service entry.
The controller.rmServiceBinding() method, which carries out the operation,
similarly gets a new flag for whether to fully remove the endpoint. If
the flag is false, it does the job of moving the endpoint from the
load balancing set to the 'disabled' set. It then removes or
de-weights the entry in the OS load balancing table via
network.rmLBBackend(). It removes the service entirely via said method
ONLY IF there are no more live or disabled load balancing endpoints.
Similarly network.addLBBackend() requires slight tweaking to properly
manage the disabled set.
Finally, this change requires propagating the status of disabled
service endpoints via the networkDB. Accordingly, the patch includes
both code to generate and handle service update messages. It also
augments the service structure with a ServiceDisabled boolean to convey
whether an endpoint should ultimately be removed or just disabled.
This, naturally, required a rebuild of the protocol buffer code as well.
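A condensed sketch of the flag's semantics; names are simplified
stand-ins for the real endpoint/controller methods:

    package main

    import "fmt"

    type loadBalancer struct {
        backends map[string]int // endpoint ID -> weight
        disabled map[string]bool
    }

    // removeBackend either fully removes the endpoint as a destination
    // (fullRemove == true) or only de-weights it and parks it in the
    // "disabled but not destroyed" set so in-flight connections can drain.
    func (lb *loadBalancer) removeBackend(eid string, fullRemove bool) {
        if fullRemove {
            delete(lb.backends, eid)
            delete(lb.disabled, eid)
            return
        }
        lb.backends[eid] = 0 // weight 0: no new connections, existing ones keep flowing
        lb.disabled[eid] = true
    }

    func main() {
        lb := &loadBalancer{backends: map[string]int{"ep1": 1}, disabled: map[string]bool{}}
        lb.removeBackend("ep1", false) // DisableService path
        lb.removeBackend("ep1", true)  // sbLeave / final removal path
        fmt.Println(lb.backends, lb.disabled)
    }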
Signed-off-by: Chris Telfer <ctelfer@docker.com>
This commit introduces the possibility to enable a debug mode
for the networkDB. It allows opening a TCP port on localhost
that exposes the networkDB API for debugging purposes.
The API can be discovered using curl localhost:<port>/help
It supports JSON output if json is passed as a URL query
parameter, and pretty printing if json=pretty is passed.
All binary values are serialized in base64 encoding; this can be
skipped by passing the unsafe option as a URL query parameter.
A simple Go client will follow up
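A hedged sketch of what such a client could look like; the port is a
placeholder and the path/query parameters only follow the description
above, they are not a verified API:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        // Placeholder: use whatever port the debug server was opened on.
        port := os.Getenv("NETWORKDB_DEBUG_PORT")
        // /help lists the available commands; json=pretty asks for
        // pretty-printed JSON output.
        resp, err := http.Get("http://localhost:" + port + "/help?json=pretty")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        io.Copy(os.Stdout, resp.Body)
    }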
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
Separate the hostname from the node identifier. All the messages
that are exchanged on the network contain a nodeName field that
until now was hostname-uniqueid. Because these names are encoded as
strings in the protobuf without any length restriction, they affect
the efficiency of the protocol itself: if the hostname is very long
the overhead increases and degrades the performance of the database,
which by default allows a 1400 byte payload per cycle.
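An illustrative sketch of the separation (all names are hypothetical):
keep the human-readable hostname as a separate field and gossip only a
short, fixed-length node identifier:

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
        "os"
    )

    type nodeInfo struct {
        ID       string // short, fixed-length identifier carried in every gossip message
        Hostname string // human-readable, only used for display/logging
    }

    func newNodeInfo() nodeInfo {
        b := make([]byte, 8)
        rand.Read(b)
        host, _ := os.Hostname()
        return nodeInfo{ID: hex.EncodeToString(b), Hostname: host}
    }

    func main() {
        n := newNodeInfo()
        // Only n.ID goes on the wire, so a long hostname no longer eats
        // into the ~1400-byte gossip payload budget.
        fmt.Println(n.ID, n.Hostname)
    }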
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
Move the sandbox init logic into the goroutine that handles
peer operations.
This is to avoid deadlocks on the network's pMap.Lock.
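A minimal sketch of the pattern with illustrative names: every peer
operation, including the sandbox init, is funneled through one
goroutine, so nothing else contends on the per-network peer lock:

    package main

    import "fmt"

    type peerOp func()

    // runPeerOps is the single goroutine that owns peer-map mutations;
    // serializing everything through it removes the chance of deadlocking
    // on the per-network peer lock.
    func runPeerOps(ops <-chan peerOp, done chan<- struct{}) {
        for op := range ops {
            op()
        }
        close(done)
    }

    func main() {
        ops := make(chan peerOp, 4)
        done := make(chan struct{})
        go runPeerOps(ops, done)

        // Sandbox init is just another op queued to the same goroutine.
        ops <- func() { fmt.Println("init sandbox") }
        ops <- func() { fmt.Println("add peer") }
        close(ops)
        <-done
    }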
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
- Introduce the possibility to specify the max buffer length
in network DB. This allows using the whole MTU limit of
the interface.
- Add queue stats per network; they can be handy to identify the
node's throughput per network and to spot imbalances between
nodes that can point to an MTU misconfiguration.
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
* Correct SetMatrix documentation
The SetMatrix is a generic data structure, so the description
should not be tied to any specific use
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
* Service Discovery reuse name and serviceBindings deletion
- Added logic to handle name reuse from different services
- Moved the deletion from the serviceBindings map to the end
of the rmServiceBindings body to avoid racing with new services
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
* Avoid race on network cleanup
Use the locker to avoid the race between the network
deletion and new endpoints being created
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
* CleanupServiceBindings to clean the SD records
Allow cleanupServiceBindings to take care of the service discovery
cleanup. Also avoid triggering the cleanup for each endpoint from an SD
point of view
LB and SD will be separated in the future
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
* Addressed comments
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
* NetworkDB deleteEntry has to happen
If there is a local error, guarantee that the delete entry
on the network DB is still honored
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
Changed the ipMap to SetMatrix to allow transient states
Compacted addSvc and deleteSvc into one single method
Updated the data structure for backends to allow storing all the
information needed to clean up properly during cleanupServiceBindings
Removed the enable/disable Service logic that was racing with the
sbLeave/sbJoin logic
Added some debug logs to track further race conditions
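A rough sketch of a SetMatrix-style structure (simplified, not the real
implementation): insert/remove report whether the key transitioned,
which is what makes transient states visible to the service-binding
code:

    package main

    import "fmt"

    // setMatrix maps a key to a set of values.
    type setMatrix struct {
        m map[string]map[string]struct{}
    }

    func newSetMatrix() *setMatrix {
        return &setMatrix{m: make(map[string]map[string]struct{})}
    }

    // Insert returns true when this is the first value for the key.
    func (s *setMatrix) Insert(key, value string) (first bool, cardinality int) {
        set, ok := s.m[key]
        if !ok {
            set = make(map[string]struct{})
            s.m[key] = set
        }
        set[value] = struct{}{}
        return !ok, len(set)
    }

    // Remove returns true when the last value for the key was removed.
    func (s *setMatrix) Remove(key, value string) (last bool, cardinality int) {
        set, ok := s.m[key]
        if !ok {
            return false, 0
        }
        delete(set, value)
        if len(set) == 0 {
            delete(s.m, key)
            return true, 0
        }
        return false, len(set)
    }

    func main() {
        s := newSetMatrix()
        fmt.Println(s.Insert("svc1", "ip1")) // true 1: service appears
        fmt.Println(s.Insert("svc1", "ip2")) // false 2: still present
        fmt.Println(s.Remove("svc1", "ip1")) // false 1: transient state
        fmt.Println(s.Remove("svc1", "ip2")) // true 0: service gone
    }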
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
The channel ch.C is never closed.
Added listening on ch.Done() to guarantee
that the goroutine exits once the event channel
is closed
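A minimal sketch of the resulting loop, assuming an events channel that
exposes both C and Done() as described above:

    package main

    import (
        "fmt"
        "time"
    )

    // eventChannel is an illustrative stand-in for the real events channel:
    // C is never closed, while Done() is closed on shutdown.
    type eventChannel struct {
        C    chan string
        done chan struct{}
    }

    func (e *eventChannel) Done() <-chan struct{} { return e.done }

    func consume(ch *eventChannel) {
        for {
            select {
            case ev := <-ch.C:
                fmt.Println("event:", ev)
            case <-ch.Done():
                // Without this case the goroutine would block forever on
                // ch.C, since ch.C is never closed.
                return
            }
        }
    }

    func main() {
        ch := &eventChannel{C: make(chan string, 1), done: make(chan struct{})}
        go consume(ch)
        ch.C <- "update"
        time.Sleep(10 * time.Millisecond)
        close(ch.done)
        time.Sleep(10 * time.Millisecond)
    }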
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
- It specifies whether the network driver can
provide containers connectivity across hosts.
- Until now, the data scope of the driver was
being overloaded with this notion.
- The driver scope information is still valid
and it defines whether the data allocation
of the network resources can be done globally
or only locally.
- With the scope network option, the user can now
force a network to be swarm scoped
regardless of the driver's data scope.
- In case the network is configured as swarm scoped,
and the network driver is multihost capable,
a network DB instance will be launched for it.
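A sketch of the capability split described above; field names are
illustrative, not necessarily the exact driverapi ones:

    package main

    import "fmt"

    // capability separates the two notions: where the driver's resource
    // allocation happens (DataScope) versus whether the driver can provide
    // connectivity across hosts (ConnectivityScope).
    type capability struct {
        DataScope         string // "local" or "global" data/resource allocation
        ConnectivityScope string // "local" or "global" connectivity
    }

    // needsNetworkDB reflects the rule above: a network DB instance is
    // launched only for a swarm-scoped network whose driver is multihost
    // capable.
    func needsNetworkDB(swarmScoped bool, c capability) bool {
        return swarmScoped && c.ConnectivityScope == "global"
    }

    func main() {
        fmt.Println(needsNetworkDB(true, capability{DataScope: "local", ConnectivityScope: "global"}))
    }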
Signed-off-by: Alessandro Boch <aboch@docker.com>
This change cleans up the SetClusterProvider method.
Swarm calls the SetClusterProvider to pass to libnetwork the pointer
of the provider from which libnetwork can fetch all the information to
initialize the internal agent.
The method can be and is called multiple times passing the same value;
the previous logic erroneously spawned multiple goroutines, making a
race between an agentInit and an agentClose possible.
The new logic disallows this by checking the provider passed in:
if the provider is already present there is nothing to do, because
there is already an active goroutine ready to process cluster events.
Moreover, a patch on the moby side takes care of cleaning up the
cluster-event dispatching, using only one channel to handle all the
event types.
This also guarantees in-order event handling, because all the events
are now piped into one single channel.
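A simplified sketch of the idempotency check; the real method lives on
the libnetwork controller, the types here are trimmed down:

    package main

    import (
        "fmt"
        "sync"
    )

    // provider is a trimmed-down stand-in for the cluster provider interface.
    type provider interface{ IsManager() bool }

    type swarmProvider struct{}

    func (swarmProvider) IsManager() bool { return true }

    type controller struct {
        mu       sync.Mutex
        provider provider
    }

    // SetClusterProvider may be called many times with the same value; only
    // a change of provider starts a new event-processing goroutine.
    func (c *controller) SetClusterProvider(p provider) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if c.provider == p {
            // Same provider as before: the goroutine that processes cluster
            // events is already running, nothing to do.
            return
        }
        c.provider = p
        go c.processClusterEvents()
    }

    func (c *controller) processClusterEvents() { /* consume the single events channel */ }

    func main() {
        c := &controller{}
        p := swarmProvider{}
        c.SetClusterProvider(p)
        c.SetClusterProvider(p) // no-op: no second goroutine, no agentInit/agentClose race
        fmt.Println("done")
    }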
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
Change in the provider interface to let the provider
return the whole list of managers.
This will allow the network DB to have multiple choices
for establishing its first adjacencies
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
- concurrent swarm join and daemon stop seen in
integration tests may cause agentSetup to access
a nil clusterProvider, resulting in a panic
Signed-off-by: Alessandro Boch <aboch@docker.com>
During configuration in swarm mode it is now possible to pass an
additional parameter --data-path-addr <ip|interface>.
The information is used to configure which interface is used for the
data path of global-scope drivers.
Up to now the only driver really using this extra parameter is the
overlay driver.
Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
- Currently when a non-named container with network aliases
is connected to a swarm attachable network, its aliases are
not added to the service records.
This is not in line with what we do when connecting to
a local scope network, or to a kv-store based overlay network.
Signed-off-by: Alessandro Boch <aboch@docker.com>
With the introduction of networkdb, the node discovery events were not
sent to the drivers. This commit generates the node discovery events and
sents it to the drivers interested in it.
Signed-off-by: Madhu Venugopal <madhu@docker.com>
- This fixes a panic in memberlist.Leave() caused by it being called
after memberlist has already been shut down.
It happens because of two interlocking calls to NetworkDB.clusterLeave().
It is easily reproducible with two back-to-back calls
to docker swarm init && docker swarm leave --force.
While the first clusterLeave() is waiting for sendNodeEvent(NodeEventTypeLeave)
to time out (5 sec) a second clusterLeave() is called. The second clusterLeave()
ends up invoking memberlist.Leave() after the previous call already did
the same, hence after memberlist had already been shut down.
- The fix is to have agentClose() acquire the agent instance and reset the
agent pointer right away under lock. Then execute the closing/leave functions
on the agent instance.
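A condensed sketch of the fix with simplified types: take the agent
pointer and clear it under the lock, then run the slow leave/close work
on the local copy, so a concurrent caller finds a nil agent and returns
immediately:

    package main

    import (
        "fmt"
        "sync"
    )

    type agent struct{}

    func (a *agent) leaveCluster() { fmt.Println("leaving memberlist once") }

    type controller struct {
        mu    sync.Mutex
        agent *agent
    }

    func (c *controller) agentClose() {
        // Grab the instance and reset the pointer right away, under the lock.
        c.mu.Lock()
        a := c.agent
        c.agent = nil
        c.mu.Unlock()

        if a == nil {
            // A concurrent agentClose already took ownership; nothing to do,
            // and memberlist.Leave() is never invoked twice.
            return
        }
        a.leaveCluster() // slow path runs outside the lock, exactly once
    }

    func main() {
        c := &controller{agent: &agent{}}
        c.agentClose()
        c.agentClose() // safe no-op: the pointer was already taken and cleared
    }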
Signed-off-by: Alessandro Boch <aboch@docker.com>
When a gossip join failure happens, do not return early in the call
chain, because a join failure is most likely transient and the retry
logic built into networkdb is going to retry and succeed. Returning
early prevents the initialization of the ingress network/sandbox, which
causes a problem even after the gossip join succeeds on retry.
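A sketch of the non-fatal handling; the joiner interface is a stand-in,
the point is to log and continue rather than return:

    package main

    import (
        "errors"
        "log"
    )

    // joiner is a stand-in for the networkdb join entry point.
    type joiner interface {
        Join(members []string) error
    }

    func agentInit(db joiner, members []string) error {
        if err := db.Join(members); err != nil {
            // The failure is most likely transient and networkdb keeps
            // retrying in the background, so do NOT return here: returning
            // early would skip the ingress network/sandbox initialization
            // that follows.
            log.Printf("gossip join failed, networkdb will retry: %v", err)
        }
        // ... continue with ingress network / sandbox initialization ...
        return nil
    }

    type flakyDB struct{}

    func (flakyDB) Join([]string) error { return errors.New("transient join failure") }

    func main() {
        _ = agentInit(flakyDB{}, []string{"10.0.0.1"})
    }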
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
If the user provided a non-zero listen address, honor it and bind only
to that address. Right now it is not honored and we always bind to all
IP addresses on the host.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Avoid this by reinitializing the channel immediately after closing it,
within a lock. Also change the wait code to cache the channel on the
stack by retrieving it from the controller, and wait on the stack copy
of the channel.
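A minimal sketch of the two pieces, assuming a controller-owned
notification channel:

    package main

    import (
        "fmt"
        "sync"
    )

    type controller struct {
        mu     sync.Mutex
        doneCh chan struct{}
    }

    // signalDone closes the channel and immediately re-creates it, all under
    // the lock, so later waiters never see a stale, already-closed channel.
    func (c *controller) signalDone() {
        c.mu.Lock()
        close(c.doneCh)
        c.doneCh = make(chan struct{})
        c.mu.Unlock()
    }

    // wait caches the current channel in a stack variable and waits on that
    // copy, so a concurrent reinitialization cannot swap it out from under us.
    func (c *controller) wait() {
        c.mu.Lock()
        ch := c.doneCh
        c.mu.Unlock()
        <-ch
    }

    func main() {
        c := &controller{doneCh: make(chan struct{})}
        go c.signalDone()
        c.wait()
        fmt.Println("signalled")
    }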
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Currently the initDone notification is provided immediately after
initializing the cluster. This may be fine for the first manager. But
for all subsequent nodes which join the cluster we need to wait until
the node completes joining the gossip cluster in order to
synchronize the gossip network clock with the other nodes. If we don't
have an up-to-date clock, the updates that this node provides to the
cluster may be discarded by the other nodes if they have entries which
are yet to be reaped but have a better clock.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Fixed certain spurious overlay errors which were not errors at all but
were showing up every time service tasks are started in the engine.
Also added a check to make sure a delete is valid by comparing the
incoming endpoint id with the one in the peerdb, to make sure the
delete from gossip is not stale.
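A small sketch of that validity check; the map stands in for the
peerdb:

    package main

    import "fmt"

    // shouldApplyDelete only honors the delete when the endpoint ID carried
    // by the gossip event matches what the peerdb currently stores for that
    // peer; a mismatch means the event is stale and must be ignored.
    func shouldApplyDelete(peerDB map[string]string, peerKey, incomingEID string) bool {
        current, ok := peerDB[peerKey]
        return ok && current == incomingEID
    }

    func main() {
        peerDB := map[string]string{"10.0.0.2": "ep-new"}
        fmt.Println(shouldApplyDelete(peerDB, "10.0.0.2", "ep-old")) // false: stale
        fmt.Println(shouldApplyDelete(peerDB, "10.0.0.2", "ep-new")) // true
    }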
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>