As part of daemon init, network and ipam drivers are passed a
pluginstore object that implements the plugin/getter interface. Use this
interface's methods in libnetwork to interact with network plugins. This
interface provides the new and improved pluginv2 functionality and falls
back to pluginv1 (legacy) if necessary.
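
A minimal sketch of the shape of such a getter, assuming illustrative
names (this is not the exact plugin/getter API, just the idea of an
interface that hides the v2-vs-legacy resolution from drivers):

    // Hedged sketch; the real plugin/getter interface may differ.
    package pluginsketch

    import "fmt"

    // Plugin is a minimal stand-in for a resolved plugin handle.
    type Plugin interface {
        Name() string
    }

    // PluginGetter abstracts plugin lookup so callers need not care
    // whether a plugin is new-style (v2) or legacy (v1): the getter
    // resolves v2 first and falls back to v1 if necessary.
    type PluginGetter interface {
        // Get resolves a plugin by name and capability,
        // e.g. "NetworkDriver" or "IpamDriver".
        Get(name, capability string) (Plugin, error)
    }

    // initDriver shows how a network/ipam driver handed the getter
    // at daemon init could resolve a plugin.
    func initDriver(pg PluginGetter) error {
        p, err := pg.Get("my-net-plugin", "NetworkDriver")
        if err != nil {
            return fmt.Errorf("plugin lookup failed: %v", err)
        }
        fmt.Println("resolved plugin:", p.Name())
        return nil
    }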
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
When dynamic networks are created, a race in the creation of the same
network from two different tasks causes one of them to fail while the
other succeeds. For service tasks this is not a big problem, because
they will be rescheduled. But for attachment tasks it is a problem,
since they won't get recreated, making the whole connection fail. Fix
it by serializing creation of networks with the same ID and checking
whether the ID is present after coming out of the wait.
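
A minimal sketch of the serialization idea, assuming a controller that
tracks in-flight creations per network ID (all names here are
illustrative, not the actual libnetwork code):

    package netctrl

    import "sync"

    // controller serializes creation of networks with the same ID.
    type controller struct {
        mu       sync.Mutex
        creating map[string]chan struct{} // in-flight creations by ID
        networks map[string]bool          // networks that already exist
    }

    func newController() *controller {
        return &controller{
            creating: make(map[string]chan struct{}),
            networks: make(map[string]bool),
        }
    }

    // createNetwork returns true if this caller created the network,
    // false if it already existed (e.g. a racing task won).
    func (c *controller) createNetwork(id string) bool {
        for {
            c.mu.Lock()
            if c.networks[id] {
                // The ID is present after coming out of the wait:
                // a racing task already created the network.
                c.mu.Unlock()
                return false
            }
            ch, inFlight := c.creating[id]
            if !inFlight {
                c.creating[id] = make(chan struct{})
                c.mu.Unlock()
                break // we are the creator
            }
            c.mu.Unlock()
            <-ch // wait for the in-flight creation, then re-check
        }

        // ... perform the actual network creation here ...

        c.mu.Lock()
        c.networks[id] = true
        close(c.creating[id]) // wake waiters so they re-check
        delete(c.creating, id)
        c.mu.Unlock()
        return true
    }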
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
This also allows published services to be accessible from containers
on bridge networks on the host.
Signed-off-by: Santhosh Manohar <santhosh@docker.com>
Avoid the race by reinitializing the channel immediately after closing
it, within a lock. Also change the wait code to cache the channel on
the stack by retrieving it from the controller and waiting on the stack
copy of the channel.
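
A small sketch of the pattern, with illustrative names:

    package chansketch

    import "sync"

    type ctrl struct {
        sync.Mutex
        done chan struct{}
    }

    // signal closes the channel and immediately reinitializes it,
    // all under the lock, so no caller can observe a stale or nil
    // channel between close and re-creation.
    func (c *ctrl) signal() {
        c.Lock()
        close(c.done)
        c.done = make(chan struct{})
        c.Unlock()
    }

    // wait caches the channel on the stack before blocking, so a
    // concurrent signal/reinit cannot swap the channel out from
    // under the waiter.
    func (c *ctrl) wait() {
        c.Lock()
        ch := c.done // stack copy of the channel
        c.Unlock()
        <-ch
    }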
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
When leaving the entire gossip cluster or when leaving a network's
gossip cluster, we may not have had a chance to clean up service
bindings via gossip updates, due to premature closure of the gossip
channel. Make sure to clean up all service bindings, since we are not
participating in the cluster any more.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
When a node leaves the swarm cluster, we should clean up the ingress
network and sandbox. This makes sure that the next time the node joins
the swarm, it will be able to update the cluster with the right
information.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
When updating load balancer state, index the service on both its ID
and its port configuration. From libnetwork's point of view a service
is defined not just by its ID but also by the ports it exposes. When a
service updates its ports, its ID remains the same but its port configs
change; libnetwork should treat this as a new service in order to
ensure proper cleanup of the old LB state and creation of the new LB
state.
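
A minimal sketch of keying LB state on both pieces; the canonical
encoding and hashing of port configs below are illustrative, not the
actual implementation:

    package lbstate

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "sort"
    )

    // portConfig is a stand-in for a service's published port entry.
    type portConfig struct {
        Protocol      string
        TargetPort    uint32
        PublishedPort uint32
    }

    // serviceKey identifies a service by ID plus a digest of its
    // ports, so a ports-only update yields a different key and hence
    // fresh LB state while the old state can be cleaned up.
    type serviceKey struct {
        id    string
        ports string
    }

    func makeServiceKey(id string, ports []portConfig) serviceKey {
        s := make([]string, 0, len(ports))
        for _, p := range ports {
            s = append(s, fmt.Sprintf("%s/%d/%d",
                p.Protocol, p.TargetPort, p.PublishedPort))
        }
        sort.Strings(s) // canonical order: equal sets hash equally
        h := sha256.Sum256([]byte(fmt.Sprint(s)))
        return serviceKey{id: id, ports: hex.EncodeToString(h[:])}
    }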
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
When adding a load balancer to a sandbox, the sandbox may have a valid
namespace but may not yet have populated all the dependent network
resources. In that case, do not populate that endpoint's load balancer
into the sandbox yet; the load balancer will be populated once the
sandbox has finished populating all the dependent network resources.
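
A sketch of the deferral, assuming the sandbox tracks whether its
dependent resources are fully populated (field and method names are
hypothetical):

    package sboxsketch

    type endpoint struct{ name string }

    type sandbox struct {
        resourcesPopulated bool // set once dependent resources exist
        pendingLB          []*endpoint
        lbs                []*endpoint
    }

    // addLoadBalancer defers LB population until the sandbox has
    // finished setting up its dependent network resources.
    func (sb *sandbox) addLoadBalancer(ep *endpoint) {
        if !sb.resourcesPopulated {
            sb.pendingLB = append(sb.pendingLB, ep) // populate later
            return
        }
        sb.lbs = append(sb.lbs, ep)
    }

    // finishPopulation runs once all dependent resources exist and
    // flushes any load balancers that were deferred.
    func (sb *sandbox) finishPopulation() {
        sb.resourcesPopulated = true
        sb.lbs = append(sb.lbs, sb.pendingLB...)
        sb.pendingLB = nil
    }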
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
If the IPAM pools are not reserved before resource cleanup happens,
the resource release will not happen correctly.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
When leaving a cluster, agentInitDone should be re-initialized so that
it is usable when a new cluster is initialized.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Add an agent initialization wait method to make sure that callers of
controller methods which depend on agent initialization being complete
can wait on it.
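
A sketch of both pieces together, reusing the stack-copy channel
pattern described earlier (names are illustrative):

    package cluster

    import "sync"

    type controller struct {
        mu            sync.Mutex
        agentInitDone chan struct{}
    }

    // agentInitComplete signals waiters that agent init finished.
    func (c *controller) agentInitComplete() {
        c.mu.Lock()
        close(c.agentInitDone)
        c.mu.Unlock()
    }

    // leaveCluster re-initializes agentInitDone so the controller is
    // usable when a new cluster is initialized later.
    func (c *controller) leaveCluster() {
        c.mu.Lock()
        c.agentInitDone = make(chan struct{})
        c.mu.Unlock()
    }

    // agentInitWait blocks callers that depend on agent
    // initialization being complete.
    func (c *controller) agentInitWait() {
        c.mu.Lock()
        ch := c.agentInitDone // stack copy of the channel
        c.mu.Unlock()
        <-ch
    }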
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Ingress load balancing is achieved via a service sandbox which acts as
a proxy that translates incoming node port requests and maps them to a
service entry. Once the right service is identified, the same internal
load balancer implementation is used to balance to the right backend
instance.
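
A toy sketch of the node-port-to-service mapping idea; the real
implementation programs the service sandbox's packet plumbing, so
nothing below is the actual code:

    package ingress

    // serviceEntry is a stand-in for the internal LB's record.
    type serviceEntry struct {
        id       string
        backends []string // backend instance addresses
    }

    // ingressTable maps published node ports to service entries.
    // Once a node port request is matched to a service, the same
    // internal load balancer picks a backend instance.
    type ingressTable struct {
        byNodePort map[uint32]*serviceEntry
        next       map[string]int // naive per-service RR cursor
    }

    // backendFor resolves a node port to a backend via round-robin.
    func (t *ingressTable) backendFor(nodePort uint32) (string, bool) {
        s, ok := t.byNodePort[nodePort]
        if !ok || len(s.backends) == 0 {
            return "", false
        }
        i := t.next[s.id] % len(s.backends)
        t.next[s.id]++
        return s.backends[i], true
    }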
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
- Also restore the older behavior where the overlap check is not run
  when a preferred pool is specified; it was broken by recent changes.
Signed-off-by: Alessandro Boch <aboch@docker.com>
Add a notion of service in libnetwork so that a group of endpoints
which form a service can be treated as such, allowing service-level
features to be added on top. Initially, as part of this PR, support is
added for assigning a name to such a service; DNS queries to the
service name then return the IPs of all the backing endpoints, so that
DNS RR behavior on the service name can be achieved.
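
A minimal sketch of the DNS-RR idea: a name lookup returns all backing
endpoint IPs, rotated per query. This is illustrative, not the actual
resolver:

    package servicedns

    import "net"

    // service groups the endpoints that back a named service.
    type service struct {
        name string
        ips  []net.IP
        rr   int // rotation cursor for round-robin answers
    }

    // lookup returns all backing IPs, rotated so successive queries
    // see a different first answer (classic DNS round robin).
    func (s *service) lookup() []net.IP {
        n := len(s.ips)
        if n == 0 {
            return nil
        }
        out := make([]net.IP, 0, n)
        for i := 0; i < n; i++ {
            out = append(out, s.ips[(s.rr+i)%n])
        }
        s.rr = (s.rr + 1) % n
        return out
    }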
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
libnetwork agent mode is a mode in which libnetwork acts as a local
agent for network and discovery plumbing alone, while state management
is done elsewhere. This completes the support for making libnetwork and
its associated drivers completely independent of a k/v store (if
needed) and able to work purely based on the state information passed
along by some external controller or manager. This does not mean that
libnetwork's support for decentralized state management via a k/v store
is removed.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
With the introduction of driver-generic gossip in libnetwork, it is no
longer necessary for drivers to run their own gossip protocol (as the
overlay driver currently does); they can instead rely on the gossip
instance run centrally in libnetwork. Achieving this requires certain
enhancements to the driver API, which this change provides.
The new API provides a way for drivers to register interest in table
names of their choice by returning a list of those table names in
response to CreateNetwork. By doing so, they are notified, via the
newly added EventNotify call, whenever a CRUD operation happens on a
table they are interested in.
Drivers themselves can add entries to any table during a Join call by
invoking the AddTableEntry method any number of times. These entries'
lifetime is tied to the endpoint itself: as soon as the container
leaves the endpoint, the entries added by the driver during that
endpoint's Join call are automatically removed by libnetwork. This
removal may trigger a deletion notification to all driver instances in
the cluster that have registered interest in that table.
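
A hedged sketch of the API shape described above; the actual driverapi
signatures in libnetwork may differ from these:

    package driverapi

    // EventType describes the CRUD operation seen on a table.
    type EventType int

    const (
        Create EventType = iota
        Update
        Delete
    )

    // JoinInfo is what a driver uses during Join to attach table
    // entries whose lifetime is tied to the endpoint.
    type JoinInfo interface {
        // AddTableEntry may be called any number of times during
        // Join; the entries are removed by libnetwork when the
        // container leaves the endpoint.
        AddTableEntry(tableName, key string, value []byte) error
    }

    // Driver shows only the gossip-related parts of the driver API.
    type Driver interface {
        // CreateNetwork returns the table names the driver wants
        // notifications for on this network.
        CreateNetwork(nid string, options map[string]interface{}) (tables []string, err error)

        // Join lets the driver add entries via jinfo.AddTableEntry.
        Join(nid, eid string, jinfo JoinInfo) error

        // EventNotify is invoked when a CRUD operation happens on a
        // table the driver registered interest in.
        EventNotify(event EventType, nid, tableName, key string, value []byte)
    }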
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>