With the introduction of driver-generic gossip in libnetwork, it is no
longer necessary for drivers to run their own gossip protocol (as the
overlay driver currently does); they can instead rely on the gossip
instance run centrally by libnetwork. Achieving this requires certain
enhancements to the driver API, which this change provides.
The new API lets drivers register interest in table names of their
choice by returning a list of those names in the response to
CreateNetwork. Having done so, they are notified of any CRUD operation
on those tables via the newly added EventNotify call.
Drivers can add entries to any table during a Join call by invoking the
AddTableEntry method any number of times. These entries share the
lifetime of the endpoint itself: as soon as the container leaves the
endpoint, the entries the driver added during that endpoint's Join call
are automatically removed by libnetwork. That removal in turn triggers
a deletion notification to every driver instance in the cluster that
has registered interest in the table.
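Below is a minimal sketch of what the driver-facing surface could look
like, assuming simplified shapes for EventType, Driver, and a
hypothetical JoinContext handle; the exact libnetwork signatures may
differ:

```go
// A minimal sketch, not the exact libnetwork signatures.
package driverapi

// EventType distinguishes the CRUD operation that occurred on a table entry.
type EventType int

const (
	Create EventType = iota // entry added
	Update                  // entry modified
	Delete                  // entry removed
)

// Driver shows only the parts relevant to this change.
type Driver interface {
	// CreateNetwork responds with the gossip table names the driver
	// wants notifications for (return shape assumed for illustration).
	CreateNetwork(nid string, options map[string]interface{}) (tablesOfInterest []string, err error)

	// EventNotify is invoked for every CRUD operation on a table
	// the driver registered interest in.
	EventNotify(event EventType, nid, tableName, key string, value []byte)
}

// JoinContext is a hypothetical handle available during a Join call.
type JoinContext interface {
	// AddTableEntry may be called any number of times during Join.
	// The entry lives only as long as the endpoint: libnetwork removes
	// it automatically when the container leaves the endpoint.
	AddTableEntry(tableName, key string, value []byte) error
}
```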
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Currently the libnetwork function `NewNetwork` does not allow the
caller to pass a network ID; the ID is always generated internally.
This is sufficient for engine use, but it does not satisfy programs
that use libnetwork as an independent library outside the engine. This
enhancement is one of many needed to make libnetwork generic.
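A hedged sketch of how the enhancement could look using libnetwork's
functional-option pattern; the option name NetworkOptionID and the
surrounding types are illustrative assumptions:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

type network struct {
	id, name, networkType string
}

// NetworkOption mirrors libnetwork's functional-option pattern.
type NetworkOption func(*network)

// NetworkOptionID is a hypothetical option letting the caller fix the ID.
func NetworkOptionID(id string) NetworkOption {
	return func(n *network) { n.id = id }
}

// NewNetwork generates an ID only when the caller did not supply one,
// preserving the current engine behavior by default.
func NewNetwork(networkType, name string, options ...NetworkOption) *network {
	n := &network{name: name, networkType: networkType}
	for _, opt := range options {
		opt(n)
	}
	if n.id == "" {
		b := make([]byte, 16)
		rand.Read(b)
		n.id = hex.EncodeToString(b)
	}
	return n
}

func main() {
	n := NewNetwork("bridge", "app", NetworkOptionID("my-fixed-id"))
	fmt.Println(n.id) // prints "my-fixed-id"
}
```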
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Currently the driver management logic is tightly coupled to the
libnetwork package, which makes it very difficult to modularize
and use separately. This PR modularizes the driver management
logic by creating a driver registry package.
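A minimal sketch of what a standalone registry package might provide;
all names here are assumed for illustration:

```go
package drvregistry

import (
	"fmt"
	"sync"
)

// Driver is a stand-in for libnetwork's driver interface.
type Driver interface{}

// DrvRegistry holds registered drivers, decoupled from the libnetwork package.
type DrvRegistry struct {
	mu      sync.Mutex
	drivers map[string]Driver
}

func New() *DrvRegistry {
	return &DrvRegistry{drivers: make(map[string]Driver)}
}

// RegisterDriver adds a named driver; duplicate names are rejected.
func (r *DrvRegistry) RegisterDriver(name string, d Driver) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if _, ok := r.drivers[name]; ok {
		return fmt.Errorf("driver %q already registered", name)
	}
	r.drivers[name] = d
	return nil
}

// Driver returns the driver registered under name, if any.
func (r *DrvRegistry) Driver(name string) (Driver, bool) {
	r.mu.Lock()
	defer r.mu.Unlock()
	d, ok := r.drivers[name]
	return d, ok
}
```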
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
With the current implementation, a config reload event causes all the
datastores to reinitialize, which impacts objects with Persist=false
such as the none and host networks.
Signed-off-by: Madhu Venugopal <madhu@docker.com>
- ... on ungraceful shutdown during network create
- Allow forceful deletion of a network
- On network delete, first mark the network for deletion
- On controller creation, first forcibly remove any network
  that is marked for deletion (see the sketch below)
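A rough sketch of the mark-then-sweep flow described above; the
inDelete field and helper names are assumptions, not the exact
libnetwork identifiers:

```go
package main

type network struct {
	id       string
	inDelete bool // persisted marker: delete started but may not have completed
}

type controller struct {
	store map[string]*network // stand-in for the real datastore
}

// deleteNetwork persists the inDelete marker before releasing resources,
// so an ungraceful shutdown mid-delete leaves unambiguous state behind.
func (c *controller) deleteNetwork(n *network) {
	n.inDelete = true
	c.store[n.id] = n // checkpoint the marker first
	// ... release driver and IPAM resources here ...
	delete(c.store, n.id) // finally drop the network itself
}

// sweep runs on controller creation and forcibly removes any network
// still marked for deletion by an earlier, interrupted delete.
func (c *controller) sweep() {
	for _, n := range c.store {
		if n.inDelete {
			c.deleteNetwork(n)
		}
	}
}

func main() {
	c := &controller{store: map[string]*network{
		"n1": {id: "n1", inDelete: true}, // left behind by an ungraceful shutdown
	}}
	c.sweep() // "n1" is forcibly removed before new work is accepted
}
```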
Signed-off-by: Alessandro Boch <aboch@docker.com>
- Move the DiscoverNew() and DiscoverDelete() methods into the new interface
- Add the DatastoreUpdate notification
- This interface can now be implemented by any driver, not only network
  drivers (interface sketched below)
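A sketch of the resulting interface; DiscoverNew and DiscoverDelete
come from this change, while the DiscoveryType values shown are
assumptions:

```go
package discoverapi

// DiscoveryType distinguishes the kind of discovery event.
type DiscoveryType int

const (
	NodeDiscovery   DiscoveryType = iota + 1 // a cluster node appeared or left
	DatastoreConfig                          // a datastore update notification
)

// Discover can be implemented by any driver, not only network drivers.
type Discover interface {
	// DiscoverNew notifies the driver of a new discovery event,
	// e.g. a node joining the cluster.
	DiscoverNew(dType DiscoveryType, data interface{}) error
	// DiscoverDelete notifies the driver of a removal event,
	// e.g. a node leaving the cluster.
	DiscoverDelete(dType DiscoveryType, data interface{}) error
}
```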
Signed-off-by: Alessandro Boch <aboch@docker.com>
Stale sandboxes and endpoints are cleaned up during controller init.
Since we reuse the exact same code path for sandbox and endpoint
delete, the cleanup tries to load the plugin, which causes daemon
startup timeouts because the external plugin containers can't be loaded
at that time. Since the cleanup really only touches libnetwork core
state, we can force-delete sandboxes and endpoints even if the driver
is not loaded.
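A sketch of the forced cleanup path, with assumed names and shapes:

```go
package main

import "errors"

type driver interface {
	DeleteEndpoint(nid, eid string) error
}

type endpoint struct {
	id, networkID string
}

// resolveDriver stands in for plugin lookup, which can time out at
// daemon startup when external plugin containers are not yet running.
func (ep *endpoint) resolveDriver() (driver, error) {
	return nil, errors.New("plugin not available yet")
}

// delete removes libnetwork core state; with force set, a missing
// driver no longer blocks the cleanup.
func (ep *endpoint) delete(force bool) error {
	d, err := ep.resolveDriver()
	if err != nil && !force {
		return err // normal path still requires the driver
	}
	if d != nil {
		if err := d.DeleteEndpoint(ep.networkID, ep.id); err != nil && !force {
			return err
		}
	}
	// ... remove the endpoint from the store ...
	return nil
}

func main() {}
```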
Signed-off-by: Madhu Venugopal <madhu@docker.com>
- So that a DHCP-based plugin can express that it needs
  the endpoint MAC address when asked for an IP address.
- In that case libnetwork will allocate one if not already
  provided by the user (see the sketch below).
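A sketch of the capability handshake this enables; RequiresMACAddress
reflects the intent described above, and the surrounding shapes are
assumptions:

```go
package main

import (
	"fmt"
	"net"
)

// Capability is what an IPAM driver reports at registration time;
// the field name is illustrative.
type Capability struct {
	// RequiresMACAddress tells libnetwork that the driver (e.g. a
	// DHCP-based plugin) needs the endpoint MAC address when an IP
	// address is requested.
	RequiresMACAddress bool
}

func main() {
	capability := Capability{RequiresMACAddress: true}

	var mac net.HardwareAddr // nil if the user did not provide one
	if capability.RequiresMACAddress && mac == nil {
		// libnetwork allocates a MAC and passes it along with the
		// address request (locally administered, illustrative value).
		mac = net.HardwareAddr{0x02, 0x42, 0xac, 0x11, 0x00, 0x02}
	}
	fmt.Println(mac) // 02:42:ac:11:00:02
}
```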
Signed-off-by: Alessandro Boch <aboch@docker.com>
- Remove from the contract predefined errors which are no longer
  valid (e.g. ErrInvalidIpamService, ErrInvalidIpamConfigService)
- Do not use a network driver error for an ipam load failure in controller.go
- Have bitseq expose two well-known errors (no bit available, bit already set)
- Have the default ipam report the proper well-known error from
  RequestAddress(), based on the error bitseq returned
- Make the default ipam errors comply with the types error interface
  (see the sketch below)
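A sketch of the exposed errors and the mapping in the default ipam;
the variable names are assumptions matching the descriptions above:

```go
package ipam

import "errors"

// Well-known errors exposed by bitseq per this change
// (variable names assumed).
var (
	ErrNoBitAvailable = errors.New("no bit available")
	ErrBitAllocated   = errors.New("requested bit is already allocated")
)

// Well-known default-ipam errors that comply with the types error interface.
var (
	ErrNoAvailableIPs     = errors.New("no available addresses on this pool")
	ErrIPAlreadyAllocated = errors.New("address already in use")
)

// mapBitseqError shows how RequestAddress can report the proper
// well-known error based on what bitseq returned.
func mapBitseqError(err error) error {
	switch {
	case errors.Is(err, ErrNoBitAvailable):
		return ErrNoAvailableIPs
	case errors.Is(err, ErrBitAllocated):
		return ErrIPAlreadyAllocated
	default:
		return err
	}
}
```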
Signed-off-by: Alessandro Boch <aboch@docker.com>
At times, when a checkpointed sandbox from the store cannot be
cleaned up properly, we still retain the sandbox both in the
store and in memory. But this stored sandbox may not contain
important configuration information from docker. So when docker
requests a new sandbox, instead of using the stored one as is,
reconcile its state with the configuration information provided
by docker. To do this, mark the sandbox from the store as a stub
and never reveal it to external searches. When docker requests a
new sandbox, update the stub sandbox and clear the stub flag.
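A sketch of the stub reconciliation; isStub and the helper shapes are
assumed identifiers, not the exact libnetwork ones:

```go
package main

type sandbox struct {
	containerID string
	isStub      bool // retained from store, hidden from external searches
	options     []string
}

type controller struct{ sandboxes map[string]*sandbox }

// NewSandbox reuses a retained stub for the container, reconciling it
// with docker's configuration, instead of blindly creating a fresh one.
func (c *controller) NewSandbox(containerID string, options ...string) *sandbox {
	if sb, ok := c.sandboxes[containerID]; ok && sb.isStub {
		sb.options = append(sb.options, options...) // reconcile config
		sb.isStub = false                           // now externally visible
		return sb
	}
	sb := &sandbox{containerID: containerID, options: options}
	c.sandboxes[containerID] = sb
	return sb
}

func main() {}
```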
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
There is a race in the OS sandbox sharing code where two containers
sharing the OS sandbox may both try to recreate it, which can result in
the OS sandbox being destroyed and recreated. Since OS sandbox sharing
happens only for the default sandbox, refactored the code to create the
OS sandbox inside a `sync.Once` guard so that creation happens exactly
once and the sandbox is reused by other containers. Also disabled
deletion of this OS sandbox.
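A minimal sketch of the `sync.Once` guard, with illustrative names:

```go
package main

import (
	"fmt"
	"sync"
)

type osSandbox struct{ path string }

var (
	defaultOnce    sync.Once
	defaultSandbox *osSandbox
)

// getDefaultOSSandbox creates the shared OS sandbox exactly once;
// concurrent callers block until it exists and then reuse it.
func getDefaultOSSandbox() *osSandbox {
	defaultOnce.Do(func() {
		defaultSandbox = &osSandbox{path: "/var/run/docker/netns/default"}
	})
	return defaultSandbox
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // many containers, one shared sandbox
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(getDefaultOSSandbox().path)
		}()
	}
	wg.Wait()
}
```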
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Since we share the host sandbox with many containers, we
need to serialize creation of the sandbox. Otherwise
container starts may see the namespace path in an
inconsistent state.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
On bootup, clean up all dangling local
endpoints since they are no longer needed.
The only way they can occur is when the process
went down ungracefully after an endpoint was
created but before the join was successful.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Currently the endpoint count is maintained as part of the
network object, and the count gets updated frequently
while the rest of the network object is quite stable.
Because of the frequent updates to the endpoint count,
the whole network object is marshalled and unmarshalled
frequently. This causes a lot of churn and transient
memory usage. Fix this by moving the endpoint count into
a separate object so that only it gets updated.
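A sketch of splitting the count into its own small store object; the
names are assumptions:

```go
package libnetwork

import "encoding/json"

// endpointCnt is stored and versioned independently of the network object.
type endpointCnt struct {
	NetworkID string `json:"network_id"`
	Count     uint64 `json:"count"`
	dbIndex   uint64 // datastore version, bumped on every update
}

// Key places the object under its own store prefix, apart from networks.
func (ec *endpointCnt) Key() []string { return []string{"endpoint_count", ec.NetworkID} }

// Value marshals only this small object; the large network object
// no longer churns on every endpoint create and delete.
func (ec *endpointCnt) Value() []byte {
	b, _ := json.Marshal(ec)
	return b
}

func (ec *endpointCnt) IncEndpointCnt() { ec.Count++; ec.dbIndex++ }
func (ec *endpointCnt) DecEndpointCnt() { ec.Count--; ec.dbIndex++ }
```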
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Currently, when docker exits ungracefully it may leave
dangling sandboxes that hold onto precious network
resources. Added checkpoint state for sandboxes, which
is used on bootup to clean up the sandboxes and their
network resources.
On bootup, the remaining dangling state in the checkpoint
is read and cleaned up before any new network allocation
requests are accepted.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Always-on watching of networks and endpoints can
hurt the scalability of the cluster beyond a few nodes.
Remove proactive watching and watch only the objects
you are interested in.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
* Integrated the hostdiscovery package with the new Docker Discovery
* Integrated the hostdiscovery package with the libnetwork core
* Removed the libnetwork_discovery tag
* Introduced driver APIs for discovery events
* Moved the overlay driver to make use of the discovery events
* Using the Docker Discovery service
* Changed the integration tests to make use of the new discovery
Signed-off-by: Madhu Venugopal <madhu@docker.com>
- DisableBridgeCreation is misleading; change it to DefaultBridge
- Don't fail the init if the localstore cannot be initialized
- Added a convenience function to get the endpoint for a container
Signed-off-by: Madhu Venugopal <madhu@docker.com>
1. Don't save local-scope endpoints to the localstore for now.
2. Add common functions updateToStore/deleteFromStore to store
   KVObjects (sketched below).
3. Merge `getNetworksFromGlobalStore` and `getNetworksFromLocalStore`.
4. Add an `n.isGlobalScoped` check before `n.watchEndpoints` in `addNetwork`.
5. Fix the integration tests.
6. Fix the test failure in drivers/remote/driver_test.go.
7. Restore the network to the store if deleteNetwork failed.
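A sketch of the common helpers from item 2, with KVObject as a stand-in
for the datastore object interface; the store shape is assumed:

```go
package datastore

import "errors"

// KVObject is a stand-in for the stored-object interface.
type KVObject interface {
	Key() []string
	Value() []byte
}

type store map[string][]byte

// updateToStore persists any KVObject, shared by networks and endpoints.
func (s store) updateToStore(o KVObject) error {
	s[keyOf(o)] = o.Value()
	return nil
}

// deleteFromStore removes a KVObject; a caller can re-run updateToStore
// to restore the object if a later step fails (as in item 7 above).
func (s store) deleteFromStore(o KVObject) error {
	k := keyOf(o)
	if _, ok := s[k]; !ok {
		return errors.New("not found")
	}
	delete(s, k)
	return nil
}

func keyOf(o KVObject) string {
	k := ""
	for _, p := range o.Key() {
		k += "/" + p
	}
	return k
}
```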
Currently the driver configuration is pushed through a separate
API. This makes driver configuration possible at any arbitrary
time, which unnecessarily complicates the driver implementation.
More importantly, the driver does not get access to its
configuration before it does the handshake with libnetwork.
This makes internal drivers slightly different from external
plugins, which do get their configuration before the handshake
with libnetwork.
This PR attempts to fix that mismatch between internal drivers and
external plugins.
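A sketch of handing an internal driver its configuration at
registration time, the way external plugins already receive theirs;
the names are assumptions:

```go
package bridge

// DriverCallback is the handshake handle libnetwork hands to a driver.
type DriverCallback interface {
	RegisterDriver(name string, driver interface{}, capability Capability) error
}

// Capability is a stand-in for the driver capability structure.
type Capability struct{ DataScope string }

// Init is the single entry point for an internal driver: the config
// arrives before the RegisterDriver handshake, exactly as external
// plugins already receive theirs.
func Init(dc DriverCallback, config map[string]interface{}) error {
	d := newDriver(config) // the driver sees its config before registering
	return dc.RegisterDriver("bridge", d, Capability{DataScope: "local"})
}

func newDriver(config map[string]interface{}) interface{} {
	// ... apply configuration while constructing the driver ...
	return struct{}{}
}
```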
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
- Convenience API which detaches the sandbox from all
  endpoints, resets and reapplies the config options,
  sets up the discovery files, and reattaches to the endpoints.
  No change to the osl sandbox in use.
Signed-off-by: Alessandro Boch <aboch@docker.com>
- Maps 1:1 with the container's networking stack
- Holds the container-specific network options which were
  previously, and incorrectly, owned by Endpoint
- Sandbox creation is no longer coupled with Endpoint Join;
  sandbox and endpoint now have separate lifecycles (see the sketch below)
- LeaveAll is naturally replaced by Sandbox.Delete
- Some package and file renaming in order to have a clear
  mapping between structure name and entity ("sandbox")
- Revisited hosts and resolv.conf handling
- Removed from the JoinInfo interface the capability of setting hosts and resolv.conf paths
- Changed etchosts.Build() to first write the search domains and then the nameservers
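A sketch of the resulting Sandbox entity and its decoupled lifecycle;
method names follow the description above, signatures are assumptions:

```go
package libnetwork

// Sandbox maps 1:1 to a container's networking stack and owns the
// container-specific network options previously held by Endpoint.
type Sandbox interface {
	ID() string
	// Delete tears the sandbox down; it replaces the old LeaveAll.
	Delete() error
}

// Endpoint shows only the lifecycle-relevant methods: a sandbox is
// created first and endpoints attach to and detach from it.
type Endpoint interface {
	Join(sb Sandbox) error
	Leave(sb Sandbox) error
}

// Typical flow (illustrative):
//   sb, _ := controller.NewSandbox(containerID) // created independently of Join
//   ep.Join(sb)
//   ep.Leave(sb)
//   sb.Delete() // separate lifecycle; no LeaveAll needed
```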
Signed-off-by: Alessandro Boch <aboch@docker.com>