For ungraceful daemon restarts, libnetwork has sandbox cleanup logic to
remove any stale and dangling resources. But if the store is down during
the daemon restart, the cleanup logic cannot perform a complete cleanup,
and in such cases the sandbox was being removed anyway. With this fix,
we retain the sandbox if the store is down and the endpoint couldn't be
cleaned up. When the container is later restarted in the docker daemon,
we perform a sandbox cleanup, which completes the cleanup round.
Signed-off-by: Madhu Venugopal <madhu@docker.com>
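A minimal Go sketch of the retain-on-partial-cleanup idea; all names
here (sandbox, cleanupEndpoint, errStoreDown) are hypothetical stand-ins,
not libnetwork's actual API:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // Hypothetical stand-in for libnetwork's sandbox state.
    type sandbox struct {
    	id        string
    	endpoints []string
    }

    var errStoreDown = errors.New("store unavailable")

    // cleanupEndpoint pretends to remove an endpoint's store state.
    func cleanupEndpoint(storeUp bool, ep string) error {
    	if !storeUp {
    		return errStoreDown
    	}
    	fmt.Println("cleaned endpoint", ep)
    	return nil
    }

    // cleanupSandbox removes the sandbox only if every endpoint was
    // cleaned; otherwise it is retained so a later container restart
    // can finish the cleanup round.
    func cleanupSandbox(sb *sandbox, storeUp bool) (retained bool) {
    	var remaining []string
    	for _, ep := range sb.endpoints {
    		if err := cleanupEndpoint(storeUp, ep); err != nil {
    			remaining = append(remaining, ep)
    		}
    	}
    	sb.endpoints = remaining
    	return len(remaining) > 0 // retain the sandbox on partial cleanup
    }

    func main() {
    	sb := &sandbox{id: "sb1", endpoints: []string{"ep1", "ep2"}}
    	fmt.Println("retained:", cleanupSandbox(sb, false)) // store down: retained
    	fmt.Println("retained:", cleanupSandbox(sb, true))  // retry completes cleanup
    }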
Added IT cases for external connectivity checks on bridge
and overlay networks, both initially and after a daemon restart.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Reconcile persistent state after configuring the driver. Otherwise,
the networks will not be initialized properly based on certain
driver config settings, such as enabling iptables.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
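A sketch of the ordering this enforces; the types below are
hypothetical, not the driver's real interface:

    package main

    import "fmt"

    // Hypothetical stand-ins for a driver's config and persisted networks.
    type driverConfig struct{ enableIPTables bool }

    type network struct{ name string }

    func (n network) initialize(cfg driverConfig) {
    	// Network setup depends on the final driver config (e.g. iptables).
    	fmt.Printf("init %s (iptables=%v)\n", n.name, cfg.enableIPTables)
    }

    // restoreFromStore pretends to load networks persisted in the store.
    func restoreFromStore() []network {
    	return []network{{"bridge0"}, {"ov0"}}
    }

    func main() {
    	cfg := driverConfig{enableIPTables: true} // configure the driver first
    	// Only then reconcile persisted state, so every restored network
    	// is initialized against the final driver settings.
    	for _, n := range restoreFromStore() {
    		n.initialize(cfg)
    	}
    }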
Added an IT case to check proper /etc/hosts
handling in the overlay network. This also verifies
that there are no stale entries in /etc/hosts.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Clean up the service db for a network when the last
container on that network leaves the host. Because we
stop watching the network after the last container
leaves, if we kept the service db around it would not
be kept up to date with containers joining and leaving
on other hosts. The service db will be populated
properly when a container joins this network at a
later point in time.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
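A minimal sketch of that lifecycle, assuming a hypothetical
per-network record table (controller, svcDb, localCount are all
illustrative names):

    package main

    import "fmt"

    // Hypothetical per-network service record table.
    type controller struct {
    	svcDb      map[string]map[string]string // network -> name -> IP
    	localCount map[string]int               // local containers per network
    }

    func (c *controller) join(nw, name, ip string) {
    	if c.svcDb[nw] == nil {
    		c.svcDb[nw] = map[string]string{}
    	}
    	c.svcDb[nw][name] = ip
    	c.localCount[nw]++
    }

    func (c *controller) leave(nw, name string) {
    	delete(c.svcDb[nw], name)
    	c.localCount[nw]--
    	// Last local container gone: we stop watching the network, so a
    	// retained service db would never be updated again. Drop it; it
    	// is repopulated when a container joins this network later.
    	if c.localCount[nw] == 0 {
    		delete(c.svcDb, nw)
    	}
    }

    func main() {
    	c := &controller{svcDb: map[string]map[string]string{}, localCount: map[string]int{}}
    	c.join("ov0", "web", "10.0.0.2")
    	c.leave("ov0", "web")
    	fmt.Println(c.svcDb) // map[] -- service db cleaned up
    }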
- Currently, when a sandbox disconnects from a network,
  the network's services are not removed from the
  sandbox's /etc/hosts file
Signed-off-by: Alessandro Boch <aboch@docker.com>
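A sketch of the missing removal step, operating on an /etc/hosts-style
body; removeRecords is a hypothetical helper, not the actual fix:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // removeRecords drops a network's service records ("IP name" pairs)
    // from an /etc/hosts-style body when the sandbox disconnects.
    func removeRecords(hosts string, records map[string]string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		fields := strings.Fields(line)
    		if len(fields) >= 2 {
    			if ip, ok := records[fields[1]]; ok && ip == fields[0] {
    				continue // this line belongs to the disconnected network
    			}
    		}
    		kept = append(kept, line)
    	}
    	return strings.Join(kept, "\n")
    }

    func main() {
    	hosts := "127.0.0.1 localhost\n10.0.0.2 web\n10.0.0.3 db"
    	svc := map[string]string{"web": "10.0.0.2", "db": "10.0.0.3"}
    	fmt.Println(removeRecords(hosts, svc)) // only localhost remains
    }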
The overlay driver allows local containers to communicate on an overlay
network even when serf is not fully initialized. But when a container
leaves the overlay network while serf is not fully initialized, it gets
stuck waiting on a nil notifyCh.
Signed-off-by: Madhu Venugopal <madhu@docker.com>
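In Go, receiving from a nil channel blocks forever, which is the hang
described above. A minimal sketch of the guard (leaveNetwork is a
hypothetical name for illustration):

    package main

    import "fmt"

    // leaveNetwork skips the wait when serf never initialized the channel,
    // since a receive from a nil channel would block forever.
    func leaveNetwork(notifyCh chan struct{}) {
    	if notifyCh != nil {
    		<-notifyCh // wait for the peer-delete notification
    	}
    	fmt.Println("left overlay network")
    }

    func main() {
    	var ch chan struct{} // nil: serf not fully initialized
    	leaveNetwork(ch)     // returns instead of blocking forever
    }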
Currently the local containers of a global scope
network get their service records updated from both
local updates and global updates. There is no way to
check whether an endpoint is local when a remote update
comes in via the watch, because we add the endpoint to
the local endpoint list during Join, while the remote
update happens during CreateEndpoint. The right thing
to do is to update the local endpoint list and start
watching during CreateEndpoint and remove the watch
during DeleteEndpoint. But this might result in the
container getting its own record in its /etc/hosts. So
added filtering logic to filter out self records when
updating the container's /etc/hosts.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
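A minimal sketch of the self-record filter, assuming a hypothetical
svcRecord type and the container's own endpoint IP as the key:

    package main

    import "fmt"

    type svcRecord struct{ name, ip string }

    // filterSelf drops the record belonging to the container's own
    // endpoint before its /etc/hosts is updated, so the container
    // never receives its own service record.
    func filterSelf(records []svcRecord, selfIP string) []svcRecord {
    	var out []svcRecord
    	for _, r := range records {
    		if r.ip == selfIP {
    			continue // filter out the self record
    		}
    		out = append(out, r)
    	}
    	return out
    }

    func main() {
    	recs := []svcRecord{{"web", "10.0.0.2"}, {"db", "10.0.0.3"}}
    	fmt.Println(filterSelf(recs, "10.0.0.2")) // [{db 10.0.0.3}]
    }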
A local endpoint is known to the watch database only
during Join. But the same endpoint can be known to the
watch database as a remote endpoint well before the Join,
because CreateEndpoint updates the endpoint in the store.
So on Join, once we learn that this is indeed a local
endpoint, remove it from the remote endpoint list and add
it to the local endpoint list.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
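A sketch of that migration, with a hypothetical watchDb holding the
two sets (the field and method names are illustrative only):

    package main

    import "fmt"

    // watchDb tracks which endpoints this host considers local vs remote.
    type watchDb struct {
    	local  map[string]bool
    	remote map[string]bool
    }

    // onStoreUpdate fires when CreateEndpoint propagates via the store
    // watch; anything not yet known as local lands in the remote set.
    func (db *watchDb) onStoreUpdate(ep string) {
    	if !db.local[ep] {
    		db.remote[ep] = true
    	}
    }

    // onJoin fires when the endpoint joins on this host, proving it is
    // local: migrate it from the remote set to the local set.
    func (db *watchDb) onJoin(ep string) {
    	delete(db.remote, ep)
    	db.local[ep] = true
    }

    func main() {
    	db := &watchDb{local: map[string]bool{}, remote: map[string]bool{}}
    	db.onStoreUpdate("ep1")          // seen via the store watch first
    	db.onJoin("ep1")                 // Join reveals it is actually local
    	fmt.Println(db.local, db.remote) // map[ep1:true] map[]
    }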
Introduced a path-level lock to synchronize writes
to /etc/hosts. A path-level cache is maintained so
that synchronization happens only at the file level.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
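A minimal sketch of such a per-path lock cache (lockFor, writeRecord,
and the /tmp/hosts path are hypothetical): writers to the same file
serialize, while writers to different files do not contend.

    package main

    import (
    	"fmt"
    	"sync"
    )

    var (
    	cacheMu sync.Mutex
    	locks   = map[string]*sync.Mutex{}
    )

    // lockFor returns the mutex cached for path, creating it on first use.
    func lockFor(path string) *sync.Mutex {
    	cacheMu.Lock()
    	defer cacheMu.Unlock()
    	if locks[path] == nil {
    		locks[path] = &sync.Mutex{}
    	}
    	return locks[path]
    }

    // writeRecord serializes updates per file via the cached lock.
    func writeRecord(path, record string) {
    	mu := lockFor(path)
    	mu.Lock()
    	defer mu.Unlock()
    	fmt.Println("append", record, "to", path) // actual file I/O elided
    }

    func main() {
    	var wg sync.WaitGroup
    	for _, r := range []string{"10.0.0.2 web", "10.0.0.3 db"} {
    		wg.Add(1)
    		go func(rec string) {
    			defer wg.Done()
    			writeRecord("/tmp/hosts", rec)
    		}(r)
    	}
    	wg.Wait()
    }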