Scenario:
The daemon is ungracefully shut down and leaves plugins running (no
live-restore).
Daemon comes back up.
The next time a container tries to use that plugin, the daemon panics
because the plugin client is not set.
This fixes that by ensuring the plugin gets shut down.
Note: I do not think there would be any harm in just re-attaching to the
running plugin instead of shutting it down; however, historically we shut
down plugins and containers when live-restore is not enabled.
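A minimal sketch of the fix's shape, with illustrative stand-in types
(`manager`, `pluginState`, and `executor` are not the real moby names):

```go
package plugin

import (
	"log"
	"syscall"
)

type executor interface {
	Signal(id string, signal int) error
}

type pluginState struct {
	id      string
	enabled bool
	client  interface{} // nil after an ungraceful daemon restart
}

type manager struct{ exec executor }

// restore runs for each plugin found on disk at daemon startup. Without
// live-restore, a plugin may still be running but has no client attached;
// using it would dereference a nil client and panic, so shut it down and
// let the next use start it fresh.
func (m *manager) restore(p *pluginState) {
	if p.enabled && p.client == nil {
		if err := m.exec.Signal(p.id, int(syscall.SIGKILL)); err != nil {
			log.Printf("failed to shut down orphaned plugin %s: %v", p.id, err)
		}
	}
}
```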
[kir@: consolidate code to deleteTaskAndContainer, a few minor nits]
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
This reverts commit 0c2821d6f2.
Due to other changes this is no longer needed, and reverting it resolves
some other issues with plugins.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Setting up the mounts on the host increases the chance of mount leakage and
requires more cleanup after the plugin has stopped.
With this change all mounts for the plugin are performed by the
container runtime and automatically cleaned up when the container exits.
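Roughly, the change moves mount setup out of host-side `mount(2)` calls and
into the container's OCI spec; a sketch assuming the runtime-spec Go types
(`addPluginMounts` and `binds` are illustrative):

```go
package plugin

import specs "github.com/opencontainers/runtime-spec/specs-go"

// addPluginMounts appends the plugin's mounts to the container's OCI
// spec, so the runtime performs them inside the container's mount
// namespace and tears them down when the container exits.
func addPluginMounts(s *specs.Spec, binds map[string]string) {
	for src, dst := range binds {
		s.Mounts = append(s.Mounts, specs.Mount{
			Destination: dst,
			Type:        "bind",
			Source:      src,
			Options:     []string{"rbind", "rprivate"},
		})
	}
}
```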
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Signed-off-by: John Howard <jhoward@microsoft.com>
This re-coalesces the daemon stores that were split as part of the
original LCOW implementation.
This is part of the work discussed in https://github.com/moby/moby/issues/34617,
in particular see the document linked to in that issue.
Instead of duplicating the same if condition per plugin manager directory,
use one if condition and a for-loop.
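A sketch of the pattern (the helper name and the exact check are
illustrative):

```go
package plugin

import "os"

// ensureDirs replaces N copies of the same existence check with a
// single loop over the plugin manager's directories.
func ensureDirs(dirs ...string) error {
	for _, d := range dirs {
		if _, err := os.Stat(d); os.IsNotExist(err) {
			if err := os.MkdirAll(d, 0700); err != nil {
				return err
			}
		}
	}
	return nil
}
```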
Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>
Signed-off-by: John Howard <jhoward@microsoft.com>
This PR has the API changes described in https://github.com/moby/moby/issues/34617.
Specifically, it adds an HTTP header "X-Requested-Platform" which is a JSON-encoded
OCI Image-spec `Platform` structure.
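For illustration, a client-side sketch of populating that header, assuming
the OCI image-spec Go types (the helper function is hypothetical):

```go
package main

import (
	"encoding/json"
	"net/http"

	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// setRequestedPlatform JSON-encodes an OCI Platform and attaches it as
// the X-Requested-Platform header on an API request.
func setRequestedPlatform(req *http.Request, p ocispec.Platform) error {
	b, err := json.Marshal(p)
	if err != nil {
		return err
	}
	req.Header.Set("X-Requested-Platform", string(b))
	return nil
}
```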
In addition, it renames (almost all) uses of a string variable platform (and
associated methods/functions) to os. This makes it much easier to
disambiguate from the swarm "platform", which is really os/arch. This is a
stepping stone to making the daemon fully multi-platform/arch-aware, and
makes it clear when "operating system" is being referred to rather than
"platform", which is used misleadingly: sometimes in the swarm sense, but
more often as just the operating system.
libcontainerd has a bunch of platform-dependent code and huge interfaces
that are a pain to implement.
To make the plugin manager a bit easier to work with, extract the plugin
executor into an interface and move the containerd implementation to a
separate package.
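The extracted interface looks roughly like this (a sketch; the exact method
set may differ):

```go
package plugin

import (
	"io"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

// Executor abstracts how plugin processes are run, so the manager no
// longer depends on libcontainerd directly; a containerd-backed
// implementation lives in its own package.
type Executor interface {
	// Create starts the plugin task from its rootfs and OCI spec.
	Create(id string, spec specs.Spec, stdout, stderr io.WriteCloser) error
	// IsRunning reports whether the plugin task is alive.
	IsRunning(id string) (bool, error)
	// Signal delivers a signal (e.g. SIGKILL on disable) to the task.
	Signal(id string, signal int) error
}
```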
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This prevents mounts in the plugins dir from leaking into other
namespaces which can prevent removal (`device or resource busy`),
particularly on older kernels.
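On Linux the gist is to turn the plugins dir into a private mount point; a
sketch using raw syscalls (the real code likely goes through docker's mount
helpers):

```go
package plugin

import "golang.org/x/sys/unix"

// makePrivate bind-mounts the plugins root onto itself so it becomes a
// mount point, then marks it private so mounts created underneath it do
// not propagate into other namespaces.
func makePrivate(root string) error {
	if err := unix.Mount(root, root, "", unix.MS_BIND, ""); err != nil {
		return err
	}
	return unix.Mount("", root, "", unix.MS_PRIVATE, "")
}
```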
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Enables other subsystems to watch actions for one or more plugins.
This will be used specifically for implementing plugins on swarm where a
swarm controller needs to watch the state of a plugin.
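A minimal sketch of the subscription mechanism (the real implementation
builds on docker's pkg/pubsub; the `Event` fields are illustrative):

```go
package plugin

import "sync"

type Event struct {
	PluginID string
	Action   string // e.g. "enable", "disable", "remove"
}

type publisher struct {
	mu   sync.Mutex
	subs []chan Event
}

// Subscribe returns a channel on which the caller receives all future
// plugin lifecycle events.
func (p *publisher) Subscribe() <-chan Event {
	p.mu.Lock()
	defer p.mu.Unlock()
	ch := make(chan Event, 16)
	p.subs = append(p.subs, ch)
	return ch
}

// Publish fans an event out to every subscriber without blocking the
// plugin manager on a slow consumer.
func (p *publisher) Publish(e Event) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, ch := range p.subs {
		select {
		case ch <- e: // deliver if the subscriber is keeping up
		default: // drop rather than block the manager
		}
	}
}
```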
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Before this patch, if the plugin's `config.json` was successfully removed
but the main plugin state dir could not be removed for some reason (e.g. a
leaked mount), the daemon could not be restarted.
This patch changes plugin removal to be atomic, so that on daemon restart
we can detect that there was an error and retry. It also changes the logic
to only log errors on restore rather than erroring out the daemon.
This also removes some code which is now duplicated elsewhere.
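The shape of the atomic-removal pattern (the directory suffix and helper
names are illustrative):

```go
package plugin

import (
	"os"
	"path/filepath"
	"strings"
)

// atomicRemoveAll renames the plugin dir before deleting it. The rename
// is atomic, so a crash mid-removal leaves a "-removing" directory that
// restore can detect and finish deleting, instead of half-removed state
// that blocks daemon startup.
func atomicRemoveAll(dir string) error {
	renamed := dir + "-removing"
	if err := os.Rename(dir, renamed); err != nil && !os.IsNotExist(err) {
		return err
	}
	return os.RemoveAll(renamed)
}

// cleanupInterrupted retries, at restore time, any removal that was
// interrupted by a crash or restart.
func cleanupInterrupted(root string) error {
	entries, err := os.ReadDir(root)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if strings.HasSuffix(e.Name(), "-removing") {
			if err := os.RemoveAll(filepath.Join(root, e.Name())); err != nil {
				return err
			}
		}
	}
	return nil
}
```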
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
When the daemon is configured to run with an authorization plugin and the
plugin is disabled, the daemon continues to send API requests to the plugin
and expects it to respond. But the plugin has been disabled, so all API
requests are blocked. Fix this behavior by removing the disabled plugin
from the authz middleware chain.
Tested using riyaz/authz-no-volume-plugin and observed that after
disabling the plugin, API request/response is functional.
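Conceptually, disable now unplugs the plugin from the chain; a simplified
sketch (the real middleware stores plugin objects, not names):

```go
package authorization

import "sync"

type Middleware struct {
	mu      sync.Mutex
	plugins []string // names of active authz plugins
}

// RemovePlugin drops a disabled plugin from the chain so API requests
// are no longer forwarded to it.
func (m *Middleware) RemovePlugin(name string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	kept := m.plugins[:0]
	for _, p := range m.plugins {
		if p != name {
			kept = append(kept, p)
		}
	}
	m.plugins = kept
}
```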
Fixes #31836
Signed-off-by: Anusha Ragunathan <anusha.ragunathan@docker.com>
TestPluginTrustedInstall revealed a race in the plugin shutdown logic,
where the exit channel signal was sent even before the propagated mounts
were unmounted. If the same plugin was enabled, it would try to set up
propagated mounts *before* the unmount completed, resulting in errors.
This change fixes the behavior by waiting until the unmount completes on
disable before marking the plugin as disabled.
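The ordering fix, reduced to its synchronization (the channel layout and
timeout are illustrative):

```go
package plugin

import (
	"errors"
	"time"
)

var errDisableTimeout = errors.New("timeout waiting for plugin to exit")

// waitDisabled blocks until the plugin's exit handler (which performs
// the unmount of propagated mounts) closes exited, and only then runs
// markDisabled, so a subsequent enable cannot race the unmount.
func waitDisabled(exited <-chan struct{}, markDisabled func()) error {
	select {
	case <-exited:
		markDisabled()
		return nil
	case <-time.After(10 * time.Second):
		return errDisableTimeout
	}
}
```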
Signed-off-by: Anusha Ragunathan <anusha.ragunathan@docker.com>
This persists the "propagated mount" for plugins outside the main
rootfs. This allows `docker plugin upgrade` to preserve potentially
important data during upgrade rather than forcing plugin authors to
hard-code a host path to persist data to.
Also migrates old plugins that have a propagated mount which is in the
rootfs on daemon startup.
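A sketch of the layout change and the startup migration (the directory
names are illustrative):

```go
package plugin

import (
	"os"
	"path/filepath"
)

// propagatedMountPath is a sibling of the rootfs, so replacing the
// rootfs during `docker plugin upgrade` leaves the data untouched.
func propagatedMountPath(pluginDir string) string {
	return filepath.Join(pluginDir, "propagated-mount")
}

// migratePropagatedMount moves a pre-existing propagated mount out of
// the rootfs on daemon startup.
func migratePropagatedMount(pluginDir, mountInRootfs string) error {
	oldPath := filepath.Join(pluginDir, "rootfs", mountInRootfs)
	if _, err := os.Stat(oldPath); os.IsNotExist(err) {
		return nil // nothing to migrate
	}
	return os.Rename(oldPath, propagatedMountPath(pluginDir))
}
```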
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Use resolving to repo info as the split point between the
legitimate reference package and the forked reference package.
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
The `digest` data type, used throughout docker for image verification
and identity, has been broken out into `opencontainers/go-digest`. This
PR updates the dependencies and moves uses over to the new type.
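Typical usage of the new package looks like this:

```go
package main

import (
	"fmt"

	"github.com/opencontainers/go-digest"
)

func main() {
	// Compute a content digest the same way image layers are verified.
	dgst := digest.FromBytes([]byte("hello"))
	fmt.Println(dgst.Algorithm()) // sha256
	fmt.Println(dgst.Encoded())   // hex digest of the content
	fmt.Println(dgst.String())    // "sha256:<hex>"
}
```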
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Move plugins to shared distribution stack with images.
Create immutable plugin config that matches schema2 requirements.
Ensure data being pushed is same as pulled/created.
Store distribution artifacts in a blobstore.
Run init layer setup for every plugin start.
Fix breakouts from unsafe file accesses.
Add support for `docker plugin install --alias`
Uses normalized references for default names to avoid collisions when using default hosts/tags.
Some refactoring of the plugin manager to support the change, such as removing the singleton manager and adding a manager config struct.
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Legacy plugins expect host-relative paths (such as for Volume.Mount).
However, a containerized plugin cannot respond with a host-relative
path. Therefore, this commit modifies new volume plugins' paths in Mount
and List to prepend the container's rootfs path.
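The rewrite amounts to a path join against the plugin's rootfs (a sketch;
the daemon-side helper is illustrative):

```go
package volume

import "path/filepath"

// hostPath converts a path returned by a containerized (v2) plugin,
// which is relative to the plugin's own rootfs, into a path that is
// valid in the host mount namespace.
func hostPath(pluginRootfs, pluginPath string) string {
	return filepath.Join(pluginRootfs, pluginPath)
}
```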
This introduces a new PropagatedMount field in the Plugin Config.
When it is set for volume plugins, RootfsPropagation is set to rshared
and the path specified by PropagatedMount is bind-mounted with rshared
prior to launching the container. This is so that the daemon code can
access the paths returned by the plugin from the host mount namespace.
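Both pieces in one sketch, assuming runtime-spec types and Linux mount
flags (the helper is illustrative):

```go
package plugin

import (
	specs "github.com/opencontainers/runtime-spec/specs-go"
	"golang.org/x/sys/unix"
)

// configurePropagatedMount sets the container's root propagation to
// rshared and bind-mounts the PropagatedMount path rshared, so mounts
// made by the plugin are visible to the daemon on the host.
func configurePropagatedMount(s *specs.Spec, hostPath, containerPath string) error {
	if s.Linux == nil {
		s.Linux = &specs.Linux{}
	}
	s.Linux.RootfsPropagation = "rshared"
	s.Mounts = append(s.Mounts, specs.Mount{
		Destination: containerPath,
		Type:        "bind",
		Source:      hostPath,
		Options:     []string{"rbind", "rshared"},
	})
	// Make the host side a shared mount point so propagation works in
	// both directions.
	if err := unix.Mount(hostPath, hostPath, "", unix.MS_BIND, ""); err != nil {
		return err
	}
	return unix.Mount("", hostPath, "", unix.MS_SHARED|unix.MS_REC, "")
}
```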
Signed-off-by: Tibor Vass <tibor@docker.com>
v2/Plugin struct had fields that were
- purely used by the manager.
- unsafely exposed without proper locking.
This change fixes that by moving the relevant fields to the manager,
making the remaining fields private, and providing proper accessors for
them.
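The accessor pattern in miniature (the fields shown are illustrative):

```go
package v2

import "sync"

type Plugin struct {
	mu      sync.RWMutex
	enabled bool
}

// IsEnabled reads the state under the plugin's lock.
func (p *Plugin) IsEnabled() bool {
	p.mu.RLock()
	defer p.mu.RUnlock()
	return p.enabled
}

// SetEnabled writes the state under the plugin's lock.
func (p *Plugin) SetEnabled(v bool) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.enabled = v
}
```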
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
This fix addresses the issues raised in 28684:
1. A duplicate plugin create with the same name overrides the old plugin reference.
2. If an error happens in the middle of plugin creation, plugin directories
in `/var/lib/docker/plugins` are not cleaned up.
This fix updates the plugin store so that `Add()` returns an error if a plugin
with the same name already exists. It also cleans up the directory in
`/var/lib/docker/plugins` if an error happens in the middle of plugin creation.
This fixes 28684.
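Both fixes in sketch form (the names are illustrative; the real store
holds plugin objects, not bare names):

```go
package store

import (
	"fmt"
	"os"
	"sync"
)

type Store struct {
	mu      sync.Mutex
	plugins map[string]struct{} // keyed by plugin name
}

// Add refuses to register a plugin whose name is already taken.
func (s *Store) Add(name string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.plugins == nil {
		s.plugins = make(map[string]struct{})
	}
	if _, ok := s.plugins[name]; ok {
		return fmt.Errorf("plugin %q already exists", name)
	}
	s.plugins[name] = struct{}{}
	return nil
}

// createPlugin removes its state directory if anything fails part-way.
func createPlugin(s *Store, name, stateDir string) (err error) {
	defer func() {
		if err != nil {
			os.RemoveAll(stateDir) // don't leave partial state behind
		}
	}()
	if err = os.MkdirAll(stateDir, 0700); err != nil {
		return err
	}
	return s.Add(name)
}
```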
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
As part of making graphdrivers support pluginv2, a PluginGetter
interface was necessary for cleaner separation and avoiding import
cycles.
This commit creates a PluginGetter interface and makes pluginStore
implement it. Then the pluginStore object is created in the daemon
(rather than by the plugin manager) and passed to plugin init as
well as to the different subsystems (e.g. graphdrivers, volumedrivers).
A side effect of this change was that some code was moved out of
experimental. This is good, since plugin support will be stable soon.
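A simplified sketch of the interface (the real one also carries
reference-counting modes and capability handlers):

```go
package plugingetter

// CompatPlugin is the common shape of v1 and v2 plugins as seen by
// consumers; client/capability accessors are elided here.
type CompatPlugin interface {
	Name() string
}

// PluginGetter is all that subsystems such as graphdrivers and
// volumedrivers depend on, so they no longer import the plugin
// manager itself, which breaks the import cycle.
type PluginGetter interface {
	// Get returns a plugin by name that advertises the capability.
	Get(name, capability string) (CompatPlugin, error)
	// GetAllByCap returns every plugin advertising the capability.
	GetAllByCap(capability string) ([]CompatPlugin, error)
}
```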
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
Legacy plugin (aka pluginv1) calls in libnetwork are replaced with
calls using the new plugin model (aka pluginv2). pkg/plugins is still
used for managing the http client connections to the plugin.
This commit makes the necessary changes in docker/docker. Part 2 will
take care of the libnetwork changes.
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
Split plugin package into `store` and `v2/plugin`. Now the functionality
is clearly delineated:
- Manager: Manages the global state of the plugin sub-system.
- PluginStore: Manages a collection of plugins (in memory and on-disk)
- Plugin: Manages the single plugin unit.
This also facilitates splitting the global PluginManager lock into:
- PluginManager lock to protect global states.
- PluginStore lock to protect store states.
- Plugin lock to protect individual plugin states.
Importing "github.com/docker/docker/plugin/store" will provide access
to plugins and has lesser dependencies when compared to importing the
original monolithic `plugin package`.
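The resulting lock layering, in outline (field sets are illustrative):

```go
package plugin

import "sync"

// Each layer guards only its own state, so taking a plugin's lock
// never requires the store or manager locks.

type Plugin struct {
	mu sync.RWMutex // protects this plugin's state only
}

type Store struct {
	mu      sync.RWMutex // protects the collection of plugins
	plugins map[string]*Plugin
}

type Manager struct {
	mu    sync.Mutex // protects plugin sub-system global state
	store *Store
}
```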
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
handleLegacy is a flag indicating whether the daemon supports legacy
plugins. When the time comes to remove support for legacy plugins, flipping
this bool is all that will be needed. It can be a global variable rather
than being embedded in the manager, which cleans up the code.
Also rename to allowV1PluginsFallback for clarity.
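A sketch of the switch (the lookup functions are stubs standing in for
the real v2 store and legacy pkg/plugins discovery paths):

```go
package plugin

import "errors"

// allowV1PluginsFallback is the single point where legacy plugin
// support can later be removed.
var allowV1PluginsFallback = true

var errNotFound = errors.New("plugin not found")

func lookupV2(name string) (string, error) { return "", errNotFound } // stub
func lookupV1(name string) (string, error) { return "", errNotFound } // stub

func lookup(name string) (string, error) {
	p, err := lookupV2(name)
	if err == nil {
		return p, nil
	}
	if !allowV1PluginsFallback {
		return "", err
	}
	return lookupV1(name) // fall back to legacy plugin discovery
}
```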
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
The main intent of handling plugin exit is the graceful shutdown of
plugins during daemon shutdown, so avoid plugin lookup during plugin
exits caused by other reasons (e.g. force remove).
Signed-off-by: Anusha Ragunathan <anusha@docker.com>