libcontainerd/supervisor: fix data race

The monitorDaemon() goroutine calls startContainerd() then blocks on
<-daemonWaitCh to wait for it to exit. The startContainerd() function
would (re)initialize the daemonWaitCh so a restarted containerd could be
waited on. This implementation was race-free because startContainerd()
would synchronously initialize the daemonWaitCh before returning. When
the call to start the managed containerd process was moved into the
waiter goroutine, the code to initialize the daemonWaitCh struct field
was also moved into the goroutine. This introduced a race condition.
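
For illustration, a minimal, self-contained Go sketch of the racy shape described above; the type and function names mirror the supervisor code, but the bodies are simplified stand-ins and the "sleep" child process is only a placeholder for the managed containerd binary. Run with `go run -race`, it should report the conflicting write/read on the daemonWaitCh field.

	// racy_sketch.go — hypothetical, simplified reproduction of the data race
	// described above (not the real libcontainerd/supervisor code).
	package main

	import (
		"os/exec"
		"time"
	)

	type remote struct {
		daemonWaitCh chan struct{}
	}

	func (r *remote) startContainerd() error {
		startedCh := make(chan error)
		go func() {
			cmd := exec.Command("sleep", "1") // stand-in for the managed containerd process
			err := cmd.Start()
			startedCh <- err // unblocks startContainerd(); the caller may return now
			if err != nil {
				return
			}
			// The field is initialized only after startContainerd() may already
			// have returned, so this write races with monitorDaemon()'s read.
			r.daemonWaitCh = make(chan struct{})
			_ = cmd.Wait()
			close(r.daemonWaitCh)
		}()
		return <-startedCh // can return before daemonWaitCh is (re)initialized
	}

	func (r *remote) monitorDaemon() {
		if err := r.startContainerd(); err != nil {
			return
		}
		select {
		case <-r.daemonWaitCh: // racy read of the struct field
		case <-time.After(3 * time.Second): // guard against a nil channel if the read loses the race
		}
	}

	func main() {
		r := &remote{}
		r.monitorDaemon()
	}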

Move the daemonWaitCh initialization to guarantee that it happens before
the startContainerd() call returns.
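
Under the same simplified sketch as above, the corrected ordering would look roughly like this: every send on startedCh is performed only after daemonWaitCh has been initialized, so the assignment is ordered before startContainerd() returns and before monitorDaemon() reads the field.

	// Corrected startContainerd() for the sketch above (hypothetical names as before).
	func (r *remote) startContainerd() error {
		startedCh := make(chan error)
		go func() {
			cmd := exec.Command("sleep", "1") // stand-in for the managed containerd process
			err := cmd.Start()
			if err != nil {
				startedCh <- err
				return
			}
			r.daemonWaitCh = make(chan struct{}) // initialized before the send below
			startedCh <- nil                     // happens-before the receive in the caller
			_ = cmd.Wait()
			close(r.daemonWaitCh)
		}()
		return <-startedCh
	}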

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit dd20bf4862)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>

@@ -185,12 +185,13 @@ func (r *remote) startContainerd() error {
 		runtime.LockOSThread()
 		defer runtime.UnlockOSThread()
 		err := cmd.Start()
-		startedCh <- err
 		if err != nil {
+			startedCh <- err
 			return
 		}
 		r.daemonWaitCh = make(chan struct{})
+		startedCh <- nil
 		// Reap our child when needed
 		if err := cmd.Wait(); err != nil {
 			r.logger.WithError(err).Errorf("containerd did not exit successfully")