This implementation is very naive compared to the PulseAudio one.
Instead of using a callback implemented by the audio server connection
to push audio to the buffer, we have to poll on a timer to check when
we need to push the audio buffers. Adding cross-process condition
variables to the audio queue class could allow us to avoid polling,
which may prove beneficial to CPU usage.
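To illustrate the polling approach, here is a minimal sketch with
placeholder names (this is not the actual AudioServer code): a worker
wakes on a fixed interval and pushes buffers whenever the shared queue
has room.

```cpp
#include <atomic>
#include <chrono>
#include <cstddef>
#include <deque>
#include <thread>
#include <vector>

// Stand-in for the shared audio queue; the real queue lives in shared memory.
struct FakeAudioQueue {
    static constexpr std::size_t max_buffers = 4;
    std::deque<std::vector<float>> buffers;

    bool has_space() const { return buffers.size() < max_buffers; }
    void push_buffer(std::vector<float> samples) { buffers.push_back(std::move(samples)); }
};

int main()
{
    using namespace std::chrono_literals;
    FakeAudioQueue queue;
    std::atomic<bool> keep_running { true };

    std::thread poller([&] {
        while (keep_running) {
            // Push as many buffers as currently fit; underruns go unnoticed here.
            while (queue.has_space())
                queue.push_buffer(std::vector<float>(512, 0.0f));
            // Fixed poll interval; a cross-process condition variable in the
            // queue class could replace this sleep and save CPU.
            std::this_thread::sleep_for(10ms);
        }
    });

    std::this_thread::sleep_for(100ms);
    keep_running = false;
    poller.join();
}
```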
Audio timestamps will be accurate to the number of samples available,
but will count in increments of about 100ms and run ahead of the actual
audio being pushed to the device by the server.
Buffer underruns are completely ignored for now as well, since the
`AudioServer` has no way to know how many samples are actually written
in a single audio buffer.
This change has been a long time in the making, ever since the system
gained sample rate awareness. Now, each client has its own sample rate,
accessible via new IPC APIs, and the device sample rate is only
accessible via the management interface. AudioServer takes care of
resampling client streams into the device sample rate. Therefore, the
main improvement introduced with this commit is full responsiveness to
sample rate changes; all open audio programs will continue to play at
correct speed with the audio resampled to the new device rate.
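As a rough illustration of what the server now does centrally, a
minimal linear-interpolation resampler could look like the sketch
below. The names are assumptions and this is not necessarily the
algorithm AudioServer actually uses.

```cpp
#include <cstddef>
#include <vector>

// Convert one client's samples from its own sample rate to the device rate.
std::vector<float> resample_to_device_rate(std::vector<float> const& client_samples,
    unsigned client_rate, unsigned device_rate)
{
    if (client_samples.empty() || client_rate == device_rate)
        return client_samples;

    // For every output sample, find the corresponding (fractional) position in
    // the client stream and interpolate linearly between its two neighbors.
    auto output_count = static_cast<std::size_t>(
        static_cast<double>(client_samples.size()) * device_rate / client_rate);
    std::vector<float> output(output_count);

    for (std::size_t i = 0; i < output_count; ++i) {
        double source_position = static_cast<double>(i) * client_rate / device_rate;
        auto index = static_cast<std::size_t>(source_position);
        double fraction = source_position - index;
        auto next = (index + 1 < client_samples.size()) ? index + 1 : index;
        output[i] = static_cast<float>(
            client_samples[index] * (1.0 - fraction) + client_samples[next] * fraction);
    }
    return output;
}
```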
The immediate benefits are manifold:
- Gets rid of the legacy hardware sample rate IPC message in the
non-managing client
- Removes duplicate resampling and sample index rescaling code
everywhere
- Avoids potential sample index scaling bugs in SoundPlayer (which have
happened many times before) and fixes a sample index scaling bug in
aplay
- Removes several FIXMEs
- Reduces the amount of sample copying in all applications (especially
Piano, where this is critical), improving performance
- Reduces the number of resampling users, making future API changes
(which will need to happen for correct resampling to be implemented)
easier
I also threw in a simple race condition fix for Piano's audio player
loop.
This is a sensible separation of concerns that mirrors the WindowServer
IPC split. On the one hand, there is the "normal" audio interface, used
for clients that play audio, which is the primary service of
AudioServer. On the other hand, there is the management interface,
which, like the WindowManager endpoint, provides higher-level control
over clients and the server itself.
The reasons for this split are manifold; as mentioned, we are mirroring
the WindowServer split. Another indication of the sensibility of the
split is that no single audio client used the APIs of both interfaces.
Also, useless audio queues are no longer created for managing clients
(since those don't even exist, just like there's no window backing
bitmap for window managing clients), eliminating a class of bugs that
has occurred there in the past.
Implementation-wise, we just move all the APIs and implementations from
the old AudioServer into the AudioManagerServer (and respective clients,
of course). There is one point of duplication, namely the hardware
sample rate. This will be fixed in combination with per-client sample
rate, eliminating client-side resampling and the related update bugs.
For now, we keep one legacy API to simplify the transition.
The new AudioManagerServer also gains a hardware sample rate change
callback to have exact symmetry on the main server parameters (getter,
setter, and callback).
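As a sketch of that symmetry, the management connection conceptually
exposes something like the following. All names here are hypothetical
placeholders, not the generated IPC code.

```cpp
#include <functional>

class AudioManagerConnectionSketch {
public:
    unsigned hardware_sample_rate() const { return m_hardware_sample_rate; }

    void set_hardware_sample_rate(unsigned rate)
    {
        m_hardware_sample_rate = rate;
        notify_sample_rate_changed(rate);
    }

    // Managing clients (e.g. an audio settings applet) register a callback
    // and get told whenever the device sample rate changes.
    std::function<void(unsigned)> on_hardware_sample_rate_change;

private:
    void notify_sample_rate_changed(unsigned rate)
    {
        if (on_hardware_sample_rate_change)
            on_hardware_sample_rate_change(rate);
    }

    unsigned m_hardware_sample_rate { 44100 };
};
```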
That's what this class really is; in fact that's what the first line of
the comment says it is.
This commit does not rename the main files, since those will contain
other time-related classes in a little bit.
This was used in exactly one place, to avoid sending multiple
CustomEvents to the enqueuer thread in Audio::ConnectionToServer.
Instead of this, we now just send a CustomEvent and wake the enqueuer
thread. If it wakes up and has multiple CustomEvents, they get delivered
and ignored in no time anyway. Since they only get ignored if there's
no work to be done, this seems harmless.
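A tiny sketch of why spurious wake events are harmless, in standard C++
with assumed names (the real code goes through Core::EventLoop and
CustomEvent delivery):

```cpp
#include <deque>
#include <vector>

struct EnqueuerStateSketch {
    std::deque<std::vector<float>> user_sample_queue;

    // Called once per delivered wake event.
    void handle_wake_event()
    {
        if (user_sample_queue.empty())
            return; // duplicate or stale wake: nothing to do, ignore it
        // ... otherwise push the queued samples to the server ...
    }
};
```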
This was changed with a recent move to MutexLocker, but the exact
previous behavior is safer as it holds the lock for the minimum amount
of time in both locations. We don't want to introduce these kinds of
subtle bugs.
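For illustration, the "hold the lock only as long as needed" pattern,
sketched with standard C++ primitives rather than the actual
MutexLocker call sites:

```cpp
#include <mutex>
#include <vector>

std::mutex g_queue_mutex;
std::vector<float> g_pending_samples;

void enqueue_samples(std::vector<float> const& samples)
{
    // Lock only while touching the shared queue...
    {
        std::lock_guard<std::mutex> lock(g_queue_mutex);
        g_pending_samples.insert(g_pending_samples.end(), samples.begin(), samples.end());
    }
    // ...and do any follow-up work (logging, waking other threads, etc.)
    // without holding the lock.
}
```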
The audio enqueuer thread goes to sleep when there is no more audio data
present, and it can be woken up through normal Core::EventLoop events.
However, that wake-up only happens when the thread is not currently
running, so that the wake-up events don't queue up and cause weirdness.
Unfortunately, the atomic variable responsible for keeping track of
whether the thread is active can lead to a racy deadlock, where the
audio enqueuer thread never wakes up again despite there being audio
data to enqueue. Consider this scenario:
- Main thread calls into async_enqueue. It detects that according to the
atomic variable, the other thread is still running, skipping the event
queue wake.
- Enqueuer thread has just finished playing the last chunk of audio and
detects that there is no audio left. It enters the if block with the
dbgln "Reached end of provided audio data..."
- Main thread enqueues audio, making the user sample queue non-empty.
- Enqueuer thread does not check this condition again, instead setting
the atomic variable to indicate that it is not running. It exits into
an event loop sleep.
- Main thread exits async_enqueue. The calling audio enqueuing system
(see e.g. Piano, but all of them function similarly) will wait until
the enqueuer thread has played enough samples before async_enqueue is
called again. However, since the enqueuer thread will never play any
audio, this condition is never fulfilled and audio playback deadlocks.
This commit fixes that by allowing the event loop to skip enqueuing an
event that is already pending, so that in weird situations the audio
enqueuer event loop receives at most one extra message. We get rid of
the atomic variable entirely, and the race condition is prevented.
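The idea of the fix, sketched with standard C++ primitives instead of
Core::EventLoop (all names are placeholders): the waker only posts a
new wake if none is pending, and the pending flag is checked under the
same lock the sleeper waits on, so no wake-up can be lost.

```cpp
#include <condition_variable>
#include <mutex>

class WakeQueueSketch {
public:
    // Called by the main thread after enqueuing audio.
    void post_wake()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_wake_pending)
            return; // an equivalent event is already queued; don't add another
        m_wake_pending = true;
        m_condition.notify_one();
    }

    // Called by the enqueuer thread when it runs out of work.
    void wait_for_wake()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_condition.wait(lock, [&] { return m_wake_pending; });
        m_wake_pending = false;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_condition;
    bool m_wake_pending { false };
};
```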
This would conflict with apps that don't use this thread, and it also
creates unnecessary overhead for non-enqueuing clients like AudioApplet.
Therefore, use the new Thread is_started info to start the thread only
when necessary (on the first call to async_enqueue).
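A sketch of the lazy-start pattern in standard C++, using joinable()
where the real code uses the new is_started info; the class and method
names are placeholders:

```cpp
#include <thread>

class EnqueuerSketch {
public:
    void async_enqueue()
    {
        // Start the worker on first use only. This check assumes a single
        // calling thread, as in the connection's normal usage.
        if (!m_thread.joinable())
            m_thread = std::thread([this] { run(); });
        // ... hand the samples over to the worker here ...
    }

    ~EnqueuerSketch()
    {
        if (m_thread.joinable())
            m_thread.join();
    }

private:
    void run() { /* placeholder: enqueue audio until told to stop */ }

    std::thread m_thread;
};
```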