...from "bytes" to "size_in_bytes".
I thought at first that `bytes` meant the variable contained the
bytes that the decoder would read from.
Also, this variable is called `sz` (for `size`) in the spec.
No behavior change.
In addition, this renames the internal timer to better suit its new
purpose since the playback state handlers were added. Not only is it
used to time frame presentations, but also to poll the queue when
seeking or buffering.
Running decoding on the main thread can cause frames to be delayed due
to the presentation timer waiting for a decode to finish. If a frame
takes too long to decode, this delay can be quite visible. Therefore,
decoding has been moved to a separate thread, with frames sent to the
main thread through a thread-safe queue.
This results in frame times going from being late by up to 16ms to a
very consistent ~1-2ms.
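A minimal sketch of the pattern, using standard primitives rather than the project's own queue type (the class and method names here are illustrative):

```cpp
#include <mutex>
#include <optional>
#include <queue>

// The decode thread pushes finished frames; the main thread's timer polls.
template<typename Frame>
class FrameQueue {
public:
    void enqueue(Frame frame)
    {
        std::lock_guard lock(m_mutex);
        m_frames.push(std::move(frame));
    }

    // Non-blocking, so a slow decode can never stall the presentation timer.
    std::optional<Frame> dequeue()
    {
        std::lock_guard lock(m_mutex);
        if (m_frames.empty())
            return std::nullopt;
        Frame frame = std::move(m_frames.front());
        m_frames.pop();
        return frame;
    }

private:
    std::mutex m_mutex;
    std::queue<Frame> m_frames;
};
```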
This allows us to create a PlaybackManager from a file which has already
been mapped, instead of passing a file name.
This means that anyone who uses `PlaybackManager` can now use LibFSAC :)
The framebuffer size was reduced in f2c0cee, but this caused some niche
block layouts to write outside of the frame.
This could be fixed by adding checks to see if a block being predicted/
reconstructed is within the frame, but the branches introduced by that
reduce performance slightly. Therefore, it's better to keep the
framebuffer sized to the decoded frame size rounded up to whole 8x8
blocks so that any block can be decoded without bounds checking.
A test was added to ensure that this continues to work.
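For illustration, the sizing boils down to something like this sketch (the function name is made up):

```cpp
#include <cstdint>

// Round a dimension up to a whole number of 8x8 blocks so that any block can
// be predicted/reconstructed without per-sample bounds checks.
static constexpr uint32_t aligned_to_8x8_blocks(uint32_t size_in_pixels)
{
    return (size_in_pixels + 7) / 8 * 8;
}

static_assert(aligned_to_8x8_blocks(1920) == 1920); // already block-aligned
static_assert(aligned_to_8x8_blocks(854) == 856);   // niche layouts gain a margin
```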
In some occasional cases, the accumulator could overflow and produce
an incorrect result. It would be nice to use a smaller accumulator,
but that seems not to be correct. :^(
We now cast to i16 to allow 128-bit vectorization to make use of one
whole register, instead of having to split the loop into multiple
parts.
This results in about a 5% reduction in performance in my testing.
We don't need to run through the whole floating-point color converter
for videos that use sRGB transfer characteristics and BT.709 color
primaries. This commit adds a new templated, inlined function to
ColorConverter to do a very fast fixed-point YCbCr to RGB conversion.
With the fast path, frame conversion times go from ~7.8ms down to
~3.7ms. The fast path can benefit a lot more from extra SIMD vector
width, as well.
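As a sketch of the technique (full-range BT.709 coefficients scaled to 16 fractional bits; the real ColorConverter fast path also handles studio-range inputs):

```cpp
#include <algorithm>
#include <cstdint>

// Fixed-point YCbCr -> RGB: coefficients are round(c * 65536) for BT.709,
// so integer multiplies and a 16-bit shift replace the floating-point path.
static void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr, uint8_t& r, uint8_t& g, uint8_t& b)
{
    int32_t const luma = y << 16;
    int32_t const cb_centered = cb - 128;
    int32_t const cr_centered = cr - 128;
    auto clamp_to_u8 = [](int32_t value) {
        return static_cast<uint8_t>(std::clamp(value >> 16, 0, 255));
    };
    r = clamp_to_u8(luma + 103206 * cr_centered);
    g = clamp_to_u8(luma - 12275 * cb_centered - 30677 * cr_centered);
    b = clamp_to_u8(luma + 121609 * cb_centered);
}
```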
Clang was reluctant to inline these for some reason. However, inlining
them seems to be quite beneficial, reducing decoding time in an intra-
heavy video by about 21% (~12.7s -> ~10.0s).
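The mechanism, for reference (AK wraps this attribute in its ALWAYS_INLINE macro; the function below is only a stand-in):

```cpp
// Overrides the inliner's heuristics on GCC/Clang.
[[gnu::always_inline]] inline int transform_multiply(int coefficient)
{
    return (coefficient * 11585) >> 14; // e.g. a cos(pi/4) fixed-point constant
}
```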
Quantizers are constant for the whole frame, except when segment
features override them, in which case they are constant per segment
ID. We take advantage of this by pre-calculating them after reading
the quantization parameters and segmentation features for a frame.
This results in a small 1.5% improvement (~12.9s -> ~12.7s).
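A sketch of the idea (VP9 has 8 segment IDs; the names and the override representation here are illustrative):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <optional>

// Resolve the quantizer for each of the 8 segment IDs once per frame, so
// block decoding can index an array instead of re-checking segment features.
static std::array<uint8_t, 8> precalculate_quantizers(uint8_t base_quantizer_index,
    std::array<std::optional<uint8_t>, 8> const& segment_feature_overrides)
{
    std::array<uint8_t, 8> quantizer_indices {};
    for (size_t segment_id = 0; segment_id < 8; segment_id++)
        quantizer_indices[segment_id] = segment_feature_overrides[segment_id].value_or(base_quantizer_index);
    return quantizer_indices;
}
```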
This throws out some ugly `#define`s we had that were taking the role
of an enum anyway. We now have some nice getters in the contexts that
replace the combination of calling `seg_feature_active()` and then
doing a lookup in `FrameContext::m_segmentation_features` directly.
Bit reversals are used very often in intra-predicted frames. Turning
these into a constexpr lookup table reduces the branching needed for
block transforms significantly. This reduces the time spent decoding
an intra-heavy 1080p video by about 9% (~14.3s -> ~12.9s).
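The table can be built entirely at compile time; a minimal sketch (assuming 5-bit indices as an example width):

```cpp
#include <array>
#include <cstdint>

static constexpr std::array<uint8_t, 32> make_bit_reversal_table()
{
    std::array<uint8_t, 32> table {};
    for (uint32_t i = 0; i < 32; i++) {
        uint32_t reversed = 0;
        for (uint32_t bit = 0; bit < 5; bit++)
            reversed |= ((i >> bit) & 1) << (4 - bit);
        table[i] = static_cast<uint8_t>(reversed);
    }
    return table;
}

// One table load replaces the per-bit loop (and its branches) at decode time.
static constexpr auto bit_reversal_table = make_bit_reversal_table();
static_assert(bit_reversal_table[0b00001] == 0b10000);
```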
Previously, the block sizes would be checked at runtime to
determine the transform size to apply for residuals. Making the block
sizes into constant expressions allows all the loops to be unrolled
and reduces branching significantly.
This results in about a 26% improvement (~18s -> ~13.2s) in speed in an
intra-heavy test video.
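A sketch of the shape of the change (names are illustrative, and the real code clamps results to the pixel range):

```cpp
#include <cstddef>
#include <cstdint>

// With the size as a template parameter, both loops have constant trip
// counts, so the compiler can unroll and vectorize them freely.
template<size_t BlockSize>
static void add_residuals(uint8_t* destination, size_t stride, int16_t const* residuals)
{
    for (size_t row = 0; row < BlockSize; row++)
        for (size_t column = 0; column < BlockSize; column++)
            destination[row * stride + column] += residuals[row * BlockSize + column];
}

// Branch once on the runtime size instead of inside the loops.
static void add_residuals(uint8_t* destination, size_t stride, int16_t const* residuals, size_t block_size)
{
    switch (block_size) {
    case 4:  add_residuals<4>(destination, stride, residuals); break;
    case 8:  add_residuals<8>(destination, stride, residuals); break;
    case 16: add_residuals<16>(destination, stride, residuals); break;
    case 32: add_residuals<32>(destination, stride, residuals); break;
    }
}
```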
Inter-prediction convolution filters are selected based on the
subpixel position determined for the motion vector relative to the
block being predicted. Subpixel position 0 uses only a single sample
in the center of the convolution, without averaging any other
samples. Let's call this a copy.
Reference frames can also be a different size relative to the frame
being predicted, but in almost every case, that scale will be 1:1
for every single frame in a video.
Taking into account these facts, we can create multiple fast paths for
inter prediction. These fast paths are only active when scaling is 1:1.
If we are doing a copy in both dimensions, then we can do a straight
memcpy from the reference frame to the output block buffer. In videos
where there is no motion, this is a dramatic speedup.
If we are doing a copy in one dimension, we can just do one convolution
and average directly into the output block buffer.
If we aren't doing a copy in either dimension, we can still cut out a
few operations from the convolution loops, since we only need to
advance our samples by whole pixels instead of subpixels.
These fast paths result in about a 34% improvement (~31.2s -> ~20.6s)
in a video which relies heavily on intra-predicted blocks due to high
motion. In videos with less motion, the improvement will be even
greater.
Also, note that the accumulators in these faster loops are only 16-bit.
High bit-depth videos will overflow those, so for now the fast path is
only used for 8-bit videos.
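For illustration, the first fast path boils down to something like this sketch (8-bit samples only, per the note above; names are made up):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// "Copy in both dimensions": 1:1 scale and subpixel position 0 in x and y
// means every output row is a straight copy of a reference row.
static void predict_block_by_copy(uint8_t* block, size_t block_width, size_t block_height,
    uint8_t const* reference, size_t reference_stride)
{
    for (size_t row = 0; row < block_height; row++)
        memcpy(block + row * block_width, reference + row * reference_stride, block_width);
}
```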
A typo caused the Y scale value to never be used, so if a reference
frame's aspect ratio didn't match up with the current frame's, it would
decode incorrectly.
Some comments have been added to clarify the frame constants used in
the function as well.
This moves all the frame size calculation to `FrameContext`, where the
subsampling is easily accessible to determine the size for each plane.
The internal framebuffer size has also been reduced to the exact frame
size that is output.
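A sketch of the per-plane size calculation (assuming the usual convention that chroma dimensions round up when the frame size is odd):

```cpp
#include <cstdint>

// Each plane's dimension follows from the frame size and that axis's
// subsampling flag.
static constexpr uint32_t plane_dimension(uint32_t frame_dimension, bool subsampled)
{
    return subsampled ? (frame_dimension + 1) / 2 : frame_dimension;
}
```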
The division in the `round_mv_...()` functions contained in the motion
vector selection process was done by bit shifting right. However,
since an arithmetic right shift of a negative value rounds towards
negative infinity, it was flooring instead of rounding.
This changes it to match the spec, relying on the compiler to
simplify the division back down to a bit shift.
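The difference, in a hypothetical sketch (the divisor is illustrative, not the spec's exact value):

```cpp
// An arithmetic right shift floors: -5 >> 2 == -2, but rounding -5/4 to the
// nearest integer should give -1. Rounding half away from zero, written so
// the compiler can reduce it to shifts plus a sign fixup:
static int round_mv_component(int value)
{
    int magnitude = value < 0 ? -value : value;
    int rounded = (magnitude + 2) / 4;
    return value < 0 ? -rounded : rounded;
}
```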
Previously, the `Parser::decode_tiles()` function wouldn't wait for the
tile-decoding workers to finish before exiting, which meant that the
data the threads were working with could become invalid if the decoder
was deleted after an error was encountered.
Using malloc does not invoke T's constructor, nor were we invoking T's
constructor ourselves. Accessing T without invoking its constructor is
undefined behavior.
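A minimal sketch of the distinction (T here is just an example type):

```cpp
#include <cstdlib>
#include <new>

struct T {
    int value { 42 }; // only meaningful once the constructor has run
};

static T* allocate_t()
{
    // malloc returns raw storage; T's lifetime has not begun, so reading
    // members at this point would be undefined behavior.
    void* storage = malloc(sizeof(T));
    // Placement-new invokes T's constructor and begins the object's lifetime.
    return new (storage) T();
}
```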
This adds a new WorkerThread class to run one task asynchronously and
to allow waiting for that thread to finish its work.
TileContexts are placed into one vector per tile column, with the
streams they read from created in advance. Once those are ready, the
threads can start their work on each vector separately. The main
thread waits for
those tasks to finish, then sums up the syntax element counts for each
tile that was decoded.
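A std::thread-based sketch of the concept (the actual class is built on the project's own threading primitives, and these names are illustrative):

```cpp
#include <functional>
#include <thread>

// Runs one task asynchronously and lets the caller block until it finishes,
// so shared data stays valid until the worker is done with it.
class WorkerThread {
public:
    void start(std::function<void()> task) { m_thread = std::thread(std::move(task)); }

    void wait_until_task_completion()
    {
        if (m_thread.joinable())
            m_thread.join();
    }

private:
    std::thread m_thread;
};
```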
Previously, we were incorrectly wrapping an error from `BooleanDecoder`
initialization in a `DecoderErrorCategory::Memory` error. This caused
an incorrect error message in VideoPlayer. Now it will instead return
`DecoderErrorCategory::Corrupted`.
Extending the borders on reference frames, so that motion vectors
pointing outside the reference frame still read valid samples, allows
`predict_inter_block()` to avoid some branches that clamp the sample
coordinates in its loops.
This results in about a 25% improvement in decode time of a motion-
heavy YouTube video (~20.8s -> ~15.6s).
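A sketch of the border extension itself (layout details are assumptions; only the left/right margins are shown):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Replicate each row's edge samples into the margins so that a read at any
// out-of-frame x coordinate within the border yields the clamped sample,
// with no branching needed in the prediction loops.
static void extend_row_borders(uint8_t* plane, size_t width, size_t height, size_t border, size_t stride)
{
    for (size_t row = 0; row < height; row++) {
        uint8_t* row_start = plane + row * stride + border; // first real sample
        memset(row_start - border, row_start[0], border);        // left margin
        memset(row_start + width, row_start[width - 1], border); // right margin
    }
    // Top and bottom margins would be filled by copying whole rows.
}
```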
Moving the clamping of the coordinates of the reference frame samples
as well as some bounds checks outside of the loop reduces the branches
needed in `predict_inter_block()` significantly.
This results in a whopping ~41% improvement in decode performance
of an inter-prediction-heavy YouTube video (~35.4s -> ~20.8s).
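For illustration, hoisting the row clamping might look like this sketch (names and layout are assumptions):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Clamp each reference row index once, before the convolution loops; the
// inner loops then index clamped_rows[i][column] without any branches.
static void precalculate_clamped_rows(uint8_t const* reference, size_t reference_stride,
    int reference_height, int first_row, uint8_t const** clamped_rows, size_t rows_needed)
{
    for (size_t i = 0; i < rows_needed; i++) {
        int row = std::clamp(first_row + static_cast<int>(i), 0, reference_height - 1);
        clamped_rows[i] = reference + static_cast<size_t>(row) * reference_stride;
    }
}
```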
Changing the calculation of reference frame scale factors to be done on
a per-frame basis reduces the amount of work done in
`predict_inter_block()`, which is a big hotspot in most videos.
This reduces decode times in a test video from YouTube by about 5%
(~37.2s -> ~35.4s).
This changes the order of the loop copying data to a reference frame
store so that it copies each row in a contiguous line rather than
copying a column at a time, which caused unnecessary branches.
This reduces the decode time on a fairly long 720p YouTube video by
about 14.5% (~43.5s -> ~37.2s).
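The reordered copy, as a sketch (types and names are illustrative):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Rows in the outer loop: each inner copy is one contiguous memcpy instead
// of a strided, element-by-element walk down a column.
static void copy_to_reference_store(uint16_t* destination, uint16_t const* source,
    size_t width, size_t height, size_t stride)
{
    for (size_t row = 0; row < height; row++)
        memcpy(destination + row * stride, source + row * stride, width * sizeof(uint16_t));
}
```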
Previously, there was some leftover logic in `SeekingStateHandler`
which would avoid presenting a frame if it didn't find a frame both
before and after the target frame. However, this logic was unnecessary
as the `on_enter()` function would check if it needed to present a
frame and exit seeking if not.
This allows seeking to succeed if the Seeking handler cannot find a
frame before the one to be seeked to, which could occur if the first
frame of a video is not at timestamp 0.
This adds a timestamp to the debug output when presenting a frame, so
that the spacing between a presented frame and a state change is
clear.
The seeking state will also now print when it early-exits if the seek
point is within the current sample's duration.
StartingStateHandler and SeekingStateHandler were declaring their own
`bool m_playing` fields, left over from previous code where there was
no base class.
In the case of SeekingStateHandler, this only made the logging wrong.
For StartingStateHandler, however, this meant that it was not using
the boolean passed as a parameter to the constructor to define the
state that would be transitioned to after the Starting state finished.
This meant that when the Stopping state replaced itself with the
Starting state, playback would not resume when the Starting state
exited.
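The shape of the bug, sketched (the base class name is illustrative):

```cpp
struct PlaybackStateHandler {
    bool m_playing { false }; // read when deciding whether to resume playback
};

struct StartingStateHandler : PlaybackStateHandler {
    explicit StartingStateHandler(bool playing)
        : m_playing(playing) // writes the shadowing field, not the base one
    {
    }

    bool m_playing { false }; // shadows PlaybackStateHandler::m_playing
};
```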
To pass events from LibVideo's PlaybackManager to interested parties, we
currently dispatch Core::Event objects that outside callers listen for.
Dispatching events in this manner relies on a Core::EventLoop. In order to
use PlaybackManager from LibWeb, change this mechanism to instead use a
set of callbacks to inform callers of events.
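A sketch of what the callback set might look like (names and signatures are illustrative; the real code would use AK::Function rather than std::function):

```cpp
#include <functional>
#include <string>

// Callers assign these directly instead of listening for Core::Event objects
// dispatched through a Core::EventLoop.
struct PlaybackCallbacks {
    std::function<void()> on_playback_state_change;
    std::function<void(double timestamp_ms)> on_frame_present;
    std::function<void(std::string error_message)> on_decoder_error;
};
```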
Using uninitialized primitives is undefined behavior. In this case, all
videos would autoplay on Serenity, but not play at all on Lagom, due to
some `m_playing` booleans being uninitialized.
This copies the video data from the Matroska document into the Track
structure that outside users have access to. Because Track can actually
represent other media types, this is set up such that the Track can hold
metadata for those other types when they are needed.
This is needed for LibWeb's HTMLMediaElement implementation.