`create_standard_deck()` is the usual 52-card deck, but more custom
setups (such as Spider's multiples-of-one-suit) can be created by
passing suit counts to `create_deck()`.
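A rough sketch of the idea (the exact signature of create_deck() is an
assumption, not the library's documented API):

    // Hypothetical usage; the parameter order (clubs, diamonds, hearts,
    // spades) is illustrative only.
    auto standard = create_deck(1, 1, 1, 1); // one of each suit = 52 cards
    auto spider = create_deck(0, 0, 0, 8);   // eight full runs of spades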
This file implements the POSIX APIs from <regex.h>, and is not suitable
for inclusion in a Lagom build. If we do include it, it will override
the host's regex functions and wreak havoc if it's resolved before the
host's implementation.
By passing AF_UNSPEC to getaddrinfo, we're telling the system's
implementation that we are OK with getting either (or both) IPv4 and IPv6
addresses in our result. On my Ubuntu 22.04 system, the first addrinfo
returned for "www.google.com" holds an IPv6 address, which when
interpreted as an IPv4 sockaddr_in gives an address of 0.0.0.0.
This fixes TestTLSHandshake in Lagom locally.
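For reference, requesting both address families and then dispatching on
the actual family looks roughly like this (error handling elided):

    #include <netdb.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    struct addrinfo hints = {};
    hints.ai_family = AF_UNSPEC;     // IPv4, IPv6, or both
    hints.ai_socktype = SOCK_STREAM;

    struct addrinfo* results = nullptr;
    getaddrinfo("www.google.com", "443", &hints, &results);
    for (auto* entry = results; entry; entry = entry->ai_next) {
        // Check ai_family before casting ai_addr; blindly treating an
        // AF_INET6 entry as sockaddr_in is exactly the bug described above.
        if (entry->ai_family == AF_INET) {
            auto* v4 = reinterpret_cast<sockaddr_in*>(entry->ai_addr);
            // ... use v4->sin_addr ...
        } else if (entry->ai_family == AF_INET6) {
            auto* v6 = reinterpret_cast<sockaddr_in6*>(entry->ai_addr);
            // ... use v6->sin6_addr ...
        }
    }
    freeaddrinfo(results);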
This filter mimics the functionality from Photoshop and other image
editors, allowing you to adjust the hue, saturation, and lightness of
an image. I always found this a very handy feature :^)
This fixes some off-by-one wrapping issues that became visible when
running on x86_64. The problem still existed on i686, but by chance
did not show up due to a ±0.000001 difference between the two.
For testing purposes, this allows opening any file by passing its name
as an argument.
Additionally, there is a --benchmark option that will just call decode
for 100 frames and then exit, printing the time spent in the decoder.
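The benchmark loop amounts to something like this (names are
illustrative, not the utility's actual code):

    // Illustrative sketch of the --benchmark path.
    auto start = Time::now_monotonic();
    for (int i = 0; i < 100; ++i)
        (void)decoder.decode_frame(sample);
    auto elapsed = Time::now_monotonic() - start;
    outln("Decoding took {}ms", elapsed.to_milliseconds());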
Previously, saved probability tables were being inserted, causing the
Vector to increase in size when it should stay fixed at a size of 4.
This changes the Vector to an Array<T, 4>, which will default-initialize
and allow assigning to any index without previously setting a size.
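The difference in a nutshell (TableType stands in for the actual
element type):

    // Before: appending grew the Vector past the intended 4 entries.
    Vector<TableType> tables;
    tables.append(saved_table); // size creeps beyond 4 over time

    // After: Array is fixed at 4 elements and default-initializes them,
    // so any index can be assigned without managing a size at all.
    Array<TableType, 4> tables;
    tables[index] = saved_table;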
Integer overflow could sometimes occur due to counts going above 255,
where the values should instead be clamped at their maximum to avoid
wrapping to 0.
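i.e. a saturating increment along these lines:

    // count++ on 255 would wrap to 0; clamp at the maximum instead.
    if (count < NumericLimits<u8>::max())
        count++;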
This fixes an issue causing frame 3 of the test video to fail to parse
because a reference vector was incorrectly considered to be within the
range for a high-precision delta vector read.
This allows the second shown frame of the VP9 test video to be decoded,
as the second chunk uses a superframe to encode a reference frame and a
second frame that inter-predicts between the keyframe and the reference
frame.
This enables the second frame of the test video to be decoded.
It appears that the test video uses a superframe (group of multiple
frames) for the first chunk of the file, but we haven't implemented
superframe parsing.
We also ignore the show_frame flag, so for now, this
means that the second frame read out is shown when it should not be. To
fix this, another error type needs to be implemented that is "thrown" to
the decoder's client so they know to send another sample buffer.
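For reference, a superframe index sits at the end of a chunk and can be
detected roughly like this (a sketch following the VP9 spec's superframe
syntax, not the decoder's actual code):

    // The last byte of a chunk is the superframe marker: 0b110 in the
    // top three bits, then bytes-per-frame-size and frame count fields.
    u8 marker = chunk[chunk_size - 1];
    if ((marker & 0b11100000) == 0b11000000) {
        u8 bytes_per_frame_size = ((marker >> 3) & 0b11) + 1;
        u8 frame_count = (marker & 0b111) + 1;
        size_t index_size = 2 + frame_count * bytes_per_frame_size;
        // A valid index is bracketed by two copies of the marker byte,
        // with each frame's size stored little-endian between them.
        if (index_size <= chunk_size && chunk[chunk_size - index_size] == marker) {
            // Split the chunk into frame_count frames using those sizes.
        }
    }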
The above interpolation filter mode was being taken from the left side
instead, causing some parsing errors.
This also changes the magic number 3 to SWITCHABLE_FILTERS.
Unfortunately, the spec uses the magic number, so this value was taken
instead from the reference codec, libvpx.
This gets the decoder closer to fully parsing the second frame without
any errors. It will still be unable to output an inter-predicted frame.
The lack of output causes VideoPlayer to crash if it attempts to read
the buffers for frame 1, so it is still limited to the first frame.
This changes MotionVector by removing the cpp file and moving all
functions to the header, where they are now declared as constexpr
so that they can be compile-time evaluated in LookupTables.h.
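Abridged, the header now looks roughly like this:

    // MotionVector.h (abridged): constexpr members allow use in
    // constant expressions, e.g. the tables in LookupTables.h.
    class MotionVector {
    public:
        constexpr MotionVector() = default;
        constexpr MotionVector(i32 row, i32 column)
            : m_row(row)
            , m_column(column)
        {
        }

        constexpr i32 row() const { return m_row; }
        constexpr i32 column() const { return m_column; }

        constexpr MotionVector operator+(MotionVector const& other) const
        {
            return { m_row + other.m_row, m_column + other.m_column };
        }

    private:
        i32 m_row { 0 };
        i32 m_column { 0 };
    };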
For testing purposes, the output buffer is taken directly from the
decoder and displayed in an image widget.
The first keyframe can be displayed, but the second will not decode
so VideoPlayer will stop at frame 0 for now.
This implements a BT.709 YCbCr to RGB conversion in VideoPlayer, but
that should be moved to a library for handling color space conversion.
The first keyframe of the test video can be decoded with these changes.
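The conversion itself is the standard BT.709 matrix; for normalized
full-range values (Cb/Cr centered on zero) it is approximately:

    // BT.709 YCbCr -> RGB, inputs normalized to [0, 1] with Cb/Cr
    // shifted to [-0.5, 0.5]. Derived from Kr = 0.2126, Kb = 0.0722.
    float r = y + 1.5748f * cr;
    float g = y - 0.1873f * cb - 0.4681f * cr;
    float b = y + 1.8556f * cb;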
Raw memory allocations in the Parser have been replaced with Vector or
Array to avoid memory leaks and out-of-bounds accesses.
This allows runtime strings, so we can format the errors to make them
more helpful. Errors in the VP9 decoder will now print out a function,
filename and line number for where a read or bitstream requirement
has failed.
The DecoderErrorCategory enum will classify the errors so library users
can show general user-friendly error messages, while providing the
debug information separately.
Any non-DecoderErrorOr<> results can be wrapped by DECODER_TRY to
return from decoder functions. This will also add the extra information
mentioned above to the error message.
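In spirit, DECODER_TRY behaves like AK's TRY() with that context
attached; a sketch only, as the real macro's helpers may be named
differently:

    #define DECODER_TRY(category, expression)                              \
        ({                                                                 \
            auto&& _result = (expression);                                 \
            if (_result.is_error()) {                                      \
                /* Wrap the plain error with a category and location. */   \
                return DecoderError::format((category),                    \
                    "{} failed at {}:{}", __FUNCTION__, __FILE__, __LINE__); \
            }                                                              \
            _result.release_value();                                       \
        })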
init_bool will now check whether there is enough data in the bitstream
for the range coding size to be fully read.
exit_bool will now read the entire padding element regardless of its
size, since the spec does not specify a limit on it.
Reads will now be done in larger chunks at a time.
The public read_byte() function was removed in favor of a private
fill_reservoir() function, which fills the 64-bit reservoir field; the
reservoir is then bit-shifted and masked as necessary for subsequent
reads of arbitrary bit sizes.
read_f(n) was renamed to read_bits to be clearer about its use.
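In outline, the reservoir works like this (illustrative, not the exact
implementation):

    // Valid bits live in the low end of m_reservoir; higher bits are zero.
    u64 m_reservoir { 0 };
    size_t m_bits_in_reservoir { 0 };

    u64 read_bits(size_t count) // assumes count < 64
    {
        if (m_bits_in_reservoir < count)
            fill_reservoir(); // refills several bytes in one go
        m_bits_in_reservoir -= count;
        u64 result = m_reservoir >> m_bits_in_reservoir; // top `count` bits
        m_reservoir &= (1ull << m_bits_in_reservoir) - 1;
        return result;
    }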
The interpolation filter value is not set when reading an intra-only
frame, so printing this for the first keyframe of the file was printing
"220", which is invalid.
In particular, StringView::contains(char) is often used with a u32
code point. When this is done, the compiler will for some reason allow
data corruption to occur silently.
In fact, this is one of two reasons for the following OSS Fuzz issue:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=49184
This is probably a very old bug.
In the particular case of URLParser, AK::is_url_code_point got confused:
return /* ... */ || "!$&'()*+,-./:;=?@_~"sv.contains(code_point);
If code_point is a large code point that happens to have the correct
lower bytes, AK::is_url_code_point is then convinced that the given
code point is okay, even if it is actually problematic.
This commit fixes *only* the silent data corruption due to the erroneous
conversion, and does not fully resolve OSS-Fuzz#49184.
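The narrowing is easy to demonstrate (illustrative):

    // U+2121 (TEL sign) shares its low byte with '!' (0x21).
    u32 code_point = 0x2121;
    // Before the fix, contains(char) accepted the u32 via implicit
    // conversion, effectively checking for '!' and returning true:
    bool looks_fine = "!$&'()*+,-./:;=?@_~"sv.contains(static_cast<char>(code_point));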
Add the ability to use a repeat() function within the values passed to
grid-template-columns and grid-template-rows for CSS Grid layout.
E.g. grid-template-columns: repeat(2, 50px); means to have two columns
of 50px width each.
Hopefully no one else will forget to call set_prototype with the cached
prototype they just retrieved from a realm and spend a long time
wondering why their object has no properties...
Get rid of the bespoke NavigatorObject class and use the modern IDL
strategies for creating platform objects to re-implement Navigator and
its associated mixin interfaces. While we're here, implement it in a
way that brings WorkerNavigator up to spec :^)
We can now properly add the prototypes and constructors to the global
object of the Worker's inner realm, so we don't need this window for
anything anymore.
Instead, create a tree of Parsers all pointing to a top-level Parser.
All module imports and interfaces are stored at the top level, instead
of in a static map. This allows creating multiple IDL::Parsers in the
same process without them stepping on each other's toes.
The intent is to use these to autogenerate prototype declarations for
Window and WorkerGlobalScope classes.
And the spec links are just nice to have :^)
When the indicated column-span is greater than the implicit grid (like
in cases when the grid has the default size of 1x1, and the column is
supposed to span any number greater than that), we would previously
crash.
Fixes a bug in the maybe_add_column() implementation of the
OccupationGrid. Previously we were checking the width of the grid based
on the first row, and so when augmenting the column count row-by-row
the latter rows would have differing column counts.
Also, we were doing an unnecessary + 1, which I imagine comes from
before, when I wasn't quite clear on whether I was referring to columns
by index or by the CSS value of the column (column 1 in the CSS is
column index 0).
This implementation includes a first cut at running the unfocusing steps
from the spec, with many things left unimplemented.
The viewport related spec steps in particular don't seem to map to
LibWeb concepts, which makes figuring out if things are properly focused
much more difficult.