...and replace it with AngleOrCalculated.
This has the nice bonus effect of actually handling `calc()` for angles
in a transform function. :^) (Previously we just would have asserted.)
This is intended as a replacement for Length and friends each holding a
`RefPtr<CalculatedStyleValue>`. Instead, let's make the types explicit
about whether they are calculated or not. This then means a Length is
always a Length, and won't require including `StyleValue.h`.
As noted, it's probably nicer for LengthOrCalculated to live in
`Length.h`, but we can't do that until Length stops including
`StyleValue.h`.
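As a rough sketch of the shape this takes (member names here are illustrative, not the exact implementation):

    class AngleOrCalculated {
    public:
        AngleOrCalculated(Angle);
        AngleOrCalculated(NonnullRefPtr<CalculatedStyleValue>);

        bool is_calculated() const
        {
            return m_value.has<NonnullRefPtr<CalculatedStyleValue>>();
        }

    private:
        Variant<Angle, NonnullRefPtr<CalculatedStyleValue>> m_value;
    };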
Due to CSSImportRule::has_import_result() being backwards, we never
actually entered imported style sheets when traversing style rules or
media queries.
With this fixed, we no longer need the "collect style sheets" step in
StyleComputer, as normal for_each_effective_style_rule() will now
actually find all the rules. :^)
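For illustration, this is the kind of inversion involved (not the exact original code):

    // Before (backwards): reports "has a result" when no sheet
    // was actually loaded.
    bool has_import_result() const { return !m_style_sheet; }

    // After: true only once a style sheet has been loaded.
    bool has_import_result() const { return !!m_style_sheet; }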
The constructor is now only concerned with creating the required
streams, which means that it no longer fails for XZ streams with
invalid headers. Instead, everything is parsed and validated during the
first read, preparing us for files with multiple streams.
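A sketch of the lazy-validation pattern (the helper and member names are hypothetical):

    ErrorOr<Bytes> XzDecompressor::read_some(Bytes bytes)
    {
        // Header parsing has moved out of the constructor, so an
        // invalid header now surfaces as an error on first read.
        if (!m_parsed_stream_header) {
            TRY(parse_and_validate_stream_header());
            m_parsed_stream_header = true;
        }

        return decompress_into(bytes);
    }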
This was causing a huge slowdown when loading some pages with a
weirdly large number of style sheets. For example, amazon.com has over
200 style elements, which meant we had to re-sort the StyleSheetList
200 times. (And sorting itself was slow because it had to compare DOM
positions.)
Instead of sorting, we now look for the correct insertion point when
adding new style sheets, and we start the search from the end, which is
where style sheets are typically added in the vast majority of cases.
This removes a 600ms time sink when loading Amazon on my machine! :^)
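A sketch of the backwards scan (the function name and the document-order helper are hypothetical):

    void StyleSheetList::add_sheet(CSSStyleSheet& sheet)
    {
        // New sheets almost always belong at the end in document
        // order, so search for the insertion point from the back.
        size_t insertion_index = m_sheets.size();
        while (insertion_index > 0) {
            auto& existing = m_sheets[insertion_index - 1];
            if (comes_before_in_document_order(existing, sheet))
                break;
            --insertion_index;
        }
        m_sheets.insert(insertion_index, sheet);
    }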
Setting the `data` of a text node already triggers `children changed`
per spec, so there's no need for an explicit call.
This avoids parsing every HTMLStyleElement sheet twice. :^)
We currently only fill a buffer when it is empty. So if it has 1 byte
and 16 KB was requested, only that 1 byte would be returned. Instead,
attempt to refill the buffer whenever its size is less than the
requested size.
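A sketch of the adjusted read path (simplified; the names follow AK's BufferedStream, but details are elided):

    ErrorOr<Bytes> read_some(Bytes bytes)
    {
        // Refill whenever the buffer cannot satisfy the request,
        // not only when it is completely empty.
        if (m_buffer.used_space() < bytes.size())
            TRY(populate_read_buffer());

        return m_buffer.read(bytes);
    }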
When reading, we currently only fill a BufferedStream's buffer when it
is empty, and only with 1 KB of data. This means that while the buffer
defaults to a size of 16 KB, at least 15 KB is always unused.
The current implementation of `Array<T, 0>` has a zero-length C array as
its storage type. While this is accepted as a GNU extension, when
compiling with Clang 16, a UBSan error is raised every time an object
is accessed whose only field is a zero-length array.
This is likely a bug in Clang 16's implementation of UBSan, which has
been reported here: https://github.com/llvm/llvm-project/issues/61775
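One way to sidestep this is to give the zero-size case its own storage specialization with no array member at all (a sketch, not necessarily the exact fix):

    template<typename T, size_t Size>
    struct ArrayStorage {
        T data[Size];
    };

    // With no zero-length array member, UBSan has nothing to
    // complain about when such objects are accessed.
    template<typename T>
    struct ArrayStorage<T, 0> {
    };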
We now select between nearest neighbor and bilinear filtering when
scaling images in CRC2D.drawImage().
This patch also adds CRC2D.imageSmoothingQuality but it's ignored for
now as we don't have a bunch of different quality levels to map it to.
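A sketch of the selection (assuming Gfx::Painter-style scaling modes):

    // Map the context's smoothing flag to a concrete filter.
    auto scaling_mode = image_smoothing_enabled
        ? Gfx::Painter::ScalingMode::BilinearBlend
        : Gfx::Painter::ScalingMode::NearestNeighbor;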
Work towards #17993 (Ruffle Flash Player)
We currently decode back-references one byte at a time, while writing
that byte back out to the output buffer. This is only necessary when the
back-reference refers to itself, i.e. when the back-reference distance
is less than its length. In other cases, we can read the entire back-
reference block in one shot.
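A sketch of the two cases (the helper names are hypothetical):

    if (distance < length) {
        // Overlapping: later bytes depend on bytes we are still
        // producing, so copy one byte at a time.
        for (size_t i = 0; i < length; ++i)
            TRY(copy_byte_from_history(distance));
    } else {
        // Non-overlapping: the whole block already exists in the
        // output history; copy it in one shot.
        TRY(copy_block_from_history(distance, length));
    }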
Using the "enwik8" file as a test (100MB uncompressed, commonly used in
benchmarks: https://www.mattmahoney.net/dc/enwik8.zip), decompression
time decreases from:
5.8s to 4.89s on Serenity (cold)
2.3s to 1.72s on Serenity (warm)
1.6s to 1.06s on Linux
Otherwise:
AK/BitStream.h:218:24:
error: inline function '...::lsb_mask<unsigned char>' is not
defined [-Werror,-Wundefined-inline]
static constexpr T lsb_mask(T bits)
^
Huffman codes have a useful property in that they are prefix codes. That
is, a set of bits representing a Huffman-coded symbol is never a prefix
of another symbol. This allows us to create a lookup table in which
every index whose prefix matches a Huffman code resolves to that
code's entry.
With Deflate, we can have codes up to 16 bits in length, which would
require a prefix table with 2^16 entries. So instead of creating a
table that fits all possible codes, we use a cutoff of 8 bits. Codes
longer than 8 bits fall back to the binary search method.
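A sketch of how such a table is filled (bit-order handling and names are simplified):

    struct PrefixTableEntry {
        u16 symbol { 0 };
        u8 code_length { 0 };
    };

    // Fill the entries for one symbol whose code has length `len`
    // (<= 8); `code` holds the code in the table's bit order.
    static void fill_prefix_entries(Array<PrefixTableEntry, 256>& table,
        u16 symbol, u16 code, u8 len)
    {
        // All 2^(8 - len) indices whose low `len` bits equal the
        // code resolve to the same symbol.
        for (u32 suffix = 0; suffix < (1u << (8 - len)); ++suffix)
            table[code | (suffix << len)] = { symbol, len };
    }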
Using the "enwik8" file as a test (100MB uncompressed, commonly used in
benchmarks: https://www.mattmahoney.net/dc/enwik8.zip), decompression
time decreases from 3.527s to 2.585s on Linux.
We currently buffer one byte of data from the underlying stream. And when
we pull bits off that buffer, we do so 1 or 8 bits at a time (depending
on whether the buffer is byte aligned). The 1-bit-at-a-time loop is by
far the most common during e.g. GZIP decompression.
This replaces the u8 buffer with a u64. And instead of looping at all,
we perform bitwise operations to extract the desired number of bits.
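A sketch of the extraction (assuming bits are kept LSB-first in the buffer):

    // Extract `count` bits (0 < count < 64) from the low end of
    // the buffer with shifts and a mask; no per-bit loop.
    u64 take_bits(u64& buffer, size_t& bit_count, size_t count)
    {
        u64 result = buffer & ((1ull << count) - 1);
        buffer >>= count;
        bit_count -= count;
        return result;
    }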
Using the "enwik8" file as a test (100MB uncompressed, commonly used in
benchmarks: https://www.mattmahoney.net/dc/enwik8.zip), decompression
time decreases from:
242s to 35s on Serenity
11.125s to 3.527s on Linux
Note that BigEndianInputBitStream can also use the same techniques,
and some of the methods here may make sense to live in an endianness-
agnostic base class. The focus is GZIP right now though, which only
uses the little endian stream.
We currently mix normal and bit streams during GZIP decompression, where
the latter is a wrapper around the former. This isn't causing issues now
as the underlying bit stream buffer is a byte, so the normal stream can
pick up where the bit stream left off.
In order to increase the size of that buffer though, the normal stream
will not be able to assume it can resume reading after the bit stream.
The buffer can easily contain more bits than it was meant to read, so
when the normal stream resumes, there may be N bits leftover in the bit
stream that the normal stream was meant to read.
To avoid weird behavior when mixing streams, this changes the GZIP
decompressor to always read from a bit stream.
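A sketch of what the byte-oriented reads become (assuming the bit stream can discard partial bytes and serve whole-byte reads):

    // Drop any partial byte, then read whole bytes through the
    // same bit stream so no buffered bits are lost.
    m_input_stream->align_to_byte_boundary();
    TRY(m_input_stream->read_until_filled(header_bytes));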
Previously we used a parsing context with no access to the document, so
any URLs in url() functions would become invalid.
Fixes the images on Steam's store carousel, which sets
Element.style.backgroundImage to url() functions.
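A sketch of the difference (type and accessor names approximate):

    // Before: a context with no document, so relative url()s
    // could not be resolved.
    CSS::Parser::ParsingContext context;

    // After: parse in the context of the element's document.
    CSS::Parser::ParsingContext context { element.document() };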
Some fonts (like the Bootstrap Icons webfont) have bogus glyph offsets
in the `loca` table that point past the end of the `glyf` table.
AFAICT other rasterizers simply ignore these glyphs and treat them as if
they were missing. So let's do the same.
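A sketch of the guard (names assumed):

    // A `loca` offset pointing past the end of `glyf` is bogus;
    // treat the glyph as missing rather than reading out of bounds.
    if (glyph_offset >= m_glyf_table.size())
        return {};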
This makes https://changelog.serenityos.org/ actually work! :^)
The link to documentation is buried near the bottom of the README,
and most people don't realize that we have documentation within the
repository because of this. This commit adds handy links that take you
to the appropriate parts of the README, which should improve
discoverability by a lot. The idea is inspired by other repositories
having a similar "common links" area at the top of their READMEs.
In situations where we need a width to calculate the intrinsic height of
a flex item, we use the fit-content width as a stand-in. However, we
also need to clamp it to any min-width and max-width properties present.
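A sketch of the clamping (the accessor names are hypothetical):

    // Fit-content width stands in for the used width, but must
    // still respect min-width / max-width.
    auto stand_in_width = clamp(fit_content_width,
        computed_min_width, computed_max_width);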
The spec tells us that for glyph offsets in the "loca" table, if an
offset appears twice in a row, the index of the first one refers to a
glyph without an outline (such as the space character). We didn't check
for this, which would cause us to render a glyph outline where there
should have been nothing.
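A sketch of the check (names assumed):

    // Two identical consecutive offsets mean this glyph has no
    // outline (e.g. the space character), so rasterize nothing.
    if (loca_offset(glyph_index) == loca_offset(glyph_index + 1))
        return {};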
This mirrors String::from_utf8(StringView).
Jakt will use this to construct strings instead of just assuming the
allocation will succeed, reducing the API difference between
Jakt::String and AK::String by one :^)