This change removes the wrappers inherited from Gfx::Typeface for WOFF
and WOFF2 fonts. The only purpose they served was owning the TTF
ByteBuffer produced by decoding a WOFF/WOFF2 font. Now the new FontData
class is responsible for holding the ByteBuffer when a font is
constructed from memory that is not externally owned.
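A minimal standalone sketch of that ownership split, with std containers
standing in for AK::ByteBuffer and spans of externally owned memory
(only the FontData name comes from this change; the rest is
illustrative):

```cpp
#include <cstdint>
#include <span>
#include <vector>

// Sketch: FontData either owns the decoded bytes (the WOFF/WOFF2 case)
// or merely views memory that someone else keeps alive. Either way,
// consumers only see a read-only span.
class FontData {
public:
    explicit FontData(std::vector<uint8_t> owned_bytes)
        : m_owned_bytes(std::move(owned_bytes))
    {
    }

    explicit FontData(std::span<uint8_t const> external_bytes)
        : m_external_bytes(external_bytes)
    {
    }

    std::span<uint8_t const> bytes() const
    {
        if (!m_owned_bytes.empty())
            return std::span<uint8_t const> { m_owned_bytes };
        return m_external_bytes;
    }

private:
    std::vector<uint8_t> m_owned_bytes;        // set when we decoded WOFF/WOFF2
    std::span<uint8_t const> m_external_bytes; // set for externally owned memory
};
```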
It currently doesn't support animated images.
Note that Gfx::Bitmap has no support for get_pixel when the format is
RGBA8888. This is why it has been removed from the tests.
Previously, `SVGSVGBox` would have a natural aspect ratio of 0 if it
had a viewBox with zero width. This led to a division by zero, causing
a crash.
Found by Domato.
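A hedged sketch of the kind of guard that avoids the division: report no
natural aspect ratio for a degenerate viewBox instead of dividing by
zero (function and parameter names are illustrative, not the actual
SVGSVGBox code):

```cpp
#include <optional>

// A zero-width or zero-height viewBox has no meaningful aspect ratio,
// so return nothing rather than dividing by zero.
std::optional<float> natural_aspect_ratio(float viewbox_width, float viewbox_height)
{
    if (viewbox_width <= 0 || viewbox_height <= 0)
        return std::nullopt;
    return viewbox_width / viewbox_height;
}
```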
Previously, calling `PaintableBox::set_scroll_offset()` on a
PaintableBox whose content size was larger than its scrollable overflow
rect would cause a crash.
Found by Domato.
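The general shape of fix for this kind of crash is to clamp the
requested offset against the scrollable overflow rect; a rough sketch
with illustrative types (the real code works with PaintableBox and
CSSPixels):

```cpp
#include <algorithm>

struct Size { float width { 0 }; float height { 0 }; };
struct Point { float x { 0 }; float y { 0 }; };

// Clamp the requested scroll offset so it never exceeds the scrollable
// range. When the content size exceeds the scrollable overflow rect,
// the maximum offset degenerates to zero instead of going negative.
Point clamp_scroll_offset(Point requested, Size scrollable_overflow, Size content)
{
    float max_x = std::max(0.0f, scrollable_overflow.width - content.width);
    float max_y = std::max(0.0f, scrollable_overflow.height - content.height);
    return {
        std::clamp(requested.x, 0.0f, max_x),
        std::clamp(requested.y, 0.0f, max_y),
    };
}
```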
The underlying CPU-specific instructions for operating on UTF-16 strings
behave differently for null inputs. Add an explicit check for this state
for consistency.
The underlying CPU-specific instructions for operating on UTF-8 strings
behave differently for null inputs. Add an explicit check for this state
for consistency.
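A minimal sketch of the idea behind both the UTF-8 and UTF-16 changes:
decide the null/empty case before handing the buffer to the CPU-specific
routine, so every platform agrees on the result (the wrapper and helper
below are hypothetical):

```cpp
#include <cstddef>

// Stand-in for the CPU-specific validator the real code dispatches to;
// declared only so the sketch can focus on the null-input guard.
bool vectorized_validate_utf8(char const* data, size_t length);

// Some vectorized implementations accept (nullptr, 0) while others do
// not, so handle that state explicitly for consistent behavior.
bool validate_utf8(char const* data, size_t length)
{
    if (data == nullptr || length == 0)
        return true; // An empty string is trivially valid UTF-8.
    return vectorized_validate_utf8(data, length);
}
```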
We currently display scroll bars for the JS console and its parent tab
container. We want the console output to be separately scrollable from
the tab content, but since both containers are scrollable, we end up
with nested scroll bars. This also makes actually scrolling feel pretty
awkward.
Prevent this by making the tab container non-scrollable when the JS
console is shown.
We had a const and a non-const version of this function, with slightly
different behavior (oops!).
This patch consolidates the implementations and keeps only the correct
behavior.
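One common way to keep a single implementation is to have the non-const
overload delegate to the const one; a sketch with a made-up class and
function, since the commit doesn't name the one involved:

```cpp
#include <vector>

// Hypothetical example: the const overload holds the one true
// implementation, and the non-const overload delegates to it instead of
// duplicating (and potentially diverging from) the logic.
class Registry {
public:
    int const* find(int key) const
    {
        for (auto const& entry : m_entries) {
            if (entry.key == key)
                return &entry.value;
        }
        return nullptr;
    }

    int* find(int key)
    {
        return const_cast<int*>(static_cast<Registry const&>(*this).find(key));
    }

private:
    struct Entry { int key { 0 }; int value { 0 }; };
    std::vector<Entry> m_entries;
};
```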
Fixes an issue where comments were not collapsible on Hacker News.
The Skia painter is visibly faster than the LibGfx painter and has more
complete support for CSS transforms. With this change (backend selection
is sketched below):
- On Linux, it will try to use the Vulkan backend, with a fallback to
  the CPU backend
- On macOS, it will try to use the Metal backend, with a fallback to
  the CPU backend
- headless-browser always runs with the CPU backend in layout mode
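A rough sketch of that platform-dependent selection with a CPU fallback
(the enum and probe functions are illustrative, not the actual Ladybird
API):

```cpp
#include <optional>

enum class PaintingBackend { VulkanGPU, MetalGPU, CPU };

// Hypothetical probes: the real code would try to create a Vulkan or
// Metal context and report failure if the device is unavailable.
std::optional<PaintingBackend> try_create_vulkan_backend();
std::optional<PaintingBackend> try_create_metal_backend();

PaintingBackend choose_backend(bool is_headless_layout_mode)
{
    if (is_headless_layout_mode)
        return PaintingBackend::CPU; // headless-browser layout mode stays on CPU.
#if defined(__linux__)
    if (auto backend = try_create_vulkan_backend(); backend.has_value())
        return *backend;
#elif defined(__APPLE__)
    if (auto backend = try_create_metal_backend(); backend.has_value())
        return *backend;
#endif
    return PaintingBackend::CPU; // Fall back when no GPU backend is available.
}
```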
LibGfx's output is consistent across different platforms, which allows
us to have one set of expectations for screenshot tests. This
consistency will not hold for Skia, where features like antialiasing and
gradient color interpolation vary slightly depending on the platform. In
upcoming changes, we are going to switch to using Skia as the default
painter, which leaves us with the following options:
- Have per-platform screenshot test expectations.
- Limit screenshot tests to run only on one platform and maintain a
single set of expectation files.
For now, I have chosen the latter option, using Linux, as it seems to
be the most popular platform among developers.
These tests work with the LibGfx painter but will no longer work after
switching to Skia, because it produces slightly different antialiasing,
rounding in color blending, etc.
These were being immediately stored in JS::GCPtrs (and dutifully visited
by HTMLParser), so creating temporary handles for them was a complete
waste of time.
When loading a canned version of reddit.com, we end up parsing many,
many shadow tree style sheets of roughly 170 KiB of text each.
None of them have '\r' or '\f', yet we spend 2-3 ms for each sheet just
looping over and reconstructing the text to see if we need to normalize
any newlines.
This patch makes the common case faster in two ways:
- We use TextCodec::Decoder::to_utf8() instead of process()
This way, we do a one-shot fast validation and conversion to UTF-8,
instead of using the generic code-point-at-a-time callback API.
- We scan for '\r' and '\f' before filtering, and if neither is present,
we simply use the unfiltered string.
With these changes, we now spend 0 ms in the filtering function for the
vast majority of style sheets I've seen so far.
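A minimal sketch of that fast path, with std strings standing in for the
AK types the real code uses; the slow path is reduced to the
CR/FF-to-newline normalization described here, with other preprocessing
details omitted:

```cpp
#include <string>
#include <string_view>

std::string normalize_newlines(std::string_view input)
{
    // Fast path: no '\r' and no '\f' means there is nothing to filter,
    // so hand back the text without rebuilding it character by character.
    if (input.find('\r') == std::string_view::npos
        && input.find('\f') == std::string_view::npos)
        return std::string { input };

    // Slow path: rebuild the text, collapsing "\r\n" and mapping lone
    // '\r' and '\f' to '\n'.
    std::string output;
    output.reserve(input.size());
    for (size_t i = 0; i < input.size(); ++i) {
        char c = input[i];
        if (c == '\r') {
            output += '\n';
            if (i + 1 < input.size() && input[i + 1] == '\n')
                ++i;
        } else if (c == '\f') {
            output += '\n';
        } else {
            output += c;
        }
    }
    return output;
}
```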
This way, we still perform UTF-8 validation, but don't go through the
slow generic code path that rebuilds the decoded string one code point
at a time.
This was a bottleneck when loading a canned copy of reddit.com, which
ended up being ~120 MiB large.
- Time spent decoding UTF-8 before this change: 1192 ms
- Time spent decoding UTF-8 after this change: 154 ms
That's still a long time, but 7.7x faster is nothing to sneeze at! :^)
Note that if the input fails UTF-8 validation, we still fall back to
the slow path and insert replacement characters per the WHATWG Encoding
spec: https://encoding.spec.whatwg.org/#utf-8-decode
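The shape of the change, as described above: try a one-shot
validate-and-copy first, and only fall back to the per-code-point path
(which substitutes U+FFFD per the Encoding spec) when validation fails.
Standard C++ stand-ins below, not the actual TextCodec::Decoder
interface:

```cpp
#include <string>
#include <string_view>

// Stand-ins: a whole-buffer validator (cheap, often vectorized) and the
// slow decoder that inserts U+FFFD for invalid sequences, per
// https://encoding.spec.whatwg.org/#utf-8-decode.
bool validate_utf8(std::string_view input);
std::string decode_utf8_with_replacement(std::string_view input);

std::string to_utf8(std::string_view input)
{
    // Fast path: the input is already valid UTF-8, so one validation
    // pass plus a plain copy replaces the code-point-at-a-time rebuild.
    if (validate_utf8(input))
        return std::string { input };

    // Slow path: rebuild the string with replacement characters.
    return decode_utf8_with_replacement(input);
}
```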