We had a const and a non-const version of this function, with slightly
different behavior (oops!).
This patch consolidates the two implementations and keeps only the
correct behavior.
Fixes an issue where comments were not collapsible on Hacker News.
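For illustration only (this is not the function from the patch), one
common way to consolidate const and non-const overloads so the logic
lives in exactly one place is the const_cast forwarding pattern:

    #include <cstddef>
    #include <vector>

    class Container {
    public:
        int const& at(std::size_t index) const
        {
            // The single, canonical implementation lives in the const
            // overload.
            return m_values.at(index);
        }

        int& at(std::size_t index)
        {
            // The non-const overload forwards to it instead of
            // duplicating (and possibly diverging from) the logic.
            return const_cast<int&>(static_cast<Container const&>(*this).at(index));
        }

    private:
        std::vector<int> m_values;
    };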
The Skia painter is visibly faster than the LibGfx painter and has more
complete support for CSS transforms. With this change (the backend
selection is sketched below):
- On Linux, it tries to use the Vulkan backend, with a fallback to the
  CPU backend
- On macOS, it tries to use the Metal backend, with a fallback to the
  CPU backend
- headless-browser always runs with the CPU backend in layout mode
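For illustration, here is a minimal sketch of that selection logic (the
probe helpers are placeholders; the real code attempts to create the
actual Vulkan/Metal context):

    enum class SkiaBackend {
        CPU,
        Vulkan,
        Metal,
    };

    // Placeholder probes; a real implementation would try to create a
    // Vulkan or Metal GPU context and report whether that succeeded.
    static bool try_create_vulkan_backend() { return false; }
    static bool try_create_metal_backend() { return false; }

    static SkiaBackend choose_skia_backend(bool is_headless_layout_mode)
    {
        // headless-browser in layout mode always paints on the CPU so
        // that results stay deterministic.
        if (is_headless_layout_mode)
            return SkiaBackend::CPU;

    #if defined(__linux__)
        if (try_create_vulkan_backend())
            return SkiaBackend::Vulkan;
    #elif defined(__APPLE__)
        if (try_create_metal_backend())
            return SkiaBackend::Metal;
    #endif

        // Fall back to CPU rasterization when no GPU backend is
        // available.
        return SkiaBackend::CPU;
    }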
LibGfx's output is consistent across different platforms, which allows
us to have one set of expectations for screenshot tests. This
consistency will not hold for Skia, where features like antialiasing and
gradient color interpolation vary slightly depending on the platform. In
upcoming changes, we are going to switch to using Skia as the default
painter, which leaves us with the following options:
- Have per-platform screenshot test expectations.
- Limit screenshot tests to run only on one platform and maintain a
single set of expectation files.
For now, I have decided to choose the latter option, using Linux as it
seems to be the most popular platform among developers.
These tests work with the LibGfx painter but will no longer work after
switching to Skia, because Skia produces slightly different
antialiasing, rounding in color blending, etc.
These were being immediately stored in JS::GCPtrs (and dutifully visited
by HTMLParser), so creating temporary handles for them was a complete
waste of time.
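Purely as an illustration (hypothetical GC types, not the real LibJS
API), the pattern looks like this: a member that the owning object
already visits during GC marking stays alive without any temporary
handle, so the handles we were creating were redundant work.

    struct Visitor {
        template<typename T>
        void visit(T* ptr) { (void)ptr; /* mark ptr as reachable */ }
    };

    template<typename T>
    class GCPtr {
    public:
        GCPtr& operator=(T* ptr) { m_ptr = ptr; return *this; }
        T* ptr() const { return m_ptr; }
    private:
        T* m_ptr { nullptr };
    };

    struct Node { };

    class Parser {
    public:
        void adopt(Node& node)
        {
            // No temporary handle needed: m_node is visited in
            // visit_edges(), so the GC already sees it through this
            // object.
            m_node = &node;
        }

        void visit_edges(Visitor& visitor) { visitor.visit(m_node.ptr()); }

    private:
        GCPtr<Node> m_node;
    };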
When loading a canned version of reddit.com, we end up parsing many,
many shadow tree style sheets of roughly 170 KiB of text each.
None of them have '\r' or '\f', yet we spend 2-3 ms for each sheet just
looping over and reconstructing the text to see if we need to normalize
any newlines.
This patch makes the common case faster in two ways:
- We use TextCodec::Decoder::to_utf8() instead of process(). This way,
  we do a one-shot fast validation and conversion to UTF-8, instead of
  using the generic code-point-at-a-time callback API.
- We scan for '\r' and '\f' before filtering, and if neither is present,
we simply use the unfiltered string.
With these changes, we now spend 0 ms in the filtering function for the
vast majority of style sheets I've seen so far.
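As a rough sketch of the second optimization (standalone std types, not
the actual LibWeb code), the scan-before-filter fast path looks roughly
like this:

    #include <cstddef>
    #include <string>
    #include <string_view>

    std::string filter_code_points(std::string_view input)
    {
        // Fast path: no '\r' and no '\f' means there is nothing to
        // normalize, so return the input unchanged.
        if (input.find_first_of("\r\f") == std::string_view::npos)
            return std::string(input);

        // Slow path: normalize "\r\n", lone '\r', and '\f' to '\n'.
        std::string output;
        output.reserve(input.size());
        for (std::size_t i = 0; i < input.size(); ++i) {
            char c = input[i];
            if (c == '\r') {
                if (i + 1 < input.size() && input[i + 1] == '\n')
                    ++i; // collapse "\r\n" into a single '\n'
                output += '\n';
            } else if (c == '\f') {
                output += '\n';
            } else {
                output += c;
            }
        }
        return output;
    }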
This way, we still perform UTF-8 validation, but don't go through the
slow generic code path that rebuilds the decoded string one code point
at a time.
This was a bottleneck when loading a canned copy of reddit.com, which
ended up being ~120 MiB in size.
- Time spent decoding UTF-8 before this change: 1192 ms
- Time spent decoding UTF-8 after this change: 154 ms
That's still a long time, but 7.7x faster is nothing to sneeze at! :^)
Note that if the input fails UTF-8 validation, we still fall back to
the slow path and insert replacement characters per the WHATWG Encoding
spec: https://encoding.spec.whatwg.org/#utf-8-decode
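For illustration (standalone code, not the actual TextCodec
implementation), the shape of the fast path is: validate the whole
input in one pass, hand it over as-is when it is already well-formed
UTF-8, and only rebuild the string with replacement characters when
validation fails.

    #include <cstddef>
    #include <string>
    #include <string_view>

    // Returns the byte length of the well-formed UTF-8 sequence
    // starting at index i, or 0 if the bytes there are not well-formed.
    static std::size_t utf8_sequence_length(std::string_view input, std::size_t i)
    {
        auto byte = static_cast<unsigned char>(input[i]);
        std::size_t continuation_count = 0;
        unsigned code_point = 0;
        unsigned minimum = 0;
        if (byte < 0x80)
            return 1;
        if ((byte & 0xE0) == 0xC0) {
            continuation_count = 1; code_point = byte & 0x1F; minimum = 0x80;
        } else if ((byte & 0xF0) == 0xE0) {
            continuation_count = 2; code_point = byte & 0x0F; minimum = 0x800;
        } else if ((byte & 0xF8) == 0xF0) {
            continuation_count = 3; code_point = byte & 0x07; minimum = 0x10000;
        } else {
            return 0;
        }
        if (i + continuation_count >= input.size())
            return 0;
        for (std::size_t j = 1; j <= continuation_count; ++j) {
            auto continuation = static_cast<unsigned char>(input[i + j]);
            if ((continuation & 0xC0) != 0x80)
                return 0;
            code_point = (code_point << 6) | (continuation & 0x3F);
        }
        if (code_point < minimum || code_point > 0x10FFFF
            || (code_point >= 0xD800 && code_point <= 0xDFFF))
            return 0;
        return continuation_count + 1;
    }

    std::string utf8_decode(std::string_view input)
    {
        // Fast path: one validation pass; if everything is well-formed,
        // a single copy of the input is the decoded result.
        bool valid = true;
        for (std::size_t i = 0; i < input.size();) {
            auto length = utf8_sequence_length(input, i);
            if (length == 0) {
                valid = false;
                break;
            }
            i += length;
        }
        if (valid)
            return std::string(input);

        // Slow path (simplified): rebuild the string, replacing invalid
        // bytes with U+FFFD. The spec algorithm is more precise about
        // maximal invalid subparts; see
        // https://encoding.spec.whatwg.org/#utf-8-decode
        std::string output;
        output.reserve(input.size());
        for (std::size_t i = 0; i < input.size();) {
            auto length = utf8_sequence_length(input, i);
            if (length == 0) {
                output += "\xEF\xBF\xBD"; // U+FFFD REPLACEMENT CHARACTER
                i += 1;
            } else {
                output.append(input.substr(i, length));
                i += length;
            }
        }
        return output;
    }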
This change makes OpenType::Name::string_for_id handle fonts whose names
are UTF-16-encoded (along with handling UTF-8-encoded names).
Without this change, the code assumes the names are UTF-8-encoded and
crashes gracelessly when they're not.
Fixes https://github.com/LadybirdBrowser/ladybird/issues/75
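A rough sketch of the idea (standalone helpers, not the actual
OpenType::Name code): name table records declare a platform ID, and
records for the Unicode (0) and Windows (3) platforms store their
strings as UTF-16BE, so we pick the decoder accordingly instead of
assuming UTF-8.

    #include <cstddef>
    #include <cstdint>
    #include <span>
    #include <string>

    enum class NamePlatform : std::uint16_t {
        Unicode = 0,
        Macintosh = 1,
        Windows = 3,
    };

    // Decodes big-endian UTF-16 bytes into UTF-8. Unpaired surrogates
    // are passed through unchanged, for brevity.
    static std::string utf16be_to_utf8(std::span<std::uint8_t const> bytes)
    {
        std::string result;
        auto append_utf8 = [&](char32_t cp) {
            if (cp < 0x80) {
                result += static_cast<char>(cp);
            } else if (cp < 0x800) {
                result += static_cast<char>(0xC0 | (cp >> 6));
                result += static_cast<char>(0x80 | (cp & 0x3F));
            } else if (cp < 0x10000) {
                result += static_cast<char>(0xE0 | (cp >> 12));
                result += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
                result += static_cast<char>(0x80 | (cp & 0x3F));
            } else {
                result += static_cast<char>(0xF0 | (cp >> 18));
                result += static_cast<char>(0x80 | ((cp >> 12) & 0x3F));
                result += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
                result += static_cast<char>(0x80 | (cp & 0x3F));
            }
        };
        for (std::size_t i = 0; i + 1 < bytes.size(); i += 2) {
            char32_t unit = (bytes[i] << 8) | bytes[i + 1];
            // Combine a surrogate pair into a single code point.
            if (unit >= 0xD800 && unit <= 0xDBFF && i + 3 < bytes.size()) {
                char32_t low = (bytes[i + 2] << 8) | bytes[i + 3];
                if (low >= 0xDC00 && low <= 0xDFFF) {
                    unit = 0x10000 + ((unit - 0xD800) << 10) + (low - 0xDC00);
                    i += 2;
                }
            }
            append_utf8(unit);
        }
        return result;
    }

    std::string decode_name_record(NamePlatform platform, std::span<std::uint8_t const> bytes)
    {
        if (platform == NamePlatform::Unicode || platform == NamePlatform::Windows)
            return utf16be_to_utf8(bytes);
        // Fall back to treating the bytes as UTF-8/ASCII for other
        // platforms.
        return std::string(bytes.begin(), bytes.end());
    }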
This is the expected behavior per the HTML spec. Fixes an issue where
styling these elements wouldn't have the expected effect unless you also
set the display property.
This change makes the notes-push.yml workflow fetch the entire history
on each push, rather than just the one HEAD commit it otherwise sees.
Because we push multiple commits from PR merges, git-gloss needs access
on each push to an arbitrary number of commits (however many commits
were pushed in a PR that got merged). There's no easy way from GH
Actions to know how many commits were pushed at the same time, so we
instead fetch the whole history.
For performance, rather than slowly incrementing the capacity of the
rope string's buffer, compute an approximate length for that buffer to
be reserved up front.
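A minimal sketch of the idea (plain std types instead of the actual
rope/string classes): sum the segment lengths first so the output
buffer is allocated once rather than regrown repeatedly.

    #include <cstddef>
    #include <numeric>
    #include <string>
    #include <vector>

    std::string flatten_rope(std::vector<std::string> const& segments)
    {
        // Approximate the final length up front...
        auto total_length = std::accumulate(
            segments.begin(), segments.end(), std::size_t { 0 },
            [](std::size_t sum, std::string const& segment) {
                return sum + segment.size();
            });

        // ...so the buffer is reserved once instead of being repeatedly
        // regrown while appending.
        std::string result;
        result.reserve(total_length);
        for (auto const& segment : segments)
            result += segment;
        return result;
    }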
Currently, invoking StringBuilder::to_string will re-allocate the string
data to construct the String. This is wasteful both in terms of memory
and speed.
The goal here is to simply hand the string buffer over to String, and
let String take ownership of that buffer. To do this, StringBuilder must
have the same memory layout as Detail::StringData. This layout is just
the members of the StringData class followed by the string itself.
So when a StringBuilder is created, we reserve sizeof(StringData) bytes
at the front of the buffer. StringData can then construct itself into
the buffer with placement new.
Things to note:
* StringData must now be aware of the actual capacity of its buffer, as
that can be larger than the string size.
* We must take care not to pass ownership of inlined string buffers, as
these live on the stack.
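A simplified sketch of that layout trick (hypothetical Header/Builder
types, not the real AK::StringBuilder and Detail::StringData, and
ignoring the inline stack buffer mentioned above): the builder reserves
room for the header at the front of its buffer, so finishing the string
is just a placement new of the header followed by handing over the
allocation, with no copy of the character data.

    #include <cstddef>
    #include <cstdlib>
    #include <cstring>
    #include <new>

    struct Header {
        std::size_t length;   // number of characters after the header
        std::size_t capacity; // full buffer size; may exceed the length

        char const* characters() const
        {
            // The character data follows the header immediately.
            return reinterpret_cast<char const*>(this + 1);
        }
    };

    class Builder {
    public:
        explicit Builder(std::size_t initial_capacity = 64)
            : m_capacity(sizeof(Header) + initial_capacity)
            , m_buffer(static_cast<char*>(std::malloc(m_capacity)))
            , m_size(sizeof(Header)) // reserve header space up front
        {
        }

        ~Builder() { std::free(m_buffer); }

        void append(char const* text, std::size_t length)
        {
            while (m_size + length > m_capacity) {
                m_capacity *= 2;
                m_buffer = static_cast<char*>(std::realloc(m_buffer, m_capacity));
            }
            std::memcpy(m_buffer + m_size, text, length);
            m_size += length;
        }

        // Transfers ownership: construct the header in place and return
        // the whole allocation as the finished string.
        Header* to_string()
        {
            auto* header = new (m_buffer) Header { m_size - sizeof(Header), m_capacity };
            m_buffer = nullptr; // the string now owns the allocation
            m_size = 0;
            return header;
        }

    private:
        std::size_t m_capacity { 0 };
        char* m_buffer { nullptr };
        std::size_t m_size { 0 };
    };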
Instead of allowing arbitrarily large values (which could eventually
overflow an i32), let's just cap them at the same limit as Firefox does.
Found by Domato.
For each commit pushed/merged to master, this change:
- auto-generates new git notes with GitHub PR/issue/reviewer/author
  links for that commit
- pushes the updated refs/notes/commits tree+references back to the
  repo