Skia now uses GPU-accelerated painting on Linux if Vulkan is available.
Most of the performance gain is currently negated by reading the GPU
backend's output back into RAM so it can be passed to the Browser
process. In the future,
this could be improved by sharing GPU-allocated memory across the
Browser and WebContent processes.
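To make the readback cost concrete, here is a minimal sketch (not the
actual Ladybird code; the helper name and include paths are
illustrative) of copying a GPU-backed Skia surface into CPU memory
before it can be sent over IPC:

```cpp
#include <core/SkBitmap.h>
#include <core/SkSurface.h>

// Sketch only: after painting into a GPU-backed SkSurface, the pixels
// must be copied back into RAM before the Browser process can use them.
static bool read_back_to_cpu(SkSurface& gpu_surface, SkBitmap& destination)
{
    // Allocate a CPU-side bitmap matching the surface's dimensions/format.
    destination.allocPixels(gpu_surface.imageInfo());

    // This forces a GPU -> RAM copy; sharing GPU-allocated memory between
    // the Browser and WebContent processes would avoid it.
    return gpu_surface.readPixels(destination, /* srcX */ 0, /* srcY */ 0);
}
```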
The GPU painter that uses AccelGfx is slower and far less complete than
both the default Gfx::Painter and the Skia painter. It does not make
much sense to keep it, considering the Skia painter already uses the
Metal backend on macOS by default and there is an option to enable the
GPU-accelerated backend on Linux.
This large commit also refactors LibWebView's process handling around a
top-level Application class, which uses a new WebView::Process class to
encapsulate the IPC-centric nature of each helper process.
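Purely as an illustration of that structure (the class and member names
below are hypothetical, not the real LibWebView interfaces), the shape
is roughly:

```cpp
#include <sys/types.h>

#include <utility>
#include <vector>

// Hypothetical sketch; the actual WebView::Process and Application
// classes differ.
class Process {
public:
    enum class Type { RequestServer, ImageDecoder, WebContent };

    Process(Type type, pid_t pid)
        : m_type(type)
        , m_pid(pid)
    {
    }

    Type type() const { return m_type; }
    pid_t pid() const { return m_pid; }

private:
    Type m_type;
    pid_t m_pid { -1 };
    // In the real class, the IPC connection to the helper process also
    // lives here, so callers deal with one object per helper process.
};

class Application {
public:
    void add_process(Process process) { m_processes.push_back(std::move(process)); }

private:
    std::vector<Process> m_processes;
};
```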
We currently build both debug and release versions of our vcpkg
dependencies. We most commonly only need the release version, so let's
default to that and approximately halve our dependency build time.
VP9 continues to function, but this also allows AV1 to be decoded. With
this commit, H.264 is still non-functional, as the decoder requires
some extra initial data from the track definition in the Matroska file.
For SerenityOS, we parse emoji metadata from the UCD to learn emoji
groups, subgroups, names, etc. We used this information only in the
emoji picker dialog. It is entirely unused within Ladybird.
This removes our dependence on the UCD emoji file, as we no longer
need any of its information. All we need to know is the file path to
our custom emoji, which we get from Meta/emoji-file-list.txt.
There are a couple of differences here due to using ICU:
1. Titlecasing behaves slightly differently. We previously transformed
"123dollars" to "123Dollars", as we would use word segmentation to
split a string into words, then transform the first cased character
to titlecase. ICU doesn't go quite that far, and leaves the string
as "123dollars" (see the sketch after this list). While this is a
behavior change, the only user of this API is the
`text-transform: capitalize;` CSS rule, and we now match the behavior
of other browsers.
2. There isn't an API to compare strings with case insensitivity without
allocating case-folded strings for both the left- and right-hand-side
strings. Our implementation was previously allocation-free; however,
in a benchmark, ICU is still ~1.4x faster.
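As a rough illustration of point 1, the ICU call involved looks
something like this (a simplified sketch with a made-up function name,
not the actual LibUnicode wrapper):

```cpp
#include <unicode/brkiter.h>
#include <unicode/unistr.h>

#include <string>

std::string titlecase_with_icu(std::string const& input)
{
    auto string = icu::UnicodeString::fromUTF8(input);

    // A null BreakIterator means ICU uses its default word boundaries.
    // Per the behavior described above, "123dollars" comes back
    // unchanged, whereas our previous word-segmentation approach
    // produced "123Dollars".
    string.toTitle(nullptr);

    std::string result;
    string.toUTF8String(result);
    return result;
}
```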
This changes the Sanitizer configs to build all of the vcpkg
dependencies with our specified CFLAGS and CXXFLAGS for ASAN and UBSAN.
Unfortunately, we can't yet actually compile them with the sanitizers
enabled, because doing so causes test failures that need to be
investigated.
This uses ICU for all of the Intl.PluralRules prototypes, which lets us
remove all data from our plural rules generator.
Plural rules depend directly on internal data from the number formatter,
so rather than creating a separate Locale::PluralRules class (which would
make accessing that data awkward), this adds plural rules APIs to the
existing Locale::NumberFormat.
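For context, the relevant ICU surface looks roughly like the sketch
below (error handling omitted; this is not the actual
Locale::NumberFormat code). It also shows why plural selection wants a
formatted number rather than a raw double:

```cpp
#include <unicode/locid.h>
#include <unicode/numberformatter.h>
#include <unicode/plurrule.h>
#include <unicode/unistr.h>
#include <unicode/upluralrules.h>

#include <memory>

icu::UnicodeString plural_category(icu::Locale const& locale, double value)
{
    UErrorCode status = U_ZERO_ERROR;

    std::unique_ptr<icu::PluralRules> rules(
        icu::PluralRules::forLocale(locale, UPLURAL_TYPE_CARDINAL, status));

    // Selecting on the formatted number (not the raw double) lets ICU
    // honor formatting options such as minimum fraction digits, which is
    // why plural rules live alongside the number formatter.
    auto formatted = icu::number::NumberFormatter::withLocale(locale)
                         .formatDouble(value, status);

    return rules->select(formatted, status); // e.g. "one", "few", "other"
}
```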
This uses ICU for the Intl.NumberFormat `format` and `formatToParts`
prototypes. It does not yet port the range formatter prototypes.
Most of the new code in LibLocale/NumberFormat is simply mapping from
ECMA-402 types to ICU types. Beyond that, the only algorithmic change is
that we have to mutate the output from ICU for `formatToParts` to match
what is expected by ECMA-402. This is explained in NumberFormat.cpp in
`flatten_partitions`.
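For reference, formatting and walking the raw parts with ICU looks
roughly like the following sketch (simplified, with made-up names; the
real code then remaps and flattens these fields into ECMA-402 part
types in `flatten_partitions`):

```cpp
#include <unicode/formattedvalue.h>
#include <unicode/locid.h>
#include <unicode/numberformatter.h>

#include <vector>

struct RawPart {
    int32_t field { -1 };
    int32_t start { 0 };
    int32_t end { 0 };
};

std::vector<RawPart> format_to_raw_parts(icu::Locale const& locale, double value)
{
    UErrorCode status = U_ZERO_ERROR;

    auto formatted = icu::number::NumberFormatter::withLocale(locale)
                         .formatDouble(value, status);

    std::vector<RawPart> parts;
    icu::ConstrainedFieldPosition position;

    // Each position is a (field, range) pair; the ranges can nest or
    // overlap, which is why the output has to be flattened before it
    // matches what formatToParts is expected to return.
    while (formatted.nextPosition(position, status))
        parts.push_back({ position.getField(), position.getStart(), position.getLimit() });

    return parts;
}
```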
This lets us remove most of the data from our number format generator.
All that remains are the numbering system digits and symbols, which are
still relied upon by other interfaces (e.g. Intl.DateTimeFormat), so
they will be removed in a future patch.
Note: All of the changes to the test files in this patch are now aligned
with both Chrome and Safari.
Note: We keep locale parsing and syntactic validation as-is. ECMA-402
places additional restrictions on locales above what is required by the
Unicode spec. ICU doesn't provide methods that let us easily check those
restrictions, whereas LibLocale does. Other browsers also implement
their own validators here.
This introduces a locale cache to re-use parsed locale data and various
related structures (not doing so has a non-negligible performance impact
on Intl tests).
The existing APIs for canonicalization and display names are pretty
intertwined, so they must both be adapted at once here. The results of
canonicalization differ slightly in some edge cases, but the changed
results are now aligned with Chrome and Safari.
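The ICU side of that canonicalization is roughly the following sketch
(simplified, with an invented helper name; the syntactic validation
mentioned above still happens in LibLocale before ICU sees the tag):

```cpp
#include <unicode/locid.h>

#include <optional>
#include <string>

std::optional<std::string> canonicalize_locale_tag(std::string const& tag)
{
    UErrorCode status = U_ZERO_ERROR;

    // Parse the BCP 47 language tag.
    auto locale = icu::Locale::forLanguageTag(tag, status);
    if (U_FAILURE(status))
        return std::nullopt;

    // Apply ICU's canonicalization rules (alias replacement, casing,
    // subtag ordering, ...), then serialize back to a language tag.
    locale.canonicalize(status);
    auto canonical = locale.toLanguageTag<std::string>(status);
    if (U_FAILURE(status))
        return std::nullopt;

    return canonical;
}
```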
If we get suggestions from fontconfig, we try those fonts first before
falling back on the hard-coded list of known suitable fonts for each
generic family.
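A minimal sketch of asking fontconfig for those suggestions
(illustrative only; the actual lookup and fallback wiring in the code
differs):

```cpp
#include <fontconfig/fontconfig.h>

#include <string>
#include <vector>

// Sketch only: ask fontconfig which concrete fonts it would pick for a
// generic family such as "sans-serif", in preference order.
std::vector<std::string> fontconfig_suggestions(char const* generic_family)
{
    std::vector<std::string> suggestions;

    FcPattern* pattern = FcNameParse(reinterpret_cast<FcChar8 const*>(generic_family));
    FcConfigSubstitute(nullptr, pattern, FcMatchPattern);
    FcDefaultSubstitute(pattern);

    FcResult result {};
    if (FcFontSet* matches = FcFontSort(nullptr, pattern, FcTrue, nullptr, &result)) {
        for (int i = 0; i < matches->nfont; ++i) {
            FcChar8* family = nullptr;
            if (FcPatternGetString(matches->fonts[i], FC_FAMILY, 0, &family) == FcResultMatch)
                suggestions.push_back(reinterpret_cast<char const*>(family));
        }
        FcFontSetDestroy(matches);
    }

    FcPatternDestroy(pattern);
    return suggestions;
}
```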