This will help a lot when developing chromes for different UI
frameworks, as we can now see which helper classes and processes really
use Qt, versus those that merely use it to access helper data.
As a bonus, remove the Qt dependency from WebDriver.
This will ensure that we don't leak any memory while playing back
audio.
For the moment, there is an expectation value in the test that is only
set to true when PulseAudio is present. When any new implementation is
added for other libraries/platforms, we should hopefully get a CI
failure due to unexpected success in creating the `PlaybackStream`.
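As a minimal sketch of the idea, with made-up macro and variable names
(the real test and build-time define differ):

    // HAVE_PULSEAUDIO is a hypothetical define standing in for "a
    // PulseAudio implementation exists on this platform".
    #if defined(HAVE_PULSEAUDIO)
    static constexpr bool expect_stream_creation_to_succeed = true;
    #else
    static constexpr bool expect_stream_creation_to_succeed = false;
    #endif
    // The test then compares creation success against this expectation,
    // so a newly added backend flips the result to "unexpected success"
    // until the expectation is updated.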
To ensure that we clean up our PulseAudio connection whenever audio
output is not needed, add `PulseAudioContext::weak_instance()` to allow
us to check whether an instance exists without creating one.
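A rough portable-C++ sketch of the weak-instance pattern (the real class
uses the codebase's own primitives; names and details here are
assumptions):

    #include <memory>
    #include <mutex>

    class PulseAudioContext {
    public:
        // Creates the shared instance on first use.
        static std::shared_ptr<PulseAudioContext> instance()
        {
            std::lock_guard lock { s_mutex };
            auto strong = s_weak_instance.lock();
            if (!strong) {
                strong = std::shared_ptr<PulseAudioContext>(new PulseAudioContext());
                s_weak_instance = strong;
            }
            return strong;
        }

        // Lets callers check for (and act on) an existing instance
        // without creating one, e.g. to tear down the server connection
        // once audio output is no longer needed.
        static std::weak_ptr<PulseAudioContext> weak_instance()
        {
            std::lock_guard lock { s_mutex };
            return s_weak_instance;
        }

    private:
        PulseAudioContext() = default;

        static inline std::mutex s_mutex;
        static inline std::weak_ptr<PulseAudioContext> s_weak_instance;
    };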
We got some errors about this interface being missing while loading
https://twinings.co.uk/, and it looked fairly simple, so I sketched it
out.
Note that I did leave some FIXMEs where it's not clear exactly which
metrics we should be returning.
This object is available as `window.internals` (or just `internals`) and
is only accessible while running in "test mode".
This first version only has one API: gc(), which triggers a garbage
collection immediately.
In the future, we can add more APIs here to help us test parts of the
engine that are hard or impossible to reach via public web APIs.
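A hypothetical sketch of the shape of the object (the real Internals is
an IDL-backed LibWeb object; names and details are assumptions):

    // Only installed on the global object when running in "test mode",
    // so regular pages never see it.
    class Internals {
    public:
        // Reached from JavaScript as internals.gc().
        void gc()
        {
            // Trigger a full, immediate garbage collection of the JS
            // heap here.
        }
    };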
The pretty-print gdb helpers have not been updated since the
DeprecatedString change. This commit introduces a new printer for String
and renames the old one to DeprecatedString.
This adds an abstract `Audio::PlaybackStream` class to allow cross-
platform audio playback to be done in an opaque manner by applications
in both Serenity and Lagom.
Currently, the only supported audio API is PulseAudio, but a Serenity
implementation should be added shortly as well.
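A rough sketch of such an interface (method names are illustrative
assumptions, not the actual Audio::PlaybackStream API):

    // Applications program against the abstract type; each audio API
    // (PulseAudio now, Serenity later) provides a concrete subclass.
    class PlaybackStream {
    public:
        virtual ~PlaybackStream() = default;

        virtual void resume() = 0;
        virtual void drain_and_suspend() = 0;
        virtual void set_volume(double volume) = 0;
    };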
Since the existing Promise class is designed around deferred tasks on
the main thread only, we need a new class that ensures we can handle
promises that are resolved or rejected off the main thread.
This new class ensures that the callbacks are only called on the same
thread that fulfills the promise. If the callbacks are not set before
that thread tries to fulfill the promise, it will spin until they are
set, so that they run on that thread.
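A minimal portable-C++ sketch of the spin-until-callbacks-are-set idea
(names and details are assumptions, not the actual class):

    #include <atomic>
    #include <functional>
    #include <thread>

    template<typename T>
    class ThreadedPromise {
    public:
        // Consumer side: install the callback that should run when the
        // promise is resolved.
        void when_resolved(std::function<void(T const&)> callback)
        {
            m_callback = std::move(callback);
            m_has_callback.store(true, std::memory_order_release);
        }

        // Producer side: runs the callback on *this* (the fulfilling)
        // thread, spinning until the consumer has installed it.
        void resolve(T value)
        {
            while (!m_has_callback.load(std::memory_order_acquire))
                std::this_thread::yield();
            m_callback(value);
        }

    private:
        std::atomic<bool> m_has_callback { false };
        std::function<void(T const&)> m_callback;
    };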
Rework the write_if_changed logic to properly truncate the output file.
The original logic would not truncate the file, leaving stale junk at
the end of files if the IDL files shrank between builds.
Commit 3dd3120a8a changed the open mode from Write to ReadWrite, which
stopped files from being truncated.
This could be noticed by building a slightly older branch, as the stale
trailing content caused compile errors.
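The intended behaviour, sketched with the standard library rather than
the codebase's own file APIs:

    #include <fstream>
    #include <sstream>
    #include <string>

    static void write_if_changed(std::string const& path, std::string const& new_content)
    {
        {
            std::ifstream in(path);
            std::stringstream existing;
            existing << in.rdbuf();
            // Skip the write when nothing changed, so downstream build
            // steps don't see a new mtime.
            if (in && existing.str() == new_content)
                return;
        }
        // std::ofstream truncates by default, so no stale bytes survive
        // when the new content is shorter than the old.
        std::ofstream out(path, std::ios::trunc);
        out << new_content;
    }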
Now that all the classes for Ladybird are in the Ladybird namespace, we
don't need them named as Ladybird::FooBarLadybird. For the Qt-specific
classes, we can tack on a Qt at the end for clarity, but FontPlugin and
ImageCodecPlugin no longer have anything to do with Qt.
LibTLS still can't access many parts of the web, so let's hide this
behind a flag (with all the plumbing that entails).
Hopefully this can encourage folks to improve LibTLS's algorithm support
:^).
Re-organize our helper files here a bit, to make a clearer distinction
between Qt-specific helpers and generic non-Serenity helpers.
A future commit will move Lagom-specific code from LibSQL to Ladybird
as well, so that we can explore generic APIs for spawning helper
processes.
The Qt relationship was removed in de31a8a4, so let's acknowledge that
and make it clearer to potential non-Qt ports that this file is usable
by them :^).
This reduces the number of tasks to schedule and the complexity of the
build system integration for the BindingsGenerator. As a bonus, we move
the "only write if changed" feature into the generator itself, reducing
the build system load on its generated files.
The LocaleData generator currently stores vectors of unique instances of
CLDR data (e.g. languages, currencies, etc.). For each CLDR file that we
parse, we linearly search through those vectors to decide if the current
field being parsed is unique. Given the size of the CLDR, this adds up
to quite a bit of time.
Augment these vectors with a hash map to store the index of each unique
instance in those vectors. This allows for quickly checking if a field
is unique, and to later look up those indices.
We do not apply this technique to every bit of CLDR data here. For
example, CLDR::character_orders contains only 2 entries. In that case,
it is quicker to search the vector than it is to hash a string key.
This reduces the runtime of GenerateLocaleData from 2.03s to 1.09s.
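The vector-plus-index-map technique above, reduced to a portable C++
sketch (names are illustrative):

    #include <cstddef>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Unique values live in the vector (preserving insertion order and
    // indices); the map makes the "have we seen this value?" check O(1)
    // instead of a linear scan.
    struct UniqueStringList {
        std::vector<std::string> values;
        std::unordered_map<std::string, std::size_t> index_of;

        std::size_t ensure(std::string const& value)
        {
            if (auto it = index_of.find(value); it != index_of.end())
                return it->second;
            values.push_back(value);
            index_of.emplace(value, values.size() - 1);
            return values.size() - 1;
        }
    };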
Similar to languages and currencies, extract the loop that collects the
unique set of date fields into a preprocessing function. This alone does
not yield any performance improvement, but combined with an upcoming
patch, it will make parse_locale_date_fields() a bit faster.
We currently parse each CLDR calendar, then decide based on its primary
key whether we want to skip it. Instead, we can decide to skip it based
on its file name.
This reduces the runtime of GenerateLocaleData from 2.03s to 1.97s.
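Sketched with hypothetical file names (the generator's actual allow-list
may differ):

    #include <string_view>

    // Decide from the file name alone, before any JSON parsing, whether
    // this calendar contributes to the generated data.
    static bool should_skip_calendar(std::string_view file_name)
    {
        // Hypothetical allow-list of CLDR calendar files.
        return file_name != "ca-gregorian.json" && file_name != "ca-generic.json";
    }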
The LocaleData generator has to read a few of the CLDR files more than
once, e.g. to prepare some data up front (for the reasons why, see
commits c86f7a6 and 0b69e9f). This takes non-negligible time, especially
for large JSON files such as currencies.json. So in these cases, cache
the parsed JSON in a map.
This reduces the runtime of GenerateLocaleData from 2.32s to 2.03s.
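In standard-library terms, the cache looks roughly like this (ParsedJson
is a stand-in for the real parsed-JSON type):

    #include <map>
    #include <string>

    // Stand-in for the real parsed-JSON value type.
    struct ParsedJson { std::string source_path; };

    static ParsedJson parse_json_file(std::string const& path)
    {
        // ... read and parse the file at `path` ...
        return ParsedJson { path };
    }

    // Parse each file at most once; later reads hit the cache.
    static ParsedJson const& cached_json(std::string const& path)
    {
        static std::map<std::string, ParsedJson> s_cache;
        if (auto it = s_cache.find(path); it != s_cache.end())
            return it->second;
        return s_cache.emplace(path, parse_json_file(path)).first->second;
    }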
These were used to generate specialized tables. Now that those tables
have been migrated to general 2-stage lookup tables, these fields are
all unused.
Similar to commit 0652cc4, we now generate 2-stage lookup tables for
case conversion information. Only about 1500 code points are actually
cased, which makes case information highly compressible: the blocks we
break the code points into will generally have no casing information at
all.
In total, this change:
* Does not change the size of libunicode.so (which is nice because,
  generally, the 2-stage lookup tables are expected to trade a bit
  of size for performance).
* Reduces the runtime of the new benchmark test case added here from
  1.383s to 1.127s (about an 18.5% improvement).
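To illustrate the 2-stage layout described above, here is a toy sketch
(the real tables are generated, and the field names are assumptions):

    #include <array>
    #include <cstddef>
    #include <cstdint>

    constexpr std::size_t BLOCK_SIZE = 256;

    struct CaseInfo {
        std::uint32_t uppercase_delta { 0 };
    };

    // Stage 2 holds only the unique blocks; since most blocks contain
    // no cased code points, they all collapse into one shared all-zero
    // block.
    constexpr std::array<std::array<CaseInfo, BLOCK_SIZE>, 2> stage2 {}; // generated

    // Stage 1 maps each block of the code point space to a stage-2
    // index.
    constexpr std::array<std::uint8_t, 0x110000 / BLOCK_SIZE> stage1 {}; // generated

    constexpr CaseInfo lookup(char32_t code_point)
    {
        return stage2[stage1[code_point / BLOCK_SIZE]][code_point % BLOCK_SIZE];
    }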
There is no functional change here. This information will make up the
multistage casing tables introduced in an upcoming patch. Extract it to
its own struct to prepare for that.