The pretty-print gdb helpers had not been updated since the
DeprecatedString change. This commit introduces a new printer for String
and renames the old one to DeprecatedString.
This adds an abstract `Audio::PlaybackStream` class that allows
applications in both Serenity and Lagom to perform cross-platform audio
playback in an opaque manner.
Currently, the only supported audio API is PulseAudio, but a Serenity
implementation should be added shortly as well.
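For illustration, an abstract interface of this shape might look as follows; the class and member names here are hypothetical stand-ins, not the actual LibAudio API:

```cpp
#include <functional>
#include <memory>
#include <span>
#include <utility>

// Hypothetical sketch: applications program against the abstract class and
// never see which platform audio API sits underneath.
class PlaybackStream {
public:
    virtual ~PlaybackStream() = default;
    virtual void resume() = 0;            // start consuming samples
    virtual void drain_and_suspend() = 0; // play out buffered audio, then pause

    // The factory selects a concrete backend at runtime.
    static std::unique_ptr<PlaybackStream> create(
        std::function<void(std::span<float>)> on_samples_needed);
};

// One implementation per audio API; PulseAudio is the only one so far.
class PulseAudioPlaybackStream final : public PlaybackStream {
public:
    explicit PulseAudioPlaybackStream(std::function<void(std::span<float>)> callback)
        : m_callback(std::move(callback)) {}
    void resume() override { /* e.g. uncork the PulseAudio stream */ }
    void drain_and_suspend() override { /* e.g. drain, then cork the stream */ }
private:
    std::function<void(std::span<float>)> m_callback;
};

std::unique_ptr<PlaybackStream> PlaybackStream::create(
    std::function<void(std::span<float>)> on_samples_needed)
{
    return std::make_unique<PulseAudioPlaybackStream>(std::move(on_samples_needed));
}
```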
Since the existing Promise class is designed around deferred tasks on the
main thread only, we need a new class that ensures we can handle promises
that are resolved or rejected off the main thread.
This new class ensures that the callbacks are only called on the same
thread that fulfills the promise. If the callbacks are not set before the
thread tries to fulfill the promise, it will spin until they are set, so
that they run on that thread.
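A rough sketch of that hand-off in standard C++ (the real class lives in Serenity's libraries and differs; `ThreadedPromise`, `when_resolved`, and `resolve` are illustrative names):

```cpp
#include <atomic>
#include <functional>
#include <mutex>
#include <thread>
#include <utility>

template<typename T>
class ThreadedPromise {
public:
    // Called by the consumer, possibly before the worker thread finishes.
    void when_resolved(std::function<void(T&)> callback)
    {
        std::lock_guard lock(m_mutex);
        m_on_resolved = std::move(callback);
        m_has_callback.store(true, std::memory_order_release);
    }

    // Called from the fulfilling thread. Spins until the callback is set,
    // guaranteeing the callback runs here rather than on the main thread.
    void resolve(T value)
    {
        while (!m_has_callback.load(std::memory_order_acquire))
            std::this_thread::yield();
        std::lock_guard lock(m_mutex);
        m_on_resolved(value);
    }

private:
    std::mutex m_mutex;
    std::atomic<bool> m_has_callback { false };
    std::function<void(T&)> m_on_resolved;
};
```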
Rework the write_if_changed logic to properly truncate the output file.
The original logic would not truncate the file, leading to extra junk
at the end of files if the IDL files shrank between builds.
3dd3120a8a changed open mode from Write to
ReadWrite, which stopped truncating files.
This could be noticed by building a slightly older branch, where the
stale trailing junk caused compile errors.
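A simplified sketch of the intended behavior, using standard C++ streams rather than the generator's actual I/O types:

```cpp
#include <fstream>
#include <sstream>
#include <string>

void write_if_changed(std::string const& path, std::string const& data)
{
    {
        std::ifstream existing(path, std::ios::binary);
        std::ostringstream current;
        current << existing.rdbuf();
        if (existing && current.str() == data)
            return; // unchanged: don't dirty the file's timestamp
    }
    // Truncating on open discards any old bytes past the new end of file;
    // opening read-write without truncation leaves stale junk behind.
    std::ofstream out(path, std::ios::binary | std::ios::trunc);
    out << data;
}
```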
Now that all the classes for Ladybird are in the Ladybird namespace, we
don't need them named as Ladybird::FooBarLadybird. For the Qt-specific
classes, we can tack on a Qt at the end for clarity, but FontPlugin and
ImageCodecPlugin no longer have anything to do with Qt.
LibTLS still can't access many parts of the web, so let's hide this
behind a flag (with all the plumbing that entails).
Hopefully this can encourage folks to improve LibTLS's algorithm support
:^).
Re-organize our helper files here a bit, to make a clearer distinction
between Qt-specific helpers and generic non-Serenity helpers.
A future commit will move Lagom-specific code from LibSQL to Ladybird
as well, so that we can see about future generic APIs for spawning
helper processes.
The Qt relationship was removed in de31a8a4, so let's acknowledge that
and make it clearer to potential non-Qt ports that this file is usable
by them :^).
This reduces the number of tasks to schedule, and the complexity of the
build system integrations for the BindingsGenerator. As a bonus, we move
the "only write if changed" feature into the generator itself, reducing
the build system load on its generated files.
The LocaleData generator currently stores vectors of unique instances of
CLDR data (e.g. languages, currencies, etc.). For each CLDR file that we
parse, we linearly search through those vectors to decide if the current
field being parsed is unique. Given the size of the CLDR, this adds up
to quite a bit of time.
Augment these vectors with a hash map to store the index of each unique
instance in those vectors. This allows for quickly checking if a field
is unique, and to later look up those indices.
We do not apply this technique to every bit of CLDR data here. For
example, CLDR::character_orders contains only 2 entries. In that case,
it is quicker to search the vector than it is to hash a string key.
This reduces the runtime of GenerateLocaleData from 2.03s to 1.09s.
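As a sketch of the technique, with STL containers standing in for the generator's actual helpers:

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Keep the vector for stable indices, and add a map for O(1) uniqueness
// checks and index lookups.
struct UniqueStringStorage {
    std::vector<std::string> values;                  // index -> value
    std::unordered_map<std::string, size_t> indices;  // value -> index

    size_t ensure(std::string const& value)
    {
        // Previously: a linear std::find over `values` per parsed field.
        if (auto it = indices.find(value); it != indices.end())
            return it->second;
        values.push_back(value);
        indices.emplace(value, values.size() - 1);
        return values.size() - 1;
    }
};
```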
Similar to languages and currencies, extract the loop that collects the
unique set of date fields into a preprocessing function. This alone does
not yield any performance improvement, but combined with an upcoming
patch, it will make parse_locale_date_fields() a bit faster.
We currently parse each CLDR calendar, then decide based on its primary
key whether we want to skip it. Instead, we can decide to skip it based
on its file name.
This reduces the runtime of GenerateLocaleData from 2.03s to 1.97s.
The LocaleData generator has to read a few of the CLDR files more than
once, to e.g. prepare some data up front (for reasons why, see commits
c86f7a6 and 0b69e9f). This takes non-negligible time, especially for large
JSON files such as currencies.json. So in these cases, cache the parsed
JSON in a map.
This reduces the runtime of GenerateLocaleData from 2.32s to 2.03s.
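In sketch form, with `JsonValue` and `parse_json_file` as hypothetical stand-ins for the real JSON types:

```cpp
#include <map>
#include <string>

struct JsonValue { /* parsed document */ };
JsonValue parse_json_file(std::string const& path);

// Parse each CLDR file at most once; repeat reads (e.g. the preprocessing
// passes over currencies.json) are served from the cache.
JsonValue const& read_json_file_cached(std::string const& path)
{
    static std::map<std::string, JsonValue> s_cache;
    if (auto it = s_cache.find(path); it != s_cache.end())
        return it->second;
    return s_cache.emplace(path, parse_json_file(path)).first->second;
}
```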
These were used to generate specialized tables. Now that those tables
have been migrated to general 2-stage lookup tables, these fields are
all unused.
Similar to commit 0652cc4, we now generate 2-stage lookup tables for
case conversion information. Only about 1500 code points are actually
cased. This means that case information is highly compressible, as the
blocks we break the code points into will generally have no casing
information at all.
In total, this change:
* Does not change the size of libunicode.so (which is nice because,
generally, the 2-stage lookup tables are expected to trade a bit
of size for performance).
* Reduces the runtime of the new benchmark test case added here from
1.383s to 1.127s (about an 18.5% improvement).
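On the generation side, the deduplication amounts to something like this sketch (illustrative record layout and block size; the real generator chooses its own):

```cpp
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

// Illustrative casing record; most code points keep the all-zero default.
struct Casing {
    uint32_t simple_uppercase { 0 };
    uint32_t simple_lowercase { 0 };
    bool operator<(Casing const& other) const
    {
        return std::tie(simple_uppercase, simple_lowercase)
            < std::tie(other.simple_uppercase, other.simple_lowercase);
    }
};

std::vector<uint16_t> stage1;            // one entry per block of code points
std::vector<std::vector<Casing>> stage2; // unique blocks only

// Identical blocks (typically "no casing information at all") are stored
// once in stage2 and shared by every stage1 entry that refers to them.
void append_block(std::vector<Casing> const& block)
{
    static std::map<std::vector<Casing>, uint16_t> seen_blocks;
    auto [it, inserted] = seen_blocks.emplace(block, static_cast<uint16_t>(stage2.size()));
    if (inserted)
        stage2.push_back(block);
    stage1.push_back(it->second);
}
```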
There is no functional change here. This information will make up the
multistage casing tables introduced in an upcoming patch. Extract it to
its own struct to prepare for that.
There is no functional change here. This just adjusts the changes made
in commit 0652cc4 to be a bit more generic for code point casing tables.
We currently only generate property tables, which boil down to a vector
of booleans. Casing tables will be a struct of varying types, so this
generalizes some of the generator to prepare for that ahead of time, to
make the upcoming casing patch smaller / easier to grok.
When generating code point property tables, we currently binary search
the code point range lists for each property to decide if a code point
has that property. However, we are both iterating over the code points
and through the sorted properties in order. This means we do not need
to search code point ranges that are below the current code point at
all. We can even remove the code point ranges that fall below the
current code point, as we will not see a code point in those ranges
again.
On my machine, this reduces the runtime of GenerateUnicodeData from
3.4 seconds to 1.2 seconds.
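In sketch form, with assumed names and a `std::deque` standing in for the generator's actual range list:

```cpp
#include <cstdint>
#include <deque>

struct CodePointRange { uint32_t first; uint32_t last; };

// `ranges` is sorted and code points are visited in ascending order, so
// ranges that end before `code_point` can be dropped for good instead of
// being binary-searched over and over.
bool has_property(std::deque<CodePointRange>& ranges, uint32_t code_point)
{
    while (!ranges.empty() && ranges.front().last < code_point)
        ranges.pop_front(); // can never match any later code point either
    return !ranges.empty() && ranges.front().first <= code_point;
}
```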
The IDL file lists are used for two things:
1. As inputs for `generate_window_or_worker_interfaces`
2. In a loop in `generate_idl_bindings`, where the loop variable is
passed to `rebase_path`
Both of these cases can handle a normal fully-qualified GN path,
so there's no need for the "abspath".
No behavior change.
Currently, the `isobmff` utility will only print the media file type
info from the FileTypeBox (major brand and compatible brands), as well
as the names and sizes of top-level boxes.
This disables running benchmark test cases on Lagom in CI. When we run
unit tests on-board Serenity, run-tests sets this environment variable.
We do not use that utility to run unit tests on Lagom, so we've been
running benchmark tests unnecessarily.
When markdown-check is built, it outputs hundreds of lines of "ignoring
this and that link because reasons". This is extremely unhelpful when
trying to figure out exactly which check failed on your commit. Also
remove the timing numbers from lint-ci.sh; these are just noise and
likewise don't help to figure out which pre-commit check failed. Ideally
the output on failure should be "[OK]: Check A" for all the passing
checks and "[FAIL] Check N" with the required context for the failed
check.
We currently produce a single table for all categories of code point
properties (GeneralCategory, Script, etc.). Each row contains a field
indicating the range of code points to which that property applies. At
runtime, we then do a binary search through that table to decide if a
code point has a property.
This changes our approach to generate a 2-stage lookup table for each of
those categories. There is an in-depth explanation of these tables above
the new `create_code_point_tables` method. The end effect is that code
point property lookup is reduced from a binary search to constant-time
array lookups.
In total, this change:
* Increases the size of libunicode.so from 2.7 MB to 2.9 MB.
* Reduces the runtime of the new benchmark test case added here from
3.576s to 1.020s (a 3.5x speedup).
* In a profile of resizing a TextEditor window with a 3MB file open,
the runtime of checking if a code point has a word break property
reduces from ~81% to ~56%.
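The lookup itself then reduces to two array reads, roughly like this (an illustrative block size of 256 and invented table names; see `create_code_point_tables` for the real layout):

```cpp
#include <cstddef>
#include <cstdint>

// Generated elsewhere: stage1 maps each 256-code-point block to the index
// of a unique block, and stage2 stores the per-code-point values of those
// unique blocks back to back.
extern uint16_t const stage1[];
extern bool const stage2[];

bool has_word_break_property(uint32_t code_point)
{
    size_t block = stage1[code_point >> 8];
    return stage2[(block << 8) | (code_point & 0xFF)];
}
```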
The next commit will need a type from LibUnicode/CharacterTypes.h. To
avoid conflicts between that header's CodePointRange and the one that is
defined in the code generator, just use the public definition.
We started generating this data in commit 0505e03, but it was unused.
It's still not used, so let's remove it, rather than bloating the size
of libunicode.so with unused data. If we need it in the future, it's
trivial to add back.
Note we *have* always used the block name data from that commit, and
that is still present here.
The AST interpreter is still available behind a new `--ast` flag.
We switch to testing with bytecode in the big run-tests battery on
SerenityOS. Lagom CI continues to run both the AST and bytecode
interpreters.