This adds an abstract `Audio::PlaybackStream` class that lets applications
in both Serenity and Lagom perform cross-platform audio playback in an
opaque manner.
Currently, the only supported audio API is PulseAudio, but a Serenity
implementation should be added shortly as well.
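As a rough illustration of the shape such an interface could take (the
method names, callback signature, and factory below are assumptions made
for this sketch, not the actual `Audio::PlaybackStream` API):

```cpp
// Illustrative sketch only: an abstract stream that applications drive
// without knowing which platform backend (e.g. PulseAudio) sits behind it.
#include <functional>
#include <memory>
#include <span>

namespace Audio {

class PlaybackStream {
public:
    virtual ~PlaybackStream() = default;

    // Hypothetical controls that each backend would implement.
    virtual void resume() = 0;
    virtual void drain_and_suspend() = 0;
    virtual void set_volume(double volume) = 0;

    // Applications ask for "a stream" and get whichever backend is
    // available; the callback is invoked whenever more samples are needed.
    static std::unique_ptr<PlaybackStream> create(
        std::function<void(std::span<float>)> on_samples_needed);
};

}
```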
Since the existing Promise class is designed around deferred tasks on the
main thread only, we need a new class that can handle promises which are
resolved/rejected off the main thread.
This new class ensures that the callbacks are only called on the same
thread that fulfills the promise. If the callbacks are not set before that
thread tries to fulfill the promise, it will spin until they are set, so
that they still run on that thread.
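A minimal sketch of that spin-until-ready idea, using standard atomics and
illustrative names (the real class and its API may differ):

```cpp
#include <atomic>
#include <functional>
#include <thread>
#include <utility>

template<typename T>
class ThreadedPromise {
public:
    // Called by the consumer, typically on the main thread.
    void when_resolved(std::function<void(T&)> callback)
    {
        m_on_resolved = std::move(callback);
        m_callbacks_set.store(true, std::memory_order_release);
    }

    // Called from whichever thread produces the result; the callback then
    // runs on that same thread.
    void resolve(T value)
    {
        // If the consumer has not installed callbacks yet, spin until it
        // has, so the callback still runs on the fulfilling thread.
        while (!m_callbacks_set.load(std::memory_order_acquire))
            std::this_thread::yield();
        m_on_resolved(value);
    }

private:
    std::atomic<bool> m_callbacks_set { false };
    std::function<void(T&)> m_on_resolved;
};
```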
Rework the write_if_changed logic to properly truncate the output file.
The original logic would not truncate the file, leading to extra junk
at the end of files if the IDL files shrank between builds.
Commit 3dd3120a8a changed the open mode from Write to ReadWrite, which
stopped truncating the files.
This could be noticed by building a slightly older branch, which then
produced compile errors.
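For illustration, the intended behavior is roughly the following (shown
with the standard library rather than the generator's actual stream
classes):

```cpp
#include <fstream>
#include <sstream>
#include <string>

static void write_if_changed(std::string const& path, std::string const& new_contents)
{
    {
        std::ifstream existing(path, std::ios::binary);
        std::ostringstream old_contents;
        old_contents << existing.rdbuf();
        if (existing && old_contents.str() == new_contents)
            return; // Nothing changed; leave the file alone.
    }
    // std::ios::trunc discards the previous contents, so a shorter output
    // never leaves stale junk at the end of the file.
    std::ofstream out(path, std::ios::binary | std::ios::trunc);
    out << new_contents;
}
```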
LibTLS still can't access many parts of the web, so let's hide this
behind a flag (with all the plumbing that entails).
Hopefully this can encourage folks to improve LibTLS's algorithm support
:^).
This reduces the number of tasks to schedule, and the complexity of the
build system integrations for the BindingsGenerator. As a bonus, we move
the "only write if changed" feature into the generator to reduce the
build system load on generated files for this generator.
The LocaleData generator currently stores vectors of unique instances of
CLDR data (e.g. languages, currencies, etc.). For each CLDR file that we
parse, we linearly search through those vectors to decide if the current
field being parsed is unique. Given the size of the CLDR, this adds up
to quite a bit of time.
Augment these vectors with a hash map to store the index of each unique
instance in those vectors. This allows for quickly checking if a field
is unique, and to later look up those indices.
We do not apply this technique to every bit of CLDR data here. For
example, CLDR::character_orders contains only 2 entries. In that case,
it is quicker to search the vector than it is to hash a string key.
This reduces the runtime of GenerateLocaleData from 2.03s to 1.09s.
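The hash-map-augmented storage described above amounts to roughly the
following (shown with std containers; the generator itself uses AK's
Vector and HashMap, and the names here are illustrative):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

struct UniqueStringStorage {
    std::vector<std::string> values;                 // index -> value
    std::unordered_map<std::string, size_t> indices; // value -> index

    // Returns the index of `value`, inserting it only if it is new.
    size_t ensure_unique(std::string const& value)
    {
        if (auto it = indices.find(value); it != indices.end())
            return it->second; // O(1) on average instead of a linear scan.
        values.push_back(value);
        indices.emplace(value, values.size() - 1);
        return values.size() - 1;
    }
};
```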
Similar to languages and currencies, extract the loop to collect the
unique set of date fields to a preprocessing function. This alone does
not yield any performance improvement, but combined with an upcoming
patch will make parse_locale_date_fields() a bit faster.
We currently parse each CLDR calendar, then decide based on its primary
key whether we want to skip it. Instead, we can decide to skip it based
on its file name.
This reduces the runtime of GenerateLocaleData from 2.03s to 1.97s.
The LocaleData generator has to read a few of the CLDR files more than
once, to e.g. prepare some data up front (for reasons why, see commits
c86f7a6 and 0b69e9f). This takes non-negligible time, especially for large
JSON files such as currencies.json. So in these cases, cache the parsed
JSON in a map.
This reduces the runtime of GenerateLocaleData from 2.32s to 2.03s.
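Sketched with std types, the caching amounts to something like this
(JsonValue and parse_json_file() are stand-ins for the generator's actual
JSON handling):

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <unordered_map>

// Stand-in for a parsed JSON document (the real generator uses AK's JSON types).
struct JsonValue {
    std::string raw;
};

static JsonValue parse_json_file(std::string const& path)
{
    std::ifstream file(path, std::ios::binary);
    std::ostringstream contents;
    contents << file.rdbuf();
    return JsonValue { contents.str() }; // Real code would parse here.
}

// Parse each file at most once, no matter how often a pass re-reads it.
static JsonValue const& read_json_file_with_cache(std::string const& path)
{
    static std::unordered_map<std::string, JsonValue> s_cache;
    auto it = s_cache.find(path);
    if (it == s_cache.end())
        it = s_cache.emplace(path, parse_json_file(path)).first;
    return it->second;
}
```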
These were used to generate specialized tables. Now that those tables
have been migrated to general 2-stage lookup tables, these fields are
all unused.
Similar to commit 0652cc4, we now generate 2-stage lookup tables for
case conversion information. Only about 1500 code points are actually
cased. This means that case information is rather highly compressible,
as the blocks we break the code points into will generally all have no
casing information at all.
In total, this change:
* Does not change the size of libunicode.so (which is nice because,
generally, the 2-stage lookup tables are expected to trade a bit
of size for performance).
* Reduces the runtime of the new benchmark test case added here from
1.383s to 1.127s (about an 18.5% improvement).
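To illustrate why the casing blocks compress so well: nearly every code
point maps to a default "no casing information" record, so most fixed-size
blocks are identical and only need to be stored once. The record below is a
hypothetical shape, not the generator's actual output.

```cpp
#include <cstdint>

// Illustrative per-code-point casing record; a default-initialized record
// means "this code point has no casing information at all".
struct CasingRecord {
    uint32_t simple_uppercase_mapping { 0 }; // 0 = maps to itself
    uint32_t simple_lowercase_mapping { 0 };
    bool has_special_casing { false };

    bool operator==(CasingRecord const&) const = default;
};

// With only ~1500 cased code points, nearly every 256-entry block of
// CasingRecords equals the all-default block, so the second stage of the
// lookup table can store that block exactly once.
```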
There is no functional change here. This information will make up the
multistage casing tables introduced in an upcoming patch. Extract it to
its own struct to prepare for that.
There is no functional change here. This just adjusts the changes made
in commit 0652cc4 to be a bit more generic for code point casing tables.
We currently only generate property tables, which boil down to a vector
of booleans. Casing tables will be a struct of varying types, so this
generalizes some of the generator to prepare for that ahead of time, to
make the upcoming casing patch smaller / easier to grok.
When generating code point property tables, we currently binary search
the code point range lists for each property to decide if a code point
has that property. However, we are both iterating over the code points
and through the sorted properties in order. This means we do not need
to search code point ranges that are below the current code point at
all. We can even remove the code point ranges that fall below the
current code point, as we will not see a code point in those ranges
again.
On my machine, this reduces the run time of GenerateUnicodeData from
3.4 seconds to 1.2 seconds.
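The pruning can be pictured like this (types and names are illustrative;
the real generator works on its own range representation):

```cpp
#include <cstdint>
#include <vector>

struct CodePointRange {
    uint32_t first { 0 };
    uint32_t last { 0 };
};

struct PropertyCursor {
    std::vector<CodePointRange> const& sorted_ranges;
    size_t next_range { 0 }; // Ranges before this index are already behind us.

    bool contains(uint32_t code_point)
    {
        // Drop ranges that end below the current code point; since code
        // points are visited in increasing order, we will never need them.
        while (next_range < sorted_ranges.size() && sorted_ranges[next_range].last < code_point)
            ++next_range;
        return next_range < sorted_ranges.size()
            && sorted_ranges[next_range].first <= code_point;
    }
};
```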
Currently, the `isobmff` utility will only print the media file type
info from the FileTypeBox (major brand and compatible brands), as well
as the names and sizes of top-level boxes.
We currently produce a single table for all categories of code point
properties (GeneralCategory, Script, etc.). Each row contains a field
indicating the range of code points to which that property applies. At
runtime, we then do a binary search through that table to decide if a
code point has a property.
This changes our approach to generate a 2-stage lookup table for each of
those categories. There is an in-depth explanation of these tables above
the new `create_code_point_tables` method. The end effect is that code
point property lookup is reduced from a binary search to constant-time
array lookups.
In total, this change:
* Increases the size of libunicode.so from 2.7 MB to 2.9 MB.
* Reduces the runtime of the new benchmark test case added here from
3.576s to 1.020s (a 3.5x speedup).
* In a profile of resizing a TextEditor window with a 3MB file open,
the proportion of time spent checking if a code point has a word break
property drops from ~81% to ~56%.
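A reduced sketch of the shape of such a table (block size, element types,
and names are illustrative, not what the generator actually emits); a
lookup is then two array indexes, and identical second-stage blocks only
need to be stored once:

```cpp
#include <cstdint>
#include <vector>

static constexpr uint32_t BLOCK_SIZE = 256;

struct TwoStageTable {
    std::vector<uint16_t> stage1; // block number -> index of a unique block
    std::vector<uint8_t> stage2;  // unique blocks, BLOCK_SIZE entries each

    uint8_t lookup(uint32_t code_point) const
    {
        auto block = stage1[code_point / BLOCK_SIZE];
        return stage2[static_cast<size_t>(block) * BLOCK_SIZE + code_point % BLOCK_SIZE];
    }
};
```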
The next commit will need a type from LibUnicode/CharacterTypes.h. To
avoid conflicts between that header's CodePointRange and the one that is
defined in the code generator, just use the public definition.
We started generating this data in commit 0505e03, but it was unused.
It's still not used, so let's remove it, rather than bloating the size
of libunicode.so with unused data. If we need it in the future, it's
trivial to add back.
Note we *have* always used the block name data from that commit, and
that is still present here.
The AST interpreter is still available behind a new `--ast` flag.
We switch to testing with bytecode in the big run-tests battery on
SerenityOS. Lagom CI continues running both AST and BC.
Rather than splitting the Iterator type and its AOs into two files,
let's combine them into one file to match every other JS runtime object
that we have.
This is an editorial change in the ECMA-262 spec. See:
https://github.com/tc39/ecma262/commit/956e5af
This splits the GetIterator AO into two AOs, to remove some recursion
and to (soon) remove optional parameters.
The dollar sign is a special character in POSIX shells and in the Ninja
build file format. If the file name contains a `$`, something goes wrong
in the escaping/unescaping of this symbol, and CMake/GCC/Clang generate
invalid dependency files where not all instances of `$` are escaped
properly. Because of this, Ninja fails to rebuild `$262Object.cpp` if
the headers included by it have changed.
Stale `$262Object.cpp.o` files have been the cause of mysterious crashes
multiple times which only go away after doing a clean build. Let's
prevent these from happening again by removing the `$` from the
filename.
This commit makes it possible to let properties accept easing functions
as values, which will be used in a later commit to implement
animation-timing-function.
The main missing features are rootMargin, proper nested browsing
context support and content clip/clip-path support.
This makes images appear on some sites, such as YouTube and
howstuffworks.com.
Using the cross-page links, we can generate a directed graph showing the
topology of which pages refer to other pages. This is not just for fun:
the links show how often a page is linked (links are intentionally not
deduplicated), which pairs of pages only have links in one
direction (where a link in the other direction may be useful), which
groups of closely-interlinked pages exist, and which pages have few or
no links to other pages.
The EXTRA_MARKDOWN_CHECK_ARGS argument to the check-markdown script can
be used to inject the -g flag for generating the graph on all manpages.
Apart from the class used, the audio fuzzers have identical behavior: create
a memory stream from the fuzzer input and pass this to the loader, then
try to load audio until an error occurs. Since the loader plugins need
to have the same static create() function anyway for LibAudio itself,
we can unify the fuzzer implementations and reduce code duplication.
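The unified fuzzer can then be a single template along these lines
(LoaderPluginType, create(), and load_chunk() are illustrative stand-ins
for the LibAudio plugin interface):

```cpp
#include <cstddef>
#include <cstdint>
#include <span>

template<typename LoaderPluginType>
int fuzz_audio_loader(uint8_t const* data, size_t size)
{
    // Create a loader directly from the fuzzer input, treated as an
    // in-memory stream of bytes.
    auto plugin = LoaderPluginType::create(std::span<uint8_t const>(data, size));
    if (!plugin)
        return 0;

    // Decode until the loader reports an error or reaches the end of input.
    while (plugin->load_chunk()) {
    }
    return 0;
}

// Each fuzzer target then only has to instantiate the template, e.g.:
//   extern "C" int LLVMFuzzerTestOneInput(uint8_t const* data, size_t size)
//   {
//       return fuzz_audio_loader<FlacLoaderPlugin>(data, size);
//   }
```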
This utility will learn tricks such as extracting images from PDFs and
dumping tables from PDFs so that we can create code from specs.
It also allows testing LibPDF things in Lagom, and makes it easy to test
reading large numbers of PDFs from a shell script.