Unlike most data in the CLDR, hour cycles are not stored on a per-locale
basis. Instead, they are keyed by a string that is usually a region, but
sometimes is a locale. Therefore, given a locale, to determine the hour
cycles for that locale, we:
1. Check if the locale itself is assigned hour cycles.
2. If the locale has a region, check if that region is assigned hour
cycles.
3. Otherwise, maximize that locale, and if the maximized locale has
a region, check if that region is assigned hour cycles.
4. If the above all fail, fall back to the "001" region (see the
sketch below).
Further, each locale's default hour cycle is the first assigned hour
cycle.
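A minimal sketch of that lookup order, using standard-library stand-ins
(the `region_of` and `maximize` helpers here are hypothetical
simplifications of subtag parsing and likely-subtags maximization):

```cpp
#include <map>
#include <optional>
#include <string>
#include <vector>

// Toy stand-in for the generated CLDR table; keys are usually regions,
// but sometimes full locales.
static std::map<std::string, std::vector<std::string>> const s_hour_cycles = {
    { "001", { "h23", "h12" } },
    { "US", { "h12", "h23" } },
};

// Hypothetical: extract the region subtag ("en-US" -> "US"), if any.
static std::optional<std::string> region_of(std::string const& locale)
{
    auto dash = locale.rfind('-');
    if (dash == std::string::npos || locale.size() - dash != 3)
        return std::nullopt;
    return locale.substr(dash + 1);
}

// Hypothetical: likely-subtags maximization ("en" -> "en-Latn-US").
static std::string maximize(std::string const& locale)
{
    return locale == "en" ? "en-Latn-US" : locale;
}

std::vector<std::string> hour_cycles_for_locale(std::string const& locale)
{
    // 1. The locale itself may be assigned hour cycles.
    if (auto it = s_hour_cycles.find(locale); it != s_hour_cycles.end())
        return it->second;
    // 2. Otherwise, try the locale's region, if any; 3. otherwise,
    //    maximize the locale and try that region.
    for (auto const& region : { region_of(locale), region_of(maximize(locale)) }) {
        if (!region)
            continue;
        if (auto it = s_hour_cycles.find(*region); it != s_hour_cycles.end())
            return it->second;
    }
    // 4. Fall back to the "001" (world) region; the first entry is the
    //    locale's default hour cycle.
    return s_hour_cycles.at("001");
}
```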
By chance, this hasn't mattered yet: the sources for all enums so far
contain names of the same case. But the enum generated for hour cycle
regions will have mixed case. Sort the names case-insensitively so that
generate_enum and generate_mapping traverse them in the same order.
Similar to number formatting, the data for date-time formatting will be
located in its own generated file. This extracts the cldr-dates package
from the CLDR and sets up the generator plumbing to create the date-time
data files.
Currently, we generate separate data files for locale and number format
related tables/methods, but provide public accessors for all of the data
in one Locale.h file. Rather than continuing this trend for date-time,
relative time, etc. formatting, it's a bit easier to reason about if the
public accessors are also in separate files.
At the moment we just check if we *can* render a simple triangle, we do
not yet actually test if the image is indeed the triangle we wanted.
When GL_DEBUG is enabled, this test also outputs the rendered image to
a file called "picture.bmp" for manual verification.
Co-authored-by: sunverwerth <s.unverwerth@serenityos.org>
Previously, a libc-like out-of-line error information was used in the
loader and its plugins. Now, all functions that may fail to do their job
return some sort of Result. The universally-used error type is the new
LoaderError, which can contain information about the general error
category (such as file format, I/O, unimplemented features), an error
description, and location information, such as file index or sample
index.
Additionally, the loader plugins try to do as little work as possible in
their constructors. Right after being constructed, a user should call
initialize() and check the errors returned from there. (This is done
transparently by Loader itself.) If a constructor caused an error, the
call to initialize should check and return it immediately.
This opportunity was used to rework a lot of the internal error
propagation in both loader classes, especially FlacLoader. Therefore, a
couple of other refactorings may have sneaked in as well.
The adoption of LibAudio users is minimal. Piano's adoption is not
important, as the code will receive major refactoring in the near future
anyway. SoundPlayer's adoption is also less important, as changes to
refactor it are in the works as well. aplay's adoption is the best and
may serve as an example for other users. It also includes new buffering
behavior.
Buffer also gets some attention, making it OOM-safe and thereby also
propagating its errors to the user.
This wasn't particularly difficult, and there's not much use for the
nicer interface yet either. While unveil() is of limited use in js(1)
as it should be able to open arbitrary files, I feel like we should be
able to add a pledge() call.
As noted by ECMA-402, if a supported locale contains all of a language,
script, and region subtag, then the implementation must also support the
locale without the script subtag. The most complicated example of this
is the zh-TW locale.
The list of locales in the CLDR database does not include zh-TW or its
maximized zh-Hant-TW variant. Instead, it includes the zh-Hant locale.
However, zh-Hant-TW is listed in the default-content locale list in the
cldr-core package. This defines an alias from zh-Hant-TW to zh-Hant. We
must then also support the zh-Hant-TW alias without the script subtag:
zh-TW. This transitively maps zh-TW to zh-Hant, which is a case quite
heavily tested by test262.
Previously, we were just copying the locale data into default-content
locales (for example, copying the "en" data into "en-US"). Instead, we
can just define the default-content locales as aliases to their main
locales.
This will be used for locale aliases as well. Also rename the "property"
field in this struct to "name", as it no longer is only used for
property aliases.
Also add slightly richer parse errors now that we can include a string
literal with returned errors.
This will allow us to use TRY() when working with JSON data.
This wasn't the case for compact patterns, but unit patterns can contain
multiple (up to 2, really) identifiers that must each be recognized by
LibJS.
Each generated NumberFormat object now stores an array of identifiers
parsed. The format pattern itself is encoded with the index into this
array for that identifier, e.g. the compact format string "0K" will
become "{number}{compactIdentifier:0}".
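A sketch of the resulting shape (field names are illustrative, not the
exact generated structure):

```cpp
#include <string>
#include <vector>

// The format pattern references each parsed identifier by its index in
// the identifiers array instead of embedding the identifier text.
struct NumberFormat {
    std::string pattern;                  // "{number}{compactIdentifier:0}"
    std::vector<std::string> identifiers; // index 0 holds "K"
};
```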
This field is currently used to store the StringView into the compact
name/symbol in the format string. Units will need to store a similar
field, so rename the field to be more generic, and extract the parser
for it.
The compact scale of each formatting rule was precomputed in commit:
be69eae651
Using the formula: compact scale = magnitude - pattern scale
This computation was off-by-one.
For example, consider the format key "10000-count-one", which maps to
"00 thousand" in en-US. What we are really after is the exponent that
best represents the string "thousand" for values greater than 10000
and less than 100000 (the next format key). We were previously doing:
log10(10000) - "00 thousand".count("0") = 2
Which clearly isn't what we want. Instead, if we do:
log10(10000) + 1 - "00 thousand".count("0") = 3
We get the correct exponent for each format key for each locale.
This commit also renames the generated variable from "compact_scale" to
"exponent" to match the terminology used in ECMA-402.
For example, in en-US, the decimal, long compact pattern for numbers
between 10,000 and 100,000 is "00 thousand". In that pattern, "thousand"
is the compact identifier, and the generated format pattern is now
"{number} {compactIdentifier}". This also generates that identifier as
its own field in the NumberFormat structure.
Most locales have a single grouping size (the number of integer digits
to be written before inserting a grouping separator). However some have
a primary and secondary size. We parse the primary size as the size used
for the least significant integer digits, and the secondary size for the
most significant.
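A sketch of how the two sizes apply when inserting separators, assuming
a digits-only input (Indian digit grouping, for instance, uses a
primary size of 3 and a secondary size of 2):

```cpp
#include <cstddef>
#include <string>

// The primary size covers the least significant group; the secondary
// size covers every group above it.
std::string group_digits(std::string digits, std::size_t primary,
    std::size_t secondary, char separator = ',')
{
    if (digits.size() <= primary)
        return digits;
    std::size_t position = digits.size() - primary;
    digits.insert(position, 1, separator);
    while (position > secondary) {
        position -= secondary;
        digits.insert(position, 1, separator);
    }
    return digits;
}
// group_digits("12345678", 3, 2) == "1,23,45,678"
// group_digits("12345678", 3, 3) == "12,345,678"
```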
In order to implement Intl.NumberFormat.prototype.formatToParts, do not
replace {currency} keys in the format pattern before ECMA-402 tells us
to. Otherwise, the array returned by formatToParts will not contain the
expected currency key.
Early replacement was done to avoid resolving the currency display more
than once, as it involves a couple of round trips to search through
LibUnicode data. So this adds a non-standard method to NumberFormat to
do this resolution and cache the result.
Another side effect of this change is that LibUnicode must replace unit
format patterns of the form "{0} {1}" during code generation. These were
previously skipped during code generation because LibJS would just
replace the keys with the currency display at runtime. But now that the
currency display injection is delayed, any {0} or {1} keys in the format
pattern will cause PartitionNumberPattern to abort.
Currencies are a bit strange; the layout of currency data in the CLDR is
not particularly compatible with what ECMA-402 expects. For example, the
currency format in the "en" and "ar" locales for the Latin script are:
en: "¤#,##0.00"
ar: "¤\u00A0#,##0.00"
Note how the "ar" locale has a non-breaking space after the currency
symbol (¤), but "en" does not. This does not mean that this space will
appear in the "ar"-formatted string, nor does it mean that a space won't
appear in the "en"-formatted string. This is a runtime decision based on
the currency display chosen by the user ("$" vs. "USD" vs. "US dollar")
and other rules in the Unicode TR-35 spec.
ECMA-402 shies away from the nuances here with "implementation-defined"
steps. LibUnicode will store the data parsed from the CLDR however it is
presented; making decisions about spacing, etc. will occur at runtime
based on user input.
For example, there isn't a unique set of data for the en-US locale;
rather, it defaults to the data for the en locale. See this commit for
much more detail: 357c97dfa8
These are used when formatting a number as currency with a display
option of "name" (e.g. for USD, the name is "US Dollars" in en-US).
These patterns appear in the CLDR in a different manner than other
number formats that are pluralized. They are of the form "{0} {1}" and
therefore do not undergo subpattern replacements.
Currently, LibUnicode is only parsing and generating the "long" style of
currency display names. However, the CLDR contains "short" and "narrow"
forms as well that need to be handled. Parse these, and update LibJS to
actually respect the "style" option provided by the user for displaying
currencies with Intl.DisplayNames.
Note: There are some discrepancies between the engines on how style is
handled. In particular, running:
new Intl.DisplayNames('en', {type:'currency', style:'narrow'}).of('usd')
Gives:
SpiderMonkey: "USD"
V8: "US Dollar"
LibJS: "$"
And running:
new Intl.DisplayNames('en', {type:'currency', style:'short'}).of('usd')
Gives:
SpiderMonkey: "$"
V8: "US Dollar"
LibJS: "$"
My best guess is V8 isn't handling style, and just returning the long
form (which is what LibJS did before this commit). And SpiderMonkey can
handle some styles, but if they don't have a value for the requested
style, they fall back to the canonicalized code passed into of().
The data used for number formatting is going to grow quite a bit when
the cldr-units package is parsed. To prevent the generated UnicodeLocale
file from growing outrageously large, the number formatting data can go
into its own file. To prepare for this, move code that will be common
between the generators for UnicodeLocale and UnicodeNumberFormat to the
utility header.
This will be needed for the ComputeExponentForMagnitude AO for compact
formatting, namely step 5b:
Let exponent be an implementation- and locale-dependent (ILD) integer
by which to scale a number of the given magnitude in compact notation
for the current locale.
A number formatting pattern in the CLDR contains one or two entries,
delimited by a semi-colon. Previously, LibUnicode was just storing the
entire pattern as one string. This changes the generator to split the
pattern on that delimiter and generate the 3 unique patterns expected by
ECMA-402.
The rules for generating the 3 patterns are as follows:
* If the pattern contains 1 entry, it is the zero pattern. The positive
pattern is the zero pattern prepended with {plusSign}. The negative
pattern is the zero pattern prepended with {minusSign}.
* If the pattern contains 2 entries, the first is the zero pattern, and
the second is the negative pattern. The positive pattern is the zero
pattern prepended with {plusSign}.
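A sketch of the rules above (std::string stands in for the generator's
string types):

```cpp
#include <string>

struct NumberFormatPatterns {
    std::string zero;
    std::string positive;
    std::string negative;
};

// Split a CLDR pattern such as "#,##0.00;(#,##0.00)" into the three
// patterns ECMA-402 expects.
NumberFormatPatterns split_pattern(std::string const& pattern)
{
    NumberFormatPatterns result;
    auto semicolon = pattern.find(';');
    if (semicolon == std::string::npos) {
        // One entry: the zero pattern; derive the other two from it.
        result.zero = pattern;
        result.positive = "{plusSign}" + pattern;
        result.negative = "{minusSign}" + pattern;
    } else {
        // Two entries: zero and negative; derive the positive pattern.
        result.zero = pattern.substr(0, semicolon);
        result.negative = pattern.substr(semicolon + 1);
        result.positive = "{plusSign}" + result.zero;
    }
    return result;
}
```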
The number system data in the CLDR contains information on how to format
numbers in a locale-dependent manner. Start parsing this data, beginning
with numeric symbol strings. For example, the symbol NaN maps to "NaN"
in
the en-US locale, and "非數值" in the zh-Hant locale.
Some locales in the CLDR have alternate default numbering systems listed
under "defaultNumberingSystem-alt-*", e.g.:
"defaultNumberingSystem": "arab",
"defaultNumberingSystem-alt-latn": "latn",
"otherNumberingSystems": {
"native": "arab"
},
We were previously only parsing "defaultNumberingSystem" and
"otherNumberingSystems". This odd format appears to be an artifact of
converting from XML.
This isn't particularly important because this generates code that is
quite hidden from outside callers. But when viewing the generated code,
it's a bit nicer to read e.g. enum identifiers such as "MinusSign"
rather than "Minussign".
First off, this verifies that an initial value is always provided in
Properties.json for each property.
Second, it verifies that parsing that initial value succeeds.
This means that a call to `property_initial_value()` will always return
a valid StyleValue. :^)
This file contains the list of locales which default to their parent
locale's values. In the core CLDR dataset, these locales have their own
files, but they are empty (except for identity data). For example:
https://github.com/unicode-org/cldr/blob/main/common/main/en_US.xml
In the JSON export, these files are excluded, so we currently are not
recognizing these locales just by iterating the locale files.
This is a prerequisite for upgrading to CLDR version 40. One of these
default-content locales is the popular "en-US" locale, which defaults to
"en" values. We were previously inferring the existence of this locale
from the "en-US-POSIX" locale (many implementations, including ours,
strip variants such as POSIX). However, v40 removes the "en-US-POSIX"
locale entirely, meaning that without this change, we wouldn't know that
"en-US" exists (we would default to "en").
For more detail on this and other v40 changes, see:
https://cldr.unicode.org/index/downloads/cldr-40#h.nssoo2lq3cba
This changes Web::Bindings::throw_dom_exception_if_needed() to return a
JS::ThrowCompletionOr instead of an Optional. This allows callers to
wrap the invocation with a TRY() macro instead of making a follow-up
call to should_return_empty(). Further, this removes all invocations to
vm.exception() in the generated bindings.
This also required converting URLSearchParams::for_each and the callback
function it invokes to ThrowCompletionOr. With this, the ReturnType enum
used by WrapperGenerator is removed as all callers would be using
ReturnType::Completion.
Both at the same time because many of them call construct() in call()
and I'm not keen on adding a bunch of temporary plumbing to turn
exceptions into throw completions.
Also changes the return value of construct() to Object* instead of Value
as it always needs to return an object; allowing an arbitrary Value is a
massive foot gun.
The old versions were renamed to JS_DECLARE_OLD_NATIVE_FUNCTION and
JS_DEFINE_OLD_NATIVE_FUNCTION, and will eventually be removed once all
native functions have been converted to the new format.
Adds support for methods whose last parameter is a variadic DOMString.
We construct a Vector<String> from the remaining arguments to pass to
the C++ implementation.
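A standalone analogue of what the generated code does (names are
illustrative; the real bindings work with JS::Value arguments):

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Arguments past the fixed parameters are gathered into one vector and
// handed to the C++ implementation in a single call.
void invoke_with_variadic_strings(std::vector<std::string> const& arguments,
    std::size_t fixed_argument_count,
    std::function<void(std::vector<std::string>)> const& implementation)
{
    auto first = std::min(fixed_argument_count, arguments.size());
    std::vector<std::string> rest(arguments.begin() + static_cast<std::ptrdiff_t>(first),
        arguments.end());
    implementation(std::move(rest));
}
```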
Note our Attribute class is what the spec refers to as just "Attr". The
main differences between the existing implementation and the spec are
just that the spec defines more fields.
Attributes can contain namespace URIs and prefixes. However, note that
these are not parsed in HTML documents unless the document content-type
is XML. So for now, these are initialized to null. Web pages are able to
set the namespace via JavaScript (setAttributeNS), so these fields may
be filled in when the corresponding APIs are implemented.
The main change to be aware of is that an attribute is a node. This has
implications on how attributes are stored in the Element class. Nodes
are non-copyable and non-movable because these constructors are deleted
by the EventTarget base class. This means attributes cannot be stored in
a Vector or HashMap as these containers assume copyability / movability.
So for now, the Vector holding attributes is changed to hold RefPtrs to
attributes instead. This might change when attribute storage is
implemented according to the spec (by way of NamedNodeMap).
This adds the ParamatizedType, as `Vector<String>` doesn't encode the
full type information. It is a separate struct as you can't have
`Vector<Type>` inside of `Type`. This also makes Type RefCounted
because I had to make parse_type return a pointer to make dynamic
casting work correctly.
The reason I made it RefCounted instead of using a NonnullOwnPtr is
because it causes compiler errors that I don't want to figure out right
now.
Apparently it breaks the fuzzer build. There's probably a better fix
for this, but for now just unbreak the fuzzer build.
Keep this for non-fuzzer builds though since it's apparently a 17%
speedup for running test262 tests :^)
Lagom: Build with -fno-semantic-interposition
We build with this in non-lagom builds, and serenity's gcc even adds it
to its CC1_SPEC. Let's use it for lagom too.
Reduces the number of dynamic relocations in liblagom-js.so.0.0.0 (per
`objdump -R`) from 15133 to 14534, and increases its size back to 91M
(95156800 bytes), probably due to more inlining being possible.
This might help perf of lagom binaries.
We build with this in non-lagom builds, so there's no reason not
to use it in lagom builds as well.
Reduces the size of liblagom-js.so.0.0.0 from 94M to 90M
(from 98352784 to 93831056 bytes to be exact).
Typically size_t is used for indices, but we can take advantage of the
knowledge that there are only approximately 46K unique strings in the
generated UnicodeLocale.cpp file. Therefore, we can get away with using
u16 to store indices. There is a VERIFY that will fail if we ever exceed
the limits of u16.
On x86_64 builds, this reduces libunicode.so from 9.2 MiB to 7.3 MiB.
On i686 builds, this reduces libunicode.so from 3.9 MiB to 3.3 MiB.
These savings are entirely in the .rodata section of the shared library.
Note there are a couple of type differences between the spec and the IDL
file added in this commit. For example, we will need to support a type
of Variant to handle spec types such as "(double or sequence<double>)".
But for now, this allows web pages to construct an IntersectionObserver
with any valid type.
The *_from_string() and resolve_*_alias() generated methods are the last
remaining users of HashMap in the LibUnicode generated files (read: the
last methods not using compile-time structures). This converts these
methods to use an array containing pairs of hash values to the desired
lookup value.
Because this code generation is the same between GenerateUnicodeData.cpp
and GenerateUnicodeLocale.cpp, this adds a GeneratorUtil.h header to the
LibUnicode generators to contain the method that generates the methods.
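A sketch of the generated shape (the hash function and table contents
here are stand-ins):

```cpp
#include <cstdint>
#include <optional>
#include <string_view>

// Stand-in for the string hash the generator uses (FNV-1a here).
constexpr uint32_t hash_string(std::string_view string)
{
    uint32_t hash = 2166136261u;
    for (auto ch : string)
        hash = (hash ^ static_cast<uint8_t>(ch)) * 16777619u;
    return hash;
}

enum class Language : uint16_t { English, French };

struct HashValuePair {
    uint32_t hash { 0 };
    Language value {};
};

// Compile-time table of hash/value pairs, replacing the old runtime
// HashMap.
static constexpr HashValuePair s_languages[] = {
    { hash_string("en"), Language::English },
    { hash_string("fr"), Language::French },
};

std::optional<Language> language_from_string(std::string_view language)
{
    auto wanted = hash_string(language);
    // Linear scan for clarity; a sorted table can be binary-searched.
    for (auto const& pair : s_languages) {
        if (pair.hash == wanted)
            return pair.value;
    }
    return std::nullopt;
}
```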
This concept is not present in ECMAScript, and it bothers me every time
I see it.
It's only used by WrapperGenerator, and even there only relevant in two
places, so let's fully remove it from LibJS and use a simple ternary
expression instead:
cpp_name = js_name.is_null() && legacy_null_to_empty_string
    ? String::empty()
    : js_name.to_string(global_object);
Previously this would generate the following code:
JS::Value foo_value;
if (!foo.is_undefined())
    foo_value = foo;
Which is dangerous as we're passing an empty value around, which could
be exposed to user code again. This is fine with "= null", for which it
also generates:
else
    foo_value = JS::js_null();
So, in summary: a value of type `any`, not `required`, with no default
value and no initializer from user code will now default to undefined
instead of an empty value.
Meta/Lagom/ReadMe.md never had any other name; not sure how that typo
happened.
The link to the non-existent directory is especially vexing because the
text goes on to explain that we don't want such a directory to exist.
Found by running markdown-checker, and 'wget'ing all external links.
The list-format strings used for Intl.ListFormat are small, but quite
heavily duplicated. For example, the string "{0}, {1}" appears 6,519
times. Generate unique strings for this data to avoid duplication.
In the generated UnicodeLocale.cpp file, there are 296,408 strings for
localizations of languages, territories, scripts, currencies & keywords.
Of these, only 43,848 (14.8%) are actually unique, so there are quite a
large number of duplicated strings.
This generates a single compile-time array to store these strings. The
arrays for the localizations now store an index into this single array
rather than duplicating any strings.
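A sketch of the deduplication step in the generator (hypothetical
names):

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Each unique string gets one slot in a single storage array; callers
// receive an index into that array.
struct UniqueStringStorage {
    std::size_t ensure(std::string const& string)
    {
        if (auto it = m_indices.find(string); it != m_indices.end())
            return it->second;
        m_strings.push_back(string);
        std::size_t index = m_strings.size() - 1;
        m_indices.emplace(string, index);
        return index;
    }

    std::vector<std::string> m_strings; // emitted as the single array
    std::unordered_map<std::string, std::size_t> m_indices;
};
// Localization tables then store ensure("US Dollar") indices instead
// of repeating the string itself.
```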
Some CLDR languages.json / territories.json files contain localizations
for some languages/territories that are otherwise not present in the CLDR
database. We already don't generate anything in UnicodeLocale.cpp for
these anomalies, but this will stop us from even storing that data in
the generator's memory.
This doesn't affect the output of the generator, but will have an effect
after an upcoming commit to unique-ify all of the strings in the CLDR.
There are only 112 code points with special casing rules, so this array
is quite small (compared to the size 34,626 UnicodeData hash map that is
also storing this data). Removing all casing rules from UnicodeData will
happen in a subsequent commit.
Currently, all casing information (simple and special) are stored in a
compile-time array of size 34,626, then statically copied to a hash map
at runtime. In an effort to reduce the resulting memory usage, store the
simple casing rules in standalone compile-time arrays. The uppercase map
is size 1,450 and the lowercase map is size 1,433. Any code point not in
a map will implicitly have an identity mapping.
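A sketch of the standalone-array lookup with its identity fallback (the
entries shown are illustrative):

```cpp
#include <algorithm>
#include <cstdint>
#include <iterator>

struct CaseMapping {
    uint32_t code_point { 0 };
    uint32_t mapping { 0 };
};

// Tiny excerpt of what a generated, sorted array might contain.
static constexpr CaseMapping s_uppercase_mappings[] = {
    { 0x61, 0x41 }, // 'a' -> 'A'
    { 0x62, 0x42 }, // 'b' -> 'B'
};

uint32_t to_uppercase(uint32_t code_point)
{
    auto* end = std::end(s_uppercase_mappings);
    auto* it = std::lower_bound(std::begin(s_uppercase_mappings), end, code_point,
        [](CaseMapping const& mapping, uint32_t value) { return mapping.code_point < value; });
    if (it != end && it->code_point == code_point)
        return it->mapping;
    return code_point; // not in the map: identity mapping
}
```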
Having IDL constructors call FooWrapper::create(impl) directly was
creating a wrapper directly without telling the impl object about the
wrapper. This meant that we had wrapped C++ objects with a null
wrapper() pointer.
This introduces 3 classes: NodeList, StaticNodeList and LiveNodeList.
NodeList is the base of the static and live versions. Static is a
snapshot whereas live acts on the underlying data and thus exhibits
the same issues we currently have with HTMLCollection.
They were split into separate classes to not have them weirdly
mish-mashed together.
The create functions for static and live both return a NNRP to the base
class. This is to prevent having to do awkward casting at creation
and/or return, as the bindings expect to see the base NodeList only.
Instead of setting it to the default object prototype and then
immediately setting it again via internal_set_prototype_of, we can just
set it directly in the parent constructor call.
Since we don't support IDL typedefs or unions yet, the responsibility
of verifying the type of the argument is temporarily moved from the
generated Wrapper to the implementation.
This patch makes both of these classes inherit from RefCounted and
Bindings::Wrappable, plus some minimal rejigging to allow us to keep
using them internally while also exposing them to web content.
This adds support for the [Unscopable] extended attribute to attributes
and functions.
I believe it should be applicable to all interface members, but I
haven't done that here.
This currently only supports pair iterables (i.e. iterable<key, value>);
support for value iterables (i.e. iterable<value>) is left as TODO().
Since our CMake setup currently calls WrapperGenerator separately and
unconditionally for each (hard-coded) output file, iterable wrappers
have to be explicitly marked as such in the CMakeLists.txt declaration.
We could likely improve this in the future by querying WrapperGenerator
for the outputs based on the IDL.
This patch essentially just splits the non return-specific logic from
generate_return_statement (i.e. the wrapping of the cpp value into
a JavaScript one) into a separate function generate_wrap_statement that
can be used to wrap any cpp value during wrapper generation.
This custom attribute will be used for objects that hold onto arbitrary
JS::Values. This is needed as JS::Handle can only be constructed for
objects that implement JS::Cell, which JS::Value doesn't.
This works by overriding the `visit_edges` function in the wrapper.
This overridden function calls the base `visit_edges` and then forwards
it to the underlying implementation.
This will be used for CustomEvent, which must hold onto an arbitrary
JS::Value for its entire lifespan.
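A sketch of what the generated override could look like (illustrative;
the exact generated names differ):

```cpp
// In the generated CustomEvent wrapper: visit the base class's edges,
// then forward to the implementation so the JS::Value it holds is
// marked and kept alive by the garbage collector.
void CustomEventWrapper::visit_edges(JS::Cell::Visitor& visitor)
{
    Base::visit_edges(visitor);
    impl().visit_edges(visitor);
}
```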
A legacy platform object is a non-global platform object that
implements a special operation. A special operation is a getter, setter
and/or deleter. This is particularly used for old collection types,
such as HTMLCollection, NodeList, etc.
This will be used to make these spec-compliant and remove their custom
wrappers. Additionally, it will be used to implement collections that
we don't have yet, such as DOMStringMap.
This does a few things that are hard to separate. For a while now, it's
been confusing what `StyleValue::is_foo()` actually means. It was
sometimes used to check the type, and sometimes to see if it could
return a certain value type. The new naming scheme is:
- `is_length()` - is it a LengthStyleValue?
- `as_length()` - casts it to LengthStyleValue
- `has_length()` - can it return a Length?
- `to_length()` - gets the internal value out (eg, Length)
This also means, no more `static_cast<LengthStyleValue const&>(*this)`
stuff when dealing with StyleValues. :^)
Hopefully this will be a bit clearer going forward. There are lots of
places using the original methods, so I'll be going through them to
hopefully catch any issues.
These now crash as VM::call() uses ThrowCompletionOr<T>, which refuses
to
hold an empty JS::Value as its non-exception result.
We only need to return an empty value when should_return_empty() says
so for the return value of throw_dom_exception_if_needed().
Co-authored-by: Luke Wilde <lukew@serenityos.org>
For `number` and `integer` types, you can add a range afterwards to
apply a range check, using syntax similar to that used in the CSS specs. For
example:
```json
"font-weight": {
...
"valid-types": [
"number [1,1000]"
],
...
}
```
This limits any numbers to the range `1 <= n <= 1000`.
Previously, we have not been validating the values for CSS declarations
inside the Parser. This causes issues, since we should be discarding
invalid style declarations, so that previous ones are used instead. For
example, in this code:
```css
.foo {
width: 2em;
width: orange;
}
```
... the `width: orange` declaration overwrites the `width: 2em` one,
even though it is invalid. According to the spec, `width: orange` should
be rejected at parse time, and discarded, leaving `width: 2em` as the
resulting value.
Many properties (mostly shorthands) are parsed specially, and so they
are already rejected if they are invalid. But for simple properties, we
currently accept any value. With `property_accepts_value()`, we can
check if the value is valid in `parse_css_value()`, and reject it if it
is not.
We already expand shorthands in the cascade, so there's no need to
preserve them in the output.
This patch reorganizes the CSS::PropertyID enum values so that we can
easily iterate over all shorthand or longhand properties.
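A sketch of the reorganized enum (marker names are hypothetical):

```cpp
#include <cstdint>

// Shorthands are grouped before longhands, with marker values
// delimiting each range so code can iterate over either group.
enum class PropertyID : uint16_t {
    Invalid,
    // shorthand properties
    Background,
    BorderTop,
    // longhand properties
    BackgroundColor,
    Width,

    FirstShorthandProperty = Background,
    LastShorthandProperty = BorderTop,
    FirstLonghandProperty = BackgroundColor,
    LastLonghandProperty = Width,
};

bool is_shorthand(PropertyID id)
{
    return id >= PropertyID::FirstShorthandProperty
        && id <= PropertyID::LastShorthandProperty;
}
```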
This patch adds a basic initial implementation of these APIs.
Since LibWeb currently doesn't support workers, this implementation of
messaging doesn't bother with serializing and deserializing messages.
When parsing shorthand values, we'd like to use
`property_initial_value()` to get their longhand property values,
instead of hard-coding them as we currently do. That involves
recursively calling that function while the `initial_values` map is
being initialized, which causes problems because the shorthands appear
alphabetically before their longhand components, so the longhands aren't
initialized yet!
The solution here is to perform 2 passes when generating the code,
outputting properties without "longhands" first, and the rest after.
This could potentially cause issues when shorthands have multiple
levels, in particular `border` -> `border-color` -> `border-left-color`.
But, we do not currently define a default value for `border`, and
`border-color` takes only a single value, so it's fine for now. :^)
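A sketch of the two-pass idea (std types; `emit` stands in for whatever
writes one property's initial-value entry):

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

struct Property {
    std::string initial_value;
    std::vector<std::string> longhands; // empty for longhand properties
};

// Pass 1 emits properties without longhands; pass 2 emits shorthands,
// whose longhand components are guaranteed to exist by then.
void generate_initial_values(std::map<std::string, Property> const& properties,
    std::function<void(std::string const&, Property const&)> const& emit)
{
    for (auto const& [name, property] : properties)
        if (property.longhands.empty())
            emit(name, property);
    for (auto const& [name, property] : properties)
        if (!property.longhands.empty())
            emit(name, property);
}
```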
Multi-lib distros like Gentoo and Fedora install lagom-core.so into
lagom-install/lib64 rather than lib. Set the install RPATH based on
CMAKE_INSTALL_LIBDIR to avoid the wrong path being set in the binaries.
Also apply macOS specific RPATH rules to fix the build on that platform.
Replace the old logic where we would start with a host build, and swap
all the CMake compiler and target variables underneath it to trick
CMake into building for Serenity after we configured and built the Lagom
code generators.
The SuperBuild creates two ExternalProjects, one for Lagom and one for
Serenity. The Serenity project depends on the install stage for the
Lagom build. The SuperBuild also generates a CMakeToolchain file for the
Serenity build to use that replaces the old toolchain file that was only
used for Ports.
To ensure that code generators are rebuilt when core libraries such as
AK and LibCore are modified, developers will need to direct their manual
`ninja` invocations to the SuperBuild's binary directory instead of the
Serenity binary directory.
This commit includes warning coalescing and option style cleanup for the
affected CMakeLists in the Kernel, top level, and runtime support
libraries. A large part of the cleanup is replacing USE_CLANG_TOOLCHAIN
with the proper CMAKE_CXX_COMPILER_ID variable, which will no longer be
confused by a host clang compiler.
This common strategy of having a serenity_option() macro defined in
either the Lagom or top level CMakeLists.txt allows us to do two things:
First, we can more clearly see which options are Serenity-specific,
Lagom-specific, or common between the target and host builds.
Second, it enables the upcoming SuperBuild changes to set() the options
in the SuperBuild's CMake cache and forward each target's options to the
corresponding ExternalProject.
This makes it so we don't need to specify the full path to all the
helper scripts we include() from different places in the codebase and
feels a lot cleaner.
We'll use this to prevent repeating common tool dependencies. They all
depend on LibCore and AK only. We also want to encapsulate common
install rules for them.
This namespace will be used for all interfaces defined in the URL
specification, like URL and URLSearchParams.
This has the unfortunate side-effect of requiring us to use the fully
qualified AK::URL name whenever we want to refer to the AK class, so
this commit also fixes all such references.
This lets you query if a given Quirk applies to a given PropertyID.
Currently this applies only to the "Hashless hex color" and "Unitless
length" quirks.
This removes the awkward String::replace API which was the only String
API which mutated the String and replaces it with a new immutable
version that returns a new String with the replacements applied. This
also fixes a couple of UAFs that were caused by the use of this API.
As an optimization an equivalent StringView::replace API was also added
to remove unnecessary String allocations of the form:
`String { view }.replace(...);`
There's only a couple of cases like this, but there are some locale
paths in the CLDR that contain variants. For example, there isn't a
en-US path, but there is a en-US-POSIX path. This interferes with the
operation to search for locales by name. The algorithm is such that
searching for en-US will not result in en-US-POSIX being found. To
resolve this, we should remove variants from the locale name.
This data informs consumers how to join lists of values. For example,
in en-US, the list ["a", "b", "c"] formatted to a string should become
"a, b, and c".
This is to simplify the Default Case Conversion implementation. Otherwise,
the implementation would need to determine which special casing rule to
apply, instead of just picking the first match.
The number of aliases in the likely-subtags dataset is quite large, so
this also needed to change the way the data is generated. Otherwise, the
compiler would complain about the size of the generated code.
Previously, a static method was generated that would effectively parse
the dataset into a HashMap of Unicode::LanguageID at runtime. We now
perform that parsing at generation-time, and instead generate an Array
of a structure similar to Unicode::LanguageID (we cannot use the same
structure because it contains String and Optional, which cannot be used
at compile-time).
The DOM specification says that the primary use case for these is to
give Promises abort semantics. It is also a prerequisite for Fetch,
as it is used to make Fetch abortable.
CLDR contains a set of likely subtag data where, given a locale, you can
resolve what is the most likely language, script, or territory of that
locale. This data is needed for resolving territory aliases. These
aliases might contain multiple territories, and we need to resolve which
of those territories is most likely correct for a locale.
Note that the likely subtag data is quite huge (a few thousand entries).
As an optimization encouraged by the spec, we only generate the smallest
subset of this data that we actually need (about 150 entries).
Most alias substitutions are "simple", meaning that alias matching is
done by examining a single locale subtag. However, there are a handful
of "complex" aliases where matching is done by examining multiple
subtags. For example, the variant subtag "lojban" causes the locale
"art-lojban" to be canonicalized to "jbo", but only when the language
subtag is "art" (i.e. this should not occur for the locale "en-lojban").
This generates a method to perform complex alias matching.
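A sketch of one such rule (simplified types; the generated matcher
covers all complex aliases in the dataset):

```cpp
#include <algorithm>
#include <optional>
#include <string>
#include <vector>

struct LanguageID {
    std::string language;
    std::vector<std::string> variants;
};

// The "lojban" variant only canonicalizes the locale to "jbo" when the
// language subtag is "art".
std::optional<std::string> resolve_complex_language_alias(LanguageID const& locale)
{
    bool has_lojban = std::find(locale.variants.begin(), locale.variants.end(), "lojban")
        != locale.variants.end();
    if (locale.language == "art" && has_lojban)
        return "jbo"; // "art-lojban" -> "jbo"
    return std::nullopt; // "en-lojban" is left alone
}
```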
CLDR contains a set of aliases for languages, territories, etc. that no
longer are meant to be used (e.g. due to deprecation). For example, the
language "aam" is deprecated and should be canonicalized as "aas".
This is needed so all headers and files exist on disk, so that
the SonarCloud analyzer can find them when executing the compilation
commands contained in compile_commands.json, without actually building.
Co-authored-by: Andrew Kaster <akaster@serenityos.org>
This allows us to remove all the add_subdirectory calls from the top
level CMakeLists.txt that referred to targets linking LagomCore.
Segregating the host tools and Serenity targets helps us get to a place
where the main Serenity build can simply use a CMake toolchain file
rather than swapping all the compiler/sysroot variables after building
host libraries and tools.
Moving this helper CMake file to the centralized Meta/CMake folder helps
to get a better grasp on what extra files are required for the build,
and what files are generated.
While we're at it, don't use add_compile_definitions for
ENABLE_UNICODE_DATA, which only needs to be seen by LibUnicode sources.
The top-level CMakeLists.txt already automatically detects ccache, but
CI will invoke CMake with Lagom's CMakeLists.txt. Add an option to Lagom
to do the same detection.
This standard CMake option controls whether add_library() calls will
use STATIC or SHARED by default. The flag is set to on by default
since that's what we want for normal CI jobs and local builds and the
test262 runner, but disabled for oss-fuzz builds.
This should finally fix the oss-fuzz build after it was broken in #9017
oss-fuzz un-breakage was verified by running the following commands in
the oss-fuzz repo:
python infra/helper.py build_image serenity
python infra/helper.py build_fuzzers --sanitizer address --engine afl \
--architecture x86_64 serenity /path/to/local/checkout/Meta/Lagom
python infra/helper.py check_build --sanitizer address --engine afl \
--architecture x86_64 serenity
This supports some binary property matching. It does not support any
properties not yet parsed by LibUnicode, nor does it support value
matching (such as Script_Extensions=Latin).
Split the Lagom build into shared libraries to match the Serenity build.
This reduces the cognitive load when trying to edit the Lagom CMakeLists
significantly. It also reduces the amount of source files that must be
compiled to run each test or host program significantly.
Also re-organize all the build rules into sections, and reorganize the
CMakeLists file in general.
LibTTF has a concrete dependency on LibGfx for things like Gfx::Bitmap,
and LibGfx has a concrete dependency in the TTF::Font class in
Gfx::FontDatabase. This circular dependency works fine for Serenity and
Lagom Linux builds of the two libraries. It also works fine for static
library builds on Lagom macOS builds.
However, future changes will make Lagom use shared libraries, and
circular library dependencies are not tolerated in macOS.
This is primarily to allow using LibUnicode within LibJS and its REPL.
Note: this seems to be the first time that a Lagom dependency requires
generated source files. For this to work, some of Lagom's CMakeLists.txt
commands needed to be re-organized to include the CMake files that fetch
and parse UnicodeData.txt. The paths required to invoke the generator
also differ depending on what is currently building (SerenityOS vs.
Lagom as part of the Serenity build vs. a standalone Lagom build).
These are usually incorrect, and people sometimes forget to add the
correct values as a result of them being optional, so they should just
be specified explicitly.
This removes all usages of the non-standard define_property helper
method and replaces all its usages with the specification-required
alternative or with define_direct_property where appropriate.
All GUI applications currently load all TTF fonts on startup
(to populate the Gfx::FontDatabase. This could probably be smarter.)
Before this patch, everyone would open the files and read them into
heap-allocated storage. Now we simply mmap() them instead. :^)
This commit adds a bunch of passes, the most interesting of which is a
pass that merges blocks together, and a pass that places blocks that
flow into each other next to each other, and a very simple pass that
removes duplicate basic blocks.
Note that the block-placement pass does not remove the jump at the end
of each block, to avoid scope creep in these passes.
Previously, AK::Function would accept _any_ callable type, and try to
call it when called, first with the given set of arguments, then with
zero arguments, and if all of those failed, it would simply not call the
function and **return a value-constructed Out type**.
This led to many, many, many hard-to-debug situations when someone
forgot a `const` in their lambda argument types, and many cases of
people taking zero arguments in their lambdas to ignore them.
This commit reworks the Function interface to not include any such
surprising behaviour: if your function instance is not callable with
the declared argument set of the Function, it simply cannot be
assigned to that Function instance, end of story.
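A minimal sketch of how such a constraint looks (not AK's actual
implementation): the converting constructor only participates in
overload resolution when the callable really is invocable with the
declared argument types, so a mismatched lambda fails to compile
instead of silently misbehaving.

```cpp
#include <type_traits>

template<typename>
class Function;

template<typename Out, typename... In>
class Function<Out(In...)> {
public:
    // SFINAE-constrained: rejects callables that don't match In... .
    template<typename Callable,
        typename = std::enable_if_t<std::is_invocable_r_v<Out, Callable, In...>>>
    Function(Callable&&)
    {
        // Type-erasure and storage elided for brevity.
    }
};

// Function<void(int const&)> f = [](int&) {}; // now a compile error
```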
Previously ByteBuffer::grow() behaved like Vector<T>::resize().
However the function name was somewhat ambiguous - and so this patch
updates ByteBuffer to behave more like Vector<T> by replacing grow()
with resize() and adding an ensure_capacity() method.
This also lets the user change the buffer's capacity without affecting
the size which was not previously possible.
Additionally this patch makes the capacity() method public (again).
This only tests "can it be parsed", but the goal of this commit is to
provide a test framework that can be built upon :)
The conformance tests are downloaded, compiled* and installed only if
the INCLUDE_WASM_SPEC_TESTS cmake option is enabled.
(*) Since we do not yet have a wast parser, the compilation is delegated
to an external tool from binaryen, `wasm-as`, which is required for the
test suite download/install to succeed.
This *does* run the tests in CI, but it currently does not include the
spec conformance tests.