If the property for GetByValue in Generator::load_from_reference
is a computed value, it would be stored in a newly allocated
register and returned from the function. Not all callers want this
information, however, so now we only hand it out when asked for.
This reduces the instruction count for the Kraken/ai-astar.js function
"neighbours" from 214 to 192.
The only user is currently String::equals_ignoring_case, but LibRegex
will need to do the same case-folded comparison with UTF-32 data. As it
turns out, the comparison works with all Unicode view types without much
fuss.
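As a rough illustration of why this generalizes, a comparison
templated on the view type only needs iteration and per-code-point
folding (ASCII tolower stands in for real Unicode case folding here):

```cpp
#include <algorithm>
#include <cctype>
#include <string_view>

// Sketch only: the same algorithm serves any "view" type that yields
// comparable code units/points, which is why UTF-8, UTF-16, and
// UTF-32 views can share one implementation.
template<typename ViewA, typename ViewB>
bool equals_ignoring_case(ViewA const& a, ViewB const& b)
{
    return std::equal(a.begin(), a.end(), b.begin(), b.end(),
        [](auto lhs, auto rhs) {
            // Real code would use Unicode case folding, not tolower.
            return std::tolower(static_cast<unsigned char>(lhs))
                == std::tolower(static_cast<unsigned char>(rhs));
        });
}

int main()
{
    std::string_view a = "WeIrD CaSe";
    std::string_view b = "weird case";
    return equals_ignoring_case(a, b) ? 0 : 1;
}
```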
Let's replace this bool with an `enum class` in order to enhance
readability. This is done by repurposing `MappedFile`'s `OpenMode` into
a shared `enum` simply called `Mode`.
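Roughly, the readability difference looks like this (a simplified
sketch; the helper below is hypothetical, not the actual API):

```cpp
// Before: a bare bool at the call site says nothing about intent.
//   auto file = map_file("/tmp/example", true);
//
// After: a shared enum class makes the call site self-documenting.
enum class Mode {
    ReadOnly,
    ReadWrite,
};

// Hypothetical helper to show the call-site difference.
int map_file(char const* path, Mode mode)
{
    (void)path; // not used in this sketch
    return mode == Mode::ReadWrite ? 1 : 0;
}

int main()
{
    return map_file("/tmp/example", Mode::ReadOnly);
}
```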
Previously, usage of GetGlobal was prevented whenever eval() was
present in the scope chain.
With this change GetGlobal is emitted for `g` in the following program:
```js
function screw_everything_up() {
    eval("");
}
var g;
g;
```
It makes the Octane/mandreel.js benchmark run 2x faster :)
When wrapping dictionary members, generate_wrap_statement was called
with the pattern "auto {} = ...", where "..." was determined based on
the variable's type. However, in generate_wrap_statement, if a type is
nullable, it generates an if statement, so this would end up
generating something along the lines of:

```cpp
if (!retval.member.has_value()) {
    auto wrapped_member0_value = JS::js_null();
} else {
    auto wrapped_member0_value = JS::Value(...);
}
```

...which makes the declaration inaccessible outside its branch. It now
generates the same
code, but the "auto" declaration (now an explicit JS::Value declaration)
is outside of the if-statement.
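Concretely, the generated shape is now along these lines (a sketch of
the emitted code, with the type-dependent part elided as before):

```cpp
JS::Value wrapped_member0_value;
if (!retval.member.has_value()) {
    wrapped_member0_value = JS::js_null();
} else {
    wrapped_member0_value = JS::Value(...);
}
// wrapped_member0_value is still in scope here
```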
The main issues are using Structured{Serialize,Deserialize} instead of
Structured{Serialize,Deserialize}WithTransfer and the temporary
execution context usage for StructuredDeserialize.
Allows Discord to load once again, as it uses a postMessage scheduler
to render components, including the main App component. The
scheduler's callback checked the source attribute of the MessageEvent
(which we previously did not implement) and returned early if it was
not the main window.
Fixes the Twitch cookie consent banner saying "failed integrity check"
for reasons unknown, though presumably related to the source and
origin attributes.
This commit replaces the 5 fuzzers that previously tested LibTextCodec
with a single fuzzer. We now rely on the fuzzer to generate the
encoding and separate it from the encoded data with a magic separator.
This increases the overall coverage of LibTextCodec and eliminates the
possibility of the same error being generated by multiple fuzzers.
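A sketch of what such a combined fuzzer entry point can look like (the
separator and the decode call are illustrative assumptions, not
LibTextCodec's actual interface):

```cpp
#include <cstddef>
#include <cstdint>
#include <string_view>

// The input is "<encoding name><separator><encoded data>"; the fuzzer
// mutates both halves, so one binary covers every decoder.
static constexpr std::string_view separator = "|FUZZ|";

extern "C" int LLVMFuzzerTestOneInput(uint8_t const* data, size_t size)
{
    if (size == 0)
        return 0;
    std::string_view input(reinterpret_cast<char const*>(data), size);
    auto split = input.find(separator);
    if (split == std::string_view::npos)
        return 0; // not well-formed; let the fuzzer keep mutating

    auto encoding = input.substr(0, split);
    auto encoded_data = input.substr(split + separator.size());
    // decoder_for(encoding)->to_utf8(encoded_data); // hypothetical call
    (void)encoding;
    (void)encoded_data;
    return 0;
}
```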
When building fuzzers for OSS-Fuzz using `BuildFuzzers.sh --oss-fuzz`,
fuzzer dictionary files are now copied to the `$OUT` directory. This
allows them to be used automatically by the corresponding fuzzer.
We now return an error where `parse_atom()` would have previously
returned an empty StringView. This is consistent with RFC 3501, which
says that an atom consists of one or more characters.
This prevents a few cases where parsing an invalid atom could lead to
an infinite loop.
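A simplified sketch of the new behavior (the real parser works on its
own buffer types and returns an error type rather than a bool):

```cpp
#include <string_view>

// Simplified; RFC 3501 excludes more characters than this.
static bool is_atom_char(char c)
{
    return c > ' ' && c != '(' && c != ')' && c != '{' && c != '"'
        && c != '%' && c != '*' && c != '\\' && c != ']';
}

// An atom is one or more atom-characters, so a zero-length match is
// now treated as a parse failure instead of yielding an empty
// StringView that callers could spin on forever.
static bool parse_atom(std::string_view input, std::string_view& out_atom)
{
    size_t length = 0;
    while (length < input.size() && is_atom_char(input[length]))
        ++length;
    if (length == 0)
        return false;
    out_atom = input.substr(0, length);
    return true;
}

int main()
{
    std::string_view atom;
    return parse_atom("NOOP rest", atom) ? 0 : 1;
}
```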
Previously, the regression tests for OSS-Fuzz issues 62033 and 63296
used test case files directly from OSS-Fuzz. These files are invalid
in multiple ways because they have been generated by a fuzzer. This
commit replaces these files with ones that only expose the issue being
tested.
Read the basic lists as spans, and use those when looking up kerning.
The kerning lookup itself still does bit-casting for now. As for CBLC,
its data is a bit complicated.
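A rough sketch of the span-based reading (the types below are
stand-ins; the real code uses AK spans and AK::BigEndian):

```cpp
#include <cstdint>
#include <span>

// Stand-in for AK::BigEndian<u16>: the value is stored big-endian in
// the font file regardless of host byte order.
struct BigEndianU16 {
    uint8_t bytes[2];
    operator uint16_t() const { return uint16_t(bytes[0] << 8 | bytes[1]); }
};

// One "format 0" kerning pair record, as it appears on disk.
// (The spec's value field is actually a signed FWord.)
struct KerningPair {
    BigEndianU16 left_glyph;
    BigEndianU16 right_glyph;
    BigEndianU16 value;
};

// Instead of recomputing byte offsets on every lookup, expose the
// record list as a span once and iterate or binary-search over it.
std::span<KerningPair const> pairs(uint8_t const* data, size_t count)
{
    return { reinterpret_cast<KerningPair const*>(data), count };
}
```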
A few closely-related changes:
- Move our definitions of the OpenType spec's "data types" into their
  own header file.
- Add definitions for the integer types there too, for completeness.
  (Plus Uint16 matches the spec term, and is less verbose than
  BigEndian<u16>.)
- Include Traits for the non-BigEndian types so that we can read them
  from Streams. (BigEndian<integer-type> already has this.)
- Use the integer types in our struct definitions.
As a bonus, this fixes a bug in Hmtx, which read the left-side bearings
as i16 instead of BigEndian<i16>.
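For illustration, the shared header's aliases look something like this
(a sketch; names beyond Uint16 and BigEndian<u16> are assumptions):

```cpp
#include <AK/Endian.h>
#include <AK/Types.h>

namespace OpenType {

// Spec-named data types so struct definitions read like the spec.
using Uint16 = BigEndian<u16>;
using Int16 = BigEndian<i16>;
using Uint32 = BigEndian<u32>;

// With these, the Hmtx bug is hard to write: a plain i16 field would
// silently read the left-side bearing in the wrong byte order, while
// Int16 (BigEndian<i16>) gets it right.
struct LongHorMetric {
    Uint16 advance_width;
    Int16 left_side_bearing;
};

}
```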
Do more checks at load time, including categorizing the subtables and
producing our own directory of them.
The format for Kern is a little complicated, so use a Stream instead of
manual offsets.
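A rough sketch of the stream-based subtable walk, assuming AK's stream
API (not the literal Kern implementation):

```cpp
#include <AK/MemoryStream.h>
#include <AK/Types.h>

// Walk the Kern table sequentially with a stream instead of tracking
// byte offsets by hand; each iteration categorizes one subtable.
static ErrorOr<void> scan_kern_subtables(ReadonlyBytes table)
{
    FixedMemoryStream stream { table };
    auto version = TRY(stream.read_value<BigEndian<u16>>());
    if (version != 0)
        return Error::from_string_literal("Unsupported kern version");

    auto subtable_count = TRY(stream.read_value<BigEndian<u16>>());
    for (u16 i = 0; i < subtable_count; ++i) {
        [[maybe_unused]] auto subtable_version = TRY(stream.read_value<BigEndian<u16>>());
        u16 length = TRY(stream.read_value<BigEndian<u16>>());
        [[maybe_unused]] auto coverage = TRY(stream.read_value<BigEndian<u16>>());
        // ...record this subtable's kind and location in our own
        // directory, categorized by its coverage bits...
        if (length < 3 * sizeof(u16))
            return Error::from_string_literal("Invalid kern subtable length");
        TRY(stream.discard(length - 3 * sizeof(u16)));
    }
    return {};
}
```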