Previously, constructing an `UnsignedBigInteger` via `from_base()` could
produce an incorrect result if the input string contained a valid
Base36 digit that was out of range for the given base. The same method
would also crash if the input string contained an invalid Base36 digit.
An error is now returned in both of these cases.
Constructing a BigFraction from a string is now also fallible, so that
we can handle the case where the input string contains invalid digits.
This commit un-deprecates DeprecatedString, and repurposes it as a byte
string.
As the null state has already been removed, there are no other
particularly hairy blockers in repurposing this type as a byte string
(what it _really_ is).
This commit is auto-generated:
$ xs=$(ack -l '\bDeprecatedString\b|deprecated_string' AK Userland \
      Meta Ports Ladybird Tests Kernel)
$ perl -pi -e 's/\bDeprecatedString\b/ByteString/g;
      s/deprecated_string/byte_string/g' $xs
$ clang-format --style=file -i \
      $(git diff --name-only | grep '\.cpp\|\.h')
$ gn format $(git ls-files '*.gn' '*.gni')
When iterating over an iterable, we get back a JS object with the fields
"value" and "done".
Before this change, we had two dedicated instructions for retrieving
the two fields: IteratorResultValue and IteratorResultDone. These had no
fast path whatsoever and just did a generic [[Get]] access to fetch the
corresponding property values.
By replacing the instructions with GetById("value") and GetById("done"),
they instantly get caching and JIT fast paths for free, making iterating
over iterables much faster. :^)
26% speed-up on this microbenchmark:
function go(a) {
    for (const p of a) {
    }
}

const a = [];
a.length = 1_000_000;
go(a);
This patch makes IteratorRecord an Object. Although it's not exposed to
author code, this does allow us to store it in a VM register.
Now that we can store it in a VM register, we don't need to convert it
back and forth between IteratorRecord and Object when accessing it from
bytecode.
The big win here is avoiding 3 [[Get]] accesses on every iteration step
of for..of loops. There are also a bunch of smaller efficiencies gained.
20% speed-up on this microbenchmark:
function go(a) {
    for (const p of a) {
    }
}

const a = [];
a.length = 1_000_000;
go(a);
When all the variables in a for..in/of block's lexical scope have been
turned into locals, we don't need to create and immediately abandon an
empty environment for them.
This avoids an environment allocation in cases like this:
function foo(a) {
    for (const x of a) {
    }
}
This will not meaningfully affect short array literals, but it does
give us a bit of extra perf when evaluating huge array expressions like
in Kraken/imaging-darkroom.js
Until now, the unwind context stack has not been maintained by jitted
code, which meant we were unable to support the `with` statement.
This is a first step towards supporting that by making jitted code
call out to C++ to update the unwind context stack when entering/leaving
unwind contexts.
We also introduce a new "Catch" bytecode instruction that moves the
current exception into the accumulator. It's always emitted at the start
of a "catch" block.
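As a rough illustration (a hypothetical snippet, not from the commit),
the new instruction is emitted at the entry of blocks like this one:

try {
    throw new Error("boom");
} catch (e) {
    // At the start of this block, the new Catch instruction moves the
    // current exception into the accumulator so it can be bound to `e`.
    console.log(e.message); // "boom"
}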
This patch makes it possible for JS::Object::internal_set() to populate
a CacheablePropertyMetadata, and uses this to implement a basic
monomorphic cache for the most common form of property write access.
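A sketch of the kind of code that benefits, in the style of the other
microbenchmarks (hypothetical example; no numbers were measured for it):

function go(o) {
    for (let i = 0; i < 1_000_000; ++i) {
        // Repeated writes to the same property on objects with the same
        // shape are the common form of access the new cache targets.
        o.x = i;
    }
}

go({ x: 0 });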
If the property for GetByValue in Generator::load_from_reference is a
computed value, it used to be stored in an allocated register and
returned from the function. Not all callers want this information,
however, so we now only hand it out when asked for.
This reduces the instruction count of the "neighbours" function in
Kraken/ai-astar.js from 214 to 192.
In the case of `{ get func() {}, set func() {} }` we were wrongly
setting the function name to 'func' and then later trying to replace an
empty name with 'get func'/'set func', which failed.
Instead, set the name to 'get func'/'set func' right away.
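For reference, this is the naming behavior the fix is after (a minimal
example):

const obj = { get func() {}, set func(v) {} };
const desc = Object.getOwnPropertyDescriptor(obj, "func");
console.log(desc.get.name); // "get func"
console.log(desc.set.name); // "set func"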
The code in put_by_property_key is kept, for when that is called
by put_by_value.
This is currently only used in the bytecode dump to annotate to where
unwinds lead per block, but will be hooked up to the virtual machine in
the next commit.
The following snippet would cause "i" to be incremented twice(!):
let a = []
let i = 0
a[++i] += 0
This patch solves the issue by remembering the base object and property
name for computed MemberExpression LHS in codegen. We then store the
result of the assignment to the same object and property (instead of
computing the LHS again).
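With the fix, the snippet above behaves as expected:

let a = []
let i = 0
a[++i] += 0
console.log(i); // 1 - previously 2, because ++i was evaluated twice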
3 new passes on test262. :^)
We should initialize jump targets when constructing the jump instruction
instead of doing it later. This was already the case in all construction
sites but one. This first patch converts all those sites to pass final
targets to the constructor directly.
When there is no `catch` parameter to bind the error, we don't need
to allocate an environment, since there's nothing to add to it.
This avoids one environment allocation every time we catch like this:
try {
    ...
} catch {
    ...
}
Our implementation of environment.CreateImmutableBinding(name, true)
in this AO was not correctly initializing const variables in strict
mode. This would mean that constant declarations in for loop bodies
would not throw if they were modified.
To fix this, add a new parameter to CreateVariable to set strict mode.
Also remove the vm.is_strict check here, as nothing in the spec appears
to make this depend on whether the script itself is running in strict
mode or not.
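One of the cases this fixes looks roughly like the following
(hypothetical example):

"use strict";
for (const x of [1, 2, 3]) {
    try {
        x = 42; // must throw, since `x` is a per-iteration const binding
    } catch (e) {
        console.log(e instanceof TypeError); // true
    }
}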
This fixes two of our test-js tests, no change to test262.
This works by adding source start/end offset to every bytecode
instruction. In the future we can make this more efficient by keeping
a map of bytecode ranges to source ranges in the Executable instead,
but let's just get traces working first.
Co-Authored-By: Andrew Kaster <akaster@serenityos.org>
If we're inside of a `with` statement scope, we have to take care to
extract the correct `this` value for use in calls when calling a method
on the binding object via an Identifier instead of a MemberExpression.
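A minimal sketch of the situation (hypothetical example):

const obj = {
    name: "example",
    greet() {
        // `this` must be `obj` even though `greet` is called below as a
        // bare Identifier, because the reference's base is the binding
        // object of the `with` environment.
        return this.name;
    },
};

with (obj) {
    console.log(greet()); // "example"
}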
This makes Vue.js work way better in the bytecode VM. :^)
Also, 1 new pass on test262.
CreateVariable is not needed for locals because they are not stored in
an environment, so the created binding would never be used. Also, if
all the variables in a loop's initialization section are locals, then
CreateLexicalEnvironment and LeaveLexicalEnvironment can be omitted as
well.
The RegExpLiteral AST node already has the parsed regex::Parser::Result
so let's plumb that over to the bytecode executable instead of reparsing
the regex every time NewRegExp is executed.
~12% speed-up on language/literals/regexp/S7.8.5_A2.1_T2.js in test262.
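A sketch of the kind of code that benefits (hypothetical
microbenchmark, separate from the test262 file above):

function hasDigits(s) {
    // This literal used to be reparsed every time NewRegExp executed;
    // the parse result is now reused from the AST node.
    return /\d+/.test(s);
}

for (let i = 0; i < 1_000_000; ++i)
    hasDigits("abc123");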
Using a special instruction to access global variables allows skipping
the environment chain traversal for them and going directly to the
module/global environment. Currently, this instruction only caches the
offset for bindings that belong to the global object environment.
However, there is also an opportunity to cache the offset in the global
declarative record.
This change results in a 57% increase in speed for
imaging-gaussian-blur.js in Kraken.
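A sketch of the kind of code that benefits (hypothetical
microbenchmark, not the Kraken test itself):

var total = 0;

function go() {
    for (let i = 0; i < 1_000_000; ++i) {
        // `total` is a top-level var, so it lives in the global object
        // environment; the new instruction caches its offset instead of
        // walking the environment chain on every access.
        total += i;
    }
}

go();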
When building an object from an object expression, we don't want to
go through the full property setting machinery. This patch adds a new
PropertyKind::DirectKeyValue for PutById which guarantees that the
property becomes an own property.
This fixes an issue where setting the "__proto__" property in object
expressions wasn't working right.
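For example, prototypes set via object expressions now behave as
expected:

const o = { __proto__: null };
console.log(Object.getPrototypeOf(o)); // null, not Object.prototype

const p = { __proto__: Array.prototype };
console.log(p instanceof Array); // true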
12 new passes on test262. :^)
The instructions GetById and GetByIdWithThis now remember the last-seen
Shape, and if we see the same object again, we reuse the property offset
from last time without doing a new lookup.
This allows us to use Object::get_direct(), bypassing the entire lookup
machinery and saving lots of time.
~23% speed-up on Kraken/ai-astar.js :^)
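A sketch of the access pattern this helps with (hypothetical example,
not the Kraken benchmark itself):

function manhattan(points) {
    let total = 0;
    for (const p of points) {
        // Every `p` has the same Shape here, so after the first lookup
        // the cached property offset is reused via Object::get_direct().
        total += p.x + p.y;
    }
    return total;
}

console.log(manhattan([{ x: 1, y: 2 }, { x: 3, y: 4 }])); // 10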
- Update ECMAScriptFunctionObject::function_declaration_instantiation
  to initialize local variables.
- Introduce GetLocal, SetLocal, and TypeofLocal, which will be used to
  operate on local variables.
- Update the bytecode generator to emit instructions for local
  variables.
By using the Identifier class to represent the name of a class
expression, it becomes possible to consistently store information
within the identifier object indicating whether the name refers to a
local variable or not.
This avoids the overhead of allocating a new Array on every function
call, saving a substantial amount of time and avoiding GC thrash.
This patch only makes use of Op::Call in CallExpression. There are other
places we should codegen this op. We should also do the same for super
expression calls.
~5% speed-up on Kraken/stanford-crypto-ccm.js
Forcing every function call to allocate a new Array just to accommodate
spread parameters is not very nice, so let's start moving towards making
this a special case rather than the general (and only) case.
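To illustrate the distinction (hypothetical example): only calls that
actually use spread arguments need the argument Array; ordinary calls
can skip it.

function sum(a, b, c) {
    return a + b + c;
}

console.log(sum(1, 2, 3));      // 6: plain call, no argument Array needed
console.log(sum(...[1, 2, 3])); // 6: spread call, still the general path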
While the completion value of a variable declaration is specified to be
empty, we might already have a completion value in the accumulator from
a previous statement. Preserve it so as to avoid clobbering it.
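One observable consequence (a minimal example via eval, which returns
the script's completion value):

// The expression statement produces 1; the variable declaration's
// completion is "empty" and must not clobber it.
console.log(eval("1; var x = 2;")); // 1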
This fixes 6 tests on test262.
This makes them trivially copyable, which is an assumption multiple
optimizations rely on when rebuilding the instruction stream.
This fixes most of the crashes seen in the test262 suite with the
bytecode optimizations enabled.
We do this by moving the `LoadImmediate undefined` instruction to a
separate basic block which jumps to the case's block unconditionally.
We enter a case initially using this wrapper, but when falling through,
we directly jump to the next case's block.
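For context, this concerns switch statements with fallthrough, e.g.
(hypothetical example):

switch (1) {
    case 1:
        console.log("case 1");
        // Falling through: control jumps directly into case 2's block,
        // bypassing the `LoadImmediate undefined` wrapper block used
        // when entering case 2 directly.
    case 2:
        console.log("case 2");
        break;
}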