Instead of emitting a NewBigInt instruction to construct a primitive
bigint from a parsed literal, we now instantiate the BigInt on the heap
during codegen.
Instead of emitting a NewString instruction to construct a primitive
string from a parsed literal, we now instantiate the PrimitiveString on
the heap during codegen.
`var` declarations can have duplicates, but duplicate `let` or `const`
bindings are a syntax error.
Because of this, we can sink `let` and `const` directly into the
preferred_dst if available. This is not safe for `var` since the
preferred_dst may be used in the initializer.
This patch fixes the issue by simply skipping the preferred_dst
optimization for `var` declarations.
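A hypothetical snippet illustrating the hazard (not necessarily the case that prompted this fix):
var x = "old";
var x = { foo: x };
// The initializer must read the previous value of `x` ("old"). If codegen
// placed the new object into `x` (the preferred_dst) before evaluating the
// initializer, `foo` would end up referencing the new object instead.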
When compiling code like this:
x = { foo: x }
We don't want to put a new JS::Object in `x` until *after* we've
evaluated `x` for the `foo` field.
This fixes an issue when loading https://puter.com/ :^)
Instead of having Call refer to a range of VM registers, it now has
a trailing list of argument operands as part of the instruction.
This means we no longer have to shuffle every argument value into
a register before making a call, making bytecode smaller & faster. :^)
Instead of splitting the postfix variants into ToNumeric + Inc/Dec,
we now have dedicated PostfixIncrement and PostfixDecrement instructions
that handle both outputs in one go.
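For illustration, the two results a postfix operator needs:
let i = 0;
const old = i++;
// A single PostfixIncrement now produces the expression value (0, stored
// in `old`) and writes the incremented value (1) back to `i`, instead of
// emitting ToNumeric followed by a separate increment.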
This patch moves us away from the accumulator-based bytecode format to
one with explicit source and destination registers.
The new format has multiple benefits:
- ~25% faster on the Kraken and Octane benchmarks :^)
- Fewer instructions to accomplish the same thing
- Much easier for humans to read(!)
Because this change requires a fundamental shift in how bytecode is
generated, it is quite comprehensive.
Main implementation mechanism: generate_bytecode() virtual function now
takes an optional "preferred dst" operand, which allows callers to
communicate when they have an operand that would be optimal for the
result to go into. It also returns an optional "actual dst" operand,
which is where the completion value (if any) of the AST node is stored
after the node has "executed".
One thing of note that's new: because instructions can now take locals
as operands, this means we got rid of the GetLocal instruction.
A side-effect of that is we have to think about the temporal deadzone
(TDZ) a bit differently for locals (GetLocal would previously check
for empty values and interpret that as a TDZ access and throw).
We now insert special ThrowIfTDZ instructions in places where a local
access may be in the TDZ, to maintain the correct behavior.
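For example (an illustrative snippet), an access like this must still throw even when `x` becomes a local:
function f() {
    x;          // must throw a ReferenceError: `x` is still in the TDZ here,
                // even if `x` has been promoted to a local
    let x = 1;
}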
There are a number of progressions and regressions from this change:
A number of async generator tests have been accidentally fixed while
converting the implementation to the new bytecode format. It didn't
seem useful to preserve bugs in the original code when converting it.
Some "does eval() return the correct completion value" tests have
regressed, in particular ones related to propagating the appropriate
completion after control flow statements like continue and break.
These are all fairly obscure issues, and I believe we can continue
working on them separately.
The net test262 result is a progression though. :^)
This is pure prep work for refactoring the bytecode to use more operands
instead of only registers.
generate_bytecode() virtuals now return an Optional<Operand>, and the
idea is to return an Operand referring to the value produced by this
AST node.
They also take an Optional<Operand> "preferred_dst" input. This is
intended to communicate the caller's preference for an output operand,
if any. This will be used to elide temporaries when we can store the
result directly in a local, for example.
Previously, constructing an `UnsignedBigInteger` via `from_base()` could
produce an incorrect result if the input string contained a valid
Base36 digit that was out of range for the given base. The same method
would also crash if the input string contained an invalid Base36 digit.
An error is now returned in both these cases.
Constructing a BigFraction from string is now also fallible, so that we
can handle the case where we are given an input string with invalid
digits.
This commit un-deprecates DeprecatedString, and repurposes it as a byte
string.
As the null state has already been removed, there are no other
particularly hairy blockers in repurposing this type as a byte string
(what it _really_ is).
This commit is auto-generated:
$ xs=$(ack -l \bDeprecatedString\b\|deprecated_string AK Userland \
      Meta Ports Ladybird Tests Kernel)
$ perl -pie 's/\bDeprecatedString\b/ByteString/g;
      s/deprecated_string/byte_string/g' $xs
$ clang-format --style=file -i \
      $(git diff --name-only | grep \.cpp\|\.h)
$ gn format $(git ls-files '*.gn' '*.gni')
When iterating over an iterable, we get back a JS object with the fields
"value" and "done".
Before this change, we've had two dedicated instructions for retrieving
the two fields: IteratorResultValue and IteratorResultDone. These had no
fast path whatsoever and just did a generic [[Get]] access to fetch the
corresponding property values.
By replacing the instructions with GetById("value") and GetById("done"),
they instantly get caching and JIT fast paths for free, making iterating
over iterables much faster. :^)
26% speed-up on this microbenchmark:
function go(a) {
    for (const p of a) {
    }
}
const a = [];
a.length = 1_000_000;
go(a);
This patch makes IteratorRecord an Object. Although it's not exposed to
author code, this does allow us to store it in a VM register.
Now that we can store it in a VM register, we don't need to convert it
back and forth between IteratorRecord and Object when accessing it from
bytecode.
The big win here is avoiding 3 [[Get]] accesses on every iteration step
of for..of loops. There are also a bunch of smaller efficiencies gained.
20% speed-up on this microbenchmark:
function go(a) {
    for (const p of a) {
    }
}
const a = [];
a.length = 1_000_000;
go(a);
When all the variables in a for..in/of block's lexical scope have been
turned into locals, we don't need to create and immediately abandon an
empty environment for them.
This avoids environment allocation in cases like this:
function foo(a) {
    for (const x of a) {
    }
}
This will not meaningfully affect short array literals, but it does
give us a bit of extra perf when evaluating huge array expressions like
in Kraken/imaging-darkroom.js
Until now, the unwind context stack has not been maintained by jitted
code, which meant we were unable to support the `with` statement.
This is a first step towards supporting that by making jitted code
call out to C++ to update the unwind context stack when entering/leaving
unwind contexts.
We also introduce a new "Catch" bytecode instruction that moves the
current exception into the accumulator. It's always emitted at the start
of a "catch" block.
This patch makes it possible for JS::Object::internal_set() to populate
a CacheablePropertyMetadata, and uses this to implement a basic
monomorphic cache for the most common form of property write access.
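A hypothetical microbenchmark of the write pattern this targets (repeated writes through one shape):
function setCoords(p) {
    p.x = 1;
    p.y = 2;
}
const points = [];
for (let i = 0; i < 100_000; ++i)
    points.push({ x: 0, y: 0 });
for (const p of points)
    setCoords(p);
// Every object in `points` shares the same shape, so the property offsets
// cached by the first write can be reused for all subsequent writes.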
If the property for GetByValue in Generator::load_from_reference
is a computed value, it would be stored in an allocated register and
returned from the function. Not all callers want this information,
however, so we now only provide it when asked for.
This reduces the instruction count for the "neighbours" function in
Kraken/ai-astar.js from 214 to 192.
In the case of {get func() {}, set func() {}}, we were wrongly setting
the function name to 'func' and then later trying to replace an empty
name with 'get func'/'set func', which failed.
Instead, set the name to 'get func'/'set func' right away.
The code in put_by_property_key is kept for when it is called by
put_by_value.
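For reference, the accessor names the spec expects here:
const o = { get func() {}, set func(v) {} };
const desc = Object.getOwnPropertyDescriptor(o, "func");
desc.get.name; // "get func"
desc.set.name; // "set func"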
This is currently only used in the bytecode dump to annotate where
unwinds lead per block, but it will be hooked up to the virtual machine
in the next commit.
The following snippet would cause "i" to be incremented twice(!):
let a = []
let i = 0
a[++i] += 0
This patch solves the issue by remembering the base object and property
name for computed MemberExpression LHS in codegen. We then store the
result of the assignment to the same object and property (instead of
computing the LHS again).
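An illustrative check of the expected behavior after the fix:
let a = [];
let i = 0;
a[++i] += 0;
// `++i` is evaluated exactly once, so `i` is 1 (not 2) and the write
// goes to a[1].
i; // 1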
3 new passes on test262. :^)
We should initialize jump targets when constructing the jump instruction
instead of setting them later. This was already possible at all
construction sites but one. This first patch converts those sites to
pass the final targets to the constructor directly.
When there is no `catch` parameter to bind the error, we don't need
to allocate an environment, since there's nothing to add to it.
This avoids one environment allocation every time we catch like this:
try {
    ...
} catch {
    ...
}
Our implementation of environment.CreateImmutableBinding(name, true)
in this AO was not correctly initializing const variables in strict
mode. This would mean that constant declarations in for loop bodies
would not throw if they were modified.
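An illustrative case (not necessarily the exact failing test):
"use strict";
for (const x of [1, 2, 3]) {
    x = 42; // must throw a TypeError: assignment to a constant binding
}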
To fix this, add a new parameter to CreateVariable to set strict mode.
Also remove the vm.is_strict mode check here, as it doesn't look like
anything in the spec changes strict mode depending on whether the script
itself is running in strict mode or not.
This fixes two of our test-js tests, no change to test262.
This works by adding source start/end offset to every bytecode
instruction. In the future we can make this more efficient by keeping
a map of bytecode ranges to source ranges in the Executable instead,
but let's just get traces working first.
Co-Authored-By: Andrew Kaster <akaster@serenityos.org>
If we're inside of a `with` statement scope, we have to take care to
extract the correct `this` value for use in calls when calling a method
on the binding object via an Identifier instead of a MemberExpression.
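An illustrative example:
const counter = {
    count: 0,
    increment() { this.count++; }
};
with (counter) {
    increment(); // must be called with `this` === counter
}
counter.count; // 1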
This makes Vue.js work way better in the bytecode VM. :^)
Also, 1 new pass on test262.
CreateVariable is not needed for locals, because they are not stored in
an environment and the created binding would never be used. Also, if all
variables in a loop's initialization section are locals, then
CreateLexicalEnvironment and LeaveLexicalEnvironment can be omitted as
well.
The RegExpLiteral AST node already has the parsed regex::Parser::Result
so let's plumb that over to the bytecode executable instead of reparsing
the regex every time NewRegExp is executed.
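A hypothetical case where the reparsing used to hurt:
function isDigits(s) {
    // Before this change, the regex source was reparsed every time
    // NewRegExp executed, i.e. on every call to isDigits().
    return /^[0-9]+$/.test(s);
}
for (let i = 0; i < 100_000; ++i)
    isDigits(String(i));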
~12% speed-up on language/literals/regexp/S7.8.5_A2.1_T2.js in test262.
Using a special instruction to access global variables allows skipping
the environment chain traversal for them and going directly to the
module/global environment. Currently, this instruction only caches the
offset for bindings that belong to the global object environment.
However, there is also an opportunity to cache the offset in the global
declarative record.
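A hypothetical example of the kind of access that benefits:
var hits = 0;          // lives in the global object environment
function count() {
    hits++;            // can skip the environment chain walk and go
                       // straight to the global environment
}
for (let i = 0; i < 1_000_000; ++i)
    count();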
This change results in a 57% increase in speed for
imaging-gaussian-blur.js in Kraken.
When building an object from an object expression, we don't want to
go through the full property setting machinery. This patch adds a new
PropertyKind::DirectKeyValue for PutById which guarantees that the
property becomes an own property.
This fixes an issue where setting the "__proto__" property in object
expressions wasn't working right.
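An illustrative literal exercising both behaviors:
const proto = {
    set foo(v) { throw new Error("setter must not run"); }
};
const o = { __proto__: proto, foo: 1 };
// "__proto__" sets the [[Prototype]] of the new object...
Object.getPrototypeOf(o) === proto; // true
// ...while "foo" becomes an own data property, bypassing the setter
// on the prototype chain.
Object.getOwnPropertyDescriptor(o, "foo").value; // 1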
12 new passes on test262. :^)
The instructions GetById and GetByIdWithThis now remember the last-seen
Shape, and if we see the same object again, we reuse the property offset
from last time without doing a new lookup.
This allows us to use Object::get_direct(), bypassing the entire lookup
machinery and saving lots of time.
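A hypothetical microbenchmark of the pattern this speeds up (monomorphic reads through a single shape):
function lengthSquared(v) {
    // Both GetById("x") and GetById("y") see the same Shape every time,
    // so the cached property offsets are reused on each iteration.
    return v.x * v.x + v.y * v.y;
}
let total = 0;
for (let i = 0; i < 1_000_000; ++i)
    total += lengthSquared({ x: i, y: i });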
~23% speed-up on Kraken/ai-astar.js :^)
- Update ECMAScriptFunctionObject::function_declaration_instantiation
to initialize local variables
- Introduce GetLocal, SetLocal, TypeofLocal that will be used to
operate on local variables.
- Update bytecode generator to emit instructions for local variables
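A rough illustration, assuming (this is an assumption, not stated above) that only bindings never captured by nested functions are turned into locals:
function f() {
    let x = 1;      // `x` can become a local: nothing captures it,
    return x + 1;   // so GetLocal/SetLocal can operate on it directly
}
function g() {
    let y = 1;
    return () => y; // `y` is captured by the arrow function, so it must
                    // stay in an environment binding
}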
By using the Identifier class to represent the name of a class
expression, it becomes possible to consistently store information in the
identifier object indicating whether the name refers to a local variable
or not.