This patch adds a new "Peephole" pass for performing small, local
optimizations to bytecode.
We also introduce the first such optimization: fusing a comparison
instruction (FooCompare) followed by a JumpIf into one of a new set of
fused JumpFooCompare instructions.
This gives a ~50% speed-up on the following microbenchmark:
for (let i = 0; i < 10_000_000; ++i) {
}
But more traditional benchmarks see a pretty sizable speed-up as well,
for example 15% on Kraken/ai-astar.js and 16% on Kraken/audio-dft.js :^)
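A minimal sketch of what such a fusion step can look like, with
hypothetical opcode names and a flat instruction list standing in for the
real basic-block representation:

#include <cstddef>
#include <vector>

// Hypothetical, simplified encoding; the real pass walks basic blocks.
enum class Op { LessThanCompare, JumpIf, JumpLessThan, Other };

struct Instruction {
    Op op { Op::Other };
    int lhs { 0 }, rhs { 0 }; // comparison inputs
    int true_target { 0 };    // jump targets
    int false_target { 0 };
};

// Fuse FooCompare followed by JumpIf into a single JumpFooCompare,
// assuming the comparison result is only consumed by that jump.
static void fuse_compare_and_jump(std::vector<Instruction>& code)
{
    std::vector<Instruction> output;
    for (std::size_t i = 0; i < code.size(); ++i) {
        if (i + 1 < code.size()
            && code[i].op == Op::LessThanCompare
            && code[i + 1].op == Op::JumpIf) {
            Instruction fused = code[i]; // keeps lhs/rhs
            fused.op = Op::JumpLessThan;
            fused.true_target = code[i + 1].true_target;
            fused.false_target = code[i + 1].false_target;
            output.push_back(fused);
            ++i; // the JumpIf has been consumed
        } else {
            output.push_back(code[i]);
        }
    }
    code = std::move(output);
}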
Instead of emitting a NewBigInt instruction to construct a primitive
bigint from a parsed literal, we now instantiate the BigInt on the heap
during codegen.
Instead of emitting a NewString instruction to construct a primitive
string from a parsed literal, we now instantiate the PrimitiveString on
the heap during codegen.
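In both cases the shape of the change is the same: create the heap value
once at codegen time and park it in the executable's constant pool, so the
instruction stream only carries an index. A sketch with invented names, GC
details elided:

#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Stand-in for a heap-allocated JS primitive string.
struct PrimitiveString {
    std::string utf8;
};

struct Executable {
    // The executable keeps its literal values alive; at runtime an
    // instruction just loads constants[index] instead of constructing a
    // fresh PrimitiveString / BigInt via a NewString / NewBigInt op.
    std::vector<std::shared_ptr<PrimitiveString>> constants;

    std::size_t intern_string_literal(std::string literal)
    {
        constants.push_back(std::make_shared<PrimitiveString>(PrimitiveString { std::move(literal) }));
        return constants.size() - 1; // index baked into the bytecode
    }
};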
Instead of looking these up in the VM execution context stack whenever
we need them, we now just cache them in the interpreter when entering
a new call frame.
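The excerpt doesn't name the cached items, but the pattern is roughly this
(all names hypothetical):

// Hypothetical sketch: refresh pointers once per call frame instead of
// searching the VM's execution context stack on every access.
struct ExecutionContext;

struct Interpreter {
    ExecutionContext* m_running_execution_context { nullptr };

    void enter_call_frame(ExecutionContext& context)
    {
        m_running_execution_context = &context; // cached once, on entry
    }

    ExecutionContext& running_execution_context()
    {
        return *m_running_execution_context; // hot path: no stack walk
    }
};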
Comparing two Values requires calling the generic same_value() helper;
we can avoid this by simply using a stronger type for built-in native
function handlers.
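The idea, sketched with invented names: store the handler as a plain
function pointer rather than as a generic Value, so identity checks become
pointer comparisons:

struct VM;
struct Value { /* payload elided */ };

// A "stronger type" for builtin handlers: a plain function pointer.
using NativeFunctionHandler = Value (*)(VM&);

static bool is_handler(NativeFunctionHandler stored, NativeFunctionHandler candidate)
{
    // Pointer equality replaces the generic same_value(Value, Value) call.
    return stored == candidate;
}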
Instead of having Call refer to a range of VM registers, it now has
a trailing list of argument operands as part of the instruction.
This means we no longer have to shuffle every argument value into
a register before making a call, making bytecode smaller & faster. :^)
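A simplified sketch of such a variable-length layout (not the actual LibJS
encoding): the fixed part of the instruction is followed inline by
`argument_count` operands:

#include <cstddef>
#include <cstdint>

struct Operand {
    std::uint32_t index { 0 }; // register / local / constant, encoding elided
};

struct Call {
    Operand callee;
    Operand this_value;
    std::uint32_t argument_count { 0 };
    // `argument_count` Operands are stored immediately after this struct.

    Operand const* arguments() const
    {
        return reinterpret_cast<Operand const*>(this + 1);
    }

    std::size_t length_in_bytes() const
    {
        return sizeof(Call) + argument_count * sizeof(Operand);
    }
};

The interpreter can then read argument values straight from their source
operands, with no per-argument register shuffle before the call.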
By handling common cases like Int32 arithmetic directly in the
instruction handler, we can avoid the cost of calling the generic helper
functions in Value.cpp.
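For example, an Add handler can special-case Int32 operands and only fall
back to the generic helper for everything else (a sketch; the real
JS::Value representation differs):

#include <cstdint>
#include <limits>

// Minimal stand-in for JS::Value, for illustration only.
struct Value {
    bool is_int32 { false };
    std::int32_t int32 { 0 };
    double number { 0 };
};

Value generic_add(Value lhs, Value rhs); // the generic helper in Value.cpp

static Value add(Value lhs, Value rhs)
{
    // Common case: both operands are Int32 and the sum doesn't overflow.
    if (lhs.is_int32 && rhs.is_int32) {
        auto sum = std::int64_t(lhs.int32) + std::int64_t(rhs.int32);
        if (sum >= std::numeric_limits<std::int32_t>::min()
            && sum <= std::numeric_limits<std::int32_t>::max())
            return Value { true, static_cast<std::int32_t>(sum), 0 };
    }
    // Doubles, strings, objects etc. take the generic slow path.
    return generic_add(lhs, rhs);
}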
Instead of splitting the postfix variants into ToNumeric + Inc/Dec,
we now have dedicated PostfixIncrement and PostfixDecrement instructions
that handle both outputs in one go.
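Conceptually the fused handler does this (a sketch using plain doubles to
stand in for the already-ToNumeric'd value):

// Sketch of a fused postfix increment over a register file.
static void postfix_increment(double* registers, int dst, int src)
{
    double const old_value = registers[src]; // ToNumeric already applied
    registers[dst] = old_value;              // expression result: old value
    registers[src] = old_value + 1;          // binding gets the new value
}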
This patch moves us away from the accumulator-based bytecode format to
one with explicit source and destination registers.
The new format has multiple benefits:
- ~25% faster on the Kraken and Octane benchmarks :^)
- Fewer instructions to accomplish the same thing
- Much easier for humans to read(!)
Because this change requires a fundamental shift in how bytecode is
generated, it is quite comprehensive.
Main implementation mechanism: generate_bytecode() virtual function now
takes an optional "preferred dst" operand, which allows callers to
communicate when they have an operand that would be optimal for the
result to go into. It also returns an optional "actual dst" operand,
which is where the completion value (if any) of the AST node is stored
after the node has "executed".
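In rough terms, the interface looks like this (simplified; the actual
signature also carries error handling):

#include <cstdint>
#include <optional>

struct Generator;

struct Operand {
    std::uint32_t index { 0 };
};

struct ASTNode {
    // Callers may suggest a destination; the node reports where its
    // completion value (if any) actually ended up.
    virtual std::optional<Operand> generate_bytecode(
        Generator&, std::optional<Operand> preferred_dst = {}) const = 0;

    virtual ~ASTNode() = default;
};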
One notable change: because instructions can now take locals as operands,
we were able to get rid of the GetLocal instruction.
A side-effect of that is we have to think about the temporal deadzone
(TDZ) a bit differently for locals (GetLocal would previously check
for empty values and interpret that as a TDZ access and throw).
We now insert special ThrowIfTDZ instructions in places where a local
access may be in the TDZ, to maintain the correct behavior.
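The handler's job is small (sketch with simplified types): if a local
still holds the special empty marker, the binding is in its TDZ, so the
access must throw a ReferenceError:

#include <stdexcept>

struct Value {
    bool is_empty { true }; // "empty" marks an uninitialized binding
};

// Sketch of a ThrowIfTDZ handler; the real thing throws a JS
// ReferenceError through the VM rather than a C++ exception.
static void throw_if_tdz(Value const& local)
{
    if (local.is_empty)
        throw std::runtime_error("Cannot access binding before initialization");
}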
There are a number of progressions and regressions from this change:
A number of async generator tests have been accidentally fixed while
converting the implementation to the new bytecode format. It didn't
seem useful to preserve bugs in the original code when converting it.
Some "does eval() return the correct completion value" tests have
regressed, in particular ones related to propagating the appropriate
completion after control flow statements like continue and break.
These are all fairly obscure issues, and I believe we can continue
working on them separately.
The net test262 result is a progression though. :^)
The JIT compiler was an interesting experiment, but ultimately the
security & complexity cost of doing arbitrary code generation at runtime
is far too high.
In subsequent commits, the bytecode format will change drastically, and
rather than rewriting the JIT to fit the new bytecode, this patch simply
removes it.
Other engines, JavaScriptCore in particular, have already proven that
it's possible to handle the vast majority of contemporary web content
with an interpreter. They are currently ~5x faster than us on benchmarks
when running without a JIT. We need to catch up to them before
considering performance techniques with a heavy security cost.
perform_call() wants a ReadonlySpan<Value>, so just grab a slice of the
current register window instead of making a MarkedVector.
10% speed-up on this function call microbenchmark:
function callee(a, b, c) { }
function caller(callee) {
    for (let i = 0; i < 10_000_000; ++i)
        callee(1, 2, 3)
}
caller(callee)
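The change is essentially this shape (a sketch; ReadonlySpan approximates
the AK type):

#include <cstddef>

struct Value { /* payload elided */ };

template<typename T>
struct ReadonlySpan {
    T const* data { nullptr };
    std::size_t size { 0 };
};

// Arguments already sit contiguously in the current register window, so
// hand perform_call() a view into it instead of copying every Value into
// a freshly allocated MarkedVector.
static ReadonlySpan<Value> argument_span(Value const* register_window,
    std::size_t first_argument, std::size_t argument_count)
{
    return { register_window + first_argument, argument_count };
}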
This commit un-deprecates DeprecatedString, and repurposes it as a byte
string.
As the null state has already been removed, there are no other
particularly hairy blockers in repurposing this type as a byte string
(what it _really_ is).
This commit is auto-generated:
$ xs=$(ack -l '\bDeprecatedString\b|deprecated_string' \
    AK Userland Meta Ports Ladybird Tests Kernel)
$ perl -i -pe 's/\bDeprecatedString\b/ByteString/g;
    s/deprecated_string/byte_string/g' $xs
$ clang-format --style=file -i \
    $(git diff --name-only | grep '\.cpp\|\.h')
$ gn format $(git ls-files '*.gn' '*.gni')
When iterating over an iterable, we get back a JS object with the fields
"value" and "done".
Before this change, we've had two dedicated instructions for retrieving
the two fields: IteratorResultValue and IteratorResultDone. These had no
fast path whatsoever and just did a generic [[Get]] access to fetch the
corresponding property values.
By replacing the instructions with GetById("value") and GetById("done"),
they instantly get caching and JIT fast paths for free, making iterating
over iterables much faster. :^)
26% speed-up on this microbenchmark:
function go(a) {
    for (const p of a) {
    }
}
const a = [];
a.length = 1_000_000;
go(a);
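Sketched in codegen terms (helper names invented): the iterator-result
reads become ordinary cached property loads:

// Hypothetical codegen helper; GetById carries an inline cache slot.
struct Generator {
    void emit_get_by_id(char const* property_name);
};

static void emit_iterator_result_reads(Generator& generator)
{
    // Previously: bespoke IteratorResultValue / IteratorResultDone ops
    // that always did a generic [[Get]]. Now both reads share GetById's
    // caching machinery.
    generator.emit_get_by_id("value");
    generator.emit_get_by_id("done");
}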
This patch makes IteratorRecord an Object. Although it's not exposed to
author code, this does allow us to store it in a VM register.
Now that we can store it in a VM register, we don't need to convert it
back and forth between IteratorRecord and Object when accessing it from
bytecode.
The big win here is avoiding 3 [[Get]] accesses on every iteration step
of for..of loops. There are also a bunch of smaller efficiencies gained.
20% speed-up on this microbenchmark:
function go(a) {
    for (const p of a) {
    }
}
const a = [];
a.length = 1_000_000;
go(a);
This allows the bytecode interpreter to call a builtin's C++
implementation directly, without making a JavaScript call, just as the
JIT does.
Kraken test speedups: imaging-gaussian-blur.js (1.5x) and
audio-oscillator.js (1.2x)
The number of registers in a call frame never changes, so we can
allocate it at the end of the CallFrame object and save ourselves the
cost of allocating separate Vector storage for every call frame.
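A sketch of the trailing-storage layout (simplified, no GC or destruction
concerns):

#include <cstddef>
#include <cstdlib>
#include <new>

struct Value { /* payload elided */ };

// Registers live directly after the CallFrame header in one allocation,
// since their count is fixed for the lifetime of the frame.
struct CallFrame {
    std::size_t register_count { 0 };

    Value* registers() { return reinterpret_cast<Value*>(this + 1); }

    static CallFrame* create(std::size_t register_count)
    {
        void* memory = std::malloc(sizeof(CallFrame) + register_count * sizeof(Value));
        auto* frame = new (memory) CallFrame;
        frame->register_count = register_count;
        for (std::size_t i = 0; i < register_count; ++i)
            new (&frame->registers()[i]) Value; // placement-construct slots
        return frame;
    }
};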
Instead of allocating these in a mixture of ways, we now always put
them on the malloc heap, and keep an intrusive linked list of them
that we can iterate for GC marking purposes.
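The intrusive-list part, sketched (unlinking on destruction elided): each
frame links itself into a global list the GC can walk:

// Sketch: the link lives inside the object itself, so tracking a frame
// costs no extra allocation, and the GC can visit every live frame.
struct ExecutionContext {
    static inline ExecutionContext* all_contexts { nullptr };

    ExecutionContext* next { nullptr }; // intrusive link

    ExecutionContext()
    {
        next = all_contexts;
        all_contexts = this;
    }
};

// GC marking can then walk the list:
//   for (auto* context = ExecutionContext::all_contexts; context; context = context->next)
//       mark_call_frame(*context);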
This will not meaningfully affect short array literals, but it does
give us a bit of extra perf when evaluating huge array expressions like
in Kraken/imaging-darkroom.js
Until now, the unwind context stack has not been maintained by jitted
code, which meant we were unable to support the `with` statement.
This is a first step towards supporting that by making jitted code
call out to C++ to update the unwind context stack when entering/leaving
unwind contexts.
We also introduce a new "Catch" bytecode instruction that moves the
current exception into the accumulator. It's always emitted at the start
of a "catch" block.
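The Catch handler itself is tiny (sketch with simplified interpreter
state):

struct Value { /* payload elided */ };

struct Interpreter {
    Value accumulator;
    Value current_exception;
    bool has_pending_exception { false };

    // Catch: expose the in-flight exception to the catch block and mark
    // it handled; emitted at the start of every "catch" block.
    void execute_catch()
    {
        accumulator = current_exception;
        has_pending_exception = false;
    }
};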
This patch makes it possible for JS::Object::internal_set() to populate
a CacheablePropertyMetadata, and uses this to implement a basic
monomorphic cache for the most common form of property write access.
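A sketch of the write-side cache (names approximate; Shape stands in for
the object's hidden class):

#include <cstddef>

struct Shape;
struct Value { /* payload elided */ };

// Filled in by internal_set() when it performs a plain own-property
// write that is safe to cache.
struct CacheablePropertyMetadata {
    Shape const* shape { nullptr };
    std::size_t property_offset { 0 };
};

struct Object {
    Shape const* shape { nullptr };
    Value* storage { nullptr };

    // Monomorphic fast path: if the object still has the shape the cache
    // was recorded against, store straight into the known slot.
    bool try_cached_put(CacheablePropertyMetadata const& cache, Value const& value)
    {
        if (shape != cache.shape)
            return false; // miss: take the generic internal_set() path
        storage[cache.property_offset] = value;
        return true;
    }
};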
If Interpreter::run_and_return_frame is called with a specific entry
point, we now map that to a native instruction address, which the JIT
code jumps to after the function prologue.
The previous implementation was calling `backtrace()` for every
function call, which is quite slow.
Instead, this implementation provides VM::stack_trace() which unwinds
the native stack, maps it through NativeExecutable::get_source_range
and combines it with source ranges from interpreted call frames.
This works by walking a backtrace until the currently executing
native executable is found, and then mapping the native address
to its bytecode instruction.
Scheduled jumps are now saved when entering a finally block, and restored
upon `ContinuePendingUnwind`.
This stops us from forgetting where we needed to jump when we do extra
try-catches in finally blocks.
Co-Authored-By: Jesús "gsus" Lapastora <cyber.gsuscode@gmail.com>