If `Interpreter::run_and_return_frame` is called with a specific entry
point, we now map that entry point to a native instruction address, which
the JIT code jumps to after the function prologue.
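As a rough illustration only (the types and names below are invented, not the actual LibJS interfaces), the mapping can be thought of as a per-block table of native code offsets: given a bytecode entry block, look up the offset of its first native instruction and hand that address to the prologue to jump to.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

struct NativeCode {
    uint8_t const* base { nullptr };         // start of the emitted machine code
    std::vector<size_t> block_entry_offsets; // native offset of each bytecode basic block
};

// Map a bytecode entry block to the native address the JIT prologue should jump to.
uint8_t const* native_address_for_entry_point(NativeCode const& code, size_t entry_block_index)
{
    return code.base + code.block_entry_offsets.at(entry_block_index);
}

int main()
{
    static uint8_t machine_code[128] {};
    NativeCode code { machine_code, { 0, 16, 48 } };
    // Entering at block 2 resolves to base + 48; generated code would jump here after the prologue.
    std::printf("entry offset: %td\n", native_address_for_entry_point(code, 2) - code.base);
}
```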
The previous implementation was calling `backtrace()` for every
function call, which is quite slow.
Instead, this implementation provides `VM::stack_trace()`, which unwinds
the native stack, maps it through `NativeExecutable::get_source_range()`,
and combines the result with source ranges from interpreted call frames.
This works by walking a backtrace until the currently executing
native executable is found, and then mapping the native address
to its bytecode instruction.
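A minimal sketch of the idea, using invented stand-in types (`FakeNativeExecutable`, `SourceRange`) rather than the real LibJS classes: frames whose return address falls inside the currently executing native executable are translated via its source-range mapping, and interpreted frames are appended with the ranges they already carry.

```cpp
#include <cstdint>
#include <string>
#include <vector>

struct SourceRange {
    std::string filename;
    unsigned line { 0 };
};

struct FakeNativeExecutable {
    uintptr_t code_start { 0 };
    uintptr_t code_end { 0 };
    bool contains(uintptr_t address) const { return address >= code_start && address < code_end; }
    // Stand-in for the real get_source_range(): map a native address back to the
    // source range of the bytecode instruction that produced it.
    SourceRange get_source_range(uintptr_t) const { return { "script.js", 42 }; }
};

std::vector<SourceRange> stack_trace(std::vector<uintptr_t> const& native_return_addresses,
                                     FakeNativeExecutable const& currently_executing,
                                     std::vector<SourceRange> const& interpreted_frames)
{
    std::vector<SourceRange> trace;
    for (auto address : native_return_addresses) {
        if (currently_executing.contains(address))
            trace.push_back(currently_executing.get_source_range(address));
    }
    for (auto const& range : interpreted_frames)
        trace.push_back(range); // interpreted call frames already know their source ranges
    return trace;
}

int main()
{
    FakeNativeExecutable executable { 0x1000, 0x2000 };
    auto trace = stack_trace({ 0x1500, 0x9999 }, executable, { { "caller.js", 7 } });
    return static_cast<int>(trace.size()); // 2 frames: one mapped native frame + one interpreted frame
}
```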
These are then restored upon `ContinuePendingUnwind`.
This prevents us from forgetting where we need to jump when additional
try/catch blocks run inside finally blocks.
Co-Authored-By: Jesús "gsus" Lapastora <cyber.gsuscode@gmail.com>
There are a lot of native C++ functions that will be used by both the
bytecode interpreter and jitted code. Let's put them in their own file
instead of having them in Interpreter.cpp.
When we hit the cache in GetGlobal, we don't need the identifier string
at all, so let's defer fetching it until we actually miss the cache.
7% speed-up on Kraken/imaging-gaussian-blur.js :^)
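Something like the following sketch captures the shape of it; `GlobalCache`, `FakeGlobalObject`, and `fetch_identifier` are made up for illustration and are not the real bytecode interpreter types.

```cpp
#include <cstddef>
#include <cstdio>
#include <functional>
#include <optional>
#include <string>

struct GlobalCache {
    std::optional<size_t> cached_slot; // filled in after the first successful lookup
};

struct FakeGlobalObject {
    double slots[4] { 1.0, 2.0, 3.0, 4.0 };
    size_t lookup_slot(std::string const& name) { return name.size() % 4; } // stand-in for a real lookup
};

double get_global(FakeGlobalObject& global, GlobalCache& cache,
                  std::function<std::string()> const& fetch_identifier)
{
    if (cache.cached_slot.has_value())    // cache hit: the identifier string is never needed
        return global.slots[*cache.cached_slot];
    auto identifier = fetch_identifier(); // cache miss: only now fetch the string
    auto slot = global.lookup_slot(identifier);
    cache.cached_slot = slot;
    return global.slots[slot];
}

int main()
{
    FakeGlobalObject global;
    GlobalCache cache;
    auto fetch = [] { std::puts("fetching identifier"); return std::string("foo"); };
    get_global(global, cache, fetch); // miss: fetches the identifier once
    get_global(global, cache, fetch); // hit: no identifier fetch at all
}
```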
If we have a cached environment coordinate that hasn't been invalidated
by eval(), we can get the value directly without instantiating a
Reference.
15% speed-up on Octane/zlib.js :^)
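A hedged sketch of the fast path with invented types (`FakeEnvironment`; the `EnvironmentCoordinate` here is simplified): hop out to the cached environment, bail if eval() may have changed things, and otherwise read the binding slot directly.

```cpp
#include <cstddef>
#include <optional>
#include <vector>

struct EnvironmentCoordinate {
    size_t hops { 0 };  // how many environments to skip outward
    size_t index { 0 }; // binding slot in the target environment
};

struct FakeEnvironment {
    std::vector<double> bindings;
    FakeEnvironment* outer { nullptr };
    bool poisoned_by_eval { false }; // eval() may have introduced bindings we can't see
};

std::optional<double> get_variable_fast(FakeEnvironment* environment,
                                        std::optional<EnvironmentCoordinate> cached)
{
    if (!cached.has_value())
        return {};
    auto* env = environment;
    for (size_t i = 0; i < cached->hops; ++i)
        env = env->outer;
    if (env->poisoned_by_eval)
        return {};                       // fall back to the slow, Reference-based lookup
    return env->bindings[cached->index]; // direct read, no Reference object needed
}

int main()
{
    FakeEnvironment outer_env { { 1.25 }, nullptr, false };
    FakeEnvironment inner_env { {}, &outer_env, false };
    auto value = get_variable_fast(&inner_env, EnvironmentCoordinate { 1, 0 });
    return value.has_value() ? 0 : 1; // fast path: read 1.25 without instantiating a Reference
}
```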
These functions all have a very common case that can be handled by a
very simple inline check, often avoiding the need to call an out-of-line
function. This patch moves the common case to inline functions in a new
ValueInlines.h header (necessary due to header dependency issues).
8% speed-up on the entire Kraken benchmark :^)
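The pattern looks roughly like this (names such as `to_number_slow_case` and the toy `Value` are placeholders, not the actual LibJS signatures): the header-inline function handles the already-a-number case and only falls back to the out-of-line conversion when it must.

```cpp
#include <cmath>
#include <variant>

struct Value {
    std::variant<double, bool, std::monostate> storage; // number, boolean, or "something else"
    bool is_number() const { return std::holds_alternative<double>(storage); }
    double as_double() const { return std::get<double>(storage); }
};

double to_number_slow_case(Value const&); // out-of-line conversion for the uncommon cases

// Inline fast path, as it might live in a ValueInlines.h-style header.
inline double to_number(Value const& value)
{
    if (value.is_number())
        return value.as_double(); // common case handled with a simple inline check
    return to_number_slow_case(value);
}

double to_number_slow_case(Value const&)
{
    return std::nan(""); // stand-in for the full, out-of-line conversion logic
}

int main()
{
    return to_number(Value { 3.0 }) == 3.0 ? 0 : 1; // no out-of-line call on the fast path
}
```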
This patch adds a fast path to the PutByValue bytecode op that bypasses
a ton of things *if* a set of assumptions hold:
- The property key must be a non-negative Int32
- The base object must not interfere with indexed property access
- The base object must have simple indexed property storage
- The property key must already be present as an own property
- The existing value must not have any accessors defined
If these assumptions hold (which they should in many common cases), we can
skip all kinds of checks and poke directly at the property storage, saving time.
16% speed-up on the entire Kraken benchmark :^)
(including: 88% speed-up on Kraken/imaging-desaturate.js)
(including: 55% speed-up on Kraken/audio-fft.js)
(including: 54% speed-up on Kraken/audio-beat-detection.js)
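For illustration only, here is a simplified sketch of what such a guarded fast path can look like; the types (`FakeObject`, `SimpleIndexedStorage`) are invented and the real checks in LibJS are more involved. Each guard mirrors one assumption from the list above; if any guard fails, the caller falls back to the generic path.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct SimpleIndexedStorage {
    std::vector<double> elements;   // plain, densely indexed property storage
    std::vector<bool> has_accessor; // per-slot stand-in for "this slot has a getter/setter"
};

struct FakeObject {
    bool interferes_with_indexed_access { false }; // e.g. exotic objects that intercept indexing
    SimpleIndexedStorage* simple_storage { nullptr };
};

// Returns true if the write was handled on the fast path; false means "use the generic path".
bool put_by_value_fast_path(FakeObject& base, int32_t key, double value)
{
    if (key < 0)
        return false; // property key must be a non-negative Int32
    if (base.interferes_with_indexed_access)
        return false; // base must not interfere with indexed property access
    auto* storage = base.simple_storage;
    if (!storage)
        return false; // base must have simple indexed property storage
    auto index = static_cast<size_t>(key);
    if (index >= storage->elements.size())
        return false; // key must already be present as an own property
    if (storage->has_accessor[index])
        return false; // existing value must not have any accessors
    storage->elements[index] = value; // poke directly at the property storage
    return true;
}

int main()
{
    SimpleIndexedStorage storage { { 1.0, 2.0, 3.0 }, { false, false, false } };
    FakeObject object { false, &storage };
    return put_by_value_fast_path(object, 1, 42.0) ? 0 : 1;
}
```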
This patch adds a fast path to the GetByValue bytecode op that bypasses
a ton of things *if* a set of assumptions hold:
- The property key must be a non-negative Int32
- The base object must not interfere with indexed property access
- The property key must already be present as an own property
- The existing value must not have any accessors defined
If these assumptions hold (which they should in the common case), we can poke
directly at the indexed property storage and save a boatload of time.
10% speed-up on the entire Kraken benchmark :^)
(including: 31% speed-up on Kraken/audio-dft.js)
(including: 23% speed-up on Kraken/stanford-crypto-aes.js)
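The read-side sketch mirrors the PutByValue one above, again with invented types rather than the real object model: guard the same assumptions, then load straight from the indexed storage.

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

struct FakeArrayLike {
    bool interferes_with_indexed_access { false };
    std::vector<double> elements;   // simple indexed property storage
    std::vector<bool> has_accessor; // per-slot accessor flag, as in the PutByValue sketch
};

// Empty optional means "bail out to the generic GetByValue path".
std::optional<double> get_by_value_fast_path(FakeArrayLike const& base, int32_t key)
{
    if (key < 0 || base.interferes_with_indexed_access)
        return {};
    auto index = static_cast<size_t>(key);
    if (index >= base.elements.size() || base.has_accessor[index])
        return {};
    return base.elements[index]; // direct read from the indexed property storage
}

int main()
{
    FakeArrayLike array { false, { 10.0, 20.0 }, { false, false } };
    return get_by_value_fast_path(array, 1).value_or(0.0) == 20.0 ? 0 : 1;
}
```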
Instead of calling out to helper functions for flow control (and then
checking control flags on every iteration), we now handle those ops
directly in the interpreter loop.
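A toy interpreter loop makes the difference concrete (the opcodes and structures here are invented): a flow-control op mutates the program counter right in the dispatch switch, so there is no helper call and no per-iteration flag check.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

enum class Op : uint8_t { Increment, Jump, Return };

struct Instruction {
    Op op;
    size_t target { 0 }; // jump target, only meaningful for Op::Jump
};

int64_t run(std::vector<Instruction> const& program)
{
    int64_t accumulator = 0;
    size_t pc = 0;
    while (pc < program.size()) {
        auto const& instruction = program[pc];
        switch (instruction.op) {
        case Op::Increment:
            ++accumulator;
            ++pc;
            break;
        case Op::Jump:
            pc = instruction.target; // flow control handled inline: no helper call, no flag check
            break;
        case Op::Return:
            return accumulator;
        }
    }
    return accumulator;
}

int main()
{
    // Increment, jump to index 2, increment again, then return; exit code is 2.
    return static_cast<int>(run({ { Op::Increment }, { Op::Jump, 2 }, { Op::Increment }, { Op::Return } }));
}
```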
If we don't have a local unwind context to handle the exception, we can
just return right away. This allows us to remove one check from the
inner loop.
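As a tiny sketch of the shape of this (the structures are hypothetical, not the real interpreter state): when the unwind-context stack is empty, the throw handler signals an immediate return instead of setting a flag that the inner loop would otherwise have to test on every iteration.

```cpp
#include <cstddef>
#include <vector>

struct UnwindContext {
    size_t handler_pc { 0 }; // where the local catch/finally handler lives
};

struct ThrowResult {
    bool returned_early { false }; // true: leave the interpreter loop immediately
    size_t resume_pc { 0 };        // otherwise: continue at the local handler
};

ThrowResult handle_throw(std::vector<UnwindContext> const& unwind_stack)
{
    if (unwind_stack.empty())
        return { true, 0 }; // no local unwind context: return right away, nothing for the loop to test
    return { false, unwind_stack.back().handler_pc };
}

int main()
{
    return handle_throw({}).returned_early ? 0 : 1;
}
```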