The existing code looks innocently correct, implementing the following
step:
3. If IsCallable(func) is false, set func to the intrinsic function
%Object.prototype.toString%.
as
return ObjectPrototype::to_string(vm, global_object);
However, this misses the fact that the next step calls the function with
the previously ToObject()'d this value (`array`):
4. Return ? Call(func, array).
This doesn't happen in the current implementation, which will use the
unaltered this value from the Array.prototype.toString() call, and make
another, unequal object in %Object.prototype.toString%. Since both that
and Array.prototype.toString() do a Get() call on said object, this
behavior is observable (see newly added test).
Fix this by actually doing what the spec says and calling the fallback
function the regular way.
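A sketch of how this is observable (not the exact test added here):
getters on a primitive's prototype can record which object each Get()
sees, and per the spec both must see the same ToObject()'d wrapper.

    let objectSeenByJoinGetter;
    let objectSeenByToStringTagGetter;

    // "join" returns a non-callable value, forcing the fallback to
    // %Object.prototype.toString%.
    Object.defineProperty(Number.prototype, "join", {
        configurable: true,
        get() {
            objectSeenByJoinGetter = this;
            return undefined;
        },
    });

    // %Object.prototype.toString% performs Get(O, @@toStringTag) on its own
    // this value.
    Object.defineProperty(Number.prototype, Symbol.toStringTag, {
        configurable: true,
        get() {
            objectSeenByToStringTagGetter = this;
            return undefined;
        },
    });

    Array.prototype.toString.call(123);

    // Spec behavior: true. With the old code, the fallback re-wrapped the
    // primitive this value and the getters saw two unequal Number objects.
    console.log(objectSeenByJoinGetter === objectSeenByToStringTagGetter);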
This is taken from the abandoned error stacks proposal, which
already serves as the source of truth for the setter. It only requires
the this value to be an object - if it's not an Error object, the getter
returns undefined.
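A rough sketch of the resulting behavior, assuming the accessor pair is
installed on Error.prototype as in the proposal:

    const { get } = Object.getOwnPropertyDescriptor(Error.prototype, "stack");

    get.call(new Error("oops"));  // some stack trace string
    get.call({});                 // undefined - an object, but not an Error object
    get.call("not an object");    // throws, the this value must be an object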
I have not compared this behavior to the non-standard implementations of
the stack property in other engines, but presumably the spec authors
already did that work.
This change gets the Sentry browser SDK working to a point where it can
actually send uncaught exceptions via the API :^)
On the code path where we are setting a TypedArray from another
TypedArray of the same type, we forgo the spec text and simply do a
memmove between the two ArrayBuffers. However, we forgot to apply the
source's byte offset on this code path.
This meant if we tried setting a TypedArray from a TypedArray we got
from .subarray(), we would still copy from the start of the subarray's
ArrayBuffer.
This is because .subarray() returns a new TypedArray with the same
ArrayBuffer but the new TypedArray has a smaller length and a byte
offset that the rest of the codebase is responsible for applying.
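A sketch of the observable difference:

    const source = new Uint8Array([1, 2, 3, 4, 5, 6, 7, 8]);
    const sub = source.subarray(4); // same ArrayBuffer, byteOffset 4, length 4

    const destination = new Uint8Array(4);
    destination.set(sub);

    console.log(Array.from(destination));
    // Correct: [5, 6, 7, 8] - the copy honors sub's byte offset
    // Bug:     [1, 2, 3, 4] - the copy started at the ArrayBuffer's beginning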
This affected pako when it was decompressing a zlib stream that has
multiple zlib chunks in it. To read from the second chunk, it would
set the zlib window TypedArray from the .subarray() of the chunk offset
in the stream's TypedArray. This effectively made the decompressed data
from the second chunk a mish-mash of old data that looked completely
scrambled. It would also cause all future decompression using the same
pako Inflate instance to also appear scrambled.
As a pako comment aptly puts it:
> Call updatewindow() to create and/or update the window state.
> Note: a memory error from inflate() is non-recoverable.
This allows us to properly decompress the large compressed payloads
that Discord Gateway sends down to the Discord client. For example,
for an account that's only in the Serenity Discord, one of the payloads
is a 20 KB zlib compressed blob that has two chunks in it.
Surprisingly, this is not covered by test262! I imagine this would have
been caught earlier if there was such a test :^)
Other engines don't return NaN as long as there is at least one digit
after the dot for milliseconds; we were much stricter and required
exactly three digits. But there is real-world usage of different digit
counts, such as Discord timestamps having three extra trailing zeros.
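For example (the exact timestamps are illustrative):

    // Previously only the first of these parsed; the others gave an Invalid Date.
    new Date("2022-01-10T12:34:56.789Z");    // exactly three millisecond digits
    new Date("2022-01-10T12:34:56.7Z");      // a single digit
    new Date("2022-01-10T12:34:56.789000Z"); // Discord-style extra trailing zeros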
This follows the ECMA-402 spec and means String.prototype.localeCompare
will automatically become locale-aware once StringCompare is implemented
based on UTS #10.
Parse JSON floating point literals properly, no longer throwing a
SyntaxError when the decimal portion of the number exceeds the capacity
of a u32. Added tests to AK/TestJSON and LibJS/builtins/JSON/JSON.parse.
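For example, a number whose fractional digits overflow a u32 must still
parse (the value below is illustrative):

    // Previously this threw a SyntaxError; it should simply parse as a double.
    JSON.parse("0.12345678901234567");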
The spec version of canonical_numeric_index_string is absurdly complex,
and ends up converting from a string to a number and then back again,
which is both slow and requires a few allocations and a string compare.
Instead, this patch moves away from using Values to represent a
canonical index. In most cases all we need to know is whether a
PropertyKey is an integer between 0 and 2^32 - 2, which we already
compute when we construct a PropertyKey, so the existing is_number()
check is sufficient.
The more expensive case is handling strings containing numbers that
don't roundtrip through string conversion. In most cases these turn
into regular string properties, but for TypedArray access these
property names are not treated as normal named properties.
TypedArrays treat these numeric properties as magic indexes that are
ignored on read and are not stored (but are evaluated) on assignment.
For that reason there's now a mode flag on canonical_numeric_index_string
so that only TypedArrays take the cost of the ToString round trip test.
In order to improve the performance of this path this patch includes
some early returns to avoid conversion in cases where we can quickly
know whether a property can round trip.
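A sketch of the TypedArray behavior that the mode flag preserves:

    const ta = new Uint8Array(2);

    // "1" round-trips through ToString and is in bounds: a normal element write.
    ta["1"] = 7;

    // "-0" and "1.5" are canonical numeric indexes but not valid integer
    // indexes: the assignments are evaluated, then ignored, nothing is stored.
    ta["-0"] = 1;
    ta["1.5"] = 2;
    console.log(ta["-0"], ta["1.5"]); // undefined undefined

    // "01" does not round-trip ("01" -> 1 -> "1"), so it's an ordinary string
    // property and is stored on the object like any other.
    ta["01"] = 3;
    console.log(ta["01"]); // 3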
Before, this would assume that the element found in operator++ was
still valid when dereferencing it in operator*. Since arbitrary code can
have run between that increment and the dereference, this is not always
valid.
To further simplify the logic of the iterator we no longer store the
index in an optional.
This implements ordered sets using Maps with a sentinel value, and
includes some extra set tests.
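The iterator also has to cope with arbitrary code mutating the backing
Map or Set between steps, e.g. (a sketch, not the exact regression test):

    const set = new Set([1, 2, 3]);
    const seen = [];
    for (const value of set) {
        seen.push(value);
        // Deleting an entry mid-iteration must not leave the iterator
        // dereferencing a stale (removed) element.
        set.delete(2);
    }
    console.log(seen); // [1, 3]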
Fixes #11004.
Co-Authored-By: davidot <davidot@serenityos.org>
The current implementation of step 2a sort of manually implemented GetV
with a ToObject + Get combo. But in the call to Get, the receiver wasn't
the correct object. So when invoking toJSON, the receiver was an Object
type rather than a BigInt.
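A sketch of how the wrong receiver is observable, using an accessor for
toJSON (strict mode, so the primitive receiver isn't re-boxed):

    Object.defineProperty(BigInt.prototype, "toJSON", {
        configurable: true,
        get() {
            "use strict";
            // With GetV the receiver is the BigInt value itself ("bigint");
            // with the old ToObject + Get combo it was the wrapper ("object").
            console.log(typeof this);
            return function () {
                return this.toString();
            };
        },
    });

    JSON.stringify(123n); // logs "bigint", returns '"123"'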
This also adds spec comments to SerializeJSONProperty.
This partially reverts commit a962ee020a.
When the sticky bit is set, the global bit should basically be ignored
except by external callers who want their own special behavior. For
example, RegExp.prototype [ @@match ] will use the global flag to
accumulate consecutive matches. But on the first failure, the regex
loop should break.
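For example:

    const re = /a/gy;

    // @@match uses the global flag to accumulate consecutive sticky matches...
    console.log("aab".match(re)); // ["a", "a"]

    // ...but with sticky set, the engine must not scan ahead after a failed
    // attempt, so the first mismatch ends the search.
    console.log("baa".match(re)); // null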
This is a normative change in the ECMA-262 spec:
https://github.com/tc39/ecma262/commit/ca53334
Note that this also fixes a few errors where we errantly converted the
stored time value to local time.
The spec defines a StringToBigInt AO which allows for converting binary,
octal, decimal, and hexadecimal strings to a BigInt. Our conversion was
only allowing for decimal strings.
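For example, all of these now work:

    BigInt("0b1010"); // 10n
    BigInt("0o777");  // 511n
    BigInt("0xff");   // 255n
    BigInt("42");     // 42n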