These syscalls don't need to exist on their own, and they give the false
impression that a caller could set or get the thread name of any process
in the system, which is not true.
Therefore, move the functionality of these syscalls into options of the
prctl syscall, which makes it abundantly clear that these operations can
only be performed by a running thread on the threads of its own process.
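For a sense of the call shape, here is the analogous pattern using Linux's long-standing PR_SET_NAME / PR_GET_NAME prctl options; the option names introduced by this change may differ:

```cpp
#include <stdio.h>
#include <sys/prctl.h>

int main()
{
    // Names the *calling* thread only; there is no way to reach into an
    // arbitrary process from here, which is exactly the point.
    prctl(PR_SET_NAME, "worker");

    char name[16] = {}; // Linux thread names are at most 15 chars + NUL.
    prctl(PR_GET_NAME, name);
    printf("%s\n", name);
    return 0;
}
```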
I'm leaving the --use-bytecode CLI option here as a no-op for now, until
we get all the scripts updated. But the program always runs in bytecode
mode now.
These passes have not been shown to actually optimize any JS, and tests
have become very flaky with optimizations enabled. Until some measurable
benefit is shown, remove the optimization passes to reduce the overhead
of maintaining bytecode operations and to reduce CI churn. The framework
for optimizations will live on in git history, and can be restored once
proven useful.
There are hundreds of test262 tests with the following metadata line:
flags: []
Other engine runners are apparently able to ignore those lines, so we
should as well.
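A minimal sketch of the tolerant behavior, using a hypothetical helper rather than the runner's actual metadata parser:

```cpp
#include <cassert>
#include <string_view>

// Hypothetical helper: treat "flags: []" as "no flags" instead of a
// metadata parse error (whitespace handling kept simple for brevity).
bool is_ignorable_flags_line(std::string_view line)
{
    constexpr std::string_view prefix = "flags:";
    if (!line.starts_with(prefix))
        return false;
    auto value = line.substr(prefix.size());
    while (!value.empty() && value.front() == ' ')
        value.remove_prefix(1);
    while (!value.empty() && value.back() == ' ')
        value.remove_suffix(1);
    return value == "[]";
}

int main()
{
    assert(is_ignorable_flags_line("flags: []"));
    assert(!is_ignorable_flags_line("flags: [module]"));
    return 0;
}
```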
The JS::VM now owns the one Bytecode::Interpreter. We no longer have
multiple bytecode interpreters, and there is no concept of a "current"
bytecode interpreter.
If you ask for VM::bytecode_interpreter_if_exists(), it will return null
if we're not running the program in "bytecode enabled" mode.
If you ask for VM::bytecode_interpreter(), it will return a bytecode
interpreter in all modes. This is used for situations where even the AST
interpreter switches to bytecode mode (generators, etc.)
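A simplified, self-contained model of those two accessors; the real types live in LibJS, so everything below is a stand-in:

```cpp
#include <memory>

struct BytecodeInterpreter { /* stand-in for Bytecode::Interpreter */ };

class VM {
public:
    // Null unless the program runs in "bytecode enabled" mode.
    BytecodeInterpreter* bytecode_interpreter_if_exists()
    {
        return m_bytecode_enabled ? &interpreter() : nullptr;
    }

    // Always returns an interpreter, so even the AST interpreter can
    // switch to bytecode where it must (generators, etc.).
    BytecodeInterpreter& bytecode_interpreter() { return interpreter(); }

private:
    BytecodeInterpreter& interpreter()
    {
        if (!m_interpreter)
            m_interpreter = std::make_unique<BytecodeInterpreter>();
        return *m_interpreter;
    }

    bool m_bytecode_enabled { false };
    std::unique_ptr<BytecodeInterpreter> m_interpreter;
};
```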
Don't try to implement this AO in bytecode. Instead, the bytecode
Interpreter class now has a run() API with the same inputs as the AST
interpreter. It sets up the necessary environments and so on, including
invoking the GlobalDeclarationInstantiation AO.
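A rough model of that flow; apart from run() and GlobalDeclarationInstantiation, every name below is a stand-in:

```cpp
struct Program { /* parsed AST (stand-in) */ };

class Interpreter {
public:
    // Same inputs as the AST interpreter's entry point.
    void run(Program const& program)
    {
        setup_environments();                      // lexical/variable scopes
        global_declaration_instantiation(program); // the AO, done in C++
        execute_bytecode(program);                 // then the compiled body
    }

private:
    void setup_environments() { }
    void global_declaration_instantiation(Program const&) { }
    void execute_bytecode(Program const&) { }
};
```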
This commit moves the implementation of getopt into AK, and converts its
API to understand and use StringView instead of char*.
Everything else is caught in the crossfire of making
Option::accept_value() take a StringView instead of a char const*.
With this, we must now pass a Span<StringView> to ArgsParser::parse().
Applications using LibMain are unaffected, but anything not using it, or
taking its own argc/argv, has to construct a Vector<StringView> for this
method.
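For code that still owns its argc/argv, the conversion could look roughly like this (a sketch against the AK/LibCore APIs described above; exact helper names may vary):

```cpp
#include <AK/Vector.h>
#include <LibCore/ArgsParser.h>
#include <string.h>

int main(int argc, char** argv)
{
    // parse() takes a Span<StringView> now, so callers not going through
    // LibMain must build the views themselves.
    Vector<StringView> arguments;
    arguments.ensure_capacity(argc);
    for (int i = 0; i < argc; ++i)
        arguments.append({ argv[i], strlen(argv[i]) });

    Core::ArgsParser args_parser;
    // ... add_option() / add_positional_argument() calls ...
    args_parser.parse(arguments.span());
    return 0;
}
```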
This includes an Error::create overload to create an Error from a UTF-8
StringView. If creating a String from that view fails, the factory will
return an OOM InternalError instead. VM::throw_completion can also make
use of this overload via its perfect forwarding.
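In spirit, the overload behaves like this sketch (member names, return types, and the message text are assumptions):

```cpp
// Sketch: if allocating the String fails, surface an OOM InternalError
// instead of propagating the allocation failure to the caller.
NonnullGCPtr<Error> Error::create(Realm& realm, StringView message)
{
    auto string = String::from_utf8(message);
    if (string.is_error())
        return InternalError::create(realm, "Out of memory"sv);
    return create(realm, string.release_value());
}
```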
Rather than trying to assume the only two C libraries on Linux are musl
and glibc, this solution fixes musl builds by explicitly checking for
the one C library function we are overriding.
That being said, we should find another solution for retrieving this
error information from crashing tests. Possibly just overriding the
SIGABRT handler would work, though the full solution might require the
test driver to check stderr as well as stdout.
This generally seems like a better name, especially if we somehow also
need a name for "read the entire buffer, but not the entire file"
somewhere down the line.
This will make it easier to support both string types at the same time
while we convert code, and to track down the remaining uses.
One big exception is Value::to_string() in LibJS, where the name is
dictated by the ToString AO.
We have a new, improved string type coming up in AK (OOM aware, no null
state), and while it's going to use UTF-8, the name UTF8String is a
mouthful - so let's free up the String name by renaming the existing
class.
Making the old one have an annoying name will hopefully also help with
quick adoption :^)
...`__attribute__((__noreturn__))`
This is more in line with the definition in glibc's version of the file,
and stops clang from complaining about it originally not being declared
as `[[noreturn]]`.
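For illustration, with a hypothetical function (the real one lives in the file being patched):

```cpp
// Declared the glibc way; clang now knows calls to this never return.
__attribute__((__noreturn__)) void fail_with_message(char const* message);
```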
If a test is supposed to fail during the parse or early phase, we can
stop after parsing. Because the phases are not as clear-cut for modules,
we don't skip the other parts for modules.
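Sketched with hypothetical names, the script-side short-circuit looks like:

```cpp
// Tests expected to fail in the parse/early phase are decided right
// after parsing (scripts only; modules still run the remaining steps).
if (!test.is_module && (test.phase == NegativePhase::Parse || test.phase == NegativePhase::Early)) {
    auto parse_result = parse_program(source);
    return parse_result.is_error() ? TestResult::Passed : TestResult::Failed;
}
```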
When running a larger set of tests on Serenity, the runner would
otherwise trigger a lot of crash reporters, which in turn would lead to
memory starvation and thus even more crashes.
We also protect against recursive assert failures, for example due to
being out of memory.
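One way to guard against that, as a sketch (the handler name is hypothetical):

```cpp
#include <atomic>
#include <cstdlib>

static std::atomic<bool> g_already_asserting { false };

[[noreturn]] void on_assert_failure(char const* message)
{
    // A second assertion while reporting the first (e.g. an OOM inside
    // the reporting path) must not allocate or print; just die.
    if (g_already_asserting.exchange(true))
        std::_Exit(1);
    // ... report `message` ...
    std::abort();
}
```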
With this change the runner now compiles and runs on Serenity :^).
Since setitimer is not implemented in Serenity, we use alarm, which
raises SIGALRM after the timeout. We also don't install a signal
handler, because the handler would have to do things Serenity doesn't
like or allow: the Linux version got away with allocating and writing
inside a signal handler, but that is undefined behavior. Instead we just
let SIGALRM kill the process, which means we can detect timeouts by
checking how the process died rather than by reading its output.
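A self-contained sketch of that timeout pattern (the child's work is a stand-in):

```cpp
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    pid_t pid = fork();
    if (pid == 0) {
        alarm(5); // No handler installed: SIGALRM's default action kills us.
        pause();  // Stand-in for a test that hangs.
        _exit(0);
    }

    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status) && WTERMSIG(status) == SIGALRM)
        printf("timed out\n"); // Detected from how the child died.
    return 0;
}
```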
For now this is a Lagom-only application, as it is not compatible with
Serenity in its current state.
The only change is that it is released under a different license with
permission from all the authors.