We're now getting errors on CI due to gcc-13 being missing. We can
probably be smarter about what packages we install, depending on the
workflow being run. But let's first unblock CI.
The error we get is a bit strange and inconsistent. Some CI runners seem
to already have gcc-13 installed. Others don't and can't find the gcc-13
package without the Ubuntu toolchain test PPA.
With this, only `ContinuePendingUnwind` needs to dynamically check whether
a scheduled return has to go through a `finally` block, which makes the
interpreter loop a bit nicer.
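A rough sketch of the idea, with hypothetical names throughout (`ScheduledJump` handling, `enclosing_finalizer()`, and the surrounding interpreter types are placeholders, not the actual LibJS API):

```cpp
// Hypothetical handler for ContinuePendingUnwind: if a return was scheduled
// while inside a try block, detour through the enclosing `finally` block
// (if any) before actually returning from the interpreter loop.
void Interpreter::handle_continue_pending_unwind(ContinuePendingUnwind const& instruction)
{
    if (!m_scheduled_return_value.has_value()) {
        // Nothing pending; just continue at the resume target.
        jump_to(instruction.resume_target());
        return;
    }

    if (auto finalizer = current_block().enclosing_finalizer(); finalizer.has_value()) {
        // A `finally` block still has to run; keep the return scheduled
        // and execute the finalizer first.
        jump_to(*finalizer);
        return;
    }

    // No finalizer left between us and the caller: perform the return now.
    do_return(m_scheduled_return_value.release_value());
}
```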
This actually allows us to re-introduce the ldd utility as a symlink to
our dynamic loader, so ldd now behaves exactly like on Linux: it will
load all dynamic dependencies of an ELF executable.
This has the advantage that running ldd on an ELF executable will
provide an exact preview of the order in which the dynamic loader loads
the executable and its dependencies.
In preparation for introducing ldd as a symlink to /usr/lib/Loader.so,
we rename the ldd utility to elfdeps, as its sole purpose is to list
ELF object dependencies, not to show how the dynamic loader loads them.
This is useful for testing ELF binaries that expose different
functionality based on the argv[0] string (BuggieBox, for example, uses
it to determine which utility to run).
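For illustration, a minimal self-contained sketch of argv[0]-based dispatch, the pattern multi-call binaries like BuggieBox rely on (the utility names and entry points here are made up):

```cpp
#include <cstdio>
#include <string>

// Hypothetical utility entry points; a real multi-call binary would dispatch
// to its actual tools here.
static int run_ls(int, char**) { puts("ls: listing files"); return 0; }
static int run_cat(int, char**) { puts("cat: concatenating files"); return 0; }

int main(int argc, char** argv)
{
    // Strip leading directory components from argv[0] so that both
    // "/bin/ls" and "ls" select the same utility.
    std::string invoked_as = argv[0];
    if (auto slash = invoked_as.rfind('/'); slash != std::string::npos)
        invoked_as = invoked_as.substr(slash + 1);

    if (invoked_as == "ls")
        return run_ls(argc, argv);
    if (invoked_as == "cat")
        return run_cat(argc, argv);

    fprintf(stderr, "Unknown utility name: %s\n", invoked_as.c_str());
    return 1;
}
```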
This change essentially gives the DynamicLoader two roles. The first is
to be invoked by the kernel to dynamically link an ELF executable at
runtime.
The second is to allow running ELF executables explicitly from
userspace: the kernel still runs the DynamicLoader as the "intended"
program, but the DynamicLoader can now do its own command-line argument
parsing and run a specified binary, with future options being easy to
implement.
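A rough sketch of how such an entry point might distinguish the two roles; every name here (`invoked_directly_by_user`, `load_and_run`, `main_program_path`) is a placeholder rather than the real loader's API:

```cpp
// Hypothetical loader entry point: when the loader is executed explicitly
// from userspace, it parses its own arguments and loads the requested
// program; otherwise it links the main program the kernel handed to it.
int loader_main(int argc, char** argv)
{
    if (invoked_directly_by_user(argv[0])) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <program> [arguments...]\n", argv[0]);
            return 1;
        }
        // Future command-line options (e.g. an ldd-style "list only" mode)
        // would be parsed here before handing off.
        return load_and_run(argv[1], argc - 1, argv + 1);
    }

    // Invoked by the kernel as the ELF interpreter: link and run the
    // main program as usual.
    return load_and_run(main_program_path(), argc, argv);
}
```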
This will be used in the DynamicLoader code, as it can't do syscalls via
LibCore code.
Because we can't use most of LibCore, we convert the versioning code in
Version.cpp to use LibC's uname() function.
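As an illustration, a minimal sketch of building a version string from uname() alone; the output format here is made up and is not the actual Version.cpp code:

```cpp
#include <stdio.h>
#include <sys/utsname.h>

// Build a human-readable version string using only LibC's uname(),
// avoiding any dependency on LibCore.
int main()
{
    struct utsname info;
    if (uname(&info) < 0) {
        perror("uname");
        return 1;
    }
    // Example formatting only, e.g. "SerenityOS 1.0 (x86_64)".
    printf("%s %s (%s)\n", info.sysname, info.release, info.machine);
    return 0;
}
```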
Prepare to remove the big lock on PCI::Access in a future commit, so we
only take a spinlock on a specific PCI HostController when needed,
instead of locking the entire subsystem.
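Roughly, the direction is per-controller locking instead of one subsystem-wide lock. A simplified sketch using std::mutex as a stand-in for the kernel's spinlock type; the member and method names are illustrative, not the actual PCI code:

```cpp
#include <cstdint>
#include <mutex>

// Each host controller carries its own lock, so touching one controller's
// configuration space no longer serializes the whole PCI subsystem.
class HostController {
public:
    uint32_t read_config(uint32_t bus, uint32_t device, uint32_t function, uint32_t offset)
    {
        std::lock_guard locker(m_lock); // Only this controller is locked.
        return raw_config_read(bus, device, function, offset);
    }

private:
    uint32_t raw_config_read(uint32_t, uint32_t, uint32_t, uint32_t) { return 0; /* placeholder */ }
    std::mutex m_lock; // Stand-in for a kernel spinlock.
};
```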
We never used these virtual methods outside their own implementations,
so let's stop pretending that we might need them for some unknown
purpose.
The new baked image is a Prekernel and a Kernel baked together, so we
essentially no longer need to pass the Prekernel as -kernel and the
actual kernel image as -initrd to QEMU. This leaves open the option of
passing an actual initrd or initramfs module later on via multiboot.
This is essentially the de facto way to interface with FUSE, and as
such, pretty much every port that uses FUSE in any way will depend on
this. Of all the examples that we compile, 'hello', 'hello_ll' and
'passthrough' have been verified to work.
Currently, if building under `nix-shell Toolchain`, SerenityOS'
gcc won't build because of hardening options added by Nix,
more specifically format-security, which breaks the build.
Instead of SetVariable having 2x2 modes for variable/lexical and
initialize/set, those 4 modes are now separate instructions, which
makes each instruction much less branchy.
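The shape of the change, very roughly (these struct names are illustrative placeholders, not the actual LibJS bytecode instructions): instead of one instruction that branches on two flags at runtime, each combination gets its own straight-line instruction.

```cpp
// Before (sketch): one instruction, branching on both flags every execution.
struct SetVariable {
    enum class Scope { Variable, Lexical } scope;
    enum class Mode { Initialize, Set } mode;
    // execute() has to branch on scope and mode every time it runs.
};

// After (sketch): four separate instructions, one per combination,
// each with a branch-free fast path.
struct SetVariableBinding { /* set an existing variable-scoped binding */ };
struct SetLexicalBinding { /* set an existing lexically-scoped binding */ };
struct InitializeVariableBinding { /* initialize a variable-scoped binding */ };
struct InitializeLexicalBinding { /* initialize a lexically-scoped binding */ };
```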
The last completion value in a function is not exposed to the language,
since functions always either return something, or undefined.
Given this, we can avoid emitting code that propagates the completion
value from various statements, as long as we know we're generating code
for a context where the completion value is not accessible. In practical
terms, this means that function code gets to do less completion
shuffling, while global and eval code has to keep doing it.
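A hedged sketch of what this can look like in the bytecode generator; the flag and opcode names below are placeholders, not the actual LibJS API:

```cpp
// The generator knows whether the code it emits can observe the completion
// value (true for global/eval code, false for function code), and statement
// codegen only emits the extra bookkeeping when it can.
void Generator::maybe_emit_completion_value(Register value)
{
    if (!m_completion_value_is_observable)
        return; // Function code: the value can never be observed, skip the store.
    emit<Op::Store>(completion_register(), value);
}
```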
Before this change, setting the HasBeen{Readable,Writable,Executable}
bits in m_access was done by Region::set_access_bit(), which ORed in
(access << 4) whenever a certain access bit was enabled.
Now, the set_{readable,writable,executable} methods instead set an
appropriate SetOnce flag that can be checked later.
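For illustration, a minimal self-contained sketch of the SetOnce idea; this is a simplified stand-in, not the actual AK/Kernel implementation:

```cpp
#include <cassert>
#include <cstdio>

// A flag that can only ever go from unset to set, never back.
class SetOnce {
public:
    void set() { m_value = true; }
    bool was_set() const { return m_value; }

private:
    bool m_value { false };
};

// Simplified Region-like class: instead of OR-ing shifted bits into a
// combined access field, each "has been ..." fact gets its own SetOnce flag.
class Region {
public:
    void set_readable(bool readable)
    {
        m_readable = readable;
        if (readable)
            m_has_been_readable.set();
    }

    bool has_been_readable() const { return m_has_been_readable.was_set(); }

private:
    bool m_readable { false };
    SetOnce m_has_been_readable;
};

int main()
{
    Region region;
    region.set_readable(true);
    region.set_readable(false);
    // The "has been readable" fact sticks even after access is revoked.
    assert(region.has_been_readable());
    puts("ok");
    return 0;
}
```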