FixedArray no longer exposes any infallible constructors. Instead, it
exposes fallible methods, which makes it usable in OOM-safe code.
This commit also converts the rest of the system to use the new API.
However, VMObject, for example, can't take advantage of this yet: that
would require giving VMObject a fallible static construction method,
which in turn would mean a very fundamental change to its whole
inheritance hierarchy.
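As a rough sketch of what the fallible pattern looks like in practice
(the method name follows the usual try_create() convention; treat the
exact signature as an assumption):

ErrorOr<void> make_scratch_buffer()
{
    // try_create() returns ErrorOr<FixedArray<u8>>, so an allocation
    // failure propagates as an error instead of asserting.
    auto buffer = TRY(FixedArray<u8>::try_create(4096));
    // ... use buffer ...
    return {};
}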
The previous implementation had some pretty short cycles and two fixed
points (1711463637 and 2389024350). If two keys hashed to one of these
values, insertions and lookups would loop forever.
This version is based on a standard xorshift PRNG with period 2**32-1.
The all-zero state is usually forbidden, so we insert it into the cycle
at an arbitrary location.
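For reference, the core step of a standard 32-bit xorshift generator
looks roughly like this (Marsaglia's 13/17/5 constants; the zero-state
handling described above is an extra tweak layered on top):

u32 xorshift32(u32 state)
{
    // Cycles through all 2**32 - 1 non-zero 32-bit values.
    state ^= state << 13;
    state ^= state >> 17;
    state ^= state << 5;
    return state;
}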
This is just to allow removing the 'clang-format off' directive. This
concept is only used within this header, so it doesn't need to be in the
global namespace.
We have whittled away at the usages of these AK::Vector APIs in the
Kernel. This change hides them when building the Kernel to make sure
no new usages get added.
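A minimal sketch of how infallible APIs can be hidden from kernel
builds (the KERNEL macro is how kernel code is distinguished; the
exact mechanism inside AK/Vector.h may differ):

#ifndef KERNEL
    // Only userland gets the infallible variant; kernel code has to
    // go through try_append() and handle allocation failure.
    void append(T&& value);
#endif
    ErrorOr<void> try_append(T&& value);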
If we didn't define our own `move` and `forward` and inject them into
the `std` namespace, the `using` declarations for them would fail to
compile whenever nothing had included the standard definitions
beforehand. Now, we include `<utility>` from the standard library
whenever we aren't replacing `std` ourselves.
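Sketched out, the arrangement looks something like this (the guard
macro is hypothetical; the real header picks its own condition):

#ifdef AK_REPLACES_STD
namespace std {
// Minimal definition so the later `using std::move;` always resolves,
// even when no standard header has been included yet.
template<typename T>
constexpr T&& move(T& value) { return static_cast<T&&>(value); }
}
#else
// Otherwise, pull in the canonical definitions from the STL.
#include <utility>
#endif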
This was a premature optimization from the early days of SerenityOS.
The eternal heap was a simple bump pointer allocator over a static
byte array. My original idea was to avoid heap fragmentation and improve
data locality, but both ideas were rooted in cargo culting, not data.
We reserved 4 MiB at boot but only ever used ~256 KiB, wasting the
rest.
This patch replaces all kmalloc_eternal() usage by regular kmalloc().
This is an interface to downcast(), which degrades compile-time errors
into runtime errors and allows seemingly-correct-but-not-quite
constructs like the following to compile, only to fail at runtime:
Variant<NonnullRefPtr<T>, U> foo = ...;
Variant<RefPtr<T>, U> bar = foo;
The expectation here is that `foo` is converted to a RefPtr<T> if it
contains one, and remains a U otherwise, but in reality, the
NonnullRefPtr<T> variant is simply dropped on the floor, and the
resulting variant becomes invalid, failing the assertion in downcast().
This commit adds a Variant<Ts...>(Variant<NewTs...>) constructor that
ensures at compile time that no alternative can be left out, for the
users that were using this interface merely to increase the number of
alternatives (for instance, LibSQL's Value class).
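With the new constructor, widening the set of alternatives still
works, while a conversion that would silently drop an alternative no
longer compiles. Illustratively:

Variant<int> narrow = 42;
Variant<int, float> wide = narrow;  // fine: every alternative of `narrow` exists in `wide`

Variant<int, float> source = 1.0f;
Variant<int> sink = source;         // rejected at compile time: `float` has nowhere to go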
This is a refinement of 49cbd4dcca.
Previously, the container was scanned to compute its size in the
unhappy path. Now, using `all_of`, both the happy and the unhappy
paths should be fast.
In order to reduce our reliance on __builtin_{ffs, clz, ctz, popcount},
this commit removes all calls to these functions and replaces them with
the equivalent functions in AK/BuiltinWrappers.h.
While watching Andreas' most recent video, I noticed that this
function only worked with 32-bit values even though it was a serious
performance bottleneck for the kernel. As such, I reworked it to use
`size_t`, so it can now switch to 64-bit sweeps on 64-bit platforms.
This took test-js from 12.5 seconds hot to 11.5 seconds hot on my
machine when running on KVM x86_64.
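The idea behind the word-sized sweep, roughly (the function and the
helper it calls are illustrative, not the exact kernel code):

#include <AK/BuiltinWrappers.h>

size_t find_first_set_bit(size_t const* words, size_t word_count)
{
    for (size_t i = 0; i < word_count; ++i) {
        if (words[i] != 0) {
            // On 64-bit platforms each iteration now inspects 64 bits
            // at once instead of being limited to 32-bit chunks.
            return i * sizeof(size_t) * 8 + count_trailing_zeroes(words[i]);
        }
    }
    return word_count * sizeof(size_t) * 8; // no set bit found
}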
The goal of this file is to provide C++ overloads for the standard
builtin functions that we use. It contains fallback implementations
for systems where the builtins are not available.
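The shape of the header is roughly this (a trimmed sketch; the real
file covers more integer widths and more functions):

#if defined(__GNUC__) || defined(__clang__)
inline int popcount(unsigned int value) { return __builtin_popcount(value); }
inline int popcount(unsigned long long value) { return __builtin_popcountll(value); }
#else
// Portable fallback for toolchains without the builtin.
template<typename T>
inline int popcount(T value)
{
    int count = 0;
    for (; value != 0; value &= value - 1)
        ++count;
    return count;
}
#endif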
Before, if we couldn't read enough data out of the buffer, we would
refill the buffer and recursively call read(), which in turn would
read data from the buffer into the re-sliced target span. This
incurred very costly, superfluous memmoves when large chunks of data
were read from a buffered stream.
This commit changes the behavior so that when we exhaust the buffer, we
first read any necessary additional data directly into the target, then
fill up the buffer again. Effectively, this results in drastically
reduced overhead from Buffered when reading large contiguous chunks.
Of course, Buffered is designed to speed up data access patterns with
small frequent reads, but it's nice to be able to combine both access
patterns on one stream without penalties either way.
The final performance gain is roughly an additional 80% in abench
decoding speed.
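In sketch form, the new read path does something like the following
(the helper names are made up for illustration, not the actual
Buffered internals):

size_t read(Bytes target)
{
    // 1. Serve whatever is already sitting in the buffer.
    size_t nread = copy_from_buffer(target);
    if (nread < target.size()) {
        // 2. Buffer exhausted: read the rest straight into the
        //    caller's span instead of bouncing it through the buffer.
        nread += m_underlying_stream.read(target.slice(nread));
        // 3. Top the buffer back up for subsequent small reads.
        refill_buffer();
    }
    return nread;
}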
This unbreaks the /var/run/utmp system, which starts out as an empty
string and is then turned into an object by the first update.
This isn't necessarily the best way for this to work, but it's how
it used to work, so this just fixes the regression for now.
This fixes at least half of our LibC includes in the kernel. The source
of truth for errno codes and their description strings now lives in
Kernel/API/POSIX/errno.h as an enumeration, which LibC includes.
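The shared header looks roughly like this (a trimmed, illustrative
excerpt; the enum name and layout here are an assumption, and the real
enumeration covers the full set of codes):

// Kernel/API/POSIX/errno.h
enum ErrnoCode {
    ESUCCESS = 0,
    EPERM = 1,  // Operation not permitted
    ENOENT = 2, // No such file or directory
    ESRCH = 3,  // No such process
    // ...
};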