- Use a simple pthread_mutex_t instead of bringing in headers from
LibThreading just to get a mutex.
- Use a normal mutex instead of a recursive one.
- Remove redundant locking in realloc().
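Roughly, the new pattern looks like this (the lock name here is hypothetical):

```cpp
#include <pthread.h>
#include <stddef.h>

// Hypothetical name; guards the allocator's internal state. A statically
// initialized pthread_mutex_t is already a plain, non-recursive mutex.
static pthread_mutex_t s_malloc_lock = PTHREAD_MUTEX_INITIALIZER;

void* malloc(size_t size)
{
    pthread_mutex_lock(&s_malloc_lock);
    void* ptr = nullptr; // ... actual allocation work goes here ...
    pthread_mutex_unlock(&s_malloc_lock);
    return ptr;
}
```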
When creating uninitialized storage for variables, we need to make sure
that the alignment is correct. Fixes a KUBSAN failure when running
kernels compiled with Clang.
In `Syscalls/socket.cpp`, we can simply use local variables, as
`sockaddr_un` is a POD type.
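For illustration, both patterns look roughly like this (sockaddr_un stands in for any type):

```cpp
#include <sys/un.h>

void example()
{
    // For a POD type like sockaddr_un, a plain local variable is
    // already correctly aligned; no raw byte buffer is needed.
    sockaddr_un address {};
    (void)address;

    // In the general case, uninitialized byte storage must state the
    // type's alignment explicitly, or KUBSAN will flag the access.
    alignas(sockaddr_un) unsigned char storage[sizeof(sockaddr_un)];
    auto* typed = reinterpret_cast<sockaddr_un*>(storage);
    (void)typed;
}
```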
Along with moving the `alignas` specifier to the correct member, this
makes `AK::Optional`'s internal buffer non-zeroed by default.
GCC emitted bogus uninitialized memory access warnings, so we now use
`__builtin_launder` to tell the compiler that we know what we are doing.
This might disable some optimizations, but judging by how GCC failed to
notice that the memory's initialization is dependent on `m_has_value`,
I'm not sure that's a bad thing.
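The resulting pattern looks roughly like this (an illustration, not `AK::Optional`'s exact code):

```cpp
template<typename T>
class Optional {
public:
    T& value()
    {
        // __builtin_launder tells GCC the storage really holds a live
        // T, silencing the bogus uninitialized-memory warnings.
        return *__builtin_launder(reinterpret_cast<T*>(m_storage));
    }

private:
    // alignas sits on the buffer member itself; the buffer is no
    // longer zeroed, since validity is tracked by m_has_value.
    alignas(T) unsigned char m_storage[sizeof(T)];
    bool m_has_value { false };
};
```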
Previously, each malloc size class would keep around a limited number
of unused blocks, marked with MADV_SET_VOLATILE, which could then be
reinitialized when additional blocks were needed.
This changes malloc() so that it also keeps around a number of blocks
without marking them with MADV_SET_VOLATILE. I termed these "hot"
blocks, whereas blocks marked with MADV_SET_VOLATILE are called
"cold" blocks because they're more expensive to reinitialize.
In the worst case this could increase memory usage per process by
1MB when a program requests a bunch of memory and frees all of it.
Also, in order to make more efficient use of these unused blocks,
they're now shared between size classes.
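The recycling policy, sketched roughly (names and sizes here are hypothetical, not the actual malloc internals):

```cpp
#include <AK/Vector.h>
#include <sys/mman.h>

struct ChunkedBlock; // malloc's internal block type

static constexpr size_t block_size = 64 * 1024; // hypothetical
static constexpr size_t max_hot_blocks = 16;    // hypothetical cap
static Vector<ChunkedBlock*> s_hot_blocks;      // shared by all size classes
static Vector<ChunkedBlock*> s_cold_blocks;

void retire_block(ChunkedBlock* block)
{
    if (s_hot_blocks.size() < max_hot_blocks) {
        // "Hot": pages stay committed, so reusing the block is cheap.
        s_hot_blocks.append(block);
        return;
    }
    // "Cold": the kernel may reclaim these pages, so the block must be
    // reinitialized when it's reused.
    madvise(block, block_size, MADV_SET_VOLATILE);
    s_cold_blocks.append(block);
}
```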
Problem:
- `size_classes` is a C-style array, which makes it difficult to use in
  algorithms.
- The generic `all_of` algorithm is reimplemented by hand for this
  specific case.
Solution:
- Change `size_classes` to be an `Array`.
- Directly use the generic `all_of` algorithm instead of reimplementing
  it (see the sketch below).
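With an `Array`, the generic algorithm applies directly; roughly (placeholder values, not the real size-class table):

```cpp
#include <AK/AllOf.h>
#include <AK/Array.h>

static constexpr Array<size_t, 4> size_classes { 16, 32, 64, 128 };

static bool all_size_classes_are_powers_of_two()
{
    // The generic algorithm works directly on the Array's iterators.
    return all_of(size_classes.begin(), size_classes.end(),
        [](size_t size) { return (size & (size - 1)) == 0; });
}
```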
This implements the macOS API malloc_good_size(), which returns the
true allocation size for a given requested allocation size. This
allows us to make use of all the available memory in a malloc chunk.
For example, for a malloc request of 35 bytes our malloc would
internally use a chunk of size 64; however, the remaining 29 bytes
would be unused.
Knowing the true allocation size allows us to request memory that
would otherwise be wasted and make it available to Vector, HashTable,
and potentially other callers in the future.
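A rough usage sketch (the header location is an assumption):

```cpp
#include <malloc.h> // assuming malloc_good_size() is declared here
#include <stdlib.h>

void example()
{
    size_t requested = 35;
    size_t usable = malloc_good_size(requested); // 64 in the example above
    char* buffer = static_cast<char*>(malloc(usable));
    // A Vector or HashTable can now treat its capacity as 'usable'
    // (64) instead of 'requested' (35), wasting no tail bytes.
    free(buffer);
}
```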
The LOCKER() macro appears to have been added to LibThread as a
userspace analog of the LOCKER() macro that previously existed in the
kernel. The kernel version used the macro to inject __FILE__ and
__LINE__ into the lock acquisition for debugging. However,
AK::SourceLocation removed the need for the macro, so the kernel
version no longer exists. The LOCKER() in LibThread doesn't appear to
actually need to be a macro: using the type directly works fine, and
is arguably more readable as it removes an unnecessary level of
indirection.
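Roughly, the before/after (the surrounding class is made up for illustration):

```cpp
#include <LibThread/Lock.h>

class Worker {
public:
    void do_work()
    {
        // Before: LOCKER(m_lock);
        // After: the RAII type directly, one less level of indirection.
        LibThread::Locker locker(m_lock);
        // ... critical section ...
    }

private:
    LibThread::Lock m_lock;
};
```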
Legally we could just return a null pointer; however, returning a
pointer other than the null pointer is more compatible with
improperly written software that assumes that a null pointer means
allocation failure.
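A rough sketch, assuming this concerns zero-size requests (the C standard allows either behavior there):

```cpp
#include <stdlib.h>

void example()
{
    void* p = malloc(0);
    // Improperly written software checks for null even on a zero-size
    // request, so a unique non-null pointer keeps it working:
    if (!p) {
        // would be (wrongly) treated as out-of-memory
    }
    free(p);
}
```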
By default malloc manages memory internally in larger blocks. When
one of those blocks is added, we initialize a free list by touching
each of the new block's pages, thereby committing all that memory
upfront.
This changes malloc to build the free list on demand, which as a
bonus also distributes the latency hit for new blocks more evenly,
because the page faults for the zero pages no longer happen all at
once.
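Roughly, the on-demand construction looks like this (structure and names are hypothetical):

```cpp
#include <stddef.h>

struct FreelistEntry {
    FreelistEntry* next { nullptr };
};

struct ChunkedBlock {
    static constexpr size_t chunk_size = 32;  // hypothetical
    static constexpr size_t chunk_count = 64; // hypothetical
    unsigned char m_chunks[chunk_count * chunk_size];
    size_t m_next_lazy_freelist_index { 0 };
    FreelistEntry* m_freelist { nullptr };

    void* allocate_chunk()
    {
        // Hand out never-touched chunks first: each zero page is only
        // faulted in when the allocator actually reaches it.
        if (m_next_lazy_freelist_index < chunk_count)
            return &m_chunks[m_next_lazy_freelist_index++ * chunk_size];
        // Otherwise reuse a chunk that was freed and linked back in.
        auto* entry = m_freelist;
        m_freelist = entry->next;
        return entry;
    }
};
```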
SPDX License Identifiers are a more compact / standardized
way of representing file license information.
See: https://spdx.dev/resources/use/#identifiers
This was done with the `ambr` search and replace tool.
`ambr --no-parent-ignore --key-from-file --rep-from-file key.txt rep.txt *`
calloc() was internally calling malloc_impl(), which would scrub all
the allocated memory with the scrub byte (0xdc). We would then
immediately zero-fill the memory.
This was obviously a waste of time, and our hash tables were doing
it all the time. :^)
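The fix, sketched roughly (the enum and parameter are illustrative, not the exact internal API):

```cpp
#include <stdlib.h>
#include <string.h>

enum class CallerWillInitializeMemory { No, Yes };
void* malloc_impl(size_t, CallerWillInitializeMemory); // internal

void* calloc(size_t count, size_t size)
{
    size_t new_size;
    if (__builtin_mul_overflow(count, size, &new_size))
        return nullptr;
    // Skip the 0xdc scrub: we're about to overwrite every byte anyway.
    void* ptr = malloc_impl(new_size, CallerWillInitializeMemory::Yes);
    if (ptr)
        memset(ptr, 0, new_size);
    return ptr;
}
```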
(...and ASSERT_NOT_REACHED => VERIFY_NOT_REACHED)
Since all of these checks are done in release builds as well,
let's rename them to VERIFY to prevent confusion, as everyone is
used to assertions being compiled out in release.
We can introduce a new ASSERT macro that is specifically for debug
checks, but I'm doing this wholesale conversion first since we've
accumulated thousands of these already, and it's not immediately
obvious which ones are suitable for ASSERT.
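After the rename, usage looks like this (illustrative function; both checks stay active in release builds):

```cpp
#include <AK/Assertions.h>
#include <limits.h>

int checked_divide(int dividend, int divisor)
{
    VERIFY(divisor != 0); // still checked in release builds
    if (dividend == INT_MIN && divisor == -1)
        VERIFY_NOT_REACHED(); // was ASSERT_NOT_REACHED()
    return dividend / divisor;
}
```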
Just ignore all these environment flags if the AT_SECURE flag is set in
the program's auxiliary vector.
This prevents a user from tricking set-uid programs into dumping debug
information via environment flags.
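Roughly (the flag name is illustrative; getauxval() is the usual accessor for the auxiliary vector):

```cpp
#include <stdlib.h>
#include <sys/auxv.h>

static bool allow_debug_env_flags()
{
    // AT_SECURE is nonzero for set-uid/set-gid (or otherwise
    // privileged) programs; honoring debug flags there would let a
    // user extract information they shouldn't see.
    return getauxval(AT_SECURE) == 0;
}

void maybe_enable_debug_dumps()
{
    if (!allow_debug_env_flags())
        return;
    if (getenv("DUMP_DEBUG_INFO")) { // illustrative flag name
        // ... enable debug dumping ...
    }
}
```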