In e7fb70b05, regular kmalloc was changed to return nullptr on
allocation failure instead of crashing. The `kmalloc_aligned_cxx`
wrapper used by the aligned operator new should do the same.
Now that we have a significant amount of code paths handling OOM, let's
enable kmalloc and friends to actually return nullptr. This way we can
start stressing these paths and validating that they all work as expected.
By making these functions static we close a window where we could get
preempted after calling Processor::current() and be moved to another
processor.
Co-authored-by: Tom <tomut@yahoo.com>
Kernels built with Clang seem to be quite allocation-heavy compared to
their GCC counterparts. We would sometimes end up crashing during boot
because the eternal ranges had no free capacity.
The compiler will use these to allocate objects that have alignment
requirements greater than that of our normal `operator new` (4/8 byte
aligned).
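For context, these are the C++17 alignment-aware overloads the compiler
selects when `alignof(T)` exceeds the default new alignment; a sketch of
the declarations involved (the kernel versions, which route through
`kmalloc_aligned_cxx`, may differ in detail):
```cpp
#include <stddef.h>
#include <new> // std::align_val_t

void* operator new(size_t size, std::align_val_t alignment);
void* operator new[](size_t size, std::align_val_t alignment);
void operator delete(void* ptr, std::align_val_t alignment) noexcept;
void operator delete[](void* ptr, std::align_val_t alignment) noexcept;
```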
This means we can now use smart pointers for over-aligned types.
Fixes a FIXME.
This was only used by a single class (AK::ByteBuffer) in the kernel
and not in an OOM-safe way.
Now that ByteBuffer no longer uses it, there's no need for the kernel
heap to burden itself with supporting this.
C++14 gave us sized operator delete, but we haven't been taking
advantage of it. Let's get to a point where it can help us by
adding kfree_sized(void*, size_t).
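A minimal sketch of how the sized delete can then forward the
compiler-provided size straight to the allocator (the in-tree
definitions may differ):
```cpp
#include <stddef.h>

void kfree_sized(void*, size_t); // the new entry point from this commit

// C++14 sized delete: the compiler passes the object's size, so the
// allocator no longer has to look it up itself.
void operator delete(void* ptr, size_t size) noexcept
{
    kfree_sized(ptr, size);
}
```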
When creating uninitialized storage for variables, we need to make sure
that the alignment is correct. Fixes a KUBSAN failure when running
kernels compiled with Clang.
In `Syscalls/socket.cpp`, we can simply use local variables, as
`sockaddr_un` is a POD type.
Along with moving the `alignas` specifier to the correct member,
`AK::Optional`'s internal buffer has been made non-zeroed by default.
GCC emitted bogus uninitialized memory access warnings, so we now use
`__builtin_launder` to tell the compiler that we know what we are doing.
This might disable some optimizations, but judging by how GCC failed to
notice that the memory's initialization is dependent on `m_has_value`,
I'm not sure that's a bad thing.
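A minimal sketch of the resulting pattern, with illustrative member
names rather than the exact AK::Optional layout:
```cpp
template<typename T>
class OptionalSketch {
private:
    // alignas sits on the storage member itself; the buffer is left
    // uninitialized, since it is only meaningful when m_has_value is true.
    alignas(T) unsigned char m_storage[sizeof(T)];
    bool m_has_value { false };

    T* pointer()
    {
        // __builtin_launder stops the compiler from reasoning about the
        // buffer's initialization state, silencing the bogus warnings.
        return __builtin_launder(reinterpret_cast<T*>(m_storage));
    }
};
```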
In standard C++, operators `new` and `new[]` are guaranteed to return a
valid (non-null) pointer and throw an exception if the allocation
couldn't be performed. Based on this, compilers did not check the
returned pointer before attempting to use it for object construction.
To avoid this, the allocator operators were changed to be `noexcept` in
PR #7026, which made GCC emit the desired null checks. Unfortunately,
this is a non-standard feature, which meant that Clang would not accept
these function definitions, as they did not match its expected
declarations.
To make compiling using Clang possible, the special "nothrow" versions
of `new` are implemented in this commit. These take a tag type of
`std::nothrow_t` (used for disambiguating from placement new/etc.), and
are allowed by the standard to return null. There is a global variable,
`std::nothrow`, declared with this type, which is also exported into the
global namespace.
To perform fallible allocations, the following syntax should be used:
```cpp
auto ptr = new (nothrow) T;
```
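Since this form is allowed to return null, the result must be checked
before use:
```cpp
auto ptr = new (nothrow) T;
if (!ptr) {
    // Allocation failed; recover here instead of dereferencing null.
}
```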
As we don't support exceptions in the kernel, the only way to uphold the
"throwing" new's guarantee is to abort if the allocation couldn't be
performed. Once we have proper OOM handling in the kernel, this should
only be used for critical allocations, where we wouldn't be able to
recover from allocation failures anyway.
There were a few cases where we could end up logging profiling events
before or after the associated process or thread exists in the profile:

- After enabling profiling we might end up with CPU samples before we
  had a chance to synthesize process/thread creation events.
- After a thread exits we would still log associated kmalloc/kfree
  events. Instead we now just ignore those events.
This implements the macOS API malloc_good_size() which returns the
true allocation size for a given requested allocation size. This
allows us to make use of all the available memory in a malloc chunk.
For example, for a malloc request of 35 bytes our malloc would
internally use a chunk of size 64, however the remaining 29 bytes
would be unused.
Knowing the true allocation size lets us claim memory that would
otherwise be wasted as usable capacity, and make it available to
Vector, HashTable and potentially other callers in the future.
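For example, a caller could take advantage of it like this (a sketch,
reusing the 35/64-byte numbers from above):
```cpp
size_t requested = 35;
size_t capacity = malloc_good_size(requested); // 64 with this allocator's chunk sizes
auto* buffer = static_cast<unsigned char*>(malloc(capacity));
// All 64 bytes are usable capacity; the 29-byte tail is no longer wasted.
```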
Ideally we would never allocate under a spinlock, as doing so has many
performance pitfalls and potential functional pitfalls (deadlocks).
We violate that rule in many places today, but we need a tool to track
them all down and fix them. This change introduces a new macro option
named `KMALLOC_VERIFY_NO_SPINLOCK_HELD` which can catch these
situations at runtime via an assert.
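Conceptually, the check at the top of kmalloc amounts to something like
this (a sketch; how the kernel actually tracks held spinlocks is an
assumption here):
```cpp
#ifdef KMALLOC_VERIFY_NO_SPINLOCK_HELD
    // Assumed helper: a per-processor count of currently held spinlocks.
    VERIFY(Processor::current().num_spinlocks_held() == 0);
#endif
```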
For Kernel OOM hardening to work correctly, we need to be able to
call a "nothrow" version of operator new. Unfortunately the default
"throwing" version of operator new assumes that the allocation will
never return on failure and will always throw an exception. This isn't
true in the Kernel, as we don't have exceptions. So if we call the
normal/throwing new and kmalloc returns NULL, the generated code will
happily go and dereference that NULL pointer by invoking the constructor
before we have a chance to handle the failure.
To fix this we declare operator new as noexcept in the Kernel headers,
which will allow the caller to actually handle allocation failure.
The delete implementations need to match the prototype of the new which
allocated them, so we need to define delete as noexcept as well. GCC
then errors out, declaring that you should implement sized delete as
well, so this change provides those stubs in order to compile cleanly.
Finally, the new operator definitions have been standardized as being
declared with [[nodiscard]] to avoid potential memory leaks, so let's
declare the kernel versions that way as well.
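Put together, the kernel-side declarations look roughly like this (a
sketch; the actual headers may differ slightly):
```cpp
[[nodiscard]] void* operator new(size_t size) noexcept;
[[nodiscard]] void* operator new[](size_t size) noexcept;

void operator delete(void* ptr) noexcept;
void operator delete(void* ptr, size_t size) noexcept;   // sized delete stub
void operator delete[](void* ptr) noexcept;
void operator delete[](void* ptr, size_t size) noexcept; // sized delete stub
```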
SPDX License Identifiers are a more compact / standardized
way of representing file license information.
See: https://spdx.dev/resources/use/#identifiers
This was done with the `ambr` search and replace tool.
ambr --no-parent-ignore --key-from-file --rep-from-file key.txt rep.txt *
A lot of code is shared between i386/i686/x86 and x86_64, and a lot of
it will probably be used for compatibility modes. So we start by moving
the headers into one directory. We will probably be able to move some
cpp files as well.
The system is extremely sensitive to heap allocations during heap
expansion. This was causing frequent OOM panics under various loads.
Work around the issue for now by putting the logging behind
KMALLOC_DEBUG. Ideally dmesgln() & friends would not require any
heap allocations, but we're not there right now.
Fixes #5724.
(...and ASSERT_NOT_REACHED => VERIFY_NOT_REACHED)
Since all of these checks are done in release builds as well,
let's rename them to VERIFY to prevent confusion, as everyone is
used to assertions being compiled out in release.
We can introduce a new ASSERT macro that is specifically for debug
checks, but I'm doing this wholesale conversion first since we've
accumulated thousands of these already, and it's not immediately
obvious which ones are suitable for ASSERT.
There's no real system here, I just added it to various functions
that I don't believe we ever want to call after initialization
has finished.
With these changes, we're able to unmap 60 KiB of kernel text
after init. :^)
If we try to align a number above 0xfffff000 to the next multiple of
the page size (4 KiB), it would wrap around to 0. This is most likely
never what we want, so let's assert if that happens.
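A sketch of the check (the helper name and exact assert are
illustrative):
```cpp
#include <stddef.h>

inline size_t page_round_up(size_t x)
{
    size_t rounded = (x + 0xfff) & ~size_t(0xfff);
    // On 32-bit, any x above 0xfffff000 wraps to 0 here; that is almost
    // certainly a bug at the call site, so assert instead.
    VERIFY(x == 0 || rounded != 0);
    return rounded;
}
```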
Now that we no longer need to support the signal trampolines being
user-accessible inside the kernel memory range, we can get rid of the
"kernel" and "user-accessible" flags on Region and simply use the
address of the region to determine whether it's kernel or user.
This also tightens the page table mapping code, since it can now set
user-accessibility based solely on the virtual address of a page.
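With a fixed kernel base, the check reduces to a single comparison (a
sketch; the boundary constant here is illustrative):
```cpp
#include <stdint.h>

constexpr uintptr_t kernel_base = 0xc0000000; // illustrative boundary

inline bool is_user_address(uintptr_t vaddr)
{
    return vaddr < kernel_base;
}
```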
The kernel ignored the first 8 MiB of RAM while parsing the memory map
because the kmalloc heaps and the super physical pages lived there. Move
all that stuff inside the .bss segment so that those memory regions are
accounted for, otherwise we risk overwriting boot modules placed next
to the kernel.
If a heap expansion is triggered by allocating from e.g. the
RangeAllocator, which may be holding a spin lock, we cannot
immediately allocate another block of backup memory, which could
require the same locks to be acquired. So, defer allocating the
backup memory.
Fixes #4675
Because allocating/freeing regions may require locks that need to
wait on other processors for completion, this needs to be delayed
until it's safer. Otherwise it is possible to deadlock because we're
holding the global heap lock.
By being a bit too greedy and only allocating how much we need for
the failing allocation, we can end up in an infinite loop trying
to expand the heap further. That's because there are other allocations
(e.g. logging, vmobjects, regions, ...) that happen before we finally
retry the failed allocation request.
Also fix allocating in page size increments, which led to an assertion
when the heap had to grow by more than the 1 MiB backup.
It may be impossible to allocate more backup memory after expanding
the heap if memory is running low. In that case we wouldn't allocate
backup memory until trying to expand the heap again. But we also
wouldn't take advantage of using removed memory as backup, which means
that no backup memory would be available the next time the heap needs
to grow, causing that expansion to fail.
Add an ExpandableHeap and switch kmalloc to use it, which allows
for the kmalloc heap to grow as needed.
In order to make heap expansion work, we keep around a 1 MiB backup
memory region, because creating a region would require space in the
same heap. This means the heap will grow as soon as the reported
utilization is less than 1 MiB. It will also return memory if an entire
subheap is no longer needed, although that is rarely possible.
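A self-contained sketch of the backup-region trick, using malloc as a
stand-in for creating a kernel region (the real ExpandableHeap differs
in detail):
```cpp
#include <stddef.h>
#include <stdlib.h>

struct ExpandableHeapSketch {
    static constexpr size_t backup_size = 1 * 1024 * 1024; // 1 MiB
    void* m_backup { nullptr };

    void allocate_backup()
    {
        if (!m_backup)
            m_backup = malloc(backup_size); // stands in for region creation
    }

    // Called when an allocation cannot be satisfied: consuming the
    // pre-allocated backup needs no heap space, which breaks the cycle
    // where growing the heap would itself require the heap.
    void* expand()
    {
        void* subheap = m_backup;
        m_backup = nullptr;
        allocate_backup(); // replenish now that there is capacity again
        return subheap;
    }
};
```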
Rather than hardcoding where the kmalloc pool should be, place
it at the end of the kernel image instead. This avoids corrupting
global variables or other parts of the kernel as it grows.
Fixes #3257