By making sure the PhysicalPage instance is fully destroyed, the
allocators get a chance to reclaim the PhysicalPageEntry for free-list
purposes. We just pass them the physical address of the freed page,
which is enough to look up the PhysicalPageEntry later.
By moving the PhysicalPage instances out of the kernel heap into a
static array, one for each physical page, we can avoid the allocation
overhead and find them cheaply by indexing into the array.
This also wraps each PhysicalPage in a PhysicalPageEntry, which allows
us to re-use the slot of a freed page to store information about where
to find the next free page.
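In outline, the layout could look like this (a minimal sketch with hypothetical, simplified names; the real kernel types carry more state):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical sketch with simplified names: each slot in the static array
// either holds a live PhysicalPage or, once that page has been destroyed,
// free-list bookkeeping. A union lets both uses share the same storage.
struct PhysicalPage {
    uint32_t ref_count; // per-page state lives here while the page is in use
};

struct PhysicalPageEntry {
    union {
        PhysicalPage allocated; // valid while the page is in use
        struct {
            int32_t next_index; // index of the next free page, or -1
        } freelist;             // valid after the PhysicalPage is destroyed
    };
};

// One entry per physical page; given a freed page's physical address, its
// entry is found by index arithmetic instead of a kernel heap lookup.
constexpr size_t PAGE_SIZE = 4096;
constexpr size_t MAX_PHYSICAL_PAGES = 1024; // assumption for the sketch
static PhysicalPageEntry s_physical_page_entries[MAX_PHYSICAL_PAGES];

PhysicalPageEntry& physical_page_entry(uint64_t physical_address)
{
    return s_physical_page_entries[physical_address / PAGE_SIZE];
}
```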
We already use PAE for the NX bit, but this changes the PhysicalAddress
structure to be able to hold 64-bit physical addresses. This allows us
to use all the available physical memory.
If a non-const lvalue reference is passed to these constructors, the
converting constructor will be selected instead of the desired copy/move
constructor.
Since I needed to touch `KResultOr` anyway, I made the forwarding
converting constructor use `forward<U>` instead of `move`. Previously,
if an lvalue was passed to it, a move operation took place even though
no `move()` was called. Member initializers and if-else statements have
also been changed to match our current coding style.
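A reduced sketch of the pitfall and the shape of the fix, using std type traits in place of their AK equivalents:

```cpp
#include <type_traits>
#include <utility>

// Reduced sketch: without a constraint, the converting constructor is a
// better match than the copy constructor for non-const lvalues, so it gets
// selected. Constraining it to types other than the class itself, and using
// std::forward instead of an unconditional move, preserves copy/move
// semantics.
template<typename T>
class Wrapper {
public:
    Wrapper(Wrapper const&) = default;
    Wrapper(Wrapper&&) = default;

    // Only participates in overload resolution for genuinely different types.
    template<typename U,
             typename = std::enable_if_t<!std::is_same_v<std::decay_t<U>, Wrapper>>>
    Wrapper(U&& value)
        : m_value(std::forward<U>(value)) // forward, not move: lvalues are copied
    {
    }

private:
    T m_value;
};

int main()
{
    Wrapper<int> a(42);
    Wrapper<int> b(a); // selects the copy constructor, not the converting one
}
```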
These functions are only used from within `dbgln_if` calls, so in
certain build configurations they go unused. As with variables, we now
signal to the compiler that we understand that these are not always in
use.
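In standard C++ this is spelled with the `[[maybe_unused]]` attribute; a minimal sketch of the situation:

```cpp
#include <cstdio>

// Sketch: with the debug macro disabled, the only caller of this helper is
// compiled out, so the compiler would warn that the static function is
// unused. [[maybe_unused]] documents that this is intentional.
#define FOO_DEBUG 0

[[maybe_unused]] static char const* describe_state(int state)
{
    return state == 0 ? "idle" : "busy";
}

void tick(int state)
{
#if FOO_DEBUG
    printf("state: %s\n", describe_state(state));
#else
    (void)state;
#endif
}
```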
`.text` segments with non-aligned offsets had their lengths applied to
the first page's base address. This meant that in some cases up to
PAGE_SIZE - 1 bytes at the end of the segment weren't mapped.
Previously, this did not cause any problems, as GNU ld insists on
aligning everything; but that's not the case with the LLVM toolchain.
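A sketch of the corrected length calculation, with hypothetical helper names:

```cpp
#include <cstddef>
#include <cstdint>

constexpr size_t PAGE_SIZE = 4096;

static uintptr_t page_base_of(uintptr_t address)
{
    return address & ~(PAGE_SIZE - 1);
}

// Sketch of the fix (hypothetical names): the mapping must cover the
// segment measured from its actual start, not from the containing page's
// base; otherwise a segment with a non-aligned offset can lose up to
// PAGE_SIZE - 1 bytes at its end.
static size_t bytes_to_map(uintptr_t segment_start, size_t segment_size)
{
    uintptr_t base = page_base_of(segment_start);
    size_t offset_in_page = segment_start - base;
    // Round the *shifted* length up to a whole number of pages.
    return (offset_in_page + segment_size + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}
```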
The ProtectedDataMutationScope cannot blindly assume that exactly one
thread at a time wants to unprotect the Process. Most of the time the
big lock guaranteed this, but there are some cases, such as
finalization (among others), where this is not necessarily guaranteed.
This fixes random panics due to access violations when the
ProtectedDataMutationScope protects the Process instance while another
thread is still modifying it.
Fixes #8512
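One way to make the scope safe for concurrent use is to count active scopes and only re-protect when the last one exits; a simplified sketch with hypothetical names, not the kernel's exact implementation:

```cpp
#include <atomic>

// Simplified sketch with hypothetical names: nested/concurrent scopes share
// an atomic counter; the data is unprotected by the first scope in and
// re-protected only by the last scope out, so one thread can no longer
// re-protect memory another thread is still mutating.
class Process {
public:
    void unprotect_data()
    {
        if (m_unprotect_count.fetch_add(1, std::memory_order_acq_rel) == 0) {
            // first scope in: remap the protected region read-write
        }
    }
    void protect_data()
    {
        if (m_unprotect_count.fetch_sub(1, std::memory_order_acq_rel) == 1) {
            // last scope out: remap the protected region read-only
        }
    }

private:
    std::atomic<unsigned> m_unprotect_count { 0 };
};

class ProtectedDataMutationScope {
public:
    explicit ProtectedDataMutationScope(Process& process)
        : m_process(process)
    {
        m_process.unprotect_data();
    }
    ~ProtectedDataMutationScope() { m_process.protect_data(); }

private:
    Process& m_process;
};
```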
This class acts as a combined ref-count and spinlock (the lock is only
taken when adding the first or removing the last reference), allowing a
specific action to be run atomically when the first reference is added
or the last one is dropped.
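A hypothetical sketch of the technique (not the kernel's actual class): encode the lock in the counter's low bit and only take it on the edge transitions:

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical sketch: the counter's low bit doubles as a spinlock bit that
// is only taken for the 0 -> 1 and 1 -> 0 transitions, so the supplied
// action runs atomically with the edge transition while every other
// add/drop stays a single atomic operation.
// The value encodes (ref_count << 1) | lock_bit.
class EdgeLockedRefCount {
public:
    template<typename Action>
    void add_ref(Action first_ref_action)
    {
        for (;;) {
            uint32_t value = m_value.load(std::memory_order_relaxed);
            if (value & 1)
                continue; // an edge transition is in progress; spin
            if (value == 0) {
                // 0 -> 1: take the lock bit, run the action, then publish.
                if (m_value.compare_exchange_weak(value, 1, std::memory_order_acquire)) {
                    first_ref_action();
                    m_value.store(2, std::memory_order_release); // count = 1, unlocked
                    return;
                }
            } else if (m_value.compare_exchange_weak(value, value + 2, std::memory_order_acq_rel)) {
                return; // ordinary increment, no lock taken
            }
        }
    }

    template<typename Action>
    void unref(Action last_unref_action)
    {
        for (;;) {
            uint32_t value = m_value.load(std::memory_order_relaxed);
            if (value & 1)
                continue; // an edge transition is in progress; spin
            if (value == 2) {
                // 1 -> 0: take the lock bit, run the action, then publish.
                if (m_value.compare_exchange_weak(value, 3, std::memory_order_acquire)) {
                    last_unref_action();
                    m_value.store(0, std::memory_order_release); // count = 0, unlocked
                    return;
                }
            } else if (m_value.compare_exchange_weak(value, value - 2, std::memory_order_acq_rel)) {
                return; // ordinary decrement, no lock taken
            }
        }
    }

private:
    std::atomic<uint32_t> m_value { 0 };
};
```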
This adds a formatter function for OwnPtr<KString>. This is added mainly
because lots of dbgln() statements generate Strings (such as absolute
paths) which are only used for debugging. Instead of catching possible
OOM situations at all the dbgln() callsites, this makes it possible to
let the formatter code handle those situations by outputting "[out of
memory]" if the OwnPtr is null.
This adds KLexicalPath::try_join(). As this cannot be done without
allocation, it uses KString and can fail. This patch also uses it in
one place. All the other cases of String::formatted("{}/{}", ...)
currently
rely on the return value being a String, which means they cannot easily
be converted to use the new API.
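In spirit, the new API looks like this (a sketch with std::optional<std::string> standing in for the failable KString result):

```cpp
#include <optional>
#include <string>
#include <string_view>

// Sketch with std::optional<std::string> standing in for the failable
// KString-based result: joining requires an allocation, so the function
// must be able to report failure instead of aborting on OOM.
std::optional<std::string> try_join(std::string_view first, std::string_view second)
{
    std::string result;
    result.reserve(first.size() + 1 + second.size()); // failable in a kernel
    result.append(first);
    result.push_back('/');
    result.append(second);
    return result;
}
```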
This replaces all uses of LexicalPath in the Kernel with the functions
from KLexicalPath. This also allows the Kernel to stop including
AK::LexicalPath.
This adds KLexicalPath, a set of static functions which aim to mostly
emulate AK::LexicalPath. They are however constrained to work with
absolute paths only, containing no '.' or '..' path segments and no
consecutive slashes. This way, it is possible to use StringView for the
return values and thus avoid allocating new String objects.
As explained above, the functions are currently very strict about the
allowed input paths. This seems not to be a problem currently. Since
the functions VERIFY these constraints, any violation will become
immediately obvious.
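For instance, a basename-like function can then return a view into its input (a sketch with a hypothetical helper; assert() stands in for VERIFY):

```cpp
#include <cassert>
#include <string_view>

// Sketch of the approach: because the input is VERIFY'd to be an absolute,
// canonical path, the result can alias the input as a string_view, so no
// allocation is ever needed.
static std::string_view basename(std::string_view path)
{
    assert(!path.empty() && path.front() == '/');      // absolute paths only
    assert(path.find("//") == std::string_view::npos); // no consecutive slashes
    if (path == "/")
        return path;
    auto last_slash = path.rfind('/');
    return path.substr(last_slash + 1);
}
```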
There is a race condition where we would remove a FutexQueue from our
futex map while, in the meantime, another thread started to queue
itself onto that very same futex. This caused that thread to wait
forever, as no wake operation could discover the removed FutexQueue.
This fixes the problem by:
* Tracking imminent waits, which prevents a FutexQueue from being
deleted while a thread is about to wait on it
* Atomically marking a FutexQueue as removed, which prevents a thread
from waiting on it just as it is being removed from the futex map (see
the sketch below)
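A simplified sketch of these two guards, with hypothetical names and std::mutex standing in for the kernel's locking:

```cpp
#include <mutex>

// Simplified sketch (hypothetical names) of the two guards described above.
class FutexQueue {
public:
    // A thread announces its wait while it still holds the futex map lock,
    // so try_remove() below will refuse to delete the queue in the meantime.
    void register_imminent_wait()
    {
        std::lock_guard lock(m_lock);
        ++m_imminent_waits;
    }

    // Returns false if the queue was already marked as removed; the caller
    // must then re-look-up the futex map and retry with a fresh queue.
    bool queue_waiter(/* Thread& thread */)
    {
        std::lock_guard lock(m_lock);
        --m_imminent_waits;
        if (m_was_removed)
            return false;
        // ... enqueue the thread and block ...
        return true;
    }

    // Called with the futex map lock held, before erasing the map entry.
    bool try_remove()
    {
        std::lock_guard lock(m_lock);
        if (m_imminent_waits > 0)
            return false; // someone is about to wait; keep the queue alive
        m_was_removed = true;
        return true;
    }

private:
    std::mutex m_lock;
    int m_imminent_waits { 0 };
    bool m_was_removed { false };
};
```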
We were creating a new memory mapping every time WindowServer performed
a buffer flip. This was very visible in whole-system profiles, as the
mapping and unmapping of MMIO registers caused quite a bit of kmalloc()
and kfree() churn.
Avoid this problem by simply keeping the MMIO registers mapped.
This was an old SerenityOS-specific syscall for donating the remainder
of the calling thread's time-slice to another thread within the same
process.
Now that Threading::Lock uses a pthread_mutex_t internally, we no
longer need this syscall, which allows us to get rid of a surprising
amount of unnecessary scheduler logic. :^)
This provides the crucial information needed to do an addr2line lookup
on a backtrace captured with Thread::backtrace.
Also change the offset to hexadecimal, as this is what addr2line
requires.
This change enforces that paths passed to
VFS::validate_path_against_process_veil are absolute and do not contain
any '..' or '.' parts. We should VERIFY here instead of returning EINVAL
since the code that calls this should resolve non-canonical paths before
calling this function.
Previously, Custody::absolute_path() was called for every call to
validate_path_against_process_veil(). For processes that don't have a
veil, the path is not used by the function. This means that it is
unnecessarily generated. This introduces an overload of
validate_path_against_process_veil(), which takes a Custody const& and
only generates the absolute path if there is actually a veil and it is
thus needed.
This patch results in a speed-up of Assistant's file system cache
building by around 16 percent.
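A sketch of the overload's shape, with heavily simplified, hypothetical types:

```cpp
#include <string>

// Heavily simplified, hypothetical sketch: the absolute path is only
// materialized when the process actually has a veil, since building it
// allocates and the unveiled fast path never reads it.
struct Custody {
    std::string m_path;
    std::string absolute_path() const { return m_path; } // allocates
};

enum class VeilState { None, Active };

int validate_path_against_process_veil(Custody const& custody, VeilState veil_state)
{
    if (veil_state == VeilState::None)
        return 0; // no veil: skip generating the path entirely
    auto path = custody.absolute_path(); // only generated when needed
    // ... validate `path` (absolute and canonical, as the callers must
    // ensure) against the process's unveiled paths ...
    (void)path;
    return 0;
}
```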
Depending on the driver, the second buffer may not be located right
after the first; for example, it may be page aligned. This removes that
assumption and queries the driver for the appropriate offset.
Some devices may require DMA transfers to flush the updated buffer
areas prior to flipping. For those devices we track the areas that
require flushing prior to the next flip. For devices that do not
support flipping, but require flushing, we'll simply flush after
updating the front buffer.
This also adds a small optimization that skips these steps entirely for
a screen that doesn't have any updates that need to be rendered.
Specifically, explicitly specify the checked type, use the resulting
value instead of doing the same calculation twice, and break
calculations down into discrete operations to ensure no intermediate
overflows are missed.
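A sketch of the pattern, using compiler builtins in place of AK::Checked:

```cpp
#include <cstddef>
#include <optional>

// Sketch of the pattern (GCC/Clang builtins standing in for AK::Checked):
// pick the checked type explicitly, do each step as a discrete checked
// operation, and reuse the checked result instead of recomputing it.
static std::optional<size_t> checked_region_size(size_t count, size_t element_size, size_t header)
{
    size_t payload = 0;
    if (__builtin_mul_overflow(count, element_size, &payload))
        return std::nullopt; // intermediate multiply would overflow
    size_t total = 0;
    if (__builtin_add_overflow(payload, header, &total))
        return std::nullopt; // final add would overflow
    return total; // callers reuse this value; no second, unchecked calculation
}
```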
Because the remainder variable will always be 0 unless a fault
happened, we should not use it to decide whether there is anything left
to memset when finishing the fast path. This caused the last few bytes
to be left unzeroed if the size was not an exact multiple of
sizeof(size_t).
Fixes #8352
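A sketch of the corrected shape, with a hypothetical simplified signature and without the fault handling:

```cpp
#include <cstddef>
#include <cstring>

// Sketch (hypothetical simplified signature): the number of trailing bytes
// is derived from `count` itself. The fault "remainder" out-parameter is 0
// on the success path, so using it to size the tail would skip the final
// count % sizeof(size_t) bytes.
static void fast_memzero(void* dest, size_t count)
{
    auto* bytes = static_cast<unsigned char*>(dest);
    size_t const word_count = count / sizeof(size_t);
    size_t const tail = count % sizeof(size_t); // from count, not a fault flag

    size_t const zero = 0;
    for (size_t i = 0; i < word_count; ++i)
        memcpy(bytes + i * sizeof(size_t), &zero, sizeof(size_t)); // word-sized fast path
    for (size_t i = 0; i < tail; ++i)
        bytes[word_count * sizeof(size_t) + i] = 0; // zero the remaining bytes
}
```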
When retrieving and setting x86 MSRs, two 32-bit registers are
required, as the value is split across EDX:EAX. The existing setter and
getter for the MSR class made this implementation detail visible to the
caller. This changes the setter and getter to use u64 instead.
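A sketch of the u64-based interface, wrapping the EDX:EAX split internally (GCC/Clang inline assembly, privileged code only):

```cpp
#include <cstdint>

// Sketch: rdmsr/wrmsr operate on EDX:EAX with the MSR index in ECX; the
// u64-based wrappers keep that split out of the callers' sight.
static inline uint64_t read_msr(uint32_t msr)
{
    uint32_t low, high;
    asm volatile("rdmsr" : "=a"(low), "=d"(high) : "c"(msr));
    return (static_cast<uint64_t>(high) << 32) | low;
}

static inline void write_msr(uint32_t msr, uint64_t value)
{
    uint32_t low = value & 0xffffffff;
    uint32_t high = value >> 32;
    asm volatile("wrmsr" : : "c"(msr), "a"(low), "d"(high));
}
```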