This will cause page faults to be generated. Since the previous commits
introduced page fault handling, we can now actually handle these faults
correctly.
The code in PageDirectory.cpp now keeps track of the registered page
directories, and actually sets the TTBR0_EL1 to the page table base of
the currently executing thread. When context switching, we now also
change the TTBR0_EL1 to the page table base of the thread that we
context switch into.
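A minimal sketch of what that switch looks like (the function name and
exact barrier sequence here are illustrative, not the kernel's actual
code): the incoming address space's page table base is written to
TTBR0_EL1 and the TLB is invalidated so no stale translations from the
previous address space survive.

    #include <AK/Types.h>

    // Illustrative sketch: activate a thread's address space on aarch64
    // by loading its root page table (physical address) into TTBR0_EL1
    // and invalidating the EL1 TLB entries.
    static void activate_page_directory(FlatPtr page_table_base)
    {
        asm volatile(
            "msr ttbr0_el1, %[base] \n"
            "isb                    \n"
            "tlbi vmalle1           \n"
            "dsb ish                \n"
            "isb                    \n"
            :
            : [base] "r"(page_table_base)
            : "memory");
    }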
The handling of page tables is very architecture specific, so belongs
in the Arch directory. Some parts were already architecture-specific,
however this commit moves the rest of the PageDirectory class into the
Arch directory.
While we're here the aarch64/PageDirectory.{h,cpp} files are updated to
be aarch64 specific, by renaming some members and removing x86_64
specific code.
The class used to look at the x86_64-specific exception code to figure
out what kind of page fault happened; this refactor allows aarch64 to
use the same class.
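The rough idea, sketched below with illustrative names (not the
kernel's exact declarations), is that the shared fault description no
longer decodes a raw architecture-specific code itself; each
architecture's handler fills in the fields it decoded from its own
registers (the error code on x86_64, ESR_EL1 on aarch64).

    // Illustrative sketch only: a shared page fault description with the
    // architecture-specific decoding pushed out to the fault handlers.
    class PageFault {
    public:
        enum class Access { Read, Write };
        enum class Type { PageNotPresent, ProtectionViolation };

        void set_access(Access access) { m_access = access; }
        void set_type(Type type) { m_type = type; }
        void set_instruction_fetch(bool b) { m_instruction_fetch = b; }

        Access access() const { return m_access; }
        Type type() const { return m_type; }
        bool is_instruction_fetch() const { return m_instruction_fetch; }

    private:
        Access m_access { Access::Read };
        Type m_type { Type::PageNotPresent };
        bool m_instruction_fetch { false };
    };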
Various places in the kernel were manually checking the cs register on
x86_64. To share this with aarch64, a function is added to
RegisterState and the call sites are updated. While we're here, the
PreviousMode enum is renamed to ExecutionMode.
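A sketch of the shared helper, assuming names roughly along these lines
(the exact member names in RegisterState may differ): each architecture
decides from its own saved registers whether the trap came from kernel
or user mode.

    #include <AK/Platform.h>

    enum class ExecutionMode {
        Kernel,
        User,
    };

    // Illustrative sketch of a per-architecture helper on RegisterState;
    // assumes the usual saved-register members exist for each arch.
    #if ARCH(X86_64)
    ExecutionMode previous_mode(RegisterState const& regs)
    {
        // The low two bits of cs hold the privilege level; 0 means ring 0.
        return (regs.cs & 3) == 0 ? ExecutionMode::Kernel : ExecutionMode::User;
    }
    #elif ARCH(AARCH64)
    ExecutionMode previous_mode(RegisterState const& regs)
    {
        // SPSR_EL1.M[3:0] encodes the exception level we came from; 0 is EL0.
        return (regs.spsr_el1 & 0xf) == 0 ? ExecutionMode::User : ExecutionMode::Kernel;
    }
    #endif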
Until now the kernel was always executing with SP_EL0, as this made the
initial dropping to EL1 a bit easier. This commit changes this behaviour
to use the corresponding SP_ELx for each exception level.
To make sure that the execution of the C++ code can continue, the
current stack pointer is copied into the corresponding SP_ELx just
before dropping an exception level.
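Illustrative sketch of the idea (the real code lives in the early boot
assembly): while still at the higher exception level, the current stack
pointer is copied into SP_EL1, so that once EL1 runs with SPSel = 1
(i.e. on SP_EL1 instead of SP_EL0), C++ execution continues on the same
stack.

    // Illustrative: copy the active stack pointer into SP_EL1 just before
    // dropping from EL2 to EL1. Writing SP_EL1 via msr is only possible
    // from EL2 or higher.
    static void copy_current_stack_pointer_to_el1()
    {
        asm volatile(
            "mov x0, sp     \n"
            "msr sp_el1, x0 \n"
            :
            :
            : "x0");
    }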
For each exposed PCI device in sysfs, there's a new node called "rom".
Reading it exposes the raw data of the device's PCI option ROM blob, so
a user can examine the blob.
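A small usage sketch (the sysfs path below is hypothetical; the node
lives under the per-device directory): dump one device's option ROM
blob to stdout.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main()
    {
        // Hypothetical path to the "rom" node of one PCI device.
        int fd = open("/sys/bus/pci/0000:00:02.0/rom", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        char buffer[4096];
        ssize_t nread = 0;
        while ((nread = read(fd, buffer, sizeof(buffer))) > 0)
            fwrite(buffer, 1, static_cast<size_t>(nread), stdout);
        close(fd);
        return 0;
    }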
There are now 2 separate classes for almost the same object type:
- EnumerableDeviceIdentifier, which is used in the enumeration code for
all PCI host controller classes. This is allowed to be moved and
copied, as it doesn't support ref-counting.
- DeviceIdentifier, which inherits from EnumerableDeviceIdentifier. This
class uses ref-counting, and is not allowed to be copied. It has a
spinlock member in its structure to allow safely executing complicated
IO sequences on a PCI device and its space configuration.
There's a static method that allows a quick conversion from
EnumerableDeviceIdentifier to DeviceIdentifier while creating a
NonnullRefPtr out of it.
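A condensed sketch of that shape (member and method names here are
illustrative rather than the exact kernel declarations):

    #include <AK/Noncopyable.h>
    #include <AK/NonnullRefPtr.h>
    #include <AK/RefCounted.h>
    #include <Kernel/Locking/Spinlock.h>

    // EnumerableDeviceIdentifier: a movable/copyable value type used by
    // the host controller enumeration code; carries the PCI address,
    // vendor and device IDs, class codes, capabilities, and so on.
    class EnumerableDeviceIdentifier {
        // ...
    };

    // DeviceIdentifier: ref-counted, non-copyable, and carries a spinlock
    // to serialize multi-step configuration space sequences on the device.
    class DeviceIdentifier
        : public RefCounted<DeviceIdentifier>
        , public EnumerableDeviceIdentifier {
        AK_MAKE_NONCOPYABLE(DeviceIdentifier);

    public:
        static NonnullRefPtr<DeviceIdentifier> from_enumerable_identifier(EnumerableDeviceIdentifier const& other)
        {
            return adopt_ref(*new DeviceIdentifier(other));
        }

        Spinlock& operation_lock() { return m_operation_lock; }

    private:
        explicit DeviceIdentifier(EnumerableDeviceIdentifier const& other)
            : EnumerableDeviceIdentifier(other)
        {
        }

        Spinlock m_operation_lock;
    };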
The reason for doing this is for the sake of integrity and reliability
of the system in 2 places:
- Ensure that "complicated" tasks that rely on manipulating PCI device
  registers are done in a safe manner. For example, determining a PCI
  BAR space size requires multiple reads and writes to the same
  register, and if another CPU tries to do something else with our
  selected register, then the result will be a catastrophe. (A sketch
  of this sequence follows the list below.)
- Allow the PCI API to have a united form around a shared object which
actually holds much more data than the PCI::Address structure. This is
fundamental if we want to do certain types of optimizations, and be
able to support more features of the PCI bus in the foreseeable
future.
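To make the first point concrete, here is a rough sketch of the BAR
sizing sequence that must not be interleaved with other accesses to the
same register; the configuration space accessors and the lock accessor
are hypothetical stand-ins for the actual PCI API.

    // Hypothetical sketch: size a memory BAR while holding the per-device
    // operation spinlock, so no other CPU touches the register
    // mid-sequence.
    static u32 determine_bar_space_size(DeviceIdentifier& device, u32 bar_offset)
    {
        SpinlockLocker locker(device.operation_lock());

        u32 original_value = read32_config(device, bar_offset);

        // Write all ones, read back the size mask, then restore the BAR.
        write32_config(device, bar_offset, 0xffffffff);
        u32 size_mask = read32_config(device, bar_offset);
        write32_config(device, bar_offset, original_value);

        // For a memory BAR, the low 4 bits are flags, not part of the size.
        size_mask &= 0xfffffff0;
        return ~size_mask + 1;
    }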
This patch already has several implications:
- All PCI::Device(s) hold a reference to a DeviceIdentifier structure
being given originally from the PCI::Access singleton. This means that
all instances of DeviceIdentifier structures are located in one place,
and all references are pointing to that location. This ensures that
locking the operation spinlock will take effect in all the appropriate
places.
- We no longer support adding a PCI host controller and then
  immediately enumerating it with a lambda function. It was found that
  this method is extremely broken and too complicated to work reliably
  with the new paradigm being introduced in this patch. This means that
  for Volume Management Devices (Intel VMD devices), we simply first
  enumerate the PCI bus for such devices in the storage code, and if we
  find one, we attach it in the PCI::Access method, which will scan for
  devices behind that bridge and add new DeviceIdentifier(s) objects to
  its internal Vector. Afterwards, we just continue as usual with
  scanning for actual storage controllers, so we will find the
  corresponding NVMe controllers if there were any behind that VMD
  bridge.
There's a similar RDRAND register (encoded as 's3_3_c2_c4_0') that can
be added if needed. The RNG CPU feature on Aarch64 guarantees the
existence of both RDSEED and RDRAND registers simultaneously, in
contrast to x86-64, where the respective instructions are independent
of each other.
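As a rough illustration (not the kernel's actual helper), reading such
a register boils down to an mrs on the encoded name plus a check of the
condition flags, since the RNG extension reports a failure to produce
entropy through PSTATE:

    #include <AK/Types.h>

    // Illustrative sketch, assuming FEAT_RNG is present: read the
    // RDRAND-style register via its encoded name s3_3_c2_c4_0. The Z flag
    // is set when the hardware could not return a random number in time.
    static bool read_rndr(u64& value)
    {
        u64 success = 0;
        asm volatile(
            "mrs %[val], s3_3_c2_c4_0 \n"
            "cset %[ok], ne           \n"
            : [val] "=r"(value), [ok] "=r"(success)
            :
            : "cc");
        return success != 0;
    }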
This is the same address that the x86_64 kernel runs at, and allows us
to run the kernel at a high virtual memory address. Since we now run
completely in high virtual memory, we can also unmap the identity
mapping. Additionally some changes in MMU.cpp are required to
successfully boot.
Since we link the kernel at a high virtual memory address, the addresses
of global variables are also at virtual addresses. To be able to access
them without the MMU enabled, we have to subtract the
KERNEL_MAPPING_BASE.
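A tiny sketch of the resulting pattern (the helper name is made up):
any pre-MMU access to a global goes through its physical address,
obtained by stripping the kernel mapping offset from the link-time
address.

    #include <AK/Types.h>
    #include <Kernel/Sections.h>

    // Hypothetical helper used before the MMU is enabled: convert the
    // link-time (virtual) address of a global into the physical address
    // the CPU can actually reach, by subtracting KERNEL_MAPPING_BASE.
    template<typename T>
    static T* physical_address_of(T& global_variable)
    {
        return reinterpret_cast<T*>(reinterpret_cast<FlatPtr>(&global_variable) - KERNEL_MAPPING_BASE);
    }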
Compile source files that run early in the boot process, before the MMU
is enabled, without stack protector and sanitizers. Enabling them would
cause the compiler to insert accesses to global variables such as
__stack_chk_guard, which cause the CPU to crash, because these
variables are linked at high virtual addresses that the CPU cannot
access without the MMU enabled.
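For illustration, this is roughly the instrumentation that
-fstack-protector adds to a function (a hand-written equivalent, not
the real generated code); the load of __stack_chk_guard is an absolute
access to a high virtual address, which faults while the MMU is still
off.

    #include <AK/Types.h>

    extern "C" FlatPtr __stack_chk_guard;
    extern "C" void __stack_chk_fail();

    // Hand-written equivalent of what -fstack-protector inserts: read the
    // canary from a global on entry and verify it before returning. The
    // global lives at a high virtual address, unreachable pre-MMU.
    void some_early_boot_function()
    {
        FlatPtr canary = __stack_chk_guard;

        // ... actual function body ...

        if (canary != __stack_chk_guard)
            __stack_chk_fail();
    }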
This is a separate file that behaves similarly to the Prekernel for
x86_64: it makes sure the CPU is dropped to EL1, the MMU is enabled,
and the CPU is running in high virtual memory. This code then jumps to
the usual init function of the kernel.
This was previously hardcoded to be the physical memory range, since we
identity mapped the memory; however, we now run the kernel at a high
virtual memory address.
Also changes PageDirectory.h to store up to 512 pages, as the code now
needs access to more than 4 pages.
As the kernel is now linked at a high address in virtual memory, we
cannot use absolute addresses, as they refer to high virtual addresses.
At this point in the boot process we are still running with the MMU
off, so we have to make sure the accesses use physical memory
addresses.
This function will be used once the kernel runs in high virtual memory
to unmap the identity mapping as userspace will later on use this memory
range instead.
And use it in the code that will be part of the early boot process.
The PANIC macro and dbgln functions cannot be used, as they access
global variables, which do not work in the early boot process since the
MMU is not yet enabled.
In the upcoming commits, we'll change the kernel to run at a virtual
address in high memory. This commit prepares for that by making sure the
kernel and mmio are mapped into high virtual memory.
A lot of places were relying on AK/Traits.h to give them strnlen,
memcmp, memcpy and other related declarations.
In the quest to remove inclusion of LibC headers from Kernel files,
deal with all the fallout of this included-everywhere header now
including fewer things.
This header has always been fundamentally a Kernel API file. Move it
where it belongs. Include it directly in Kernel files, and make
Userland applications include it via sys/ioctl.h rather than directly.
Reduce inclusion of limits.h as much as possible at the same time.
This does mean that kmalloc.h is now including Kernel/API/POSIX/limits.h
instead of LibC/limits.h, but the scope could be limited a lot more.
Basically every file in the kernel includes kmalloc.h, and needs the
limits.h include for PAGE_SIZE.