The kernel image grew so much that it wasn't possible to jump to the C++
symbol anymore, since this generated a 'relocation truncated' error when
linking.
Let's put the power_state global node into the /sys/kernel directory,
because that directory holds all global nodes and variables related to
the Kernel. It's also a mutable node, which is a better fit for that
directory, given that all other files in the /sys/firmware directory
are just firmware blobs and are not mutable at all.
Now that all global nodes are located in the /sys/kernel directory, we
can safely drop the global nodes in /proc, which includes both the
/proc/net and /proc/sys directories.
This in fact leaves ProcFS with only subdirectories for processes and
the "self" symbolic link reflecting the currently running process.
The ProcFS is an utter mess currently, so let's start moving out
things that are not related to process info. To ensure this is done in
a sane manner, we start by duplicating all /proc/ global nodes into the
/sys/kernel/ directory, then we will move Userland to use the new
directory so the old nodes can be removed from the /proc directory.
If a program needs to execute a dynamically linked program, then it
should unveil /usr/lib/Loader.so by itself and not rely on the Kernel
to allow using this binary in disregard of the unveil promises made by
the running parent program.
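For illustration, a minimal userland sketch of what such a program now
needs to do, assuming the LibC unveil(path, permissions) wrapper:
#include <stdio.h>
#include <unistd.h>

int main()
{
    // The Kernel no longer implicitly permits the dynamic loader, so
    // unveil it explicitly before locking the veil:
    if (unveil("/usr/lib/Loader.so", "x") < 0) {
        perror("unveil");
        return 1;
    }
    // Hypothetical dynamically linked binary this program execs later:
    if (unveil("/bin/SomeProgram", "x") < 0) {
        perror("unveil");
        return 1;
    }
    if (unveil(nullptr, nullptr) < 0) { // lock the veil
        perror("unveil");
        return 1;
    }
    // ... later: exec /bin/SomeProgram, which needs Loader.so ...
    return 0;
}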
Previously we didn't send the SIGPIPE signal to processes when
sendto()/sendmsg()/etc. returned EPIPE; now we do.
This also adds support for MSG_NOSIGNAL to suppress the signal.
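A small usage sketch of the new opt-out, using the standard sockets
API:
#include <errno.h>
#include <sys/socket.h>

ssize_t send_without_sigpipe(int sockfd, void const* buffer, size_t length)
{
    // Without MSG_NOSIGNAL, writing to a socket whose peer has gone
    // away now raises SIGPIPE; with the flag, the call just fails with
    // EPIPE and no signal is delivered.
    ssize_t nsent = send(sockfd, buffer, length, MSG_NOSIGNAL);
    if (nsent < 0 && errno == EPIPE) {
        // Peer closed the connection; handle it without a signal handler.
    }
    return nsent;
}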
This commit reached that goal of "safely discarding" a filesystem by
doing the following:
1. Stop using the s_file_system_map HashMap as it was an unsafe way to
access FileSystem pointers. Instead, make sure to register all
FileSystems at the VFS layer, with an IntrusiveList, to avoid problems
related to OOM conditions.
2. Make sure to cleanly remove the DiskCache object from a BlockBased
filesystem, so the destructor of such an object will not need to do
that at destruction time.
3. For ext2 filesystems, don't cache the root inode in the
m_inode_cache HashMap. The reason for this is that when unmounting an
ext2 filesystem, we look at the cache to see if there's a reference to
a cached inode, and if that's the case, we fail with EBUSY. If we kept
m_root_inode referenced in the m_inode_cache map as well, we would have
2 references to that object, which would lead to failing with EBUSY.
Also, it's much simpler to always ask for a root inode and get it
immediately from m_root_inode, instead of looking up that inode in the
cache.
The idea is to enable mounting FileSystem objects across multiple
mounts, in contrast to what happened until now - each mount had its own
unique FileSystem object attached to it.
Considering a situation of mounting a block device at 2 different mount
points in the system, there were a couple of critical flaws in how the
previous "design" worked:
1. BlockBasedFileSystem(s) that pointed to the same actual device each
had a separate DiskCache object attached to them. Because the instances
were not synchronized by any means, corruption of the filesystem was
easily achievable with a simple cache flush of either of the instances.
2. For superblock-oriented filesystems (such as the ext2 filesystem),
lack of synchronization between both instances can lead to severe
corruption in the superblock, which could render the entire filesystem
unusable.
3. Flags of a specific filesystem implementation (for example, with xfs
on Linux, one can mount it with the discard option) must be honored
across multiple mounts, to ensure expected behavior against a
particular filesystem.
This patch puts the foundations in place to start fixing the issues
mentioned above. However, there are still major issues to solve, so
this is only a start.
We now have a separately allocated structure for the bookkeeping
information in the QueueHead and TransferDescriptor UHCI structures.
This way, we can support 64-bit pointers in UHCI, fixing a problem where
32-bit pointers would truncate the upper 32-bits of the (virtual)
address of the descriptor, causing a crash.
Co-authored-by: b14ckcat <b14ckcat@protonmail.com>
This flag doesn't conform to any POSIX standard, nor is it found in any
other OS out there. The idea behind this mount flag is to ensure that
only non-regular files will be placed in a filesystem, which includes
device nodes, symbolic links, directories, FIFOs and sockets.
Currently, the only valid use case for this mount flag is for TmpFS
instances, where we want to mount a TmpFS but disallow any kind of
regular file and only allow other types of files on the filesystem.
Although this code worked quite well, it duplicates the TmpFS code,
which is better tested and works well for a variety of cases. The only
valid reason to keep this filesystem was that it enforced that no
regular files could be created in it at all. Later on, we will
re-introduce this feature in a sane manner. Therefore, this can be
safely removed now that SystemServer no longer uses this filesystem
type.
Using absolute paths is considered an abstraction layer violation
between the kernel and userspace, so let's not hardcode the path to
child PID directories; instead, we can use relative path links to them.
Having this function return `nullptr` explicitly triggers the
compiler's built-in checker, as it knows the destination is null.
Making this a static (scoped) variable for now circumvents this
problem.
Decompose the current monolithic USBD Pipe interface into several
subclasses, one for each pair of endpoint type & direction. This is to
make it more clear what data and functionality belongs to which Pipe
type, and prevent nonsensical things like trying to execute a control
transfer on a non-control pipe. This is important, because the Pipe
class is the interface by which USB device drivers will interact with
the HCD, so the clearer and more explicit this interface is the better.
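A rough sketch of the resulting shape (the class and method names here
are assumptions about the design, not the literal declarations):
#include <AK/Error.h>
#include <AK/Types.h>

class Pipe {
    // State shared by every endpoint type: device address, endpoint
    // number, max packet size, a reference to the controller, etc.
};
class ControlPipe : public Pipe {
public:
    // Only control pipes can issue control transfers.
    ErrorOr<size_t> submit_control_transfer(u8 request_type, u8 request, u16 value, u16 index, u16 length, void* data);
};
class BulkInPipe : public Pipe {
public:
    ErrorOr<size_t> submit_bulk_in_transfer(size_t length, void* data);
};
class InterruptInPipe : public Pipe {
public:
    ErrorOr<void> submit_interrupt_in_transfer(size_t length, u16 poll_interval, void* data);
};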
Allocate DMA buffer pages for use within the USBD Pipe class, and allow
for the user to specify the size of this buffer, rounding up to the
next page boundary.
This sets up the RPi::Timer to trigger an interrupt every 4ms using one
of the comparators. The actual time is calculated by looking at the main
counter of the RPi::Timer using the Timer::update_time function.
A stub for Scheduler::timer_tick is also added, since the TimeManagement
code now calls the function.
Instead of just having a giant KBuffer that is not easily resizable, we
use multiple AnonymousVMObjects, stored in one Vector, to hold the
inode data.
The idea is to avoid having to do a giant memcpy or memset each time we
need to allocate or de-allocate memory for TmpFS inodes; instead, we
can allocate only the desired block range when trying to write to it.
Therefore, it is also possible to have data holes in the inode content
when a write skips an entire data block or more, making memory usage
much more efficient.
To ensure we don't run out of virtual memory range, we don't allocate a
Region in advance for each TmpFSInode, but instead try to allocate a
Region per IO operation, and then use that Region to map the VMObjects
in the IO loop.
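A data-structure sketch of the idea (the member name and exact types
are assumptions):
// Inside TmpFSInode: one slot per fixed-size content block. A null slot
// represents a hole that was never written to and therefore consumes no
// physical memory until it is needed.
Vector<RefPtr<Memory::AnonymousVMObject>> m_data_blocks;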
This makes it easier to differentiate between cases where certain
functionality is not implemented vs. cases where a code location
should really be unreachable.
It costs us nothing, and some utilities (such as the well-known file
utility) rely on the exposed file size (after doing lstat on it) to
show anything useful besides saying the file is "empty".
In a few places we check `!Processor::in_critical()` to validate
that the current processor doesn't hold any kernel spinlocks.
Instead, let's give this check a first-class name for readability.
I'll also be adding more of these, so I would rather add more
usages of a nice API than of this implicit/assumed logic.
This change ensures that the scheduler doesn't depend on
platform-specific or arch-specific code when it initializes itself;
rather, we ensure at compile time that the appropriate code is
generated to find the correct arch-specific current time methods.
For some odd reason we used to return a PhysicalPtr for the
page_table_base result, but when setting it we accepted only a 32-bit
value, so we truncated valid 64-bit addresses to 32 bits. With this
commit applied, PageDirectories can now be located beyond the 4 GiB
barrier.
This was found by sin-ack, so he should be credited for this fix with
a Co-authored-by tag.
Co-authored-by: sin-ack <sin-ack@users.noreply.github.com>
There is no particular reason why this section should be marked as
`NOBITS` (as it might very well include initialized values), and it
resolves 90% of the mismatches between the input and output sections,
which LLD now warns about when linking.
Don't use them in LibC headers so that those don't have to pull in
AK/Platform.h.
AK_COMPILER_GCC is set _only_ for gcc, not for clang too. (__GNUC__ is
defined in clang builds as well.) Using AK_COMPILER_GCC simplifies
things some.
AK_COMPILER_CLANG isn't as much of a win, other than that it's
consistent with AK_COMPILER_GCC.
Nobody uses this because the x86 prekernel environment corrupts the
ramdisk image prior to running the actual kernel. In the future we can
ensure that the prekernel doesn't corrupt the ramdisk if we want to
bring support back. In addition to that, we could just use a RAM-based
filesystem to load whatever is needed, like Linux does, without the
need for an additional filesystem driver.
For the mentioned corruption problem, look at issue #9893.
The BootFramebufferConsole class maps the framebuffer using the
MemoryManager, so to be able to draw the logo, we need to get this
mapped framebuffer. This commit adds an unsafe API for that.
The MemoryManager now works, so we can use the same code as on x86 to
map the framebuffer. Since it uses the MemoryManager, the initialization
of the BootFramebufferConsole has to happen after the MemoryManager is
working.
For the initial page tables we only need to identity map the kernel
image, the rest of the memory will be managed by the MemoryManager. The
linker script is updated to get the kernel image start and end
addresses.
The page table and page directory formats are architecture specific, so
move the headers into the Arch directory. Also move the aarch64 page
table constants from aarch64/MMU.cpp to aarch64/PageDirectory.h.
When an exception happens it is sometimes hard to figure out where
exactly the exception happened, so use the frame pointer of the trap
frame to print a backtrace.
We no longer need to lock m_inode_lock in the SharedInodeVMObject
code, as the write_bytes and read_bytes methods of the Inode class now
do this for us.
According to Dr. POSIX, we should allow calling mmap on inodes even
for ranges that currently don't map to any actual data. Trying to read
or write those ranges should result in SIGBUS being sent to the thread
that performed the violating memory access.
To implement this restriction, we simply check if read_bytes on an
Inode returns 0, which means we have nothing valid to map to the
program, hence it should receive SIGBUS in that case.
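A minimal sketch of that check in the inode fault path (the shape and
names are assumptions; the surrounding code is elided):
// If the inode has no data at the faulting offset, refuse to map a page
// and let the faulting thread receive SIGBUS instead.
auto nread = TRY(inode->read_bytes(page_offset_in_inode, PAGE_SIZE, page_buffer, nullptr));
if (nread == 0) {
    current_thread->send_signal(SIGBUS, &process);
    return PageFaultResponse::BusError;
}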
This value will be used later on by WindowServer to reject resolutions
that would request a mapping exceeding the hardware framebuffer's
maximum length.
This class is intended to replace all IOAddress usages in the Kernel
codebase altogether. The idea is to ensure IO can be done in an
arch-specific manner that is determined mostly at compile time, while
still being able to use most of the Kernel code in non-x86 builds.
Specific devices that rely on x86-specific IO instructions are already
placed in the Arch/x86 directory and are omitted for non-x86 builds.
The reason this works so well is the fact that x86 IO space acts in a
similar fashion to the traditional memory space being available in most
CPU architectures - the x86 IO space is essentially just an array of
bytes like the physical memory address space, but requires x86 IO
instructions to load and store data. Therefore, many devices allow host
software to interact with the hardware registers in both ways, with a
noticeable trend even in the modern x86 hardware to move away from the
old x86 IO space to exclusively using memory-mapped IO.
Therefore, the IOWindow class encapsulates both methods for x86 builds.
The idea is to allow PCI devices to be used either way in x86 builds,
so when trying to map an IOWindow on a PCI BAR, the Kernel will pick
the proper method as declared by the PCI BAR flags.
For old PCI hardware on non-x86 builds this might turn into a problem,
as we can't use port-mapped IO, so the Kernel will gracefully fail with
the ENOTSUP error code in that case, as there's really nothing else we
can do.
For general IO, the read{8,16,32} and write{8,16,32} methods are
available as a convenient API for other places in the Kernel. There are
simply no direct 64-bit IO API methods yet, as they're not needed right
now and would not be arch-agnostic anyway - the x86 IO space doesn't
support generating 64-bit cycles on the IO bus and instead requires two
32-bit accesses. If for whatever reason it becomes necessary to do IO
in such a manner, it could probably be added with some neat tricks. It
is recommended to use the Memory::TypedMapping struct if direct 64-bit
IO is actually needed.
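A short usage sketch from a PCI driver's point of view (the creation
helper name is an assumption; the read/write accessors are the ones
described above):
// Map BAR0 as either port IO or memory-mapped IO, whichever the BAR
// flags declare, then access registers without caring which one it is:
auto io_window = TRY(IOWindow::create_for_pci_device_bar(pci_device_identifier, 0));
u16 status = io_window->read16(0x4);
io_window->write32(0x8, some_register_value);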
The APICTimer, HPET and RTC (the RTC timer is in the context of the PC
RTC here) are timers that exist only in x86 platforms, therefore, we
move the handling code and the initialization code to the Arch/x86/Time
directory. Other related code patterns in the TimeManagement singleton
and in the Random.cpp file are guarded with #ifdef to ensure they are
only compiled for x86 builds.
The new VGAIOArbiter class is now responsible for issuing the
x86-specific instructions that control VGA hardware via the old ISA
ports. This allows us to ensure the GraphicsManagement code doesn't use
x86-specific code, thus allowing it to be compiled for non-x86 kernel
builds.
The BootFramebufferConsole highly depends on using the m_lock spinlock,
therefore setting and changing the cursor state should be done under
that spinlock too to avoid crashing.
Only the Console code in the Graphics directory should be able to call
on these methods. The set_cursor method stays public as VirtualConsole
uses that method to change the cursor position.
This device is supposed to be used with the microvm and ISA-PC machine
types, and we assume that if we are able to probe for the QEMU BGA
version of 0xB0C5, then we have an existing ISA Bochs VGA adapter to
utilize.
To ensure we don't instantiate the driver for non-isa-vga devices, we
check that PCI was disabled because the hardware IO test probe failed,
so we can be sure that this special handling code is only used with the
QEMU microvm and ISA-PC machine types. Unfortunately, this means that
if for some reason the isa-vga device is attached to the i440FX or Q35
machine types, we simply cannot drive the device in such setups at all.
To determine the amount of VRAM available, we read the VBE register at
offset 0xA. That register holds the amount of VRAM divided by 64K, so
we need to multiply the value by 64K in our code to recover the actual
VRAM size.
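In code form (the register accessor name is a placeholder):
// The register reports the VRAM size in 64 KiB units, so scale it up:
u16 vram_in_64k_units = read_vbe_register(0xA);
size_t vram_size_in_bytes = static_cast<size_t>(vram_in_64k_units) * 64 * KiB;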
The isa-vga device requires us to hardcode the framebuffer physical
address to 0xE0000000, and that address is not expected to change in the
future as many other projects rely on the isa-vga framebuffer to be
present at that physical memory address.
We should aim to reliably determine whether PCI hardware exists or not,
and we should consider the ACPI MCFG table in that test. Although it is
unusual to see a hardware setup where the PCI host bridge does not
respond to x86 IO instructions, it is expected to happen at least on
the QEMU microvm machine type, as its host bridge only responds to
memory-mapped IO requests. Therefore, we first test if ACPI is enabled,
and we try to use it to fetch the MCFG table. Later on we could also
add FDT parsing as part of the PCI IO test, which would be useful for
the QEMU microvm machine type.
We use a ScopeGuard to ensure we always set a console of some sort if we
exit early from the initialization sequence in the GraphicsManagement
code. We do so to ensure we can boot into text mode console in an ISA-PC
machine type, because earlier we failed with an assertion due to not
setting any console for VirtualConsole to use.
ISA IDE controllers don't support Bus-master DMA as this feature is only
available for PCI IDE controllers. Therefore, don't try to use DMA mode
for such hardware.
The code in init.cpp is specific to the x86 initialization sequence, so
move it to the Arch/x86 directory in the same fashion like the aarch64
pattern.
The PIC and APIC code are specific to x86 platforms, so move them out of
the general Interrupts directory to Arch/x86/common/Interrupts directory
instead.
That code heavily relies on x86-specific instructions, and while other
CPU architectures and platforms can have PCI IDE controllers, currently
we don't support those, so this code is a special case which needs to be
in the Arch/x86 directory.
In the future it could be put back to the original place when we make it
more generic and suitable for other platforms.
The i8042 controller with its attached devices, the PS2 keyboard and
mouse, rely on x86-specific IO instructions to work. Therefore, move
them to the Arch/x86 directory to make it easier to omit the handling
code of these devices.
The ISA IDE controller code makes sense to compile in an x86 build, as
it relies on access to the x86 IO space. For other architectures, we
can just omit the code, as there's no way it can be used there anyway.
To ensure we can omit the code easily, we move it to the Arch/x86
directory.
The AHCI code doesn't rely on x86 IO at all, as it only uses memory
mapped IO, so we can simply remove the header.
We also don't use x86 IO in the Intel graphics driver, so we can remove
the include of the x86 IO header there too.
Everything else was a bunch of stale includes of the x86 IO header that
are not actually necessary, so let's remove them to make it easier to
compile non-x86 Kernel builds.
The VMWare backdoor handling code involves many x86-specific
instructions and therefore should be in the Arch/x86 directory. This
ensures we can easily omit the code in compile-time for non-x86 builds.
It seems more correct to let each platform define its own PCI bus
initialization sequence, so let's remove the #if flags and just put the
entire Initializer.cpp file in the appropriate code directory.
Only use the Bochs debug output if we compile an x86 build, since the
Bochs debug output relies on x86-specific instructions.
We also remove the CONSOLE_OUT_TO_BOCHS_DEBUG_PORT flag, as we always
compile the Bochs debug output for x86 builds; we always want to
include this capability, as it is very handy and doesn't hurt bare
metal hardware or cause any other problem besides taking a small amount
of CPU cycles.
The simple PCI::HostBridge class implements access to the PCI
configuration space by using x86 IO instructions. Therefore, it should
be put in the Arch/x86/PCI directory so it can be easily omitted for
non-x86 builds.
kprintf should not really care about the hardware-specific details of
each UART or serial port out there, so instead of using x86-specific
instructions, let's ensure that we only compile the relevant debug
output code for the targeted platform.
The RTC and CMOS are currently only supported on x86 platforms and use
specific x86 instructions to carry out certain x86 platform operations
and produce results, therefore we move them to the Arch/x86 specific
directory.
Many code patterns and hardware procedures rely on reliable delays with
microsecond granularity. These are valid use cases, but they should not
rely on x86-specific code, so we allow the proper platform-specific
code for invoking such delays to be determined at compile time.
We only use the RTC code in the Kernel, so it doesn't make sense to
keep the RTC namespace outside of it. In addition, we will later need
to use the RTC in an x86-specific manner, and this will help us use the
code in that fashion.
We move the QEMU and VirtualBox shutdown sequences to a separate file,
and likewise move the i8042 reboot code sequence to another file.
This allows us to abstract these specific methods out of the power
state node code of the SysFS filesystem, so other architectures can put
their methods there too in the future.
Using the IO address space is only relevant for x86 machines, so let's
not compile instructions to access the PCI configuration space when we
don't target x86 platforms.
This remained undetected for a long time as HeaderCheck is disabled by
default. This commit makes the following file compile again:
// file: compile_me.cpp
#include <Kernel/API/POSIX/ucontext.h>
// That's it, this was enough to cause a compilation error.
According to Dr. POSIX, we should allow calling mmap on inodes even
for ranges that currently don't map to any actual data. Trying to read
or write those ranges should result in SIGBUS being sent to the thread
that performed the violating memory access.
We make these methods non-virtual because we want to ensure we properly
enforce locking of the m_inode_lock mutex. Also, for write operations,
we want to call prepare_to_write_data before the actual write. The
previous design required the callers to do that at various places,
which led to hard-to-find bugs. By moving everything to a place where
we call prepare_to_write_data only once, we eliminate the possibility
of forgetting to call it on some code path in the kernel.
The block list required a bit of work; now the only method declared
const that bypasses its const-ness is the read_bytes method, which
calls a new method called compute_block_list_with_exclusive_locking
that takes care of proper locking before trying to update the block
list data of the ext2 inode.
Before this patch, we supported two methods to address a boot device:
1. Specifying root=/dev/hdXY, where X is a letter from a to z
corresponding to a boot device, and Y a number from 1 to 16 indicating
the partition number; Y can be omitted to instruct the kernel to use a
raw device rather than a partition on a raw device.
2. Specifying root=PARTUUID: with the GUID string of a GUID partition.
In the case of an existing storage device with GPT partitions, this is
most likely the safest option to ensure booting from persistent
storage.
While option 2 is more advanced and reliable, the first option has 2
caveats:
1. The string prefix "/dev/hd" doesn't mean anything besides being a
convention on Linux installations that was adopted in Serenity. In
Serenity we don't mount DevTmpFS before we mount the boot device on /,
so the kernel doesn't really access /dev anyway; this convention is
only a big misleading relic that can easily make the user assume we
access /dev early on boot.
2. Although this convention resembles the simple Linux convention, it
is quite limited in specifying a correct boot device across hardware
setup changes, so option 2 was recommended to ensure the system is
always bootable.
With these caveats in mind, this commit tries to fix the problem by
adding more addressing options, as well as removing the first
addressing option mentioned above.
To sum it up, there are 4 addressing options:
1. Hardware relative address - Each instance of StorageController is
assigned an index number relative to the type of hardware it handles,
which makes it possible to address storage devices with a prefix for
the command set ("ata" for ATA, "nvme" for NVMe, "ramdisk" for Plain
memory), followed by the number for the parent controller's relative
hardware index, another number for the LUN target_id, and a third
number for the LUN disk_id.
2. LUN address - Similar to the previous option, but instead we rely on
the parent controller's absolute index for the first number.
3. Block device major and minor numbers - by specifying the major and
minor numbers, the kernel can simply try to get the corresponding block
device and use it as the boot device.
4. GUID string - in the same fashion as before, the user uses the
"PARTUUID:" string prefix and adds the GUID of the GPT partition.
For the new address modes 1 and 2, the user can also choose to specify
a partition on the selected boot device. To do that, the user appends a
semicolon followed by the string "partX", where X is the partition
number. We start counting from 0, so the first partition is number 0
and not 1 in the kernel boot argument. A few hypothetical examples are
listed below.
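Hypothetical examples of the resulting boot arguments (the ':'
separators between the numbers are an assumption; the ";partX" suffix
and "PARTUUID:" prefix come straight from the description above):
- root=ata0:0:0 - hardware relative address: first ATA controller, LUN
target 0, disk 0
- root=ata0:0:0;part0 - the same device, but boot from its first
partition
- root=PARTUUID:<GUID of the GPT partition>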
This reworks the way the UHCI schedule is set up to handle interrupt
transfers, creating 11 queue heads each assigned a different
period/latency, so that interrupt transfers can be linked into the
schedule with their specified period more easily.
Modifies the way the UHCI schedule is set up & modified to allow for
multiple transfers of the same type, from one or more devices, to be
queued up and handled simultaneously.
Otherwise we would be holding the MM global data lock and the Process
address space locks in reversed order to the rest of the system, which
can lead to deadlocks.
This is a left-over from back when we didn't have any locking on the
global Process list, nor did we have SMP support, so this acted as some
kind of locking mechanism. We now have proper locks around the Process
list, so this is no longer relevant.
Change the name of set_serial_debug(bool on_or_off) to
set_serial_debug_enabled(bool desired_state). This makes the name more
expressive and clearer about what the function does, as it only sets
the enabled state.
Likewise, change the name of get_serial_debug() to
is_serial_debug_enabled() in order to make clear from the name that
this is simply the state of s_serial_debug_enabled.
Change the name of serial_debug to s_serial_debug_enabled since this is
a static bool describing this state.
Finally, change the signature of set_serial_debug_enabled to return a
bool, as this is more logical and understandable.
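Put together, the resulting shape looks roughly like this (the bodies
are a sketch based on the description above):
static bool s_serial_debug_enabled { false };

bool set_serial_debug_enabled(bool desired_state)
{
    s_serial_debug_enabled = desired_state;
    return s_serial_debug_enabled;
}

bool is_serial_debug_enabled()
{
    return s_serial_debug_enabled;
}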
Now that the Spinlock code no longer depends on architecture-specific
code, we can move it back to the Locking folder. This also means that
the Spinlock implementation is now used for the aarch64 kernel.
This commit updates the lock function from Spinlock and
RecursiveSpinlock to return the InterruptsState of the processor,
instead of the processor flags. The unlock functions would only look at
the interrupt flag of the processor flags, so we now use the
InterruptsState enum to clarify the intent, and such that we can use the
same Spinlock code for the aarch64 build.
To not break the build, all the call sites are updated as well.
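At a call site, the updated pattern looks roughly like this (a sketch;
most code keeps using the SpinlockLocker RAII helper instead of calling
these directly):
// lock() now returns the previous InterruptsState rather than raw
// processor flags; unlock() consumes it to restore that state.
InterruptsState previous_interrupts_state = my_lock.lock();
// ... critical section ...
my_lock.unlock(previous_interrupts_state);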
This commit adds the concept of an InterruptsState to the kernel. This
will be used to make the Spinlock code architecture independent. A new
Processor.cpp file is added such that we don't have to duplicate the
code.
Globally shared MemoryManager state is now kept in a GlobalData struct
and wrapped in SpinlockProtected.
A small set of members are left outside the GlobalData struct as they
are only set during boot initialization, and then remain constant.
This allows us to access those members without taking any locks.
I believe this to be safe, as the main thing that LockRefPtr provides
over RefPtr is safe copying from a shared LockRefPtr instance. I've
inspected the uses of RefPtr<PhysicalPage> and it seems they're all
guarded by external locking. Some of it is less obvious, but this is
an area where we're making continuous headway.
We're not accessing any of the MM members here. Also remove some
redundant code to update CR3, since it calls activate_page_directory()
which does exactly the same thing.
Now that AddressSpace itself is always SpinlockProtected, we don't
need to also wrap the RegionTree. Whoever has the AddressSpace locked
is free to poke around its tree.
This allows sys$mprotect() to honor the original readable & writable
flags of the open file description as they were at the point we did the
original sys$mmap().
IIUC, this is what Dr. POSIX wants us to do:
https://pubs.opengroup.org/onlinepubs/9699919799/functions/mprotect.html
Also, remove the bogus and racy "W^X" checking we did against mappings
based on their current inode metadata. If we want to do this, we can do
it properly. For now, it was not only racy, but also did blocking I/O
while holding a spinlock.
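A sketch of the added check in sys$mprotect() (the accessor names are
assumptions):
// The readability/writability of the open file description at mmap time
// decides whether the requested protection change is allowed:
if ((prot & PROT_WRITE) && !region->mmapped_from_writable())
    return EACCES;
if ((prot & PROT_READ) && !region->mmapped_from_readable())
    return EACCES;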
Before this change, we had File::mmap() which did all the work of
setting up a VMObject, and then creating a Region in the current
process's address space.
This patch simplifies the interface by removing the region part.
Files now only have to return a suitable VMObject from
vmobject_for_mmap(), and then sys$mmap() itself will take care of
actually mapping it into the address space.
This fixes an issue where we'd try to block on I/O (for inode metadata
lookup) while holding the address space spinlock. It also reduces time
spent holding the address space lock.
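Conceptually, the split inside sys$mmap() now looks something like this
(helper names other than vmobject_for_mmap() are assumptions):
// The file only hands back a VMObject; sys$mmap() does the mapping.
auto vmobject = TRY(description->vmobject_for_mmap(*this, range, offset, shared));
auto* region = TRY(address_space().allocate_region_with_vmobject(
    range, move(vmobject), offset, region_name, prot, shared));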
This forces anyone who wants to look into and/or manipulate an address
space to lock it. And this replaces the previous, more flimsy, manual
spinlock use.
Note that pointers *into* the address space are not safe to use after
you unlock the space. We've got many issues like this, and we'll have
to track those down as well.
We were holding the MM lock across all of the region unmapping code.
This was previously necessary since the quickmaps used during unmapping
required holding the MM lock.
Now that it's no longer necessary, we can leave the MM lock alone here.
This makes locking them much more straightforward, and we can remove
a bunch of confusing use of AddressSpace::m_lock. That lock will also
be converted to use of SpinlockProtected in a subsequent patch.
By default these 2 fields were zero, which made it rely on
implementation-defined behavior whether these fields would internally
be set to the correct value. The ARM processor in the Raspberry Pi (and
QEMU 6.x) would actually fix up these values, whereas QEMU 7.x does not
do that anymore, and a translation fault is generated instead.
For more context see the relevant QEMU issue:
- https://gitlab.com/qemu-project/qemu/-/issues/1157
Fixes #14856
Boot profiling was previously broken due to init_stage2() passing the
event mask to sys$profiling_enable() via kernel pointer, but a user
pointer is expected.
To fix this, I added Process::profiling_enable() as an alternative to
Process::sys$profiling_enable which takes a u64 rather than a
Userspace<u64 const*>. It's a bit of a hack, but it works.
You're still required to disable interrupts though, as the mappings are
per-CPU. This exposed the fact that our CR3 lookup map is insufficiently
protected (but we'll address that in a separate commit.)
While the "regular" quickmap (used to temporarily map a physical page
at a known address for quick access) has been per-CPU for a while,
we also have the PD (page directory) and PT (page table) quickmaps
used by the memory management code to edit page tables. These have been
global, which meant that SMP systems had to keep fighting over them.
This patch makes *all* quickmaps per-CPU. We reserve virtual addresses
for up to 64 CPUs worth of quickmaps for now.
Note that all quickmaps are still protected by the MM lock, and we'll
have to fix that too, before seeing any real throughput improvements.
Instead of having three separate APIs (one for each timestamp),
there's now only Inode::update_timestamps() and it takes 3x optional
timestamps. The non-empty timestamps are updated while holding the inode
mutex, and the outside world no longer has to look at intermediate
timestamp states.
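A usage sketch (the parameter order and the timestamp type are
assumptions):
// Assumed shape: update_timestamps(atime, ctime, mtime), each optional.
// Update ctime and mtime under the inode mutex in one go, leaving atime
// untouched:
TRY(inode.update_timestamps({}, now, now));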
Instead of temporarily changing the open file description's "blocking"
flag while doing a non-waiting recvfrom, we now plumb the currently
wanted blocking behavior all the way down to the underlying socket.
This ensures that all the permissions checks are made against the
provided credentials. Previously we were just calling through directly
to the inode setters, which did no security checks!
Instead of getting credentials from Process::current(), we now require
that they be provided as input to the various VFS functions.
This ensures that an atomic set of credentials is used throughout an
entire VFS operation.
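Conceptually, the call sites change like this (the exact signatures are
assumptions):
// Before: credentials were fetched internally via Process::current().
//   VirtualFileSystem::the().open(path, options, mode, base);
// After: the caller passes one atomic snapshot of its credentials:
auto credentials = Process::current().credentials();
auto description = TRY(VirtualFileSystem::the().open(credentials, path, options, mode, base));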
This ensures that both mutable and immutable access to the protected
data of a process is serialized.
Note that there may still be multiple TOCTOU issues around this, as we
have a bunch of convenience accessors that make it easy to introduce
them. We'll need to audit those as well.
By protecting all the RefPtr<Custody> objects that may be accessed from
multiple threads at the same time (with spinlocks), we remove the need
for using LockRefPtr<Custody> (which is basically a RefPtr with a
built-in spinlock.)
Instead, allocate while acquiring the lock on the m_fds struct, which
makes mutating the m_fds struct safer, because we don't use the big
process lock in this syscall.
This patch adds the NGROUPS_MAX constant and enforces it in
sys$setgroups() to ensure that no process has more than 32 supplementary
group IDs.
The number doesn't mean anything in particular, just had to pick a
number. Perhaps one day we'll have a reason to change it.
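The enforcement itself is just a bounds check in sys$setgroups(),
roughly (the exact error code is an assumption):
if (group_count > NGROUPS_MAX)
    return EINVAL;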
Now that these operate on the neatly atomic and immutable Credentials
object, they should no longer require the process big lock for
synchronization. :^)
This patch adds a new object to hold a Process's user credentials:
- UID, EUID, SUID
- GID, EGID, SGID, extra GIDs
Credentials are immutable and child processes initially inherit the
Credentials object from their parent.
Whenever a process changes one or more of its user/group IDs, a new
Credentials object is constructed.
Any code that wants to inspect and act on a set of credentials can now
do so without worrying about data races.
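A rough shape of the new object (member and accessor names, and the
ref-counting base, are assumptions):
class Credentials final : public AtomicRefCounted<Credentials> {
public:
    UserID uid() const { return m_uid; }
    UserID euid() const { return m_euid; }
    UserID suid() const { return m_suid; }
    GroupID gid() const { return m_gid; }
    GroupID egid() const { return m_egid; }
    GroupID sgid() const { return m_sgid; }
    Span<GroupID const> extra_gids() const { return m_extra_gids; }
    // No setters: changing any ID means constructing a new Credentials
    // object and swapping it in.
private:
    UserID m_uid, m_euid, m_suid;
    GroupID m_gid, m_egid, m_sgid;
    Vector<GroupID> m_extra_gids;
};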
Until now, our kernel has reimplemented a number of AK classes to
provide automatic internal locking:
- RefPtr
- NonnullRefPtr
- WeakPtr
- Weakable
This patch renames the Kernel classes so that they can coexist with
the original AK classes:
- RefPtr => LockRefPtr
- NonnullRefPtr => NonnullLockRefPtr
- WeakPtr => LockWeakPtr
- Weakable => LockWeakable
The goal here is to eventually get rid of the Lock* classes in favor of
using external locking.
Instead of having two separate implementations of AK::RefCounted, one
for userspace and one for kernelspace, there is now RefCounted and
AtomicRefCounted.
All users which relied on the default constructor use a None lock rank
for now. This will make it easier to in the future remove LockRank and
actually annotate the ranks by searching for None.
Unlike Clang, GCC does not support 8-byte atomics on i686 with the
-mno-80387 flag set, so until that is fixed, implement a minimal set of
atomics that are currently required.
Signal dispatch is already protected by the global scheduler lock, but
in some cases we also took Thread::m_lock for some reason. This led to
a number of different deadlocks that started showing up with 4+ CPUs
attached to the system.
As a first step towards solving this, simply don't take the thread lock
and let the scheduler lock cover it.
Eventually, we should work in the other direction and break the
scheduler lock into much finer-grained locks, but let's get out of the
deadlock swamp first.
This is not necessary, and is a leftover from before Thread started
using the ListedRefCounted pattern to be safely removed from lists on
the last call to unref().
As soon as we've saved CR2 (the faulting address), we can re-enable
interrupt processing. This should make the kernel more responsive under
heavy fault loads.
This fixes an issue where a sharing process would map the "lazy
committed page" early and then get stuck with that page even after
it had been replaced in the VMObject by a page fault.
Regressed in 27c1135d30, which made it
happen every time with the backing bitmaps used for WebContent.
Region::physical_page() now takes the VMObject lock while accessing the
physical pages array, and returns a RefPtr<PhysicalPage>. This ensures
that the array access is safe.
Region::physical_page_slot() now VERIFY()'s that the VMObject lock is
held by the caller. Since we're returning a reference to the physical
page slot in the VMObject's physical page array, this is the best we
can do here.
Note that SMP is still off by default, but this basically removes the
weird "SMP on but threads don't get scheduled" behavior we had by
default. If you pass "smp=on" to the kernel, you now get SMP. :^)
We really only need the VMObject lock when accessing the physical pages
array, so once we have a strong pointer to the physical page we want to
remap, we can give up the VMObject lock.
This fixes a deadlock I encountered while building DOOM on SMP.
When handling a page fault, we only need to remap the faulting region in
the current process. There's no need to traverse *all* regions that map
the same VMObject and remap them cross-process as well.
Those other regions will get remapped lazily by their own page fault
handlers eventually. Or maybe they won't and we avoided some work. :^)
- Instead of holding the VMObject lock across physical page allocation
and quick-map + copy, we now only hold it when updating the VMObject's
physical page slot.
Make sure we reject the unveil attempt with EPERM if the veil was
locked by another thread while we were parsing arguments (and not
holding the veil state spinlock).
Thanks Brian for spotting this! :^)
Amendment to #14907.
To ensure that we stay on the same CPU that acquired the spinlock until
we're completely unlocked, we now leave the critical section *before*
re-enabling interrupts.
We want to grab g_scheduler_lock *before* Thread::m_block_lock.
This appears to have fixed a deadlock that I encountered while building
DOOM with make -j2.
Path resolution may do blocking I/O so we must not do it while holding
a spinlock. There are tons of problems like this throughout the kernel
and we need to find and fix all of them.
This fixes an issue where we could get preempted after acquiring the
current Processor pointer, but before calling methods on it.
I strongly suspect this was the cause of "Processor::current() == this"
assertion failures.
This matches our general macro use, and specifically other verification
macros like VERIFY(), VERIFY_NOT_REACHED(), VERIFY_INTERRUPTS_ENABLED(),
and VERIFY_INTERRUPTS_DISABLED().
Instead of locking it twice, we now frontload all the work that doesn't
touch the fd table, and then only lock it towards the end of the
syscall.
The benefit here is simplicity. The downside is that we do a bit of
unnecessary work in the EMFILE error case, but we don't need to optimize
that case anyway.