This is not explicitly specified by POSIX, but is supported by other
*nixes, already supported by our sys$bind, and expected by various
programs. While we're here, also clean up the user memory copies a bit.
We were previously assuming that the how value was a bitfield, but that
is not the case, so we must explicitly check for SHUT_RDWR when
deciding on the read and write shutdowns.
The previous check for valid how values assumed this field was a bitmap
and that SHUT_RDWR was simply a bitwise or of SHUT_RD and SHUT_WR,
which is not the case.
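A minimal sketch of the corrected check (helper name and structure are
illustrative, not the actual syscall code); since SHUT_RD, SHUT_WR and
SHUT_RDWR are distinct values rather than flags, each must be tested
explicitly:

```cpp
#include <cerrno>
#include <sys/socket.h>

// Illustrative helper, not the actual syscall code. SHUT_RD, SHUT_WR and
// SHUT_RDWR are distinct values (typically 0, 1 and 2), so SHUT_RDWR is not
// simply SHUT_RD | SHUT_WR and cannot be tested with bitwise operations.
int validate_and_split_how(int how, bool& shut_read, bool& shut_write)
{
    if (how != SHUT_RD && how != SHUT_WR && how != SHUT_RDWR)
        return -EINVAL;
    shut_read = (how == SHUT_RD) || (how == SHUT_RDWR);
    shut_write = (how == SHUT_WR) || (how == SHUT_RDWR);
    return 0;
}
```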
`sigsuspend` was previously implemented using a poll on an empty set of
file descriptors. However, this broke quite a few assumptions in
`SelectBlocker`, as it expects at least one file descriptor to be ready
after waking up and relies on being notified by a file descriptor.
A bare-bones `sigsuspend` may also be implemented by relying on any of
the `sigwait` functions, but as `sigsuspend` imposes several (currently
unimplemented) restrictions on how it returns, it is a syscall of its
own.
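For reference, the semantics the syscall has to provide, illustrated with
the userspace API (this is the POSIX contract, not the kernel
implementation): the mask swap and the wait must be atomic with respect to
delivery, and the call only returns -1 with EINTR after a caught signal's
handler has run:

```cpp
#include <signal.h>
#include <unistd.h>

// Userspace illustration of the required behavior; in the real syscall the
// mask swap and the wait are a single atomic step with respect to delivery.
int sigsuspend_semantics(sigset_t const* temporary_mask)
{
    sigset_t previous_mask;
    sigprocmask(SIG_SETMASK, temporary_mask, &previous_mask);
    pause(); // returns only after a caught signal's handler has run
    sigprocmask(SIG_SETMASK, &previous_mask, nullptr);
    return -1; // sigsuspend always "fails" with errno set to EINTR
}
```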
When updating the signal mask, there is a small window where we might set
up the receiving process for handling the signal and therefore remove
that signal from the list of pending signals before SignalBlocker has a
chance to block. In turn, this might cause SignalBlocker to never notice
that the signal arrived, so it will never unblock once blocked.
Track the currently handled signal separately and include it when
determining if SignalBlocker should be unblocking.
I haven't found any POSIX specification on this, but the Linux kernel
appears to handle it like that.
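A hypothetical sketch of the unblock check, with made-up names and a simple
bitmask representation; the point is that the signal currently being handled
is considered alongside the pending set:

```cpp
// Hypothetical sketch with made-up names: the signal currently being handled
// is tracked next to the pending set, so the blocker still sees a signal
// that was pulled off the pending list just before it attached.
struct ThreadSignalState {
    unsigned pending_signals { 0 };          // bitmask, bit N-1 for signal N
    unsigned currently_handled_signal { 0 }; // 0 means "none"
};

static bool should_unblock(ThreadSignalState const& state, unsigned waited_signals)
{
    unsigned visible = state.pending_signals;
    if (state.currently_handled_signal != 0)
        visible |= 1u << (state.currently_handled_signal - 1);
    return (visible & waited_signals) != 0;
}
```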
This is required by QEMU, as it just bulk-unlocks all its file locking
bytes without checking first if they are held.
Access to RDTSC is occasionally restricted to give malware one less
option to accurately time attacks (side-channels, etc.).
However, QEMU requires access to the timestamp counter for the exact
same reason (which is accurately timing its CPU ticks), so let's just
enable it for now.
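As an illustration of the mechanism only (not the actual kernel change): on
x86, userspace access to RDTSC is controlled by the CR4.TSD bit, and
leaving it clear keeps the instruction usable outside ring 0:

```cpp
#include <cstdint>

// Illustrative only: CR4.TSD (bit 2) restricts RDTSC to ring 0 when set.
// Keeping it clear allows userspace (e.g. QEMU) to execute RDTSC directly.
constexpr uint64_t CR4_TSD = 1ull << 2;

uint64_t cr4_allowing_userspace_rdtsc(uint64_t cr4)
{
    return cr4 & ~CR4_TSD;
}
```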
The initialize_hba method now calls the reset method to reset the HBA
and initialize each AHCIPort. Also, after a full HBA reset we need to turn
the AHCI functionality of the HBA and global interrupts back on, since
according to the specification they are cleared to 0 in the GHC register.
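A sketch of the post-reset sequence, using the GHC bit layout from the AHCI
specification (HR = bit 0, IE = bit 1, AE = bit 31); the structure and
function names are illustrative, not the actual driver code:

```cpp
#include <cstdint>

// Illustrative register layout and names; the bit positions (HR = bit 0,
// IE = bit 1, AE = bit 31) follow the AHCI specification's GHC register.
struct HBAControl {
    volatile uint32_t cap;
    volatile uint32_t ghc;
};

constexpr uint32_t GHC_HR = 1u << 0;  // HBA reset
constexpr uint32_t GHC_IE = 1u << 1;  // global interrupt enable
constexpr uint32_t GHC_AE = 1u << 31; // AHCI enable

void reset_and_reenable(HBAControl& hba)
{
    hba.ghc |= GHC_HR;           // request a full HBA reset
    while (hba.ghc & GHC_HR) { } // hardware clears HR once the reset is done
    hba.ghc |= GHC_AE;           // the reset cleared AE; turn AHCI mode back on
    hba.ghc |= GHC_IE;           // and re-enable global interrupts
}
```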
Instead of doing this in a parent class like the AHCIController, let's
do that directly in the AHCIPort class as that class is the only user of
this sort of physical page. While it seems like we waste an entire 4KB
of physical RAM for each allocation, this could serve us later on if we
want to fetch other types of logs from the ATA device.
The way AHCIPortHandler held AHCIPorts and even provided them with
physical pages for the ATA identify buffer just felt wrong.
To fix this, AHCIPortHandler is not a ref-counted object anymore. This
solves the big part of the problem, because AHCIPorts can't hold a
reference to this object anymore, only the AHCIController can do that.
Then, most of the responsibilities are shifted to the AHCIController,
making the AHCIPortHandler a handler of port interrupts only.
The AHCI code is not very good at OOM conditions, so this is a first
step towards OOM correctness. We should not allocate things inside C++
constructors because we can't catch OOM failures there, so most allocation
code is moved out of constructors into separate functions.
Also, don't use a HashMap for holding RefPtrs to AHCIPort objects in
AHCIPortHandler, because this structure is not very OOM-friendly. Instead,
use a fixed Array of 32 RefPtrs, as at most we can have 32 AHCI ports
per AHCI controller.
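A generic sketch of the pattern, using std containers in place of the
kernel's AK types; the names and signatures are illustrative, not the
actual SerenityOS code:

```cpp
#include <array>
#include <memory>
#include <new>

class AHCIPort { }; // stand-in for the real port class

// Allocation happens in a fallible factory instead of the constructor, and
// ports live in a fixed array of 32 slots, matching the AHCI limit of 32
// ports per controller.
class AHCIController {
public:
    static std::unique_ptr<AHCIController> try_create()
    {
        auto* controller = new (std::nothrow) AHCIController;
        if (!controller)
            return nullptr; // OOM is reported to the caller instead of crashing
        return std::unique_ptr<AHCIController>(controller);
    }

private:
    AHCIController() = default; // nothing that can fail happens here
    std::array<std::shared_ptr<AHCIPort>, 32> m_ports {};
};
```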
To prevent a race condition in case we received the ARP response in the
window between creating and initializing the Thread Blocker and the
actual blocking, we were checking if the IP address was updated in the
ARP table just before starting to block.
Unfortunately, the condition was partially flipped, which meant that if
the table was updated with the IP address we would still end up
blocking, at which point we would never unblock again, which would
result in LookupServer locking up as well.
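A sketch of the corrected pre-block check, with illustrative types and
names (block_until_resolved is a hypothetical stand-in for the Thread
Blocker path):

```cpp
#include <array>
#include <cstdint>
#include <optional>
#include <unordered_map>

using MACAddress = std::array<uint8_t, 6>;

// Hypothetical stand-in for the path that sets up the Thread Blocker and
// waits for an ARP reply.
std::optional<MACAddress> block_until_resolved(uint32_t ipv4_address);

// Corrected pre-block check: if the table already has the entry, return it
// instead of blocking, because a reply that landed in the race window will
// never wake the blocker afterwards.
std::optional<MACAddress> resolve(std::unordered_map<uint32_t, MACAddress> const& arp_table, uint32_t ipv4_address)
{
    if (auto it = arp_table.find(ipv4_address); it != arp_table.end())
        return it->second;
    return block_until_resolved(ipv4_address);
}
```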
Currently, buffers for USB transfers are allocated once for every
transfer rather than once when the USB device is created. This commit
changes that by moving buffer allocation to the USB Pipe class, where
the buffers can be reused.
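A generic sketch of the idea using std types (not the actual USB stack
code, which uses the kernel's own memory management):

```cpp
#include <cstddef>
#include <cstdint>
#include <memory>
#include <new>

// The transfer buffer is allocated once when the pipe is created and then
// reused for every transfer on that pipe.
class Pipe {
public:
    static std::unique_ptr<Pipe> try_create(size_t buffer_size)
    {
        std::unique_ptr<uint8_t[]> buffer(new (std::nothrow) uint8_t[buffer_size]);
        if (!buffer)
            return nullptr;
        return std::unique_ptr<Pipe>(new Pipe(std::move(buffer), buffer_size));
    }

    uint8_t* transfer_buffer() { return m_buffer.get(); } // reused per transfer
    size_t buffer_size() const { return m_buffer_size; }

private:
    Pipe(std::unique_ptr<uint8_t[]> buffer, size_t size)
        : m_buffer(std::move(buffer))
        , m_buffer_size(size)
    {
    }

    std::unique_ptr<uint8_t[]> m_buffer;
    size_t m_buffer_size { 0 };
};
```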
In the same fashion as the Linux kernel, we support pre-initialized
framebuffers that were set up by either the BIOS or the bootloader.
These framebuffers can be backed by any kind of video hardware, and are
not tied to VGA hardware at all. Therefore, this code should be in a
separate sub-folder in the Graphics subsystem to indicate this.
The flag automatically initializes every variable to a pattern based
on its type. The goal here is to eradicate an entire class of bugs
that originate from uninitialized stack memory.
Some examples include:
- Kernel information disclosure, where uninitialized struct members
or struct padding is copied back to usermode, leaking kernel
information such as stack or heap addresses, or secret data like
stack cookies.
- Control flow based on uninitialized memory can cause a variety of
issues at runtime, including stack corruption like buffer overflows
and heap corruption due to deleting stray pointers.
Even basic logic bugs can result from control flow operating on
uninitialized data.
As of GCC 12 this flag is now supported.
https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=a25e0b5e6ac8a77a71c229e0a7b744603365b0e9
Clang has already supported it for a few releases.
https://reviews.llvm.org/D54604
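For illustration, a minimal example of the control-flow-on-uninitialized-data
case; compiling with `-ftrivial-auto-var-init=pattern` fills `value` with a
fixed byte pattern, so the read below becomes deterministic instead of
depending on stale stack contents:

```cpp
#include <cstdio>

int main()
{
    int value; // never written before use
    // Control flow based on uninitialized memory: without the flag, the
    // branch taken depends on whatever happened to be on the stack.
    if (value > 0)
        puts("positive");
    else
        puts("non-positive");
    return 0;
}
```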
When the size of the audio data was not a multiple of a page size,
subtracting the page size from this unsigned variable would underflow it
close to 2^32 and be clamped to the page size again. This would lead to
writes into garbage addresses because of an incorrect write size,
interestingly only causing the write() call to error out.
Using saturating math neatly fixes this problem and allows buffer
lengths that are not a multiple of a page size.
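A minimal illustration of the underflow and the saturating fix, with
made-up names and a 32-bit remainder to match the near-2^32 wraparound
described above:

```cpp
#include <algorithm>
#include <cstdint>

constexpr uint32_t PAGE_SIZE = 4096;

// The bug: "remaining" is unsigned, so subtracting a full page from a
// remainder smaller than a page wraps around near 2^32, and the later clamp
// to PAGE_SIZE turns that into a bogus full-page write size.
uint32_t next_write_size_buggy(uint32_t remaining)
{
    remaining -= PAGE_SIZE;                // wraps when remaining < PAGE_SIZE
    return std::min(remaining, PAGE_SIZE); // clamped back to a full page
}

// The fix: saturate the subtraction at zero so the final, partial chunk
// keeps its correct size.
uint32_t next_write_size_fixed(uint32_t remaining)
{
    remaining = remaining > PAGE_SIZE ? remaining - PAGE_SIZE : 0;
    return std::min(remaining, PAGE_SIZE);
}
```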
Currently the CursorStyle enum handles both the shape of the terminal
caret and whether it blinks or stays steady, which doubles the number of
its entries. This commit changes CursorStyle to CursorShape and moves the
blinking option to a separate boolean value.
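An illustrative sketch of the resulting split (the actual shape names in
the terminal code may differ):

```cpp
// The shape enum no longer doubles its entries for blinking vs. steady
// variants; blinking is a separate flag.
enum class CursorShape {
    Block,
    Underline,
    Bar,
};

struct CaretSettings {
    CursorShape shape { CursorShape::Block };
    bool blinking { true }; // steadiness is now tracked separately
};
```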
The RDGSBASE instruction allows userspace programs to read the GS
segment base, which holds a kernel pointer to the base of the current
Processor struct.
Since we don't use this instruction in Serenity at the moment, we can
simply disable it for now to ensure we don't break KASLR. Support can
later be restored once proper swapping of the contents of gs is done on
userspace/kernel boundaries.
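For context, a sketch of the gating mechanism (illustrative, not the actual
kernel code): the FSGSBASE instruction family is enabled by CR4.FSGSBASE
(bit 16), so leaving that bit clear makes RDGSBASE fault in userspace:

```cpp
#include <cstdint>

// Illustrative only: the {RD,WR}{FS,GS}BASE instructions are gated by
// CR4.FSGSBASE (bit 16). With the bit clear, RDGSBASE raises #UD, so
// userspace cannot read the kernel pointer held in the GS base.
constexpr uint64_t CR4_FSGSBASE = 1ull << 16;

uint64_t cr4_without_fsgsbase(uint64_t cr4)
{
    return cr4 & ~CR4_FSGSBASE;
}
```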
This is basically unchanged since the beginning of 2020, which is a year
before we had proper ASLR.
Now that we have a proper ASLR implementation, we can turn this down a
bit, as it is no longer our only protection against predictable dynamic
loader addresses, and it actually obstructs the default load address
on x86_64 quite frequently.
There's nothing stopping a userspace program from keeping a bunch of
threads around with a custom signal stack in a suspended state with
their normal thread stack mprotected to PROT_NONE.
OpenJDK seems to do this, for example.
In a previous commit I moved everything into the new subdirectories in
the FileSystem/SysFS directory without making too many changes to the
code itself. Now it's time to split the code up to make it more readable
and understandable, hence this change.
This is necessary for the next commit in the patch, otherwise this can't
be compiled. It seems like this was a hidden issue that was only
discovered now by changing includes on a mass scale.