When the FileSystem does a sync, it gathers up all the inodes with
dirty metadata into a vector. The inode mutex is not held while
checking the inode dirty bit, which can lead to a kernel panic
due to concurrent inode modifications.
Fixes: #21796
It seems like the current implementation returns 0 in case we do not
have enough data for a whole packet yet. The 0 value gets propagated
to the return value of the syscall, which according to the spec
should return a non-zero value in non-error cases. This causes a
panic, as there is a VERIFY guard checking that more than 0 bytes are
written if no error has occurred.
There's no need to have a separate syscall for this kind of
functionality, as we can just have a device node in /dev, called
"beep", that allows writing tone generation packets to emulate the
same behavior.
In addition to that, we remove the LibC sysbeep function, as it was
never used by any C program nor was it standardized in any way.
Instead, we move the userspace implementation to LibCore.
Previously, attempting to update an ext2 inode with a UID or GID
larger than 65535 would overflow. We now write the high bits of UIDs
and GIDs to the same place that Linux does within the `osd2` struct.
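A minimal sketch of the idea (the field layout follows Linux's ext2
osd2 convention; the actual SerenityOS code may differ in naming):
```
#include <AK/Types.h>

// Illustrative on-disk layout: the classic 16-bit uid/gid fields plus the
// high halves stored inside osd2, as Linux does.
struct Ext2OnDiskInodeSketch {
    u16 i_uid;            // low 16 bits of the owner's UID
    u16 i_gid;            // low 16 bits of the owner's GID
    struct {
        u16 l_i_uid_high; // high 16 bits of the owner's UID
        u16 l_i_gid_high; // high 16 bits of the owner's GID
    } osd2;
};

static void write_owner(Ext2OnDiskInodeSketch& inode, u32 uid, u32 gid)
{
    inode.i_uid = uid & 0xFFFF;
    inode.osd2.l_i_uid_high = (uid >> 16) & 0xFFFF;
    inode.i_gid = gid & 0xFFFF;
    inode.osd2.l_i_gid_high = (gid >> 16) & 0xFFFF;
}
```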
A bit old but a relatively uncomplicated device capable of outputting
1920x1080 video with 32-bit color. Tested with a Voodoo 3 3000 16MB
PCI card. Resolution switching from DisplaySettings also works.
If the requested mode contains timing information, it is used directly.
Otherwise, display timing values are selected from the EDID. First the
detailed timings are checked, and then standard and established
timings for which there is a matching DMT mode. The driver does not
(yet) read the actual EDID, so the generic EDID in DisplayConnector now
includes a set of common display modes to make this work.
The driver should also be compatible with the Voodoo Banshee, 4 and 5
but I don't have these cards to test this with. The PCI IDs of these
cards are included as a commented line in case someone wants to give it
a try.
Moving the DeviceManagement initialization, which is only needed by
userland in the first place, to after interrupt and time management
initialization (like other things that require randomness) allows the
SipHash initialization to access good randomness without problems.
Note: There currently is another, unrelated boot problem on aarch64,
which is not caused by SipHash as far as we know. This commit therefore
only fixes the SipHash regression.
This view is really nice for checking flags, but when clearing them we
must make sure that we only ever set 1 bit at a time. This makes
setting bits through the structured view a footgun, as that fetches,
ORs in the bit and then writes the value back, potentially resetting
other flags.
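A hedged illustration of the footgun (register layout and names are
made up):
```
#include <AK/Types.h>

union StatusRegister {
    struct {
        u32 transfer_complete : 1;
        u32 error : 1;
        u32 : 30;
    };
    u32 raw;
};

// The footgun: a read-modify-write through the structured view fetches every
// flag that is currently set and writes them all back, which on
// write-1-to-acknowledge style registers resets unrelated flags too.
void clear_transfer_complete_badly(u32 volatile& mmio_status)
{
    StatusRegister status {};
    status.raw = mmio_status;     // fetch
    status.transfer_complete = 1; // OR in
    mmio_status = status.raw;     // set -- may also acknowledge `error`
}

// Safer: write a value with only the single bit of interest set.
void clear_transfer_complete(u32 volatile& mmio_status)
{
    StatusRegister status {};
    status.transfer_complete = 1;
    mmio_status = status.raw;
}
```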
According to the Multiboot spec, if the framebuffer flag isn't set,
then the corresponding fields are invalid. In reality they're set to
0, but let's be defensive.
Loaders try to put modules as low as reasonable, but on EFI
"reasonable" is often much higher than on BIOS. As a result the
target can easily end up higher than the source.
Then we have 2 problems:
* memmove compares virtual addresses, and since the target is mapped
higher it ends up copying backwards, which is wrong if the target is
physically below the source
* the order in which sections are copied must be inverted if the
target is below the source (see the sketch below)
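A hedged sketch of the direction decision (names and signature are
illustrative, not the actual prekernel code):
```
#include <AK/Types.h>

static void copy_possibly_overlapping(u8* target, u8 const* source, size_t size,
                                      FlatPtr target_phys, FlatPtr source_phys)
{
    if (target_phys < source_phys) {
        // Target is physically below the source: copy forwards.
        for (size_t i = 0; i < size; ++i)
            target[i] = source[i];
    } else {
        // Target is physically above the source: copy backwards so we don't
        // overwrite source bytes that haven't been copied yet.
        for (size_t i = size; i > 0; --i)
            target[i - 1] = source[i - 1];
    }
}
```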
Prekernel code currently assumes that mapping until MAX_KERNEL_SIZE
is enough to make the modules accessible. GRUB tries to load as low
as possible but higher than 1 MiB. Hence this is usually true.
However on EFI some ranges may already be used by boot services and
GRUB tries to avoid them if possible. This pushes modules higher.
The simplest solution is to map the entire 4 GiB space.
As an additional benefit, this makes the framebuffer accessible, which
can be used for debugging.
About half of the Processor code is common across architectures, so
let's share it with a templated base class. Also, other code that can be
shared in some ways, like FPUState and TrapFrame functions, is adjusted
here. Functions which cannot be shared trivially (without internal
refactoring) are left alone for now.
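As a rough sketch of the sharing pattern (names are illustrative, not
the actual kernel declarations):
```
// A templated base class holds the architecture-independent logic and
// dispatches to the per-architecture derived class where needed.
template<typename DerivedProcessor>
class ProcessorBase {
public:
    void idle()
    {
        // ...architecture-independent bookkeeping...
        static_cast<DerivedProcessor&>(*this).wait_for_interrupt();
    }
};

class Processor : public ProcessorBase<Processor> {
public:
    void wait_for_interrupt(); // implemented per architecture (hlt, wfi, ...)
};
```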
SipHash is highly HashDoS-resistant, initialized with a random seed at
startup (i.e. non-deterministic) and usable for security-critical use
cases with large enough parameters. We just use it because it's
reasonably secure with parameters 1-3 while having excellent properties
and not being significantly slower than before.
This subtraction is necessary to ensure that the section has the correct
address. Also, without this change, the Kernel ELF binary would explode
in size. This was forgotten in a0dd6ec6b1.
This field is in a packed struct, which makes it possibly misaligned.
This knowledge is lost when invoking `dbgln`, triggering an unaligned
access to it, a.k.a. UB. By explicitly copying it we avoid this issue.
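A minimal sketch of the fix (struct and field names are hypothetical):
```
#include <AK/Format.h>
#include <AK/Types.h>

struct [[gnu::packed]] Descriptor {
    u8 type;
    u32 length; // may be misaligned because of the packing
};

void dump(Descriptor const& descriptor)
{
    // Copy the possibly-misaligned member into a properly aligned local before
    // formatting, instead of letting dbgln() form a reference to the packed field.
    u32 length = descriptor.length;
    dbgln("descriptor length: {}", length);
}
```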
Simplify core methods in the VirtIO bus handling code by ensuring proper
error propagation. This makes initialization of queues, handling changes
in device configuration, and other core patterns more readable as well.
It also allows us to remove the obnoxious pattern of checking for a
boolean "success" value and, if the answer is false, returning an
actual errno code.
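A minimal sketch of the pattern change, assuming hypothetical helper
names:
```
#include <AK/Error.h>
#include <AK/Types.h>

static ErrorOr<void> setup_queue(u16 queue_index);

static ErrorOr<void> setup_queues()
{
    // Each failure propagates its original error via TRY() instead of
    // collapsing into a boolean that callers must translate back into errno.
    TRY(setup_queue(0));
    TRY(setup_queue(1));
    return {};
}
```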
When a device is plugged into the machine (and hence, when
`Device::try_create()` is called), we attempt to load a driver by
calling that driver's probe function.
At any given time, there can be an arbitrary number of USB drivers in
the system. The way driver mapping works (i.e., a device is inserted,
and a potentially matching driver is probed) requires us to have
instantiated driver objects _before_ a device is inserted. This leaves
us with a slight "chicken and egg" problem. We cannot call the probe
function before the driver is initialised, but we need to know _what_
driver to initialise.
This section is designed to store pointers to functions that are called
during the last stage of the early `_init` sequence in the Kernel. The
accompanying macro in `USBDriver` emits a symbol, based on the driver
name, into this table that is then automatically called.
This way, we enforce a "common" driver model; driver developers are not
only required to write their driver and inherit from `USB::Driver`, but
are also required to have a free-floating init function that registers
their driver with the USB Core.
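A hedged sketch of the registration mechanism (macro, symbol and
section names are illustrative, not the exact kernel definitions):
```
#define USB_DEVICE_DRIVER(DriverName)                                         \
    static void driver_init_##DriverName() { DriverName::init(); }            \
    [[gnu::used, gnu::section(".usb_driver_init")]]                           \
    static void (*driver_init_ptr_##DriverName)() = driver_init_##DriverName;

// Linker-provided bounds of the section. During the last stage of the early
// _init sequence, every registered init function is called, letting each
// driver register itself with the USB Core before any device is probed.
extern void (*start_of_usb_driver_init_section[])();
extern void (*end_of_usb_driver_init_section[])();

static void call_usb_driver_init_functions()
{
    for (auto* entry = start_of_usb_driver_init_section; entry != end_of_usb_driver_init_section; ++entry)
        (*entry)();
}
```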
The VirtIO specification defines many types of devices with different
purposes, and it also defines 3 possible transport mediums where devices
could be connected to the host machine.
We only care about the PCIe transport, but this commit lays the actual
foundations for also supporting the lean MMIO transport in the future.
To ensure things are kept abstracted but still functional, the VirtIO
transport code is responsible for what is deemed as related to an actual
transport type - allocation of interrupt handlers and tinkering with low
level transport-related registers, etc.
We should consider whether the selected Thread is within the same jail
or not.
Therefore, let's make it clear to callers with jail semantics whether
a called method checks if the desired Thread object is within the same
jail.
As for the Thread::for_each_* methods, currently nothing in the kernel
codebase needs iteration with consideration for jails, so the old
Thread::for_each_* methods were simply renamed to include an
"ignoring_jails" suffix in their names.
Some syscalls could be simplified by using the non-static method
Process::get_thread_from_thread_list, which should ensure that the
specified tid belongs to a Thread in the same Process as the current
Thread.
The Kernel/API directory in general shouldn't include userspace code,
but only structure definitions that are shared between the Kernel and
userspace.
All users of the ioctl API obviously use LibC so LibC is the most common
and shared library for the affected programs.
The Kernel/API directory in general shouldn't include userspace code,
but only structure definitions that are shared between the Kernel and
userspace.
LibC is the most appropriate place for these methods as they're already
included in the sys/sysmacros.h file to create a set of convenient
macros for these methods.
Userspace initially didn't have any sort of mechanism to handle
device hotplug (either removing or inserting a device).
This meant that after a short term of scanning all known devices, by
fetching device events (DeviceEvent packets) from /dev/devctl, we
basically never try to read it again after SystemServer initialization
code.
To accommodate hotplug needs, we change SystemServer by ensuring it will
generate a known set of device nodes at their location during its
main initialization code. This includes devices like /dev/mem, /dev/zero
and /dev/full, etc.
The actual responsible userspace program to handle hotplug events is a
new userspace program called DeviceMapper, with following key points:
- Its current task is to constantly read the /dev/devctl device node.
Because we already created generic devices, we only handle devices
that are dynamically-generated in nature, like storage devices, audio
channels, etc.
- Since dynamically-generated device nodes could have an infinite
number of minor numbers, but major numbers are decoded to a device
type, we create an
internal registry based on two structures - DeviceNodeFamily, and
RegisteredDeviceNode. DeviceNodeFamily objects are attached in the
main logic code, when handling a DeviceEvent device insertion packet.
A DeviceNodeFamily object has an internal HashTable to hold objects of
RegisteredDeviceNode class.
- Because some device nodes could still share the same major number (TTY
and serial TTY devices), we have two modes of allocation - limited
allocation (so a range is defined for a major number), or infinite
range. Therefore, two (or more) separate DeviceNodeFamily objects
can exist while sharing the same major number, but they are required
to allocate from different minor number ranges to ensure there are
no collisions.
- As for KCOV, we handle this device differently. In case the user
compiled the kernel with such support - this happens to be a singular
device node that we usually don't need, so it's dynamically-generated
too, and because it has only one instance, we don't register it in our
internal registry to not make it complicated needlessly.
The Kernel code is modified to allow proper blocking in case of no
events in the DeviceControlDevice class, because otherwise we would
need to poll the device periodically to check if a new event is
available, which would waste CPU time for no good reason.
This is an initial implementation, about as basic as intended by the
RFC, and not configurable from userspace at the moment. It should reduce
the number of small packets sent, reducing overhead and thereby
network traffic.
The name for this directory is a bit awkward. Also, the distinction of
constant information is not really as valuable as I thought it would be, so
let's bring that information back into the /sys/kernel directory.
The name "variables" is a bit awkward and what the directory entries are
really about is kernel configuration so let's make it clear with the new
name.
These options are not relevant and are actually meaningless on pure TTY
devices, as they are meant to be effective only for the VirtualConsole
devices.
This also removes the virtual marking from two methods because they're
no longer declared in the TTY class either.
These syscalls are not necessary on their own, and they give the false
impression that a caller could set or get the thread name of any process
in the system, which is not true.
Therefore, move the functionality of these syscalls to be options in
the prctl syscall, which makes it abundantly clear that these
operations can only be performed by a running thread on threads within
its own process.
This new Kernel StdLib function will be used to copy contents of a
FixedStringBuffer with a null character to a user process.
The first user of this new function is the prctl option of
PR_GET_PROCESS_NAME which would copy a process name including a null
character to a user provided buffer.
When doing PR_{SET,GET}_PROCESS_NAME, it's not expected to pass a signed
integer for the buffer size (in arg2). Therefore, cast it immediately to
a size_t integer type, and let the FixedStringBuffer StdLib memory
copy functions worry about possible overflows in such cases.
There was a small mishmash of argument order, as seen in this table:
                  | Traits<T>::equals(U, T) | Traits<T>::equals(T, U)
==================+=========================+========================
uses equals()     | HashMap                 | Vector, HashTable
defines equals()  | *String[^1]             | ByteBuffer
[^1]: String, DeprecatedString, their Fly-type equivalents and KString.
This mostly meant that you couldn't use a StringView for finding a value
in Vector<String>.
I'm changing the order of arguments to make the trait type itself first
(`Traits<T>::equals(T, U)`), as I think it's more expected and makes us
more consistent with the rest of the functions that put the stored type
first (like StringUtils functions and binary_search). I've also renamed
the variable name "other" in find functions to "entry" to give more
importance to the value.
With this change, each of the following lines will now compile
successfully:
Vector<String>().contains_slow("WHF!"sv);
HashTable<String>().contains("WHF!"sv);
HashMap<ByteBuffer, int>().contains("WHF!"sv.bytes());
As "\n" is translated to "\r\n" in TTYs, the condition for a write
to succeed on a pseudoterminal should check if the underlying buffer
has 2 bytes free rather than 1.
Fixes SerenityOS#18888
This patch ensures that the shutdown procedure can complete due to the
fact we don't kill kernel processes anymore, and only stop the scheduler
from running after the filesystems unmount procedure.
We also need kernel processes during the shutdown procedure, because we
rely on the WorkQueue threads to run WorkQueue items to complete async
IO requests initiated by filesystem sync & unmounting, etc.
This also simplifies the code around killing processes, because
we don't need to worry about edge cases such as the FinalizerTask
anymore.
The process could be long gone by the time the async IO request has
completed, so hold a weak reference pointer to the requesting Process
and try to get a strong reference only when needed.
This patch is necessary because otherwise async IO requests can hold
Process objects long after they were terminated, which would make it
impossible to perform certain tasks in the system, like killing all user
processes during the shutdown procedure.
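A hedged sketch of the idea (member and method names are illustrative):
```
// The request keeps only a weak pointer to the requesting Process and upgrades
// it just for the moment a strong reference is actually needed.
LockWeakPtr<Process> m_requesting_process;

void on_request_complete()
{
    if (auto process = m_requesting_process.strong_ref()) {
        // Notify the process; the strong reference is dropped right after.
    }
    // If the process has already been terminated, there is simply nobody left
    // to notify, and the request no longer keeps the Process object alive.
}
```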
We first must flush the superblock through the BlockBasedFileSystem
methods properly and only then clear the DiskCache pointer, to prevent a
possible kernel panic due to nullptr dereference.
Previously we would set the KeyCode correctly to the appropriate
extended key values, like Home and End, but keep the code point of the
original keys, like 1, 2, 3, etc. Because of this, the keys would just
print the original characters, instead of behaving like the extended
ones.
This is a minimal set of changes to allow `serenity.sh build riscv64` to
successfully generate the build environment and start building. This
includes some, but not all, assembly stubs that will be needed later on;
they are currently empty.
The shadow doorbell feature was added in the NVMe spec to improve
the performance of virtual devices.
Typically, ringing a doorbell involves writing to an MMIO register in
QEMU, which can be expensive as there will be a trap for the VM.
The shadow doorbell mechanism was added so that the VM can communicate
with the OS about when it actually needs to do an MMIO write, thereby
avoiding it when it is not necessary.
There is no performance improvement with this support in Serenity
at the moment because of the block layer constraint of not batching
multiple IOs. Once the command batching support is added to the block
layer, shadow doorbell support can improve performance by avoiding many
MMIO writes.
Default to old MMIO mechanism if shadow doorbell is not supported.
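For reference, the kind of wrap-around-safe check used to decide
whether an MMIO doorbell write is still required (this mirrors the
formula other NVMe drivers use; our exact code may differ) looks
roughly like this:
```
#include <AK/Types.h>

// Returns true if the controller must still be notified via an MMIO write,
// i.e. the doorbell has moved past the event index advertised by the device
// in the shadow doorbell buffer.
static bool doorbell_needs_mmio_write(u16 event_idx, u16 new_idx, u16 old_idx)
{
    return static_cast<u16>(new_idx - event_idx - 1) < static_cast<u16>(new_idx - old_idx);
}
```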
Introduce a new struct, Doorbell, that encapsulates the MMIO doorbell
register.
This commit does not introduce any functional changes; it is added in
preparation for adding shadow doorbell support.
This was the root cause of zombie processes showing up randomly and
disappearing after some disk activity, such as running shell commands:
the NVMeIO AsyncBlockDeviceRequest member simply held a pointer to a
Process object, therefore it could keep it alive for a long time after
it had ceased to actually function at all.
While LLD and mold support RELR "packed" relocations on all
architectures, the BFD linker currently only implements them on x86-64
and POWER.
This fixes two issues:
- The Kernel had it enabled even for AArch64 + GCC, which led to the
following being printed: `warning: -z pack-relative-relocs ignored`.
- The userland always had it disabled, even in the supported AArch64 +
Clang/mold scenarios.
Two non-functional changes:
- Remove pointless `-latomic` flag. It was specified via
`add_compile_options`, which only affects compilation and not linking,
so the library was never actually linked into the kernel. In fact, we
do not even build `libatomic` for our toolchain.
- Do not disable `-Wnonnull`. The warning-causing code was fixed at some
point.
This commit also removes `-mstrict-align` from the userland. Our target
AArch64 hardware natively supports unaligned accesses without a
significant performance penalty. Allowing the compiler to insert
unaligned accesses into aligned-as-written code allows for some
performance optimizations in fact. We keep this option turned on in the
kernel to preserve correctness for MMIO, as that might be sensitive to
alignment.
Add the device ID for PCI serial port cards that use the WCH CH351
chip. This device has been tested with real hardware where the serial
debug output could successfully be received.
Now that support for 32-bit x86 has been removed, we don't have to worry
about the top half of `off_t`/`u64` values being chopped off when we try
to pass them in registers. Therefore, we no longer need the workaround
of pointers to stack-allocated values to syscalls.
Note that this changes the system call ABI, so statically linked
programs will have to be re-linked.
Using the kernel stack is preferable, especially when the examined
strings should be limited to a reasonable length.
This is a small improvement, because if we don't actually move these
strings then we don't need to own heap allocations for them during the
syscall handler function scope.
In addition to that, some kernel strings are known to be limited, like
the hostname string; for these strings we can also use
FixedStringBuffer to store them and to copy to and from these buffers,
without using any heap allocations at all.
Instead, use the FixedCharBuffer class to ensure we always use a static
buffer storage for these names. This ensures that if a Process or a
Thread were created, there's a guarantee that setting a new name will
never fail, as only copying of strings should be done to that static
storage.
The limits which are set are 32 characters for processes' names and 64
characters for thread names - this is because threads' names could be
more verbose than processes' names.
This class encapsulates a fixed Array with compile-time size definition
for storing ASCII characters.
There are also new Kernel StdLib functions to copy user data into such
objects so this class will be useful later on.
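A rough sketch of what such a class could look like (the real
implementation may differ in naming and details):
```
#include <AK/Array.h>
#include <AK/StringView.h>

template<size_t Size>
class FixedCharBuffer {
public:
    bool store_characters(StringView characters)
    {
        if (characters.length() > Size)
            return false;
        __builtin_memcpy(m_storage.data(), characters.characters_without_null_termination(), characters.length());
        m_length = characters.length();
        return true;
    }

    StringView representable_view() const { return StringView { m_storage.data(), m_length }; }

private:
    Array<char, Size> m_storage {};
    size_t m_length { 0 };
};
```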
Previously we could get a raw pointer to a Mount object which might be
invalid when actually dereferencing it.
To ensure this could not happen, we should just use a callback that will
be used immediately after finding the appropriate Mount entry, while
holding the mount table lock.
We don't really need this method anymore, because we could just try to
find the mount entry based on the given mount point host custody.
This also allows us to remove the is_vfs_root and root_inode_id methods
from the VirtualFileSystem class.
We could easily encounter a case where we do the following:
```
mkdir -p /tmp2
mount /dev/hda /tmp2
```
would produce a bug where doing `ls /tmp2/tmp2` gives the contents
of the `/dev/hda` ext2 root directory, and likewise for
`/tmp2/tmp2/tmp2` and so on.
To prevent this, we must compare the current custody against each mount
entry's custody to ensure their paths match.
This is not useful, as we have literally zero knowledge about where
this inode is actually located with respect to the entire global path
tree, so we could easily encounter a case where we do the following:
```
mkdir -p /tmp2
mount /dev/hda /tmp2
```
and when traversing the /tmp2 directory entries, we will see the root
inode of /dev/hda on "/tmp2/tmp2", even if it was not mounted.
Therefore, we should just plainly give the raw directory entries as they
are written "on the disk". Anything else that needs to exactly know if
there's an underlying mounted filesystem, can just use the stat syscall
instead.
This ensures that the host mount point custody path is not the same
as the new to-be-mounted custody.
A scenario that could happen before adding this check is:
```
mkdir -p /tmp2
mount /dev/hda /tmp2/
mount /dev/hda /tmp2/
mount /dev/hda /tmp2/ # this will fail here
```
and after adding this check, the scenario now looks like this:
```
mkdir -p /tmp2
mount /dev/hda /tmp2/
mount /dev/hda /tmp2/ # this will fail here
mount /dev/hda /tmp2/ # this will fail here too
```
Currently, ephemeral port allocation is handled by the
allocate_local_port_if_needed() and protocol_allocate_local_port()
methods. Actually binding the socket to an address (which means
inserting the socket/address pair into a global map) is performed either
in protocol_allocate_local_port() (for ephemeral ports) or in
protocol_listen() (for non-ephemeral ports); the latter will fail with
EADDRINUSE if the address is already used by an existing pair present in
the map.
There used to be a bug where for listen() without an explicit bind(),
the port allocation would conflict with itself: first an ephemeral port
would get allocated and inserted into the map, and then
protocol_listen() would check again for the port being free, find the
just-created map entry, and error out. This was fixed in commit
01e5af487f by passing an additional flag
did_allocate_port into protocol_listen() which specifies whether the
port was just allocated, and skipping the check in protocol_listen() if
the flag is set.
However, this only helps if the socket is bound to an ephemeral port
inside of this very listen() call. But calling bind(sin_port = 0) from
userspace should succeed and bind to an allocated ephemeral port, in the
same way as using an unbound socket for connect() does. The port number
can then be retrieved from userspace by calling getsockname(), and it
should be possible to either connect() or listen() on this socket,
keeping the allocated port number. Also, calling bind() when already
bound (either explicitly or implicitly) should always result in EINVAL.
To untangle this, introduce an explicit m_bound state in IPv4Socket,
just like LocalSocket has already. Once a socket is bound, further
attempts to bind it fail. Some operations cause the socket to implicitly
get bound to an (ephemeral) address; this is implemented by the new
ensure_bound() method. The protocol_allocate_local_port() method is
gone; it is now up to a protocol to assign a port to the socket inside
protocol_bind() if it finds that the socket has local_port() == 0.
protocol_bind() is now called in more cases, such as inside listen() if
the socket wasn't bound before that.
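A simplified sketch of the new flow (real signatures and details
differ):
```
ErrorOr<void> IPv4Socket::ensure_bound()
{
    if (m_bound)
        return {};
    // protocol_bind() assigns an ephemeral port when local_port() == 0 and
    // inserts the socket/address pair into the global map.
    TRY(protocol_bind());
    m_bound = true;
    return {};
}

ErrorOr<void> IPv4Socket::listen(size_t backlog)
{
    TRY(ensure_bound()); // implicit bind if userspace never called bind()
    return protocol_listen(backlog);
}
```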
Since this is the block size that file system drivers *should* set,
let's name it the logical block size, just like most file systems such
as ext2 already do anyway.
This never was a logical block size, it always was a device specific
block size. Ideally the block size would change in accordance to
whatever the driver wants to use, but that is a change for the future.
For now, let's get rid of this confusing naming.
This also makes it easier to understand and reference where these
(sometimes rather arbitrary) calculations come from.
This also fixes a bug where group_index_from_block_index assumed 1KiB
blocks.
For a long time, our shutdown procedure has basically been:
- Acquire big process lock.
- Switch framebuffer to Kernel debug console.
- Sync and lock all file systems so that disk caches are flushed and
files are in a good state.
- Use firmware and architecture-specific functionality to perform
hardware shutdown.
This naive and simple shutdown procedure has multiple issues:
- No processes are terminated properly, meaning they cannot perform more
complex cleanup work. If they were in the middle of I/O, for instance,
only the data that already reached the Kernel is written to disk, and
data corruption due to unfinished writes can therefore still occur.
- No file systems are unmounted, meaning that any important unmount work
will never happen. This is important for e.g. Ext2, which has
facilities for detecting improper unmounts (see superblock's s_state
variable) and therefore requires a proper unmount to be performed.
This was also the starting point for this PR, since I wanted to
introduce basic Ext2 file system checking and unmounting.
- No hardware is properly shut down beyond what the system firmware does
on its own.
- Shutdown is performed within the write() call that asked the Kernel to
change its power state. If the shutdown procedure takes longer (i.e.
when it's done properly), this blocks the process causing the shutdown
and prevents any potentially-useful interactions between Kernel and
userland during shutdown.
In essence, current shutdown is a glorified system crash with minimal
file system cleanliness guarantees.
Therefore, this commit is the first step in improving our shutdown
procedure. The new shutdown flow is now as follows:
- From the write() call to the power state SysFS node, a new task is
started, the Power State Switch Task. Its only purpose is to change
the operating system's power state. This task takes over shutdown and
reboot duties, although reboot is not modified in this commit.
- The Power State Switch Task assumes that userland has performed all
shutdown duties it can perform on its own. In particular, it assumes
that all kinds of clean process shutdown have been done, and remaining
processes can be hard-killed without consequence. This is an important
separation of concerns: While this commit does not modify userland, in
the future SystemServer will be responsible for performing proper
shutdown of user processes, including timeouts for stubborn processes
etc.
- As mentioned above, the task hard-kills remaining user processes.
- The task hard-kills all Kernel processes except itself and the
Finalizer Task. Since Kernel processes can delay their own shutdown
indefinitely if they want to, they have plenty of opportunity to perform
proper shutdown if necessary. This may become a problem with
non-cooperative Kernel tasks, but as seen two commits earlier, for now
all tasks will cooperate within a few seconds.
- The task waits for the Finalizer Task to clean up all processes.
- The task hard-kills and finalizes the Finalizer Task itself, meaning
that it now is the only remaining process in the system.
- The task syncs and locks all file systems, and then unmounts them. Due
to an unknown refcount bug we currently cannot unmount the root file
system; therefore the task is able to abort the clean unmount if
necessary.
- The task performs platform-dependent hardware shutdown as before.
This commit has multiple remaining issues (or exposed existing ones)
which will need to be addressed in the future but are out of scope for
now:
- Unmounting the root filesystem is impossible due to remaining
references to the inodes /home and /home/anon. I investigated this
very heavily and could not find whoever is holding the last two
references.
- Userland cannot perform proper cleanup, since the Kernel's power state
variable is accessed directly by tools instead of a proper userland
shutdown procedure directed by SystemServer.
The recently introduced Firmware/PowerState procedures are removed
again, since all of the architecture-independent code can live in the
power state switch task. The architecture-specific code is kept,
however.
Once we move to a more proper shutdown procedure, processes other than
the finalizer task must be able to perform cleanup and finalization
duties, not only because the finalizer task itself needs to be cleaned
up by someone. This global variable, mirroring the early boot flags,
allows a future shutdown process to perform cleanup on its own.
Note that while this *could* be considered a weakening in security, the
attack surface is minimal and the results are not dramatic. To exploit
this, an attacker would have to gain a Kernel write primitive to this
global variable (bypassing KASLR among other things) and then gain some
way of calling the relevant functions, all of this only to destroy some
other running process. The same effect can be achieved with LPE which
can often be gained with significantly simpler userspace exploits (e.g.
of setuid binaries).
Since we never check a kernel process's state like a userland process,
it's possible for a kernel process to ignore the fact that someone is
trying to kill it, and continue running. This is not desirable if we
want to properly shutdown all processes, including Kernel ones.
This is correct since unmount doesn't treat bind mounts specially. If we
don't do this, unmounting bind mounts will call
prepare_for_last_unmount() on the guest FS much too early, which will
most likely fail due to a busy file system.
Previously, we started parsing the ELF file again in a completely
different place, and without the partial mapping that we do while
validating.
Instead of doing manual parsing in two places, just capture the
requested stack size right after we validated it.
This resolves the various "implicit truncation from int to a one-bit
wide bit-field changes value from 1 to -1" warnings produced by Clang
16+ when assigning to single-bit bitfields.
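An illustration of the warning and the fix (names are made up):
```
struct Flags {
    // int is_enabled : 1;  // assigning 1 stores -1, which is what Clang 16+
    //                      // warns about ("changes value from 1 to -1")
    unsigned is_enabled : 1; // an unsigned single-bit field holds 0 or 1
};
```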
The driver would crash if it was unable to find an output route, and
subsequently the destruction of the controller did not invoke
`GenericInterruptHandler::will_be_destroyed()` because on the level of
`AudioController`, that method is unavailable.
By decoupling the interrupt handling from the controller, we get a new
refcounted class that correctly cleans up after itself :^)
We used to not care about stopping an audio output stream for Intel HDA
since AudioServer would continuously send new buffers to play. Since
707f5ac150ef858760eb9faa52b9ba80c50c4262 however, that has changed.
Intel HDA now uses interrupts to detect when each buffer was completed
by the device, and uses a simple heuristic to detect whether a buffer
underrun has occurred so it can stop the output stream.
This was tested on Qemu's Intel HDA (Linux x86_64) and a bare metal MSI
Starship/Matisse HD Audio Controller.
This is a preparation before we can create a usable mechanism to use
filesystem-specific mount flags.
To keep some compatibility with userland code, the LibC and LibCore
mount functions are kept usable, but now instead of doing an "atomic"
syscall, they do multiple syscalls to perform the complete procedure of
mounting a filesystem.
The FileBackedFileSystem IntrusiveList in the VFS code is now changed to
be protected by a Mutex, because when we mount a new filesystem, we need
to check if a filesystem is already created for a given source_fd so we
do a scan for that OpenFileDescription in that list. If we fail to find
an already-created filesystem we create a new one and register it in the
list if we successfully mounted it. We use a Mutex because we might need
to initiate disk access during the filesystem creation, which will take
other mutexes in other parts of the kernel, therefore making it not
possible to take a spinlock while doing this.
Otherwise, reading will sometimes fail on the Raspberry Pi.
This is mostly a hack; the spec has some info about how the correct
divisor should be calculated and how we can recover from timeouts.
Namely, we previously forgot to configure the SD Host Controller for
4-bit mode after issuing ACMD6, which caused data transfers to fail on
bare metal.
Instead of using ifdefs to use the correct platform-specific methods, we
can just use the same pattern we use for the microseconds_delay function
which has specific implementations for each Arch CPU subdirectory.
When linking a kernel image, the actual correct and platform-specific
power-state changing methods will be called in Firmware/PowerState.cpp
file.
Since https://reviews.llvm.org/D131441, libc++ must be included before
LibC. As clang includes libc++ as one of the system includes, LibC
must be included after those, and the only correct way to do that is
to install LibC's headers into the sysroot.
Targets that don't link with LibC yet require its headers for one
reason or another must add install_libc_headers as a dependency to
ensure that the correct headers have been (re)installed into the
sysroot.
LibC/stddef.h has been dropped since the built-in stddef.h receives
a higher include priority.
In addition, string.h and wchar.h must
define __CORRECT_ISO_CPP_STRING_H_PROTO and
_LIBCPP_WCHAR_H_HAS_CONST_OVERLOADS respectively in order to tell
libc++ to not try to define methods implemented by LibC.
Once LibC is installed to the sysroot and its conflicts with libc++
are resolved, including LibC headers in such a way will cause errors
with a modern LLVM-based toolchain.
This is needed to avoid including LibC headers in Lagom builds.
Unfortunately, we cannot rely on the build machine to provide a
fully POSIX-compatible ELF header for Lagom builds, so we have to
use our own.
To ensure actual PS2 code is not tied to the i8042 code, we make them
separated in the following ways:
- PS2KeyboardDevice and PS2MouseDevice classes are no longer inheriting
from the IRQHandler class. Instead we have specific IRQHandler derived
class for the i8042 controller implementation, which is used to ensure
that we don't end up mixing PS2 code with low-level interrupt handling
functionality. In the future this means that we could add a driver for
other PS2 controllers that might have only one interrupt handler but
multiple PS2 devices are attached, therefore, making it easier to put
the right propagation flow from the controller driver all the way to
the HID core code.
- A simple abstraction layer is added between the PS2 command set which
devices could use and the actual implementation low-level commands.
This means that the code in PS2MouseDevice and PS2KeyboardDevice
classes is no longer tied to i8042 implementation-specific commands,
so now these objects could send PS2 commands to their PS2 controller
and get a PS2Response which abstracts the given response too.
The HIDController class is removed and a SerialIOController class is
added instead. The HIDController class was a mistake - there's no such
thing in
real hardware as host controller only for human interface devices
(VirtIO PCI input controller being the exception here, but it could be
technically treated as serial IO controller too).
Instead, we simply add a new abstraction layer - the SerialIO "bus",
which will hold all the code that is related to serial communications
with other devices. A PS2 controller is simply a serial IO controller,
and the Intel 8042 Controller is simply a specific implementation of a
PS2 controller.
Ideally, we would want the audio controller to run a channel at a
device's initial sample rate instead of hardcoding 44.1 kHz. However,
most audio is provided at 44.1 kHz and as long as `Audio::Resampler`
introduces significant audio artifacts, let's set a sensible sample
rate that offers a better experience for most users.
This can be removed after someone implements a higher quality
`Audio::Resampler`.
All code that is related to PC BIOS should not be in the Kernel/Firmware
directory as this directory is for abstracted and platform-agnostic code
like ACPI (and device tree parsing in the future).
This fixes a problem with the aarch64 architecture, as these machines
don't have any PC-BIOS in them so actually trying to access these memory
locations (EBDA, BIOS ROM) does not make any sense, as they're specific
to x86 machines only.
This code is very x86-specific, because Intel introduced the actual
MultiProcessor specification back in 1993, quoted here as proof:
"The MP specification covers PC/AT-compatible MP platform designs based
on Intel processor architectures and Advanced Programmable Interrupt
Controller (APIC) architectures"
Most of the ACPI static parsing methods (methods that can be called
without initializing a full AML parser) are not tied to any specific
platform or CPU architecture.
The only method that is platform-specific is the one that finds the RSDP
structure. Thus, each CPU architecture/platform needs to implement it.
This means that now aarch64 can implement its own method to find the
ACPI RSDP structure, which would be hooked into the rest of the ACPI
code elegantly, but for now I just added a FIXME, and that method
returns an empty Optional<PhysicalAddress>.
Previously, reads would only be successful for offset 0. For this
reason, the maximum size that could be correctly read from the PCI
expansion ROM SysFS node was limited to the block size, and
subsequent blocks would fail. This commit fixes the computation of
the number of bytes to read.
During receive_tcp_packet(), we now set m_send_window_size for the
socket if it is different from the default.
This removes one FIXME from TCPSocket.h.
These 4 fields were made `Atomic` in
c3f668a758, at which time these were still
accessed unserialized and TOCTOU bugs could happen. Later, in
8ed06ad814, we serialized access to these
fields in a number of helper methods, removing the need for `Atomic`.
Instead of having a single available memory range that encompasses the
whole 0x00000000-0x3EFFFFFF range of physical memory, create a separate
reserved entry for the RAM range used by the VideoCore. This fixes a
crash that happens when we try to allocate physical pages in the GPU's
reserved range.
This will eventually be replaced with parsing the data from the device
tree, but for now, this should solve some of the recurring CI failures.
Like the HID, Audio and Storage subsystem, the Graphics subsystem (which
handles GPUs technically) exposes unix device files (typically in /dev).
To ensure consistency across the repository, move all related files to a
new directory under Kernel/Devices called "GPU".
Also remove the redundant "GPU" word from the VirtIO driver directory,
and the word "Graphics" from GraphicsManagement.{h,cpp} filenames.
The implemented cloning mechanism should be sound:
- If a PartitionTable is passed a File with
ShouldCloseFileDescriptor::Yes, then it will keep it alive until the
PartitionTable is destroyed.
- If a PartitionTable is passed a File with
ShouldCloseFileDescriptor::No, then the caller has to ensure that the
file descriptor remains alive.
If the caller is EBRPartitionTable, the same consideration holds.
If the caller is PartitionEditor::PartitionModel, this is satisfied by
keeping an OwnPtr<Core::File> around which is the originally opened
file.
Therefore, we never leak any fds, and never access a Core::File or fd
after destroying it.
This has KString, KBuffer, DoubleBuffer, KBufferBuilder, IOWindow,
UserOrKernelBuffer and ScopedCritical classes being moved to the
Kernel/Library subdirectory.
Also, move the panic and assertions handling code to that directory.
When deleting a directory, the rmdir syscall should fail if the path was
unveiled without the 'c' permission. This matches the same behavior that
OpenBSD enforces when doing this kind of operation.
When deleting a file, the unlink syscall should fail if the path was
unveiled without the 'w' permission, to ensure that userspace is aware
of the possibility of removing a file only when the path was unveiled as
writable.
When using the userdel utility, we now unveil that directory path with
the unveil 'c' permission so removal of an account home directory is
done properly.
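A hypothetical illustration of the userdel change (paths and variable
names are made up):
```
// Inside, e.g., serenity_main(): unveil the account's home directory with the
// 'c' permission so removing it is permitted, then lock the unveil state.
TRY(Core::System::unveil(home_directory_path, "c"sv));
TRY(Core::System::unveil(nullptr, nullptr));
```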
The Storage subsystem, like the Audio and HID subsystems, exposes Unix
device files (for example, in the /dev directory). To ensure consistency
across the repository, we should make the Storage subsystem reside in
the Kernel/Devices directory like the two other mentioned subsystems.
This is enforced by the hardware and an exception is generated when the
stack pointer is not properly aligned. This brings us closer to booting
the aarch64 Kernel on baremetal.
This is the only kernel issue blocking us from running the test suite.
Having userspace backtraces printed to the debug console during crashes
isn't vital to the system's function, so let's just return an empty
trace and print a FIXME instead of crashing.
After examination of all overridden Inode::traverse_as_directory methods
it seems like proper locking is already existing everywhere, so there's
no need to take the big process lock anymore, as there's no access to
shared process structures anyway.
The contents of the directory inode could change if we are not taking
the lock, so we must take m_inode_lock to prevent corruption when
reading the directory contents.
This is not needed, because while we are doing this traversal, the
functions called from this function already use proper and more
"atomic" locking.
"Wherever applicable" = most places, actually :^), especially for
networking and filesystem timestamps.
This includes changes to unzip, which uses DOSPackedTime, since that is
changed for the FAT file systems.
That's what this class really is; in fact that's what the first line of
the comment says it is.
This commit does not rename the main files, since those will contain
other time-related classes in a little bit.
Add an initialize_interrupt_queue() helper to enable the IRQ instead
of doing it as part of object construction, as it can fail. This is
similar to how AHCI initializes its interrupt as well.
NVMe{Poll|Interrupt}Queue don't have a try_create() method. Add one to
keep it consistent with how we create objects. Also this commit is in
preparation to moving any initialization related code out of the
constructor.
This commit lets us differentiate whether access faults are caused by
accessing junk memory addresses given to us by userspace or if we hit a
kernel bug.
The stub implementations of the `safe_*` functions currently don't let
us jump back into them and return a value indicating failure, so we
panic if such a fault happens. Practically, this means that we still
crash, but if the access violation was caused by something else, we take
the usual kernel crash code path and print a register and memory dump,
rather than hitting the `TODO_AARCH64` in `handle_safe_access_fault`.
These are used in futexes, which are needed if we want to get further in
`run-tests`.
For now, we have no way to return a non-fatal error if an access fault
is raised while executing these, so the kernel will panic. Some would
consider this a DoS vulnerability where a malicious userspace app can
crash the kernel by passing bogus pointers to it, but I prefer to call
it progress :^)
Enabling these will fix the Unsupported Exclusive or Atomic access data
fault we get on bare metal Raspberry Pi 3. On A53/A57 chips (and newer),
atomic compare-exchange operations require the data cache to be enabled.
Referencing ARM DDI 0487J.a, update the names of previously reserved
fields, and set the reset_value() of the SCTLR_EL1 struct to reflect
the defaults we want for this register on reboot.
... key-value decomposition
The RaspberryPi firmware will give us a value for the 'video' key that
contains multiple equal signs:
```
video=HDMI-A-1:1920x1080M@30D,margin_left=48,margin_right=48,[...]
```
Instead of asserting that this only has one equal sign, let's just split
it by the first one.
Remove the hardcoded "AHCI Scattered DMA" region name as it is part
of a common API. Add a region_name parameter to the try_create API
so that this API can be used by other drivers with the correct Memory
region name.
The constructor code of ScatterGatherList had code that can return an
error. Move it to try_create for better error propagation.
This removes one TODO() and one
release_value_but_fixme_should_propagate_errors().
This removes the TODO from the try_create API to return ErrorOr. This
is also a preparation patch to move the init code that can fail from
the constructor to this try_create function.
These are actually 2 separate types of syscalls, so let's stop using
special flags for bind mounting or re-mounting and instead let
userspace call directly for these kinds of actions.
The Multiboot header stores the framebuffer's pitch in bytes, so
multiplying it by the pixel's size is not necessary. We ended up
allocating 4 times as much memory as needed, which caused us to overlap
the MMIO reserved memory area on the Raspberry Pi.
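A sketch of the corrected size computation (variable names are
illustrative):
```
// The Multiboot pitch is already in bytes, so it must not be multiplied by the
// pixel size again.
size_t framebuffer_size_in_bytes = framebuffer_pitch_in_bytes * framebuffer_height;
```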
Otherwise, the message's contents might be in the cache only, so
VideoCore will read stale/garbage data from main memory.
This fixes framebuffer setup on bare metal with the data cache enabled.
While the PL011-based UART0 is currently reserved for the kernel
console, UART1 is free to be exposed to the userspace as `/dev/ttyS0`.
This will be used as the stdout of `run-tests-and-shutdown.sh` when
testing the AArch64 kernel.
The Raspberry Pi hardware doesn't support a proper software-initiated
shutdown, so this instead uses the watchdog to reboot to a special
partition which the firmware interprets as an immediate halt on
shutdown. When running under Qemu, this causes the emulator to exit.
We now have everything in the AArch64 kernel to be able to use the full
`__panic` implementation, so we can share the code with x86-64.
I have kept `__assertion_failed` separate for now, as the x86-64 version
directly executes inline assembly, thus `Kernel/Arch/aarch64/Panic.cpp`
could not be removed.
Extend the reserve_irqs, allocate_irq, enable_interrupt and
disable_interrupt APIs to add MSI support for PCI devices.
The current changes only implement single MSI message support.
TODOs have been added for supporting Multiple MSI Messages (MME) in
the future.