We were calibrating it to 260 instead of 250 ticks per second (being
off by one for the 1/10th second calibration time), resulting in
ticks of only ~3.6 ms instead of ~4 ms. This gets us closer to ~4 ms,
but because the APIC isn't nearly as precise as e.g. HPET, it will
only be a best effort. Then, use the higher precision reference
timer to more accurately calculate how many ticks we actually get
each second.
Also the frequency calculation was off, causing a "Frequency too slow"
error with VMware.
Fixes some problems observed in #5539
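Roughly, the calibration math looks like this (a hedged sketch with
made-up names, not the actual timer code):

```cpp
#include <cstdint>

// Hypothetical sketch: scale a tick count measured over a 1/10th-second
// calibration window up to ticks per second.
constexpr uint32_t calibration_window_ms = 100;

uint64_t ticks_per_second_from_window(uint64_t ticks_counted_in_window)
{
    // Any off-by-one in the window count gets multiplied by 10 here,
    // which is how a target of 250 ticks/second can come out as 260.
    return ticks_counted_in_window * (1000 / calibration_window_ms);
}
```

Counting 25 ticks in the window gives 250 ticks/second; counting one
extra gives 260, i.e. every tick ends up slightly too short.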
Add a special boot mode for running tests, rather than using the system
as a general purpose OS. We'll use this in SystemServer to specify
only services needed to run tests and exit.
This may seem like a no-op change, but it actually shrinks the Kernel down a bit:
.text -432
.unmap_after_init -60
.data -480
.debug_info -673
.debug_aranges 8
.debug_ranges -232
.debug_line -558
.debug_str -308
.debug_frame -40
With '= default', the compiler can do more inlining, hence the savings.
I intentionally omitted some opportunities for '= default', because they
would increase the Kernel size.
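To illustrate the inlining angle (hypothetical classes, not taken from
the Kernel): a defaulted destructor is trivial and visible in the
header, so the compiler can often fold it away entirely, while a
user-provided out-of-line destructor costs a real call at every
destruction site.

```cpp
// Hypothetical classes, purely for illustration.
class DefaultedThing {
public:
    ~DefaultedThing() = default; // trivial, visible in the header, inlinable
    int value { 0 };
};

class OutOfLineThing {
public:
    ~OutOfLineThing(); // user-provided, defined in a .cpp file; every
                       // destruction site pays for a call across TUs
    int value { 0 };
};
```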
Because registering and unregistering interrupt handlers triggers
calls to virtual functions, we can't do this in the constructor
and destructor.
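A hedged sketch of why (this is generic C++ behavior, with hypothetical
names): virtual calls made during construction or destruction dispatch
to the class currently being constructed/destroyed, not to the most
derived override, so registration has to happen once the object is
fully built.

```cpp
// Hypothetical handler types illustrating the constraint.
struct HandlerBase {
    HandlerBase()
    {
        // If we registered here and the registration path called
        // handler_name(), it would see "base" even while constructing
        // a DerivedHandler -- the derived part doesn't exist yet.
    }
    virtual ~HandlerBase() = default;
    virtual char const* handler_name() const { return "base"; }

    // So registration/unregistration happens explicitly, once the
    // object is fully constructed (and before it's destroyed).
    void enable() { /* register_interrupt_handler(*this); */ }
    void disable() { /* unregister_interrupt_handler(*this); */ }
};

struct DerivedHandler : HandlerBase {
    char const* handler_name() const override { return "derived"; }
};
```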
Fixes #5539
We don't need to flush the on-disk inode struct multiple times while
writing out its block list. Just mark the in-memory Inode as having
dirty metadata and the SyncTask will flush it eventually.
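Something like this, as a simplified before/after (assumed helper
names, loosely modeled on the Inode API):

```cpp
#include <vector>

// Assumed stand-ins for the real Ext2FSInode machinery.
struct BlockIndex { unsigned value { 0 }; };
void write_block_pointer(BlockIndex); // update the in-memory block list
void flush_metadata();                // synchronously write the on-disk inode
void set_metadata_dirty(bool);        // defer the write to SyncTask

// Before: one synchronous metadata flush per block pointer written.
void write_block_list_eagerly(std::vector<BlockIndex> const& blocks)
{
    for (auto block : blocks) {
        write_block_pointer(block);
        flush_metadata();
    }
}

// After: update everything in memory, mark the inode dirty once,
// and let SyncTask flush it eventually.
void write_block_list_lazily(std::vector<BlockIndex> const& blocks)
{
    for (auto block : blocks)
        write_block_pointer(block);
    set_metadata_dirty(true);
}
```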
Since the inode is the logical owner of its block list, let's move the
code that computes the block list there, and also stop hogging the FS
lock while we compute the block list, as there is no need for it.
There are two locks in the Ext2FS implementation:
* The FS lock (Ext2FS::m_lock)
This governs access to the superblock, block group descriptors,
and the block & inode bitmap blocks. It's held while allocating
or freeing blocks/inodes.
* The inode lock (Ext2FSInode::m_lock)
This governs access to the inode metadata, including the block
list, and to the content data as well. It's held while doing
basically anything with the inode.
Once an on-disk block/inode is allocated, it logically belongs
to the in-memory Inode object, so there's no need for the FS lock
to be taken while manipulating them; the inode lock is all you need.
This dramatically reduces the impact of disk I/O on path resolution
and various other things that look at individual inodes.
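Sketched out (with std::mutex standing in for the Kernel's own lock
type, and only the members relevant here):

```cpp
#include <mutex>

class Ext2FS {
    // Guards the superblock, group descriptors and the block/inode
    // bitmap blocks; held while allocating or freeing blocks/inodes.
    std::mutex m_lock;
};

class Ext2FSInode {
    // Guards this inode's metadata (including the block list) and its
    // content. Once blocks belong to the inode, this is the only lock
    // needed to manipulate them, so slow disk I/O on one inode no
    // longer stalls unrelated path resolution.
    std::mutex m_lock;
};
```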
Since these filesystems operate on an underlying file descriptor
and rely on its offset for correctness, let's use the FS lock to
serialize these operations.
This also means that FS subclasses can rely on block-level read/write
operations being atomic.
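A hedged sketch of the idea (hypothetical helper names): the underlying
file has a single seek offset, so the seek+read pair has to be one
atomic unit under the FS lock.

```cpp
#include <cstddef>
#include <cstdint>
#include <mutex>

// Hypothetical stand-in for the underlying file description API.
struct BackingFile {
    void seek(uint64_t offset);
    size_t read(uint8_t* buffer, size_t count);
};

class FileBackedFS {
public:
    size_t read_block(uint64_t block_index, uint8_t* buffer, size_t block_size)
    {
        // Another thread could otherwise move the shared offset between
        // the seek and the read, so the pair is serialized by the FS lock.
        std::lock_guard locker(m_lock);
        m_file.seek(block_index * block_size);
        return m_file.read(buffer, block_size);
    }

private:
    std::mutex m_lock; // the FS lock
    BackingFile m_file;
};
```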
This is basically just for consistency; it's quite strange to see
multiple AK container types next to each other, some with and some
without the namespace prefix - we're 'using AK::Foo;' a lot and should
leverage that. :^)
This reverts commit 1e737a5c50.
The cached block list does not include meta-blocks, so we'd end up
leaking those. There's definitely a nice way to avoid work here, but it
turns out it wasn't quite this trivial. Reverting for now.
Currently, when a process which has a tracee exits, nothing happens,
leaving the tracee unable to be attached again. This change calls the
stop_tracing function on any process that is traced by the exiting
process and sends it the SIGSTOP signal, making the traced process
wait for a SIGCONT (just as Linux does).
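A rough sketch of the exit path (hypothetical names, not the actual
Process API):

```cpp
#include <csignal>
#include <functional>

// Hypothetical stand-ins; the real Kernel types differ.
struct Process {
    Process* tracer() const;
    void stop_tracing();
    void send_signal(int signal);
};
void for_each_process(std::function<void(Process&)> const& callback);

// When a tracer exits, detach each of its tracees and park it with
// SIGSTOP until someone sends SIGCONT, so it can be attached again.
void detach_all_tracees(Process& exiting_tracer)
{
    for_each_process([&](Process& process) {
        if (process.tracer() == &exiting_tracer) {
            process.stop_tracing();
            process.send_signal(SIGSTOP);
        }
    });
}
```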
This patch combines the scan for an available inode with the updating
of the bit in the inode bitmap into a single operation.
We also exit the scan immediately when we find an inode, instead of
continuing until we've scanned all the eligible groups(!)
Finally, we stop holding the filesystem lock throughout the entire
operation, and instead only take it while actually necessary
(during inode allocation, flush, and inode cache update.)
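The combined scan-and-set looks roughly like this (hypothetical bitmap
type, not the real Ext2FS structures):

```cpp
#include <cstddef>
#include <optional>
#include <vector>

// Hypothetical per-group inode bitmap.
struct InodeBitmap {
    std::vector<bool> bits;

    // Find the first free inode, mark it used, and return its index:
    // scan and bit-set in one pass, bailing out as soon as we succeed.
    std::optional<size_t> allocate_first_free()
    {
        for (size_t i = 0; i < bits.size(); ++i) {
            if (!bits[i]) {
                bits[i] = true;
                return i;
            }
        }
        return {}; // group is full; the caller tries the next eligible group
    }
};
```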
Improve a bunch of situations where we'd previously panic the kernel
on failure. We now propagate whatever error we had instead. Usually
that'll be EIO.
Both inode and block allocation operate on bitmap blocks and update
counters in the superblock and group descriptor.
Since we're here, also add some error propagation around this code.
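The propagation pattern, sketched with a made-up result type rather
than the Kernel's own:

```cpp
#include <cerrno>

// Made-up result type standing in for the Kernel's error/value types.
struct Result {
    int error { 0 }; // 0 on success, otherwise an errno value such as EIO
    bool is_error() const { return error != 0; }
};

Result read_bitmap_block(unsigned group_index);

Result allocate_in_group(unsigned group_index)
{
    auto result = read_bitmap_block(group_index);
    if (result.is_error())
        return result; // propagate (usually EIO) instead of panicking
    // ... scan the bitmap, then update the free counters in the
    // superblock and the group descriptor ...
    return {};
}
```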
This was another vestige from a long time ago, when exiting a thread
would mutate global data structures that were only protected by the
interrupt flag.
This was necessary in the past when crash handling would modify
various global things, but all that stuff is long gone so we can
simplify crashes by leaving the interrupt flag alone.
Make more of the kernel compile in 64-bit mode, and make some things
pointer-size-agnostic (by using FlatPtr.)
There's a lot of work to do here before the kernel will even compile.
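For context, FlatPtr is essentially a pointer-sized unsigned integer;
a minimal sketch of how it keeps address math working on both word
sizes (the real definition lives in AK):

```cpp
#include <cstdint>

// Sketch only: a pointer-sized unsigned integer, so the same address
// arithmetic compiles on both 32-bit and 64-bit builds.
using FlatPtr = uintptr_t;

bool is_page_aligned(FlatPtr address)
{
    return (address & 0xfff) == 0;
}
```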
Let's help our future selves find this problem sooner next time it
happens. Hopefully we'll come up with a nicer loader before then,
but who knows. :^)
Instead of writing to the userspace utsname struct one field at a time,
build up a utsname on the kernel stack and copy it out to userspace
once it's finished. This is both simpler and gets validity checking
built-in for free.
Found by KUBSAN! :^)
Fixes#5499.
For some reason I don't yet understand, building the kernel with -O2
produces a way-too-large kernel on some people's systems.
Since there are some really nice performance benefits from -O2 in
userspace, let's do a compromise and build Userland with -O2 but
put Kernel back into the -Os box for now.
Due to the non-standard way the boot assembler code is linked into
the kernel (not an actual dependency, but linked via the linker.ld script),
neither make nor ninja were re-linking the kernel when boot.S was
changed. This should theoretically work since we use the cmake
`add_dependencies(..)` directive to express a manual dependency
on boot from Kernel, but something is obviously broken in cmake.
We can work around that with a hack, which forces a dependency on
a file we know will always exist in the kernel (init.cpp). So if
boot.S is rebuilt, then init.cpp is forced to be rebuilt, and then
we re-link the kernel. init.cpp is also relatively small, so it
compiles fast.
We were only 448 KiB away from filling up the old slot size we reserve
for the kernel above the 3 GiB mark. This expands the slot to 16 MiB,
which allows us to continue booting the kernel until somebody takes
the time to improve our loader.
(...and ASSERT_NOT_REACHED => VERIFY_NOT_REACHED)
Since all of these checks are done in release builds as well,
let's rename them to VERIFY to prevent confusion, as everyone is
used to assertions being compiled out in release.
We can introduce a new ASSERT macro that is specifically for debug
checks, but I'm doing this wholesale conversion first since we've
accumulated thousands of these already, and it's not immediately
obvious which ones are suitable for ASSERT.
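Sketching the intended distinction (these are not the real macro
bodies, just the naming idea):

```cpp
#include <cstdio>
#include <cstdlib>

// VERIFY is checked in every build...
#define VERIFY(expr)                                       \
    do {                                                   \
        if (!(expr)) {                                     \
            fprintf(stderr, "VERIFY failed: %s\n", #expr); \
            abort();                                       \
        }                                                  \
    } while (0)

// ...while a future ASSERT could compile out of release builds.
#ifdef DEBUG_CHECKS
#    define ASSERT(expr) VERIFY(expr)
#else
#    define ASSERT(expr)
#endif
```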
With the kernel command line issue fixed, we can now enable these
KUBSAN options without getting triple faults on startup:
* alignment
* null
* pointer-overflow
When building the kernel with -O2, we somehow ended up with the kernel
command line outside of the lower 8MB of physical memory. Since we don't
map that area in our initial page table setup, we would triple fault
when trying to parse the command line.
This patch sidesteps the issue by copying (the first 4KB of) the kernel
command line to a buffer in a known safe location at boot.
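A rough sketch of the workaround (hypothetical names; the real copy
happens in early boot, before anything parses the command line):

```cpp
#include <cstddef>

// The destination is a buffer at a known safe location (sketch).
static char s_cmd_line_buffer[4096];

void copy_kernel_command_line(char const* multiboot_cmdline)
{
    size_t i = 0;
    // Copy at most 4KB (minus the null terminator), wherever the
    // bootloader happened to place the original string.
    for (; i < sizeof(s_cmd_line_buffer) - 1 && multiboot_cmdline[i]; ++i)
        s_cmd_line_buffer[i] = multiboot_cmdline[i];
    s_cmd_line_buffer[i] = '\0';
}
```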
Clangd (CLion) was choking on some of the -fsanitize options, and since
we're not building the kernel with Clang anyway, let's just disable
the options for non-GCC compilers for now.