Commit graph

6663 commits

Author SHA1 Message Date
Idan Horowitz
3dc8bbbc8b Kernel: Remove the infallible make_ref_counted<T> factory function
This function had no users, nor should it ever be used, as all
allocation failures in the Kernel should be explicitly checked.
2022-02-03 23:33:20 +01:00
Idan Horowitz
a65bbbdb71 Kernel: Convert try_make_ref_counted to use ErrorOr
This allows more ergonomic memory allocation failure related error
checking using the TRY macro.
2022-02-03 23:33:20 +01:00
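
To illustrate the pattern described above: a minimal sketch of an
ErrorOr-based fallible factory and a TRY call site. The exact AK
declarations differ, and `Thing` is a hypothetical type.

```cpp
#include <AK/Error.h>
#include <AK/NonnullRefPtr.h>
#include <AK/RefCounted.h>
#include <AK/Try.h>

// Hypothetical ref-counted type, for illustration only.
class Thing : public RefCounted<Thing> { };

// Sketch of a fallible ref-counted factory (not the exact AK code).
template<typename T, typename... Args>
ErrorOr<NonnullRefPtr<T>> try_make_ref_counted(Args&&... args)
{
    auto* object = new (nothrow) T(forward<Args>(args)...);
    if (!object)
        return ENOMEM;
    return adopt_ref(*object);
}

// Call sites can now propagate allocation failure ergonomically:
ErrorOr<NonnullRefPtr<Thing>> try_create_thing()
{
    auto thing = TRY(try_make_ref_counted<Thing>());
    return thing;
}
```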
Idan Horowitz
8289727fac Kernel: Stop using the make<T> factory method in the Kernel
As make<T> is infallible, it really should not be used anywhere in the
Kernel. Instead, replace it with fallible `new (nothrow)` calls that
will eventually be error-propagated.
2022-02-03 23:33:20 +01:00
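
For comparison, a before/after sketch of the replacement; `Thing` is a
hypothetical type, and the adopt helper usage is illustrative:

```cpp
struct Thing { };

ErrorOr<NonnullOwnPtr<Thing>> try_create_thing()
{
    // Before (infallible; panics on allocation failure):
    //     auto thing = make<Thing>();

    // After (fallible; failure is visible and can be propagated):
    auto thing = adopt_own_if_nonnull(new (nothrow) Thing());
    if (!thing)
        return ENOMEM;
    return thing.release_nonnull();
}
```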
Idan Horowitz
e5e7cb822a Kernel: Ignore allocation failures when trying to retransmit packets
We ignore allocation failures above the first 16 guaranteed socket
slots, as we will just retransmit their packets the next time around.
2022-02-03 23:33:20 +01:00
Idan Horowitz
2dc91865a4 Kernel: Stop allocating VirtIO configuration structs on the heap
These are trivially-copyable 12-byte structs, so there's no point in
allocating them on the heap.
2022-02-03 23:33:20 +01:00
Andreas Kling
34f6c88ffd Revert "Kernel: Protect InodeWatcher internals with spinlock instead of mutex"
This reverts commit 0bebf013e3.

This caused a deadlock when handling a crashed process, so let's revert
it until we can figure out what went wrong.
2022-02-03 18:25:55 +01:00
Andreas Kling
e7dc9f71b8 Kernel: Protect Inode flock list with spinlock instead of mutex 2022-02-03 17:28:45 +01:00
Andreas Kling
a81aebfd6e Kernel: Remove unnecessary mutex for ubsan-is-deadly ProcFS node 2022-02-03 16:11:26 +01:00
Andreas Kling
e86ab57078 AK+Kernel+LibSanitizer: Store "ubsan-is-deadly" flag as Atomic<bool> 2022-02-03 16:11:26 +01:00
Andreas Kling
0bebf013e3 Kernel: Protect InodeWatcher internals with spinlock instead of mutex 2022-02-03 16:11:26 +01:00
Andreas Kling
64254b5df8 Kernel: Protect PCI access with spinlock instead of mutex 2022-02-03 16:11:26 +01:00
Andreas Kling
200589ba27 Kernel: Turn VirtIOGPU operation lock from mutex into spinlock 2022-02-03 16:11:26 +01:00
Andreas Kling
ca42621be1 Kernel: Protect FramebufferDevice with spinlock instead of mutex 2022-02-03 16:11:26 +01:00
Andreas Kling
ddde9e7ee5 Kernel: Protect global device map with spinlock instead of mutex 2022-02-03 16:11:26 +01:00
Andreas Kling
e0d9472ced Kernel: Protect Inode's list of watchers with spinlock instead of mutex 2022-02-03 16:11:26 +01:00
Andreas Kling
210689281f Kernel: Protect mounted filesystem list with spinlock instead of mutex 2022-02-03 16:11:26 +01:00
Andreas Kling
3becff9eae Kernel: Protect network adapter list with spinlock instead of mutex 2022-02-03 16:11:26 +01:00
Andreas Kling
0899153170 Kernel: Protect PTYMultiplexer freelist with spinlock instead of mutex 2022-02-03 16:11:26 +01:00
Andreas Kling
6fbb924bbf Kernel: Protect ARP table with spinlock instead of mutex 2022-02-03 16:11:26 +01:00
Andreas Kling
248832f438 Kernel: Convert OpenFileDescriptor from mutex to spinlock
A mutex is useful when we need to be able to block the current thread
until it's available. This is overkill for OpenFileDescriptor.

First off, this patch wraps the main state member variables inside a
SpinlockProtected<State> to enforce synchronized access. This also
avoids "free locking" where figuring out which variables are guarded
by which lock is left as an unamusing exercise for the reader.

Then we remove mutex locking from the functions that simply call through
to the underlying File or Inode, since those fields never change anyway,
and the target objects perform their own synchronization.
2022-02-03 16:11:26 +01:00
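
A minimal sketch of the SpinlockProtected pattern, assuming AK's
`with()` accessor; the State members shown are hypothetical, not the
real OpenFileDescriptor state:

```cpp
class OpenFileDescriptor {
public:
    off_t offset()
    {
        // Every access goes through with(), so no caller can touch
        // the state without holding the spinlock.
        return m_state.with([](auto& state) { return state.offset; });
    }

    void seek_to(off_t offset)
    {
        m_state.with([&](auto& state) { state.offset = offset; });
    }

private:
    struct State {
        off_t offset { 0 };
        bool readable { false };
        bool writable { false };
    };
    SpinlockProtected<State> m_state;
};
```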
Andreas Kling
35e24bc774 Kernel: Move Spinlock lock/unlock functions out of line
I don't see why these have to be inlined everywhere in the kernel.
2022-02-03 16:11:26 +01:00
Pankaj Raghav
d234e6b801 Kernel: Add polling support to NVMe
Add polling support to NVMe so that it does not use interrupts to
complete an IO but instead actively polls for completion. This is
probably not very efficient in terms of CPU usage, but not relying on
interrupts is beneficial at the moment, as there is no MSI(X) support,
and polling can reduce the latency of an IO on a very fast NVMe
device.

The NVMeQueue class has been made the base class for NVMeInterruptQueue
and NVMePollQueue. The factory function `NVMeQueue::try_create` will
return the appropriate queue to the controller based on the polling
boot parameter.

The polling mode can be enabled by adding an extra boot parameter:
`nvme_poll`.
2022-02-02 18:26:59 +01:00
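
A sketch of the factory dispatch described above; the parameter list
is an assumption, not the real signature:

```cpp
ErrorOr<NonnullRefPtr<NVMeQueue>> NVMeQueue::try_create(u16 qid, bool use_polling)
{
    // Pick the queue flavor based on the `nvme_poll` boot parameter.
    if (use_polling)
        return TRY(try_make_ref_counted<NVMePollQueue>(qid));
    return TRY(try_make_ref_counted<NVMeInterruptQueue>(qid));
}
```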
Pankaj Raghav
aa832ee251 Kernel: Add conditional call to disable_irq in IRQHandler constructor
There is no use in calling the disable_irq function in the IRQHandler
constructor if the IRQ was not registered before. So add a condition:
only call disable_irq if the IRQ was registered before.
2022-02-02 18:26:59 +01:00
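
A sketch of the guarded call; the helper name is an assumption:

```cpp
IRQHandler::IRQHandler(u8 irq)
    : GenericInterruptHandler(irq)
{
    // Only disable the line if this IRQ was actually registered
    // before; otherwise there is nothing to disable.
    if (is_registered())
        disable_irq();
}
```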
Pankaj Raghav
e5a6d12ff8 Kernel: Add nvme_poll command line parameters
As we don't currently support MSI(X) interrupts, booting on some newer
hardware could be an issue. NVMe devices support a polling mode
where the driver actively polls for completion instead of waiting for
an interrupt.
2022-02-02 18:26:59 +01:00
Andreas Kling
d85f062990 Revert "Kernel: Only update page tables for faulting region"
This reverts commit 1c5ffaae41.

This broke shared memory as used by OutOfProcessWebView. Let's do
a revert until we can figure out what went wrong.
2022-02-02 11:02:54 +01:00
Andreas Kling
1c5ffaae41 Kernel: Only update page tables for faulting region
When a page fault led to the mapping of a new physical page, we were
updating the page tables for *every* region that shared the same
underlying VMObject.

Let's just not do that, avoiding a bunch of unnecessary page table
updates and TLB invalidations.
2022-02-02 02:16:49 +01:00
Liav A
88c5992e0b Kernel/Interrupts: Initialize two spurious handlers when PIC is disabled
Even if the PIC was disabled, it can still generate noise (spurious
IRQs), so we need to register two handlers for such cases.

Also, we declare interrupt service routine offsets 0x20 to 0x2f as
reserved, so when the PIC is disabled, we can handle spurious IRQs
from the PIC at separate handlers.
2022-01-30 21:07:20 +02:00
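
A sketch of the idea; the registration call is illustrative, but
IRQ 7 and IRQ 15 are the classic spurious vectors of the primary and
secondary PIC:

```cpp
// Even with the PIC disabled, the two spurious lines can still fire,
// so give each one a dedicated handler instead of leaving the
// reserved range unhandled.
SpuriousInterruptHandler::initialize(7);  // primary PIC spurious IRQ
SpuriousInterruptHandler::initialize(15); // secondary PIC spurious IRQ
```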
Liav A
7028a64997 Kernel: Use a constexpr declaration for the disabled PIC IRQ base 2022-01-30 21:07:20 +02:00
Andreas Kling
cc9ed31c37 Kernel: Don't mark current thread as inactive after successful exec()
At the end of sys$execve(), we perform a context switch from the old
executable into the new executable.

However, the Kernel::Thread object we are switching to is the *same*
thread as the one we are switching from. So we must not assume the
from_thread and to_thread are different threads.

We had a bug caused by this misconception, where the "from" thread would
always get marked as "inactive" when switching to a new thread.
This meant that threads would always get switched into "inactive" mode
on first context switch into them.

If a thread then tried blocking on a kernel mutex within its first time
slice, we'd end up in Thread::block(Mutex&) with an inactive thread.

Once a thread is inactive, the scheduler believes it's okay to
reactivate the thread (by scheduling it.) If a thread got re-scheduled
prematurely while setting up a mutex block, things would fall apart and
we'd crash in Thread::block() due to the thread state being "Runnable"
instead of the expected "Running".
2022-01-30 16:21:59 +01:00
Andreas Kling
a44316fa8b Kernel: Release page directory and MM locks sooner in space finalization
We don't need to hold these locks when tearing down the region tree.
Release them as soon as unmapping is finished.
2022-01-30 16:21:59 +01:00
Andreas Kling
fcd3844da6 Kernel: Take scheduler lock before block lock in unblock_from_mutex()
This matches the acquisition order used elsewhere.
2022-01-30 16:21:59 +01:00
Andreas Kling
cffcfca80a Kernel: Remove unused bool return values from scheduler functions
Turns out nobody actually cared whether the scheduler switched to a new
thread or not (which is what we were returning.)
2022-01-30 16:21:59 +01:00
Andreas Kling
a6b5065d94 Kernel: Simplify x86 IOPL sanity check
Move this architecture-specific sanity check (IOPL must be 0) out of
Scheduler and into the x86 enter_thread_context(). Also do this for
every thread and not just userspace ones.
2022-01-30 16:21:59 +01:00
Andreas Kling
684d5eed19 Kernel: VERIFY that Scheduler::context_switch() always has a from-thread
We always context_switch() from somewhere, so there's no need to handle
the case where from_thread is null.
2022-01-30 16:21:59 +01:00
Andreas Kling
09f0843716 Kernel: Enforce that Thread::unblock_from_mutex() doesn't happen in IRQ
Mutexes are not usable from IRQ handlers, so unblock_from_mutex()
can simply VERIFY() that the current processor is not in an IRQ.
2022-01-30 16:21:59 +01:00
Andreas Kling
b0e5406ae2 Kernel: Update terminology around Thread's "blocking mutex"
It's more accurate to say that we're blocking on a mutex, rather than
blocking on a lock. The previous terminology made sense when this code
was using something called Kernel::Lock, but since it was renamed to
Kernel::Mutex, this update brings the language back in sync.
2022-01-30 16:21:59 +01:00
Andreas Kling
dca5fe69eb Kernel: Make Thread::State an enum class and use it consistently
It was annoyingly hard to spot these when we were using them with
different amounts of qualification everywhere.

This patch uses Thread::State::Foo everywhere instead of Thread::Foo
or just Foo.
2022-01-30 16:21:59 +01:00
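
A small before/after illustration; Runnable is a real thread state,
while the call site is illustrative:

```cpp
// Before: several spellings compiled, depending on context.
//     thread.set_state(Thread::Runnable);

// After: one consistent, fully-qualified spelling everywhere.
thread.set_state(Thread::State::Runnable);
```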
Andreas Kling
7d89409618 Kernel: Don't dispatch signals in Thread::block_impl()
If the blocker is interrupted by a signal, that signal will be delivered
to the process when returning to userspace (at the syscall exit point.)
We don't have to perform the dispatch manually in Thread::block_impl().
2022-01-30 16:21:59 +01:00
Andreas Kling
677da0288c Kernel: Don't dispatch signals in Processor::enter_current()
Signal dispatch is already taken care of elsewhere, so there appears to
be no need for the hack in enter_current().

This also allows us to remove the Thread::m_in_block flag, simplifying
thread blocking logic somewhat.

Verified with the original repro for #4336 which this was meant to fix.
2022-01-30 16:21:59 +01:00
Andreas Kling
3845c90e08 Kernel: Remove unnecessary includes from Thread.h
...and deal with the fallout by adding missing includes everywhere.
2022-01-30 16:21:59 +01:00
Andreas Kling
f469fb47b8 Kernel: Move Thread::block<BlockerType>() out of the Thread.h header
This function is large and unwieldy and forces Thread.h to #include
a bunch of things. The only reason it was in the header is because we
need to instantiate a blocker based on the templated BlockerType.

We actually keep block<BlockerType>() in the header, but move the
bulk of the function body out of line into Thread::block_impl().

To preserve destructor ordering, we add Blocker::finalize() which is
called where we'd previously destroy the Blocker.
2022-01-30 16:21:59 +01:00
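
A minimal sketch of the header/implementation split described above;
the signatures are approximations of the real ones:

```cpp
// Thread.h -- the thin template stays in the header:
template<typename BlockerType, typename... Args>
Thread::BlockResult Thread::block(BlockTimeout const& timeout, Args&&... args)
{
    BlockerType blocker(forward<Args>(args)...);
    return block_impl(timeout, blocker);
}

// Thread.cpp -- the bulky, non-template body moves out of line, so
// the header no longer needs the includes it used to drag in:
Thread::BlockResult Thread::block_impl(BlockTimeout const& timeout, Blocker& blocker)
{
    // ... the actual blocking logic ...
    return BlockResult::WokeNormally; // placeholder for the sketch
}
```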
Jelle Raaijmakers
98d666eab3 Kernel: Support PS/2 right super key
We currently support the left super key. This poses an issue on
keyboards that only have a right super key, such as my Steelseries 6G.

The implementation mirrors the left/right shift key logic and
effectively considers the right super key identical to the left one.
2022-01-30 15:08:49 +01:00
Idan Horowitz
2e5a9b4fab Kernel: Use HashCompatible HashMap lookups instead of specifying a hash 2022-01-29 23:01:23 +02:00
Idan Horowitz
c68609b27f Kernel: Make {Nonnull,}OwnPtr<KString> hash compatible with StringView
This will allow us to use KString as HashTable/HashMap keys more easily.
2022-01-29 23:01:23 +02:00
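
AK's HashCompatible machinery serves the same purpose as C++20's
"transparent" heterogeneous lookup in the standard library; since the
AK traits are not quoted here, the standard analogue below illustrates
the idea (this is not AK's API):

```cpp
#include <functional>
#include <string>
#include <string_view>
#include <unordered_map>

// A transparent hasher: lets find() accept any type convertible to
// string_view without materializing a temporary std::string key.
struct StringHash {
    using is_transparent = void;
    size_t operator()(std::string_view sv) const
    {
        return std::hash<std::string_view> {}(sv);
    }
};

std::unordered_map<std::string, int, StringHash, std::equal_to<>> map;

int lookup(std::string_view key)
{
    auto it = map.find(key); // no std::string allocated for the lookup
    return it == map.end() ? 0 : it->second;
}
```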
Lenny Maiorani
b0a54518d8 Everywhere: Remove redundant inline keyword
`constexpr` implies `inline` so when both are used it is redundant.
2022-01-29 21:45:17 +02:00
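
For example (the implication holds for functions and, since C++17, for
static member variables):

```cpp
// Redundant: constexpr functions are implicitly inline.
inline constexpr int square_verbose(int x) { return x * x; }

// Equivalent, and shorter:
constexpr int square(int x) { return x * x; }
```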
Idan Horowitz
e28af4a2fc Kernel: Stop using HashMap in Mutex
This commit removes the usage of HashMap in Mutex, thereby making
Mutex allocation-free.

In order to achieve this several simplifications were made to Mutex,
removing unused code-paths and extra VERIFYs:
 * We no longer support 'upgrading' a shared lock holder to an
   exclusive holder when it is the only shared holder and it did not
   unlock the lock before relocking it as exclusive. NOTE: Unlike the
   rest of these changes, this scenario is not VERIFY-able in an
   allocation-free way; as a result, the new LOCK_SHARED_UPGRADE_DEBUG
   debug flag was added. This flag lets Mutex allocate in order to
   detect such cases when debugging a deadlock.
 * We no longer support checking if a Mutex is locked by the current
   thread when the Mutex was not locked exclusively; the shared
   version of this check was not used anywhere.
 * We no longer support force unlocking/relocking a Mutex if the Mutex
   was not locked exclusively; the shared version of these functions
   was not used anywhere.
2022-01-29 16:45:39 +01:00
Pankaj Raghav
60aa4152e9 Kernel: Optimize StorageDevice read and write function
Use the shift operator with the log2 of the block size instead of
division when calculating the index and length.
2022-01-29 17:41:06 +02:00
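
A small worked example of the equivalence for power-of-two block sizes
(names are illustrative, not the exact kernel code):

```cpp
u64 block_index(u64 offset, u64 block_size, u64 block_size_log)
{
    u64 by_division = offset / block_size;   // e.g. 4096 / 512 == 8
    u64 by_shift = offset >> block_size_log; // 4096 >> 9 == 8
    // Holds whenever block_size == 1 << block_size_log.
    VERIFY(by_division == by_shift);
    return by_shift;
}
```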
Pankaj Raghav
0f010cee02 Kernel: Add block_size_log helper to BlockDevice
It is useful to have the log2 value of the block size when calculating
the index for an IO.
2022-01-29 17:41:06 +02:00
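
One way to derive the value at initialization, shown in standard C++
since the kernel helper is not quoted here: for a power-of-two block
size, log2 equals the number of trailing zero bits.

```cpp
#include <bit>
#include <cstdint>

unsigned block_size_log(uint32_t block_size)
{
    // 512 == 0b10'0000'0000: nine trailing zeros, so log2(512) == 9.
    return std::countr_zero(block_size);
}
```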
Pankaj Raghav
3b27e28e67 Kernel: Cache blocks_per_page in StorageDevice class
Instead of calculating blocks_per_page in every IO, cache it to save
CPU cycles as that value will not change after initialization.
2022-01-29 17:41:06 +02:00
Pankaj Raghav
cf44d71edf Kernel: Remove the assumption of 512 block size in read/write_block
Devices such as NVMe can have blocks bigger than 512 bytes. Use the
m_block_size variable in the read/write_block functions instead of the
hardcoded 512-byte block size.
2022-01-29 17:41:06 +02:00