Commit graph

61 commits

kleines Filmröllchen
398d271a46 Kernel: Share Processor class (and others) across architectures
About half of the Processor code is common across architectures, so
let's share it with a templated base class. Other code that can be
shared to some degree, like the FPUState and TrapFrame functions, is
also adjusted here. Functions that cannot be shared trivially (without
internal refactoring) are left alone for now.
2023-10-03 16:08:29 -06:00
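
A minimal sketch of the shared-base approach described above
(illustrative names and members, not the actual kernel code):

```cpp
#include <cstdint>

// Architecture-independent logic lives in a templated base; each
// architecture derives from it and adds its own specifics.
template<typename ProcessorT>
class ProcessorBase {
public:
    // Example of logic that is identical on x86_64 and aarch64.
    bool in_critical() const { return m_in_critical > 0; }
    void enter_critical() { ++m_in_critical; }
    void leave_critical() { --m_in_critical; }

protected:
    uint32_t m_in_critical { 0 };
};

class Processor : public ProcessorBase<Processor> {
    // Architecture-specific members and methods would go here.
};
```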
Liav A
1b04726c85 Kernel: Move all tasks-related code to the Tasks subdirectory 2023-06-04 21:32:34 +02:00
Andreas Kling
ed1253ab90 Kernel: Don't ref/unref the holder thread in Mutex
There was a whole bunch of ref counting churn coming from Mutex, which
had a RefPtr<Thread> m_holder to (mostly) point at the thread holding
the mutex.

Since we never actually dereference the m_holder value, but only use it
for identity checks against thread pointers, we can store it as an
uintptr_t and skip the ref counting entirely.

Threads can't die while holding a mutex anyway, so there's no risk of
them going missing on us.
2023-04-04 10:33:42 +02:00
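
A minimal sketch of the idea, with a stand-in Thread type (names
illustrative):

```cpp
#include <cstdint>

struct Thread; // stand-in for Kernel::Thread

class Mutex {
public:
    void set_holder(Thread const* thread)
    {
        // Store the address only; no reference count is taken.
        m_holder = reinterpret_cast<uintptr_t>(thread);
    }

    bool is_held_by(Thread const* thread) const
    {
        // Pure identity check; m_holder is never dereferenced, and a
        // thread cannot die while holding a mutex, so this stays valid.
        return m_holder == reinterpret_cast<uintptr_t>(thread);
    }

private:
    uintptr_t m_holder { 0 }; // was: RefPtr<Thread> m_holder
};
```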
Andreas Kling
c3915e4058 Kernel: Stop using *LockRefPtr for Thread
These were stored in a bunch of places. The main one that's a bit iffy
is the Mutex::m_holder one, which I'm going to simplify in a subsequent
commit.

In Plan9FS and WorkQueue, we can't make the NNRPs const due to
initialization order problems. That's probably doable with further
cleanup, but left as an exercise for our future selves.

Before starting this, I expected the thread blockers to be a problem,
but as it turns out they were super straightforward (for once!) as they
don't mutate the thread after initiating a block, so they can just use
simple const-ified NNRPs.
2023-04-04 10:33:42 +02:00
kleines Filmröllchen
a6a439243f Kernel: Turn lock ranks into template parameters
This step would ideally not have been necessary (it increases the
amount of refactoring and templating required, which in turn increases
build times), but it gives us a couple of nice properties:
- SpinlockProtected inside Singleton (a very common combination) can now
  obtain any lock rank just via the template parameter. It was not
  previously possible to do this with SingletonInstanceCreator magic.
- SpinlockProtected's lock rank is now mandatory; this is the majority
  of cases and allows us to see where we're still missing proper ranks.
- The type already informs us what lock rank a lock has, which aids code
  readability and (possibly, if gdb cooperates) lock mismatch debugging.
- The rank of a lock can no longer be dynamic, which is not something we
  wanted in the first place (or made use of). Locks randomly changing
  their rank sounds like a disaster waiting to happen.
- In some places, we might be able to statically check that locks are
  taken in the right order (with the right lock rank checking
  implementation) as rank information is fully statically known.

This refactoring further exposes the fact that Mutex has no lock rank
capabilities, which is not fixed here.
2023-01-02 18:15:27 -05:00
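
A minimal sketch of the rank-as-template-parameter shape (rank values
illustrative, not the kernel's actual definitions):

```cpp
#include <cstdint>

enum class LockRank : uint64_t {
    None = 0,
    Thread = 1 << 0,
    Process = 1 << 1,
};

template<LockRank Rank>
class Spinlock {
    // lock()/unlock() can check Rank against the thread's held ranks,
    // and the rank can never change at runtime.
};

template<typename T, LockRank Rank>
class SpinlockProtected {
    T m_value {};
    Spinlock<Rank> m_lock;
};

// The rank is part of the type and fully statically known:
SpinlockProtected<int, LockRank::Process> g_process_data;
```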
Linus Groh
d26aabff04 Everywhere: Run clang-format 2022-12-03 23:52:23 +00:00
Timon Kruiper
026f37b031 Kernel: Move Spinlock functions back to arch independent Locking folder
Now that the Spinlock code no longer depends on architecture-specific
code, we can move it back to the Locking folder. This also means that
the Spinlock implementation is now used for the aarch64 kernel.
2022-08-26 12:51:57 +02:00
Timon Kruiper
e8aff0c1c8 Kernel: Use InterruptsState in Spinlock code
This commit updates the lock function of Spinlock and
RecursiveSpinlock to return the InterruptsState of the processor
instead of the processor flags. The unlock functions would only look at
the interrupt flag of the processor flags, so we now use the
InterruptsState enum to clarify the intent and so that we can use the
same Spinlock code for the aarch64 build.

To not break the build, all the call sites are updated as well.
2022-08-26 12:51:57 +02:00
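
A sketch of the pattern under assumed per-architecture helper names:

```cpp
enum class InterruptsState {
    Enabled,
    Disabled,
};

// Per-architecture hooks (assumed for this sketch).
InterruptsState processor_interrupts_state();
void disable_interrupts();
void restore_interrupts_state(InterruptsState);

class Spinlock {
public:
    InterruptsState lock()
    {
        // Remember whether interrupts were on, then turn them off
        // while the lock is held.
        auto previous = processor_interrupts_state();
        disable_interrupts();
        // ... acquire the actual lock word ...
        return previous;
    }

    void unlock(InterruptsState previous)
    {
        // ... release the actual lock word ...
        // Restore exactly the state captured by lock(), rather than
        // re-deriving it from architecture-specific processor flags.
        restore_interrupts_state(previous);
    }
};
```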
Andreas Kling
11eee67b85 Kernel: Make self-contained locking smart pointers their own classes
Until now, our kernel has reimplemented a number of AK classes to
provide automatic internal locking:

- RefPtr
- NonnullRefPtr
- WeakPtr
- Weakable

This patch renames the Kernel classes so that they can coexist with
the original AK classes:

- RefPtr => LockRefPtr
- NonnullRefPtr => NonnullLockRefPtr
- WeakPtr => LockWeakPtr
- Weakable => LockWeakable

The goal here is to eventually get rid of the Lock* classes in favor of
using external locking.
2022-08-20 17:20:43 +02:00
kleines Filmröllchen
4314c25cf2 Kernel: Require lock rank for Spinlock construction
All users that relied on the default constructor use a None lock rank
for now. This will make it easier to remove LockRank in the future and
to actually annotate the ranks by searching for None.
2022-08-19 20:26:47 -07:00
kleines Filmröllchen
d0e614c045 Kernel: Don't check that interrupts are enabled during early boot
The interrupts-enabled check in the Kernel mutex is there so that we
don't lock mutexes within a spinlock, because mutexes re-enable
interrupts, and that will mess up the spinlock in more ways than one if
the thread moves processors. This check is guarded behind a debug flag
because it's too hard to fix all the problems at once, but we had
regressed and weren't even getting to init stage 2 with it enabled.
With this commit, we get to stage 2 again.

In early boot, no interrupts are enabled and no spinlocks are used, so
we can sort of kind of safely ignore the interrupt state. There might
be a better solution with another boot-state flag that checks whether
APs are up (because they have interrupts enabled from the start), but
that seems like overkill.
2022-07-19 12:12:13 +01:00
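
A hypothetical sketch of such a guarded check, with assumed flag and
helper names, and assert() standing in for the kernel's VERIFY():

```cpp
#include <cassert>

extern bool g_early_boot_over; // assumed boot-state flag
bool interrupts_enabled();     // assumed per-architecture query

void verify_mutex_lock_preconditions()
{
    // Skip the interrupts-enabled verification until boot is far
    // enough along that interrupts and spinlocks are actually in play.
    if (g_early_boot_over)
        assert(interrupts_enabled());
}
```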
Jelle Raaijmakers
14fc05e912 Kernel: Verify mutex big lock behavior
These two methods are big-lock-specific, so verify our Mutex's behavior.
2022-04-09 15:55:20 +02:00
Jelle Raaijmakers
bb02e9a7b9 Kernel: Unblock big lock waiters correctly
If the regular exclusive and shared lists were empty (which they
always should be for the big lock), we were not unblocking any waiters.
2022-04-09 15:55:20 +02:00
Jelle Raaijmakers
7826729ab2 Kernel: Track big lock blocked threads in separate list
When we lock a mutex, eventually `Thread::block` is invoked, which
could in turn invoke `Process::big_lock().restore_exclusive_lock()`.
This would then try to add the current thread to a different blocked
thread list than the one in use for the original mutex being locked,
and because it's an intrusive list, the thread is removed from its
original list during the `.append()`. When the original mutex
eventually unblocks, we no longer have the thread in the intrusive
blocked threads list and we panic.

Solve this by making the big lock mutex special and giving it its own
blocked thread list. Because the process big lock is temporary and is
being actively removed from e.g. syscalls, it's only a matter of time
before we can also remove the fix introduced by this commit.

Fixes issue #9401.
2022-04-06 18:27:19 +02:00
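
A sketch of the fix's shape, with illustrative member names:

```cpp
// The big lock gets its own dedicated waiter list, so blocking on it
// can never detach the thread's intrusive list node from the list of
// the mutex originally being locked.
struct BlockedThreadList { /* intrusive list of Thread, elided */ };

class Mutex {
public:
    bool is_big_lock() const;

private:
    BlockedThreadList& blocked_thread_list()
    {
        return is_big_lock() ? m_blocked_big_lock_waiters
                             : m_blocked_threads;
    }

    BlockedThreadList m_blocked_threads;
    BlockedThreadList m_blocked_big_lock_waiters;
};
```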
Andreas Kling
b28beb691e Kernel: Protect Mutex's thread lists with a spinlock 2022-04-05 14:44:50 +02:00
Idan Horowitz
086969277e Everywhere: Run clang-format 2022-04-01 21:24:45 +01:00
Andreas Kling
71792e4b3f Kernel: Make SpinlockProtected constructor forward all arguments
This allows you to instantiate SpinlockProtected<T> where T requires
constructor arguments.
2022-03-08 00:19:49 +01:00
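
A minimal sketch of the forwarding constructor (lock members elided,
std::forward standing in for AK's forward):

```cpp
#include <utility>

template<typename T>
class SpinlockProtected {
public:
    // Perfect-forward all arguments to T's constructor.
    template<typename... Args>
    SpinlockProtected(Args&&... args)
        : m_value(std::forward<Args>(args)...)
    {
    }

private:
    T m_value;
};

// T may now require constructor arguments:
struct Config {
    Config(int width, int height)
        : m_width(width)
        , m_height(height)
    {
    }
    int m_width;
    int m_height;
};

SpinlockProtected<Config> g_config { 640, 480 };
```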
Andreas Kling
b0e5406ae2 Kernel: Update terminology around Thread's "blocking mutex"
It's more accurate to say that we're blocking on a mutex, rather than
blocking on a lock. The previous terminology made sense when this code
was using something called Kernel::Lock, but since it was renamed to
Kernel::Mutex, this update brings the language back in sync.
2022-01-30 16:21:59 +01:00
Idan Horowitz
e28af4a2fc Kernel: Stop using HashMap in Mutex
This commit removes the usage of HashMap in Mutex, thereby making
Mutex allocation-free.

In order to achieve this, several simplifications were made to Mutex,
removing unused code paths and extra VERIFYs:
 * We no longer support 'upgrading' a shared lock holder to an
   exclusive holder when it is the only shared holder and it did not
   unlock the lock before relocking it as exclusive. NOTE: Unlike the
   rest of these changes, this scenario is not VERIFY-able in an
   allocation-free way; as a result, the new LOCK_SHARED_UPGRADE_DEBUG
   debug flag was added. This flag lets Mutex allocate in order to
   detect such cases when debugging a deadlock.
 * We no longer support checking if a Mutex is locked by the current
   thread when the Mutex was not locked exclusively; the shared version
   of this check was not used anywhere.
 * We no longer support force unlocking/relocking a Mutex if the Mutex
   was not locked exclusively; the shared version of these functions
   was not used anywhere.
2022-01-29 16:45:39 +01:00
Andreas Kling
8f3b3af5ea Kernel: Remove no-longer-used Lockable template 2021-12-26 21:22:59 +01:00
Hendiadyoin1
e5cf395a54 Kernel: Collapse blocking logic for exclusive Mutex' restore_lock()
Clang-tidy pointed out that the `need_to_block = true;` block was
duplicated, and if we collapse these if statements, we should do so
fully.
2021-12-15 23:34:11 -08:00
Hendiadyoin1
1ad4a190b5 Kernel: Add implied auto-specifiers in Locking
As per clang-tidy.
2021-12-15 23:34:11 -08:00
Hendiadyoin1
a7209ca0f9 Kernel: Add missing includes in Locking 2021-12-15 23:34:11 -08:00
James Mintram
e8f09279d3 Kernel: Move spinlock into Arch
Spinlocks are tied to the platform they are built for, which is why
they have been moved into the Arch folder. They are still available via
"Locking/Spinlock.h".

An AArch64 stub has been created.
2021-10-15 21:48:45 +01:00
James Mintram
545ce5b595 Kernel: Add per platform Processor.h headers
The platform-independent Processor.h file includes the shared processor
code and the platform-specific header file.

All references to the Arch/x86/Processor.h file have been replaced with
a reference to Arch/Processor.h.
2021-10-14 01:23:08 +01:00
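
A plausible shape for the dispatching header, assuming AK's ARCH()
platform macro (paths as described in the commit above):

```cpp
// Kernel/Arch/Processor.h (sketch)
#pragma once

#include <AK/Platform.h>

// Pull in the platform-specific Processor definition; callers only
// ever include this arch-independent header.
#if ARCH(X86_64)
#    include <Kernel/Arch/x86/Processor.h>
#elif ARCH(AARCH64)
#    include <Kernel/Arch/aarch64/Processor.h>
#else
#    error "Unknown architecture"
#endif
```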
Brian Gianforcaro
fbb31b4519 Kernel: Disable lock rank enforcement by default for now
There are a few violations with signal handling that I won't be able to
fix until later this week. So let's put lock rank enforcement behind a
debug option for now so other folks don't hit these crashes until rank
enforcement is more fleshed out.
2021-09-14 18:31:16 +00:00
Ali Mohammad Pur
5a0cdb15b0 AK+Everywhere: Reduce the number of template parameters of IntrusiveList
This makes the user-facing type only take the node member pointer, and
lets the compiler figure out the other needed types from that.
2021-09-10 18:05:46 +03:00
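
A miniature of the deduction trick (not AK's actual implementation): a
single member-pointer template parameter lets the compiler recover both
the element type and the node type, so the user-facing spelling shrinks
to `IntrusiveList<&Thread::m_list_node>`.

```cpp
#include <type_traits>

template<auto member>
struct ExtractTypes;

// Partial specialization over member-pointer values: T and Node are
// deduced from the single non-type template parameter.
template<typename T, typename Node, Node T::* member>
struct ExtractTypes<member> {
    using ElementType = T;
    using NodeType = Node;
};

struct ListNode { };
struct Thread {
    ListNode m_list_node;
};

static_assert(
    std::is_same_v<ExtractTypes<&Thread::m_list_node>::ElementType, Thread>);
static_assert(
    std::is_same_v<ExtractTypes<&Thread::m_list_node>::NodeType, ListNode>);
```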
Idan Horowitz
7bb3b2839e Kernel: Fix a typo in LockRank::Process's comment 2021-09-08 19:17:07 +03:00
Brian Gianforcaro
f6b1517426 Kernel/Locking: Add lock rank tracking to Spinlock/RecursiveSpinlock 2021-09-07 13:16:01 +02:00
Brian Gianforcaro
066b0590ec Kernel/Locking: Add lock rank tracking per thread to find deadlocks
This change adds a static lock hierarchy / ranking to the Kernel with
the goal of reducing / finding deadlocks when running with SMP enabled.

We have seen quite a few lock ordering deadlocks (locks taken in a
different order on two different code paths). As we properly annotate
locks in the system, these facilities will find such locking protocol
violations automatically.

The `LockRank` enum documents the various locks in the system and their
rank. The implementation guarantees that a thread holding one or more
locks of a lower rank cannot acquire an additional lock whose rank is
greater than or equal to that of any currently held lock.
2021-09-07 13:16:01 +02:00
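
An illustrative sketch of that rule (not the kernel's actual code),
treating held ranks as a bitmask and using assert() in place of
VERIFY():

```cpp
#include <cassert>
#include <cstdint>

enum class LockRank : uint64_t {
    None = 0,
    Thread = 1 << 0,
    Process = 1 << 1,
};

struct PerThreadLockState {
    uint64_t held_ranks { 0 };

    void track_acquire(LockRank rank)
    {
        auto bit = static_cast<uint64_t>(rank);
        if (bit == 0)
            return; // LockRank::None is exempt from tracking
        // Lowest currently-held rank bit; per the rule above, the new
        // rank must be strictly below every held rank.
        uint64_t lowest_held = held_ranks & (~held_ranks + 1);
        assert(held_ranks == 0 || bit < lowest_held);
        held_ranks |= bit;
    }

    void track_release(LockRank rank)
    {
        held_ranks &= ~static_cast<uint64_t>(rank);
    }
};
```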
Brian Gianforcaro
bb58a4d943 Kernel: Make all Spinlocks use u8 for storage, remove template
The default template argument is only used in one place, and it
looks like it was probably just an oversight. The rest of the Kernel
code all uses u8 as the type. So let's make that the default and remove
the unused template argument, as there doesn't seem to be a reason to
allow the size to be customizable.
2021-09-05 20:46:02 +02:00
Brian Gianforcaro
472454cded Kernel: Switch static_asserts of a type size to AK::AssertSize
This will provide better debuggability when the size comparison fails.
2021-09-05 20:08:57 +02:00
Brian Gianforcaro
9d1b27263f Kernel: Declare type aliases with "using" instead of "typedef"
This is the idiomatic way to declare type aliases in modern C++.
Flagged by Sonar Cloud as a "Code Smell", but I happen to agree
with this particular one. :^)
2021-09-05 09:48:43 +01:00
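
Both spellings declare the same alias, but `using` reads left to right
and also supports alias templates, which `typedef` cannot express:

```cpp
#include <vector>

typedef unsigned char Byte;  // old style
using Byte2 = unsigned char; // modern equivalent

// Alias templates are only expressible with "using":
template<typename T>
using Buffer = std::vector<T>;
```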
Andreas Kling
68bf6db673 Kernel: Rename Spinlock::is_owned_by_current_thread()
...to is_owned_by_current_processor(). As Tom pointed out, this is
much more accurate. :^)
2021-08-29 22:19:42 +02:00
Andreas Kling
0b4671add7 Kernel: {Mutex,Spinlock}::own_lock() => is_locked_by_current_thread()
Rename these APIs to make it clearer what they are checking.
2021-08-29 12:53:11 +02:00
Andreas Kling
6ae60137d7 Kernel: Use StringView instead of C strings in Mutex 2021-08-29 02:21:01 +02:00
Andrew Kaster
72de228695 Kernel: Verify interrupts are disabled when interacting with Mutexes
This should help prevent deadlocks where a thread blocks on a Mutex
while interrupts are disabled, making it impossible for the holder of
the Mutex to make forward progress because it cannot be scheduled in.

Hide it behind a new debug macro LOCK_IN_CRITICAL_DEBUG for now, because
Ext2FS takes a series of Mutexes from the page fault handler, which
executes with interrupts disabled.
2021-08-28 20:53:38 +02:00
Andreas Kling
a8967388d3 Kernel: Remove unused ScopedLockRelease class 2021-08-23 02:17:02 +02:00
Andreas Kling
d60635cb9d Kernel: Convert Processor::in_irq() to static current_in_irq()
This closes the race window between Processor::current() and a context
switch happening before in_irq().
2021-08-23 00:02:09 +02:00
Andreas Kling
c922a7da09 Kernel: Rename ScopedSpinlock => SpinlockLocker
This matches MutexLocker, and doesn't sound like it's a lock itself.
2021-08-22 03:34:10 +02:00
Andreas Kling
55adace359 Kernel: Rename SpinLock => Spinlock 2021-08-22 03:34:10 +02:00
Andreas Kling
7d5d26b048 Kernel: Simplify SpinLockProtected<T>
Same treatment as MutexProtected<T>: the inheritance and helper class
are removed; SpinLockProtected now holds a T and a SpinLock.
2021-08-22 03:34:09 +02:00
Andreas Kling
ed6f84c2c9 Kernel: Rename SpinLockProtectedValue<T> => SpinLockProtected<T> 2021-08-22 03:34:09 +02:00
Andreas Kling
532ffa7ddb Kernel: Simplify MutexProtected<T>
This patch removes the MutexContendedResource<T> helper class,
and MutexProtected<T> no longer inherits from T.
Instead, MutexProtected<T> simply has a T and a Mutex.

The LockedResource<T, LockMode> helper class is made a private nested
class in MutexProtected.
2021-08-22 03:34:09 +02:00
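
A sketch of the composition-over-inheritance shape, with std::mutex
standing in for Kernel::Mutex and an assumed accessor name:

```cpp
#include <mutex>

template<typename T>
class MutexProtected {
public:
    // The value is only reachable through a locked scope.
    template<typename Callback>
    decltype(auto) with_locked(Callback callback)
    {
        std::lock_guard guard { m_mutex };
        return callback(m_value);
    }

private:
    T m_value {};       // has-a, instead of inheriting from T
    std::mutex m_mutex; // has-a, instead of a helper base class
};

// Usage:
// MutexProtected<int> counter;
// counter.with_locked([](int& value) { ++value; });
```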
Andreas Kling
c2fc33becd Kernel: Rename ProtectedValue<T> => MutexProtected<T>
Let's make it obvious what we're protecting it with.
2021-08-22 03:34:09 +02:00
Andreas Kling
81d990b551 Kernel: Remove some unused classes from Kernel/Locking/ 2021-08-22 03:34:09 +02:00
Brian Gianforcaro
464dc42640 Kernel: Convert lock debug APIs to east const 2021-08-13 20:42:39 +02:00
Brian Gianforcaro
bffcb3e92a Kernel: Add lock debugging to ProtectedValue / RefCountedContended
Enable the LOCK_DEBUG functionality for these new APIs, as it looks
like we want to move the whole system to use this in the not-so-distant
future. :^)
2021-08-13 20:42:39 +02:00
Brian Gianforcaro
bea74f4b77 Kernel: Reduce LOCK_DEBUG ifdefs by utilizing Kernel::LockLocation
The LOCK_DEBUG conditional code is pretty ugly for a feature that we
only use rarely. We can remove a significant amount of this code by
utilizing a zero-sized fake type when not building in LOCK_DEBUG mode.

This lets us keep the same API, but lets the compiler optimize it away
when we don't actually care about the location the caller came from.
2021-08-13 20:42:39 +02:00
Brian Gianforcaro
6c18b4e558 Kernel: Introduce LockLocation abstraction from SourceLocation
Introduce a zero-sized type to represent a SourceLocation when we
don't want to compile with SourceLocation support.
2021-08-13 20:42:39 +02:00
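
A plausible shape for the abstraction, based on the description in the
two commits above:

```cpp
#if LOCK_DEBUG
#    include <AK/SourceLocation.h>
// With LOCK_DEBUG on, LockLocation is a real source location.
using LockLocation = AK::SourceLocation;
#else
// With LOCK_DEBUG off, LockLocation is an empty type with the same
// API, which the compiler can optimize away entirely.
struct LockLocation {
    static constexpr LockLocation current() { return {}; }

private:
    constexpr LockLocation() = default;
};
#endif

// Call sites can default a parameter to LockLocation::current() in
// both build modes without sprinkling #ifdefs everywhere.
```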