Kernel: Validate we don't hold s_mm_lock during context switch

Since `s_mm_lock` is a RecursiveSpinlock, if a kernel thread gets
preempted while accidentally holding the lock during switch_context,
another thread running on the same processor could end up manipulating
the state of the memory manager even though it should not be able to:
because ownership is tracked per processor rather than per thread,
re-acquiring the lock just bumps the recursion count and keeps going.
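
For context, here is a minimal sketch of why that silent re-acquisition
happens. This is an illustration only, not Serenity's actual
RecursiveSpinlock; current_processor_id() is a hypothetical stand-in
for the real per-processor identity. The key point is that ownership is
recorded per processor, so a different thread scheduled on the same
processor "acquires" the lock for free:

    #include <atomic>
    #include <cstdint>

    // Hypothetical stand-in for "which processor am I running on?".
    static uintptr_t current_processor_id() { return 1; }

    class RecursiveSpinlockSketch {
    public:
        void lock()
        {
            uintptr_t me = current_processor_id();
            if (m_owner.load(std::memory_order_acquire) == me) {
                // Ownership is per *processor*, not per thread. If the
                // previous thread on this CPU was preempted while holding
                // the lock, the next thread lands here and silently
                // bumps the recursion count.
                ++m_recursion_count;
                return;
            }
            uintptr_t expected = 0;
            while (!m_owner.compare_exchange_weak(expected, me, std::memory_order_acquire))
                expected = 0; // spin until the lock is released

            m_recursion_count = 1;
        }

        void unlock()
        {
            if (--m_recursion_count == 0)
                m_owner.store(0, std::memory_order_release);
        }

        // True only when the executing processor is the recorded owner.
        bool own_lock() const
        {
            return m_owner.load(std::memory_order_relaxed) == current_processor_id();
        }

    private:
        std::atomic<uintptr_t> m_owner { 0 };
        uint32_t m_recursion_count { 0 };
    };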

This appears to be the root cause of weird bugs like #7359, where page
protection magically appears to be wrong during execution.

To avoid these cases, let's guard against this specific unfortunate
situation (see the diff below) and make sure it can never go unnoticed
again.

The assert was Tom's idea to help debug this, so I am going to tag him
as co-author of this commit.

Co-Authored-By: Tom <tomut@yahoo.com>
Author: Brian Gianforcaro 2021-05-25 01:02:19 -07:00
Committed by: Andreas Kling
Parent: fe679de791
Commit: 6830963321

@@ -342,6 +342,10 @@ bool Scheduler::donate_to(RefPtr<Thread>& beneficiary, const char* reason)
 bool Scheduler::context_switch(Thread* thread)
 {
+    if (s_mm_lock.own_lock()) {
+        PANIC("In context switch while holding s_mm_lock");
+    }
+
     thread->did_schedule();
     auto from_thread = Thread::current();
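
For what it's worth, my reading of own_lock() is that it returns true
only when the executing processor is the lock's recorded owner, so any
path that reaches context_switch() while this processor still holds
s_mm_lock now dies loudly and immediately, instead of letting the next
scheduled thread silently mutate memory-manager state.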