The only persistent one of these was Thread::m_process and that never
changes after initialization. Make it const to enforce this and switch
everything over to RefPtr & NonnullRefPtr.
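Sketch of the resulting member (the surrounding class is elided and the
exact spelling may differ slightly):

    class Thread {
        // ...
        // Never reassigned after construction, so the member itself
        // can be const.
        NonnullRefPtr<Process> const m_process;
    };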
This commit updates the lock function from Spinlock and
RecursiveSpinlock to return the InterruptsState of the processor,
instead of the processor flags. The unlock functions would only look at
the interrupt flag of the processor flags, so we now use the
InterruptsState enum to clarify the intent, and so that we can use the
same Spinlock code for the aarch64 build.
To avoid breaking the build, all call sites are updated as well.
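Roughly, the new shape looks like this (the enum name comes from this
commit; the enumerators and surrounding code are illustrative):

    enum class InterruptsState {
        Enabled,
        Disabled,
    };

    // lock() reports whether interrupts were enabled before it
    // disabled them, instead of handing back raw processor flags.
    InterruptsState previous_state = spinlock.lock();
    // ... critical section ...
    // unlock() only needs to know whether to re-enable interrupts.
    spinlock.unlock(previous_state);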
Until now, our kernel has reimplemented a number of AK classes to
provide automatic internal locking:
- RefPtr
- NonnullRefPtr
- WeakPtr
- Weakable
This patch renames the Kernel classes so that they can coexist with
the original AK classes:
- RefPtr => LockRefPtr
- NonnullRefPtr => NonnullLockRefPtr
- WeakPtr => LockWeakPtr
- Weakable => LockWeakable
The goal here is to eventually get rid of the Lock* classes in favor of
using external locking.
When updating the signal mask, there is a small window where we might
set up the receiving process for handling the signal and therefore
remove that signal from the list of pending signals before
SignalBlocker has a chance to block. In turn, this might cause
SignalBlocker to never notice that the signal arrived, and it will
never unblock once blocked.
Track the currently handled signal separately and include it when
determining whether SignalBlocker should unblock.
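A sketch of the adjusted unblock check (member and helper names here
are illustrative, not the exact kernel API):

    // Treat the signal we are currently setting up a handler for as
    // still pending, so SignalBlocker cannot miss it.
    u32 pending = thread.pending_signals();
    if (auto signal = thread.currently_handled_signal(); signal != 0)
        pending |= 1 << (signal - 1);
    bool should_unblock = (pending & waited_signal_mask) != 0;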
This function is large and unwieldy and forces Thread.h to #include
a bunch of things. The only reason it was in the header is because we
need to instantiate a blocker based on the templated BlockerType.
We actually keep block<BlockerType>() in the header, but move the
bulk of the function body out of line into Thread::block_impl().
To preserve destructor ordering, we add Blocker::finalize() which is
called where we'd previously destroy the Blocker.
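The header now keeps only a thin template shim, roughly (signatures
are approximations):

    // Thread.h: the only part that actually needs the template.
    template<typename BlockerType, typename... Args>
    BlockResult block(BlockTimeout const& timeout, Args&&... args)
    {
        BlockerType blocker(forward<Args>(args)...);
        return block_impl(timeout, blocker);
    }

    // Thread.cpp: the bulky, non-templated part lives out of line.
    BlockResult block_impl(BlockTimeout const&, Blocker&);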
In order to reduce our reliance on __builtin_{ffs, clz, ctz, popcount},
this commit removes all calls to these functions and replaces them with
the equivalent functions in AK/BuiltinWrappers.h.
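For example (assuming the wrapper names in AK/BuiltinWrappers.h;
spellings may differ slightly):

    #include <AK/BuiltinWrappers.h>

    // Instead of __builtin_ctz(value) and __builtin_popcount(value):
    auto trailing_zeroes = count_trailing_zeroes(value);
    auto set_bits = popcount(value);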
This includes a new Thread::Blocker called SignalBlocker which blocks
until a signal of a matching type is pending. The current Blocker
implementation in the Kernel is very complicated, but cleaning it up is
a different yak for a different day.
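A caller would block on it roughly like this (argument order and types
are approximations based on the description above):

    // Block until any signal in `waited_mask` becomes pending; the
    // matching siginfo is written into `info`.
    siginfo_t info {};
    auto result = Thread::current()->block<Thread::SignalBlocker>(
        {} /* no timeout */, waited_mask, info);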
We now use AK::Error and AK::ErrorOr<T> in both kernel and userspace!
This was a slightly tedious refactoring that took a long time, so it's
not unlikely that some bugs crept in.
Nevertheless, it does pass basic functionality testing, and it's just
real nice to finally see the same pattern in all contexts. :^)
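The pattern now looks the same in both worlds, something like this
(fill_buffer() is a stand-in, not a real API):

    ErrorOr<size_t> read_some(Bytes buffer)
    {
        if (buffer.is_empty())
            return Error::from_errno(EINVAL);
        // TRY() propagates any failure from the hypothetical helper.
        auto nread = TRY(fill_buffer(buffer));
        return nread;
    }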
Prior to this change, both uid_t and gid_t were typedef'ed to `u32`.
This made it easy to use them interchangeably. Let's not allow that.
This patch adds UserID and GroupID using the AK::DistinctNumeric
mechanism we've already been employing for pid_t/ProcessID.
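The effect, sketched without the exact macro spelling from AK's
DistinctNumeric header:

    UserID uid { 0 };
    GroupID gid { 0 };
    // uid = gid;              // no longer compiles: distinct types
    uid = UserID(gid.value()); // converting must now be explicit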
Previously, we would try to acquire a reference to the all processes
lock or other contended resources while holding both the scheduler lock
and the thread's blocker lock. This could lead to a deadlock if we
actually have to block on those other resources.
The `m_should_block` member variable that many of the Thread::Blocker
subclasses had was really only used to carry state from the constructor
to the immediate-unblock-without-blocking escape hatch.
This patch refactors the blockers so that we don't need to hold on
to this flag after setup_blocker(), and instead the return value from
setup_blocker() is the authority on whether the unblock conditions
are already met.
Instead of registering with blocker sets and whatnot in the various
Blocker subclass constructors, this patch moves such initialization
to a separate setup_blocker() virtual.
setup_blocker() returns false if there's no need to actually block
the thread. This allows us to bail earlier in Thread::block().
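The resulting shape, roughly (the BlockResult value is an assumption):

    // Registration with blocker sets happens here now, not in the
    // subclass constructors. Returns false if the unblock condition
    // is already met and blocking can be skipped.
    virtual bool setup_blocker();

    // In Thread::block(), approximately:
    if (!blocker.setup_blocker()) {
        blocker.will_unblock_immediately_without_blocking(
            Blocker::UnblockImmediatelyReason::UnblockConditionAlreadyMet);
        return BlockResult::NotBlocked;
    }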
Same deal as with WaitQueueBlocker: we can get the blocked thread from
Blocker::thread() now, so there's no need to register the current
thread as custom data.
When adding a WaitQueueBlocker to a WaitQueue, it stored the blocked
thread in the registration's custom "void* data" slot.
This was only used to print the Thread* in some debug logging.
Now that Blocker always knows its origin Thread, we can simply add
a Blocker::thread() accessor and then get the blocked Thread& from
there. No need to register custom data.
There's no harm in the blocker always knowing which thread it originated
from. It also simplifies some logic since we don't need to think about
it ever being null.
The BlockerSet stores its blockers along with a "void* data" that may
contain some blocker-specific context relevant to the specific blocker
registration (for example, SelectBlocker stores a pointer to the
relevant entry in an array of SelectBlocker::FDInfo structs.)
When unregistering a blocker from a set, we don't need to key the
blocker by both the Blocker* and the data. Just the Blocker* is enough,
since all registrations for that blocker need to be removed anyway as
the blocker is about to be destroyed.
So we stop passing the "void* data" to BlockerSet::remove_blocker(),
which also allows us to remove the now-unneeded Blocker::m_block_data.
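Sketch of the simplified removal (container and member names are
approximations):

    void BlockerSet::remove_blocker(Blocker& blocker)
    {
        SpinlockLocker lock(m_lock);
        // Drop every registration for this blocker; the associated
        // data no longer matters since the blocker is being destroyed.
        m_blockers.remove_all_matching([&](auto& info) {
            return info.blocker == &blocker;
        });
    }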
Namely, will_unblock_immediately_without_blocking(Reason).
This virtual function is called on a blocker *before any block occurs*,
if it turns out that we don't need to block the thread after all.
This can happen for one of two reasons:
- UnblockImmediatelyReason::UnblockConditionAlreadyMet
We don't need to block the thread because the condition for
unblocking it is already met.
- UnblockImmediatelyReason::TimeoutInThePast
We don't need to block the thread because a timeout was specified
and that timeout is already in the past.
This patch does not introduce any behavior changes; it's only meant to
clarify this part of the blocking logic.
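In code form, the hook is roughly:

    enum class UnblockImmediatelyReason {
        UnblockConditionAlreadyMet,
        TimeoutInThePast,
    };

    virtual void will_unblock_immediately_without_blocking(UnblockImmediatelyReason);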
Namely, unblock_all_blockers_whose_conditions_are_met().
The old name made it sound like things were getting unblocked no matter
what, but that's not actually the case.
What this actually does is iterate through the set of blockers,
unblocking those whose conditions are met. So give it a (very) verbose
name that errs on the side of descriptiveness.
This has several benefits:
1) We no longer just blindly dereference a null pointer in various places
2) We will get nicer runtime error messages if the current process does
turn out to be null at the call site
3) GCC no longer complains about possible nullptr dereferences when
compiling without KUBSAN
I botched this in 859e5741ff; the check
was supposed to use Process::is_kernel_process().
This fixes an issue with zombie processes hanging around forever.
Thanks tomuta for spotting it! :^)
We leak a ref() onto every user process when constructing them,
either via Process::create_user_process(), or via Process::sys$fork().
This ref() is balanced by a corresponding unref() in
Thread::WaitBlockCondition::finalize().
Since kernel processes don't have a leaked ref() on them, this led to
an extra Process::unref() on kernel processes during finalization.
This happened during every boot, with the `init_stage2` process.
Found by turning off kfree() scrubbing. :^)
There is logic at the end of the constructor that sets m_should_block
to false if we encountered errors. We were missing this step due to the
erroneous early return; the code then ended up waiting and then
asserting on unblock, since the WaitBlocker was in an invalid state.
The fix is to not return early and let normal control flow handle it.
Fixes: #7857
Verified with `stress-ng --yield=10` locally.
The fact that current_time can "fail" makes its use a bit awkward.
All callers in the Kernel are trusted besides syscalls, so assert
that an invalid clock ID never makes it that far, and make sure all
current callers validate the clock_id with
TimeManagement::is_valid_clock_id() first.
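A sketch of the caller-side pattern in a syscall (only
TimeManagement::is_valid_clock_id() comes from this change; the rest
is illustrative):

    // Reject bogus clock ids from userspace up front...
    if (!TimeManagement::is_valid_clock_id(clock_id))
        return EINVAL;
    // ...so current_time() itself can simply assert that trusted
    // in-kernel callers only ever hand it a valid clock id.
    auto now = TimeManagement::the().current_time(clock_id);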
I have fuzzed this change locally for a bit to make sure I didn't
miss any obvious regression.
We had some inconsistencies before:
- Sometimes "The", sometimes "the"
- Sometimes trailing ".", sometimes no trailing "."
I picked the most common one (lowercase "the", trailing ".") and applied
it to all copyright headers.
By using the exact same string everywhere we can ensure nothing gets
missed during a global search (and replace), and that these
inconsistencies are not spread any further (as copyright headers are
commonly copied to new files).
SPDX License Identifiers are a more compact / standardized
way of representing file license information.
See: https://spdx.dev/resources/use/#identifiers
This was done with the `ambr` search and replace tool.
ambr --no-parent-ignore --key-from-file --rep-from-file key.txt rep.txt *
In case multiple file descriptors in the `fd_set` were already readable
and/or writable when calling Thread::block<SelectBlocker>(), we would
only mark the first fd in the output sets instead of all relevant fds.
The short-circuit code path when blocking isn't necessary must ensure
that unblock flags are collected for all file descriptors, not just the
first one encountered.
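Conceptually the early-return path now does something like this
(illustrative names; the real code works on SelectBlocker::FDInfo
entries):

    // Collect unblock flags for every fd, not just the first one
    // that happens to be ready already.
    bool any_ready = false;
    for (auto& entry : fds) {
        entry.unblocked_flags = flags_that_are_ready(entry); // hypothetical helper
        if (entry.unblocked_flags != BlockFlags::None)
            any_ready = true;
    }
    if (any_ready)
        return; // skip blocking, with all ready fds recorded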
Fixes #5795.
Switch to using type-safe bitwise operators for the BlockFlags enum;
this cleans up a lot of boilerplate casts that are otherwise necessary
when the enum is declared as `enum class`.
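With the operators defined for the enum (AK provides a helper macro and
has_flag()-style checks for this; exact names may differ), call sites
go from casts back to plain bitwise expressions:

    // Before: every combination needed static_casts to compile.
    // auto flags = static_cast<BlockFlags>(
    //     static_cast<u32>(BlockFlags::Read) | static_cast<u32>(BlockFlags::Write));

    // After: type-safe operators keep the enum strongly typed.
    auto flags = BlockFlags::Read | BlockFlags::Write;
    if (has_flag(flags, BlockFlags::Read))
        handle_readable(); // hypothetical call site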
I don't dare touch the multi-threading logic and locking mechanism, so it stays
timespec for now. However, this could and should be changed to AK::Time, and I
bet it will simplify the "increment_time_since_boot()" code.