In order to reduce our reliance on __builtin_{ffs, clz, ctz, popcount},
this commit removes all calls to these functions and replaces them with
the equivalent functions in AK/BuiltinWrappers.h.
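To illustrate the direction (wrapper names here follow AK/BuiltinWrappers.h
as I recall them; treat the exact spellings as assumptions):

    #include <AK/BuiltinWrappers.h>
    #include <AK/Types.h>

    // Before: call sites used __builtin_ctz()/__builtin_popcount() directly.
    // After: one portable wrapper per operation, usable in kernel and userspace.
    size_t first_set_bit_index(u32 mask)
    {
        // Caller guarantees mask != 0, mirroring the __builtin_ctz precondition.
        return count_trailing_zeroes(mask);
    }

    size_t set_bit_count(u32 mask)
    {
        return popcount(mask);
    }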
This includes a new Thread::Blocker called SignalBlocker which blocks
until a signal of a matching type is pending. The current Blocker
implementation in the Kernel is very complicated, but cleaning it up is
a different yak for a different day.
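As a rough userspace analogue of "block until a matching signal is pending"
(this is not the kernel-side SignalBlocker, just an illustration using
plain POSIX calls):

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    int wait_for_usr1_or_chld()
    {
        sigset_t wanted;
        sigemptyset(&wanted);
        sigaddset(&wanted, SIGUSR1);
        sigaddset(&wanted, SIGCHLD);

        // Keep these signals pending instead of delivering them to a handler...
        pthread_sigmask(SIG_BLOCK, &wanted, nullptr);

        int signal_number = 0;
        // ...then sleep until one of them becomes pending, as the blocker does.
        sigwait(&wanted, &signal_number);
        printf("woke up for signal %d\n", signal_number);
        return signal_number;
    }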
We now use AK::Error and AK::ErrorOr<T> in both kernel and userspace!
This was a slightly tedious refactoring that took a long time, so it's
not unlikely that some bugs crept in.
Nevertheless, it does pass basic functionality testing, and it's just
real nice to finally see the same pattern in all contexts. :^)
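A minimal sketch of the shared pattern (assuming AK::ErrorOr, Error::from_errno()
and the TRY() macro from the AK library; the functions themselves are made up
for illustration):

    #include <AK/Error.h>
    #include <AK/StringView.h>
    #include <AK/Try.h>
    #include <errno.h>

    ErrorOr<u32> parse_port(StringView input)
    {
        auto number = input.to_uint();
        if (!number.has_value())
            return Error::from_errno(EINVAL); // same error type in kernel and userspace
        return number.value();
    }

    ErrorOr<void> connect_to(StringView port_string)
    {
        u32 port = TRY(parse_port(port_string)); // TRY() propagates the error upward
        (void)port;
        return {};
    }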
Prior to this change, both uid_t and gid_t were typedef'ed to `u32`.
This made it easy to use them interchangeably. Let's not allow that.
This patch adds UserID and GroupID using the AK::DistinctNumeric
mechanism we've already been employing for pid_t/ProcessID.
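Roughly, the mechanism looks like this (the macro name is recalled from
AK/DistinctNumeric.h and may differ in detail):

    #include <AK/DistinctNumeric.h>
    #include <AK/Types.h>

    AK_TYPEDEF_DISTINCT_ORDERED_ID(u32, UserID);
    AK_TYPEDEF_DISTINCT_ORDERED_ID(u32, GroupID);

    bool is_superuser(UserID uid)
    {
        return uid == UserID(0);
    }

    // is_superuser(GroupID(0)) no longer compiles: the two ID types are
    // distinct despite both wrapping a u32, unlike the old typedefs.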
Previously, we would try to acquire a reference to the all processes
lock or other contended resources while holding both the scheduler lock
and the thread's blocker lock. This could lead to a deadlock if we
actually have to block on those other resources.
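As a generic illustration of the hazard (standard-library locks standing in
for the kernel's, not the actual code):

    #include <mutex>

    std::mutex scheduler_lock;     // stand-in: must not be held while blocking
    std::mutex blocker_lock;       // stand-in
    std::mutex all_processes_lock; // stand-in: contended, acquiring it may block

    void risky()
    {
        std::scoped_lock held(scheduler_lock, blocker_lock);
        // Blocking on a contended resource while still holding the two locks
        // above is the situation that could deadlock.
        std::lock_guard contended(all_processes_lock);
    }

    void safer()
    {
        // Acquire the contended resource first, before the fine-grained locks.
        std::lock_guard contended(all_processes_lock);
        std::scoped_lock held(scheduler_lock, blocker_lock);
    }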
The `m_should_block` member variable that many of the Thread::Blocker
subclasses had was really only used to carry state from the constructor
to the immediate-unblock-without-blocking escape hatch.
This patch refactors the blockers so that we don't need to hold on
to this flag after setup_blocker(), and instead the return value from
setup_blocker() is the authority on whether the unblock conditions
are already met.
Instead of registering with blocker sets and whatnot in the various
Blocker subclass constructors, this patch moves such initialization
to a separate setup_blocker() virtual.
setup_blocker() returns false if there's no need to actually block
the thread. This allows us to bail earlier in Thread::block().
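Schematically (names taken from the text above, bodies invented for
illustration):

    class Blocker {
    public:
        virtual ~Blocker() = default;

        // Registers with blocker sets etc. Returns false when the unblock
        // condition is already met, i.e. there is nothing to block on.
        virtual bool setup_blocker() { return true; }
    };

    // Inside Thread::block<...>(), roughly:
    //
    //     if (!blocker.setup_blocker())
    //         // bail out and run the immediate-unblock path instead of blocking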
Same deal as with WaitQueueBlocker: we can get the blocked thread from
Blocker::thread() now, so there's no need to register the current
thread as custom data.
When adding a WaitQueueBlocker to a WaitQueue, it stored the blocked
thread in the registration's custom "void* data" slot.
This was only used to print the Thread* in some debug logging.
Now that Blocker always knows its origin Thread, we can simply add
a Blocker::thread() accessor and then get the blocked Thread& from
there. No need to register custom data.
There's no harm in the blocker always knowing which thread it originated
from. It also simplifies some logic since we don't need to think about
it ever being null.
The BlockerSet stores its blockers along with a "void* data" that may
contain some blocker-specific context relevant to the specific blocker
registration (for example, SelectBlocker stores a pointer to the
relevant entry in an array of SelectBlocker::FDInfo structs.)
When unregistering a blocker from a set, we don't need to key the
blocker by both the Blocker* and the data. Just the Blocker* is enough,
since all registrations for that blocker need to be removed anyway as
the blocker is about to be destroyed.
So we stop passing the "void* data" to BlockerSet::remove_blocker(),
which also allows us to remove the now-unneeded Blocker::m_block_data.
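A standalone sketch of the simplified removal (container and field names are
illustrative, not the kernel's):

    #include <vector>

    struct Blocker;

    struct BlockerInfo {
        Blocker* blocker { nullptr };
        void* data { nullptr }; // blocker-specific context, e.g. a SelectBlocker::FDInfo*
    };

    struct BlockerSet {
        std::vector<BlockerInfo> m_blockers;

        // Keyed by the Blocker* alone: every registration belonging to the
        // dying blocker is removed, so also matching on `data` is pointless.
        void remove_blocker(Blocker& blocker)
        {
            std::erase_if(m_blockers, [&](BlockerInfo const& info) {
                return info.blocker == &blocker;
            });
        }
    };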
Namely, will_unblock_immediately_without_blocking(Reason).
This virtual function is called on a blocker *before any block occurs*,
if it turns out that we don't need to block the thread after all.
This can happen for one of two reasons:
- UnblockImmediatelyReason::UnblockConditionAlreadyMet
We don't need to block the thread because the condition for
unblocking it is already met.
- UnblockImmediatelyReason::TimeoutInThePast
We don't need to block the thread because a timeout was specified
and that timeout is already in the past.
This patch does not introduce any behavior changes, it's only meant to
clarify this part of the blocking logic.
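The shape described above, schematically (declarations reconstructed from the
text, not copied from the tree):

    enum class UnblockImmediatelyReason {
        UnblockConditionAlreadyMet,
        TimeoutInThePast,
    };

    class Blocker {
    public:
        virtual ~Blocker() = default;

        // Called instead of blocking whenever blocking turns out to be unnecessary.
        virtual void will_unblock_immediately_without_blocking(UnblockImmediatelyReason) { }
    };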
Namely, unblock_all_blockers_whose_conditions_are_met().
The old name made it sound like things were getting unblocked no matter
what, but that's not actually the case.
What this actually does is iterate through the set of blockers,
unblocking those whose conditions are met. So give it a (very) verbose
name that errs on the side of descriptiveness.
This has several benefits:
1) We no longer just blindly dereference a null pointer in various places
2) We will get nicer runtime error messages if the current process does
turn out to be null at the call site
3) GCC no longer complains about possible nullptr dereferences when
compiling without KUBSAN
I botched this in 859e5741ff; the check
was supposed to use Process::is_kernel_process().
This fixes an issue with zombie processes hanging around forever.
Thanks tomuta for spotting it! :^)
We leak a ref() onto every user process when constructing them,
either via Process::create_user_process(), or via Process::sys$fork().
This ref() is balanced by a corresponding unref() in
Thread::WaitBlockCondition::finalize().
Since kernel processes don't have a leaked ref() on them, this led to
an extra Process::unref() on kernel processes during finalization.
This happened during every boot, with the `init_stage2` process.
Found by turning off kfree() scrubbing. :^)
There is logic at the end of the constructor that sets m_should_block
to false if we encountered errors. We were missing this step due to the
erroneous early return; the code then ended up waiting and asserting on
unblock, since the WaitBlocker was in an invalid state.
The fix is to not return early and let normal control flow handle it.
Fixes: #7857
Verified with `stress-ng --yield=10` locally.
The fact that current_time can "fail" makes its use a bit awkward.
All callers in the Kernel are trusted besides syscalls, so assert
that they never get there, and make sure all current callers perform
validation of the clock_id with TimeManagement::is_valid_clock_id().
I have fuzzed this change locally for a bit to make sure I didn't
miss any obvious regression.
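The intended split of responsibility, sketched (the syscall signature, return
type and copy-out are assumptions; only is_valid_clock_id() and current_time()
are named in the change):

    // Hypothetical syscall-side shape, not the actual code:
    KResultOr<FlatPtr> sys$clock_gettime_sketch(clockid_t clock_id)
    {
        if (!TimeManagement::is_valid_clock_id(clock_id))
            return EINVAL;                    // untrusted input: validate up front
        auto now = TimeManagement::the().current_time(clock_id);
        // Trusted kernel callers skip the check entirely; current_time() asserts
        // on a bad clock id instead of returning a failure they must handle.
        (void)now;                            // copy-out to userspace elided
        return 0;
    }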
We had some inconsistencies before:
- Sometimes "The", sometimes "the"
- Sometimes trailing ".", sometimes no trailing "."
I picked the most common one (lowercase "the", trailing ".") and applied
it to all copyright headers.
By using the exact same string everywhere we can ensure nothing gets
missed during a global search (and replace), and that these
inconsistencies are not spread any further (as copyright headers are
commonly copied to new files).
SPDX License Identifiers are a more compact / standardized
way of representing file license information.
See: https://spdx.dev/resources/use/#identifiers
This was done with the `ambr` search and replace tool.
ambr --no-parent-ignore --key-from-file --rep-from-file key.txt rep.txt *
In case multiple file descriptors in the `fd_set` were already readable
and/or writable when calling Thread::block<SelectBlocker>(), we would
only mark the first fd in the output sets instead of all relevant fd's.
The short-circuit code path when blocking isn't necessary must ensure
that unblock flags are collected for all file descriptors, not just the
first one encountered.
Fixes #5795.
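A standalone analogue of the corrected loop (types invented for illustration):

    #include <vector>

    struct FDInfo {
        bool readable_now { false };
        bool writable_now { false };
        unsigned unblocked_flags { 0 }; // output: why this fd is ready
    };

    // Returns true if blocking can be skipped because at least one fd is ready.
    bool collect_ready_flags(std::vector<FDInfo>& fds)
    {
        bool any_ready = false;
        for (auto& fd : fds) {
            if (fd.readable_now)
                fd.unblocked_flags |= 1u << 0;
            if (fd.writable_now)
                fd.unblocked_flags |= 1u << 1;
            any_ready = any_ready || fd.unblocked_flags != 0;
            // The old short-circuit effectively stopped after the first ready
            // fd here, leaving the remaining entries' flags empty.
        }
        return any_ready;
    }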
Switch to using type-safe bitwise operators for the BlockFlags type;
this cleans up a lot of boilerplate casts, which are necessary when the
enum is declared as `enum class`.
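Sketch of the mechanism (the macro and helper are recalled from AK/EnumBits.h,
so treat the exact names as assumptions; the flag values are illustrative):

    #include <AK/EnumBits.h>
    #include <AK/Types.h>

    enum class BlockFlags : u32 {
        None = 0,
        Read = 1 << 0,
        Write = 1 << 1,
        Exception = 1 << 2,
    };
    AK_ENUM_BITWISE_OPERATORS(BlockFlags);

    bool wants_read_or_write(BlockFlags flags)
    {
        // No static_cast<u32>(...) noise at the call site anymore.
        return has_any_flag(flags, BlockFlags::Read | BlockFlags::Write);
    }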
I don't dare touch the multi-threading logic and locking mechanism, so it stays
timespec for now. However, this could and should be changed to AK::Time, and I
bet it will simplify the "increment_time_since_boot()" code.
This commit is very invasive, because Thread likes to take a pointer and write
to it. This means that translating between timespec/timeval/Time would have been
more difficult than just changing everything that hands a raw pointer to Thread,
in bulk.
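The conversion at the boundary would look roughly like this (the AK::Time
factory/accessor names are from memory and may differ):

    #include <AK/Time.h>
    #include <time.h>

    // Thread still takes raw timespec pointers, so callers convert at the edge.
    timespec to_thread_timeout(Time deadline)
    {
        return deadline.to_timespec();
    }

    Time from_userspace(timespec const& ts)
    {
        return Time::from_timespec(ts);
    }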
(...and ASSERT_NOT_REACHED => VERIFY_NOT_REACHED)
Since all of these checks are done in release builds as well,
let's rename them to VERIFY to prevent confusion, as everyone is
used to assertions being compiled out in release.
We can introduce a new ASSERT macro that is specifically for debug
checks, but I'm doing this wholesale conversion first since we've
accumulated thousands of these already, and it's not immediately
obvious which ones are suitable for ASSERT.
...and allow implicit creation of KResult and KResultOr from ErrnoCode.
This means that kernel functions that return those types can finally
do "return EINVAL;" and it will just work.
There's a handful of functions that still deal with signed integers
that should be converted to return KResults.
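What the implicit conversion buys, in a small sketch (the function is made up;
KResultOr is the kernel's result type):

    // Before: `return KResult(EBADF);` or similar boilerplate.
    // After: the ErrnoCode converts implicitly.
    KResultOr<int> validate_fd_sketch(int fd)
    {
        if (fd < 0)
            return EBADF;
        return fd;
    }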
This adds support for FUTEX_WAKE_OP, FUTEX_WAIT_BITSET, FUTEX_WAKE_BITSET,
FUTEX_REQUEUE, and FUTEX_CMP_REQUEUE, as well as global and private
futexes and absolute/relative timeouts against the appropriate clock. This
also changes the implementation so that kernel resources are only used when
a thread is blocked on a futex.
Global futexes are implemented as offsets in VMObjects, so that different
processes can share a futex against the same VMObject despite potentially
being mapped at different virtual addresses.
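A standalone sketch of the keying idea (types invented; the kernel's actual
structures differ):

    #include <cstdint>
    #include <deque>
    #include <map>

    struct VMObjectStub { }; // stands in for the kernel's VMObject

    struct GlobalFutexKey {
        VMObjectStub const* vmobject { nullptr };
        uint64_t offset_in_vmobject { 0 };

        bool operator<(GlobalFutexKey const& other) const
        {
            if (vmobject != other.vmobject)
                return vmobject < other.vmobject;
            return offset_in_vmobject < other.offset_in_vmobject;
        }
    };

    // Two processes mapping the same VMObject at different virtual addresses
    // still land on the same key, hence the same wait queue. Queues are
    // created lazily, so nothing is allocated until a thread actually blocks.
    std::map<GlobalFutexKey, std::deque<int /* blocked thread ids */>> futex_queues;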
The unblock_all variant used to ASSERT if a blocker didn't unblock,
but it wasn't clear from the name that it would do that. Because
the BlockCondition already asserts that no blockers are left at
destruction time, it would still catch blockers that haven't been
unblocked for whatever reason.
Fixes #4496