Commit graph

Sergey Bugaev
d2b500fbcb AK+Kernel: Help the compiler inline a bunch of trivial methods
If these methods get inlined, the compiler is able to statically eliminate most
of the assertions. Alas, it doesn't realize this, and believes inlining them to
be too expensive. So give it a strong hint that it's not the case.

This *decreases* the kernel binary size.
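
The hint in question is presumably along the lines of AK's ALWAYS_INLINE attribute macro; a minimal illustration (the class and method here are invented for the example):

    // Force-inlining a trivial accessor lets the compiler see through it
    // and statically drop assertions that can never fire.
    #define ALWAYS_INLINE [[gnu::always_inline]] inline

    class Region {
    public:
        ALWAYS_INLINE size_t size() const { return m_size; }

    private:
        size_t m_size { 0 };
    };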
2020-05-20 14:11:13 +02:00
Brian Gianforcaro
faf15e3721 Kernel: Add timeout support to Thread::wait_on
This change plumbs an optional timeout through to wait_on.
The timeout is enabled by enqueuing a timer on the timer queue
while we are waiting. We can then see if we were woken up or
timed out by checking if we are still on the wait queue or not.
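
In sketch form the pattern looks roughly like this (helper names such as timer_queue().add_timer and wake_from_queue are illustrative, not the exact kernel API):

    Thread::BlockResult Thread::wait_on(WaitQueue& queue, timeval* timeout)
    {
        TimerId timer_id = 0;
        if (timeout) {
            // Arm a timer that wakes this thread if the deadline passes first.
            timer_id = timer_queue().add_timer(*timeout, [this] { wake_from_queue(); });
        }
        queue.enqueue(*this);
        block();
        // Still on the queue after waking? Then the timer fired, not wake_one().
        bool timed_out = queue.dequeue_if_present(*this);
        if (timer_id)
            timer_queue().cancel_timer(timer_id);
        return timed_out ? BlockResult::InterruptedByTimeout : BlockResult::WokeNormally;
    }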
2020-04-26 21:31:52 +02:00
Andreas Kling
bed0e6d250 Kernel: Make Process and Thread non-copyable and non-movable 2020-04-22 12:36:35 +02:00
Itamar
9e51e295cf ptrace: Add PT_SETREGS
PT_SETREGS sets the registers of the traced thread. It can only be
used when the tracee is stopped.

Also, refactor ptrace.
The implementation was getting long and cluttered the already large
Process.cpp file.

This commit moves the bulk of the implementation to Kernel/Ptrace.cpp,
and factors out peek & poke to separate methods of the Process class.
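
From the tracer's side, usage looks roughly like this (the PtraceRegisters field names and BSD-style argument order are assumptions, and target_address is a placeholder):

    // Tracee must already be stopped (e.g. after PT_ATTACH + waitpid).
    PtraceRegisters regs {};
    ptrace(PT_GETREGS, tid, &regs, 0);  // read the last saved register set
    regs.eip = target_address;          // e.g. redirect execution
    ptrace(PT_SETREGS, tid, &regs, 0);  // write the modified set back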
2020-04-13 00:53:22 +02:00
Itamar
77f671b462 CPU: Handle breakpoint trap
Also, start working on the debugger app.
2020-04-13 00:53:22 +02:00
Andreas Kling
b7ff3b5ad1 Kernel: Include the current instruction pointer in profile samples
We were missing the innermost instruction pointer when sampling.
This makes the instruction-level profile info a lot cooler! :^)
2020-04-11 21:04:45 +02:00
Itamar
6b74d38aab Kernel: Add 'ptrace' syscall
This commit adds a basic implementation of
the ptrace syscall, which allows one process
(the tracer) to control another process (the tracee).

While a process is being traced, it is stopped whenever a signal is
received (other than SIGCONT).

The tracer can start tracing another thread with PT_ATTACH,
which causes the tracee to stop.

From there, the tracer can use PT_CONTINUE
to continue the execution of the tracee,
or use other request codes (which haven't been implemented yet)
to modify the state of the tracee.

Additional request codes are PT_SYSCALL, which causes the tracee to
continue execution but stop at the next entry to or exit from a syscall,
and PT_GETREGS, which fetches the last saved register set of the tracee
(this can be used to inspect syscall arguments and return values).

A special request code is PT_TRACE_ME, which is issued by the tracee
and causes it to stop when it calls execve and wait for the
tracer to attach.
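
Putting the request codes together, a minimal tracer might look roughly like this (error handling elided; the exact wrapper signature is assumed):

    #include <sys/ptrace.h>
    #include <sys/wait.h>

    void trace_one_syscall(pid_t tid)
    {
        int status = 0;
        ptrace(PT_ATTACH, tid, nullptr, 0);   // tracee stops on attach
        waitpid(tid, &status, 0);             // observe the stop

        ptrace(PT_SYSCALL, tid, nullptr, 0);  // run to the next syscall boundary
        waitpid(tid, &status, 0);

        PtraceRegisters regs {};
        ptrace(PT_GETREGS, tid, &regs, 0);    // inspect args / return value

        ptrace(PT_CONTINUE, tid, nullptr, 0); // resume normal execution
    }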
2020-03-28 18:27:18 +01:00
Andreas Kling
7d862dd5fc AK: Reduce header dependency graph of String.h
String.h no longer pulls in StringView.h. We do this by moving a bunch
of String functions out-of-line.
2020-03-23 13:48:44 +01:00
Andreas Kling
b1058b33fb AK: Add global FlatPtr typedef. It's u32 or u64, based on sizeof(void*)
Use this instead of uintptr_t throughout the codebase. This makes it
possible to pass a FlatPtr to something that has u32 and u64 overloads.
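
The typedef is morally equivalent to the following (spelled with the standard library here; AK uses its own Conditional trait):

    #include <cstdint>
    #include <type_traits>

    using u32 = std::uint32_t;
    using u64 = std::uint64_t;

    // u32 on 32-bit targets, u64 on 64-bit targets, so an address can be
    // passed to functions overloaded on the two integer types.
    using FlatPtr = std::conditional_t<sizeof(void*) == 8, u64, u32>;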
2020-03-08 13:06:51 +01:00
Andreas Kling
2839bb0be1 Kernel: Restore the previous thread state on SIGCONT after SIGSTOP
When stopping a thread with the SIGSTOP signal, we now store the thread
state in Thread::m_stop_state. That state is then restored on SIGCONT.
This fixes an issue where previously-blocked threads would unblock
upon resume. Now they simply resume in the Blocked state, and it's up
to the regular unblocking mechanism to unblock them.

Fixes #1326.
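
In sketch form (m_stop_state is from the message above; the surrounding signal plumbing is elided):

    // On SIGSTOP: remember what the thread was doing before stopping it.
    m_stop_state = state();      // e.g. Blocked, Runnable, ...
    set_state(Thread::Stopped);

    // On SIGCONT: resume in the remembered state rather than blindly
    // making the thread runnable. A Blocked thread stays Blocked and is
    // unblocked by the regular mechanism later.
    set_state(m_stop_state);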
2020-03-01 15:14:17 +01:00
Andreas Kling
9aa234cc47 Kernel: Reset FPU state on exec() 2020-02-18 13:44:27 +01:00
Andreas Kling
48f7c28a5c Kernel: Replace "current" with Thread::current and Process::current
Suggested by Sergey. The currently running Thread and Process are now
Thread::current and Process::current respectively. :^)
2020-02-17 15:04:27 +01:00
Andreas Kling
16818322c5 Kernel: Reduce header dependencies of Process and Thread 2020-02-16 02:01:42 +01:00
Andreas Kling
e28809a996 Kernel: Add forward declaration header 2020-02-16 01:50:32 +01:00
Andreas Kling
a356e48150 Kernel: Move all code into the Kernel namespace 2020-02-16 01:27:42 +01:00
Andreas Kling
0341ddc5eb Kernel: Rename RegisterDump => RegisterState 2020-02-16 00:15:37 +01:00
Andreas Kling
ea8d386146 Kernel: Update Thread::raw_backtrace() signature to use uintptr_t 2020-02-02 19:00:38 +01:00
Andreas Kling
5163c5cc63 Kernel: Expose the signal that stopped a thread via sys$waitpid() 2020-01-27 20:47:10 +01:00
Andreas Kling
137a45dff2 Kernel: read()/write() should respect timeouts when used on sockets
Move timeout management to the ReadBlocker and WriteBlocker classes.
Also get rid of the specialized ReceiveBlocker since it no longer does
anything that ReadBlocker can't do.
2020-01-26 17:54:23 +01:00
Andreas Kling
e901a3695a Kernel: Use the templated copy_to/from_user() in more places
These ensure that the "to" and "from" pointers have the same type,
and also that we copy the correct number of bytes.
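
The templated overloads presumably look something like this sketch, layered over the raw byte-count versions:

    // Copying a T forces "to" and "from" to agree on the type and makes
    // sizeof(T) the only possible byte count.
    template<typename T>
    void copy_to_user(T* to, T const* from)
    {
        copy_to_user(to, from, sizeof(T));
    }

    template<typename T>
    void copy_from_user(T* to, T const* from)
    {
        copy_from_user(to, from, sizeof(T));
    }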
2020-01-20 13:41:21 +01:00
Andreas Kling
94ca55cefd Meta: Add license header to source files
As suggested by Joshua, this commit adds the 2-clause BSD license as a
comment block to the top of every source file.

For the first pass, I've just added myself for simplicity. I encourage
everyone to add themselves as copyright holders of any file they've
added or modified in some significant way. If I've added myself in
error somewhere, feel free to replace it with the appropriate copyright
holder instead.

Going forward, all new source files should include a license header.
2020-01-18 09:45:54 +01:00
Andreas Kling
41376d4662 Kernel: Fix Lock racing to the WaitQueue
There was a time window between releasing Lock::m_lock and calling into
the lock's WaitQueue where someone else could take m_lock and bring two
threads into a deadlock situation.

Fix this issue by holding Lock::m_lock until interrupts are disabled by
either Thread::wait_on() or WaitQueue::wake_one().
2020-01-12 19:04:16 +01:00
Andreas Kling
8c5cd97b45 Kernel: Fix kernel null deref on process crash during join_thread()
The join_thread() syscall is not supposed to be interruptible by
signals, but it was. And since the process death mechanism piggybacked
on signal interrupts, it was possible to interrupt a pthread_join() by
killing the process that was doing it, leading to confusion due to some
assumptions made by Thread::finalize() about threads that have a
pending joiner.

This patch fixes the issue by making "interrupted by death" a distinct
block result separate from "interrupted by signal". Then we handle that
state in join_thread() and tidy things up so that thread finalization
doesn't get confused by the pending joiner being gone.

Test: Tests/Kernel/null-deref-crash-during-pthread_join.cpp
2020-01-10 19:23:45 +01:00
Andreas Kling
e23f05a157 Kernel: Remove unused variable Thread::m_userspace_stack_region 2020-01-09 12:31:18 +01:00
Andreas Kling
fd740829d1 Kernel: Switch to eagerly restoring x86 FPU state on context switch
Lazy FPU restore is well known to be vulnerable to timing attacks,
and eager restore is a lot simpler anyway, so let's just do it eagerly.
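
The eager approach reduces to a save and a restore around the switch; conceptually (the FPU state buffers must be 16-byte aligned for fxsave):

    // Save the outgoing thread's FPU/SSE state, then immediately load the
    // incoming thread's, instead of trapping lazily on first FPU use.
    asm volatile("fxsave %0" : "=m"(from_thread.fpu_state()));
    asm volatile("fxrstor %0" : : "m"(to_thread.fpu_state()));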
2020-01-01 16:54:21 +01:00
Andreas Kling
a69734bf2e Kernel: Also add a process boosting mechanism
Let's also have set_process_boost() for giving all threads in a process
the same boost.
2019-12-30 20:10:00 +01:00
Andreas Kling
610f3ad12f Kernel: Add a basic thread boosting mechanism
This patch introduces a syscall:

    int set_thread_boost(int tid, int amount)

You can use this to add a permanent boost value to the effective thread
priority of any thread with your UID (or any thread in the system if
you are the superuser.)

This is quite crude, but opens up some interesting opportunities. :^)
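
A hedged usage sketch (the tid and amount are illustrative):

    // Give thread 12 a permanent +20 on top of its base priority.
    if (set_thread_boost(12, 20) < 0)
        perror("set_thread_boost");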
2019-12-30 19:23:13 +01:00
Andreas Kling
50677bf806 Kernel: Refactor scheduler to use dynamic thread priorities
Threads now have numeric priorities with a base priority in the 1-99
range.

Whenever a runnable thread is *not* scheduled, its effective priority
is incremented by 1. This is tracked in Thread::m_extra_priority.
The effective priority of a thread is m_priority + m_extra_priority.

When a runnable thread *is* scheduled, its m_extra_priority is reset to
zero and the effective priority returns to base.

This means that lower-priority threads will always eventually get
scheduled to run, once their effective priority becomes high enough to
exceed the base priority of the threads "above" them.

The previous values for ThreadPriority (Low, Normal and High) are now
replaced as follows:

    Low -> 10
    Normal -> 30
    High -> 50

In other words, it will take 20 ticks for a "Low" priority thread to
get to "Normal" effective priority, and another 20 to reach "High".

This is not perfect, and I've used some quite naive data structures,
but I think the mechanism will allow us to build various new and
interesting optimizations, and we can figure out better data structures
later on. :^)
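
The core of the aging mechanism fits in a few lines; a sketch based on the description above (method names invented):

    u32 Thread::effective_priority() const
    {
        return m_priority + m_extra_priority;
    }

    // Passed over by the scheduler: age upward so starvation is bounded.
    void Thread::did_not_get_scheduled() { ++m_extra_priority; }

    // Picked to run: aging resets and priority returns to base.
    void Thread::did_get_scheduled() { m_extra_priority = 0; }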
2019-12-30 18:46:17 +01:00
Andreas Kling
abdd5aa08a Kernel: Separate runnable thread queues by priority
This patch introduces three separate thread queues, one for each thread
priority available to userspace (Low, Normal and High.)

Each queue operates in a round-robin fashion, but we now always prefer
to schedule the highest priority thread that currently wants to run.

There are tons of tweaks and improvements that we can and should make
to this mechanism, but I think this is a step in the right direction.

This makes WindowServer significantly more responsive while one of its
clients is burning CPU. :^)
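
In sketch form, thread selection becomes "scan the queues from highest to lowest priority, round-robin within each" (names illustrative):

    Thread* Scheduler::pick_next_thread()
    {
        // Highest-priority non-empty queue wins.
        for (auto* queue : { &g_high_priority_threads,
                             &g_normal_priority_threads,
                             &g_low_priority_threads }) {
            if (!queue->is_empty())
                return queue->take_first(); // re-enqueued at the back later
        }
        return g_idle_thread; // nothing wants to run
    }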
2019-12-27 00:52:30 +01:00
Andreas Kling
f4978b2be1 Kernel: Use IntrusiveList to make WaitQueue allocation-free :^) 2019-12-22 12:38:01 +01:00
Andreas Kling
3012b224f0 Kernel: Fix intermittent assertion failure in sys$exec()
While setting up the main thread stack for a new process, we'd incur
some zero-fill page faults. This was to be expected, since we allocate
a huge stack but lazily populate it with physical pages.

The problem is that page fault handlers may enable interrupts in order
to grab a VMObject lock (or to page in from an inode.)

During exec(), a process is reorganizing itself and will be in a very
unrunnable state if the scheduler should interrupt it and then later
ask it to run again. Which is exactly what happens if the process gets
pre-empted while the new stack's zero-fill page fault grabs the lock.

This patch fixes the issue by creating new main thread stacks before
disabling interrupts and going into the critical part of exec().
2019-12-18 23:03:23 +01:00
Andreas Kling
7a64f55c0f Kernel: Fix get_register_dump_from_stack() after IRQ entry changes
I had to change the layout of RegisterDump a little bit to make the new
IRQ entry points work. This broke get_register_dump_from_stack() which
was expecting the RegisterDump to be badly aligned due to a goofy extra
16 bits which are no longer there.
2019-12-15 17:58:53 +01:00
Andreas Kling
b32e961a84 Kernel: Implement a simple process time profiler
The kernel now supports basic profiling of all the threads in a process
by calling profiling_enable(pid_t). You finish the profiling by calling
profiling_disable(pid_t).

This all works by recording thread stacks when the timer interrupt
fires and the current thread is in a process being profiled.
Note that symbolication is deferred until profiling_disable() to avoid
adding more noise than necessary to the profile.

A simple "/bin/profile" command is included here that can be used to
start/stop profiling like so:

    $ profile 10 on
    ... wait ...
    $ profile 10 off

After a profile has been recorded, it can be fetched in /proc/profile

There are various limits (or "bugs") on this mechanism at the moment:

- Only one process can be profiled at a time.
- We allocate 8MB for the samples; if you use more space, things will
  not work, and probably break a bit.
- Things will probably fall apart if the profiled process dies during
  profiling, or while extracting /proc/profile
2019-12-11 20:36:56 +01:00
Andrew Kaster
9058962712 Kernel: Allow setting thread names
The main thread of each kernel/user process will take the name of
the process. Extra threads will get a fancy new name
"ProcessName[<tid>]".

Thread backtraces now list the thread name in addition to the tid.

Add the thread name to /proc/all (should it get its own proc
file?).

Add two new syscalls, set_thread_name and get_thread_name.
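
A usage sketch, assuming thin LibC wrappers of roughly these shapes (the real signatures may differ):

    // Assumed: int set_thread_name(int tid, const char* name);
    //          int get_thread_name(int tid, char* buffer, size_t size);
    set_thread_name(gettid(), "IPC worker");

    char name[64];
    get_thread_name(gettid(), name, sizeof(name));
    printf("thread %d is '%s'\n", gettid(), name);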
2019-12-08 14:09:29 +01:00
Andreas Kling
8bb98aa31b Kernel: Use a WaitQueue to implement finalizer wakeup
This gets rid of the special "Lurking" thread state and replaces it
with a generic WaitQueue :^)
2019-12-01 19:17:17 +01:00
Andreas Kling
5a45376180 Kernel+SystemMonitor: Log amounts of I/O per thread
This patch adds these I/O counters to each thread:

- (Inode) file read bytes
- (Inode) file write bytes
- Unix socket read bytes
- Unix socket write bytes
- IPv4 socket read bytes
- IPv4 socket write bytes

These are then exposed in /proc/all and seen in SystemMonitor.
2019-12-01 17:40:27 +01:00
Andreas Kling
5859e16e53 Kernel: Use a dedicated thread state for wait-queued threads
Instead of using the generic block mechanism, wait-queued threads now
go into the special Queued state.

This fixes an issue where signal dispatch would unblock a wait-queued
thread (because signal dispatch unblocks blocked threads) and cause
confusion since the thread only expected to be awoken by the queue.
2019-12-01 16:02:58 +01:00
Andreas Kling
8b129476b1 Kernel: Use a WaitQueue in PATAChannel
Instead of waking up repeatedly to check if a disk operation has
finished, use a WaitQueue and wake it up in the IRQ handler.

This simplifies the device driver a bit, and makes it more responsive
as well :^)
2019-12-01 12:54:38 +01:00
Andreas Kling
9ed272ce98 Kernel: Disable interrupts while setting up a thread blocker
There was a race window between instantiating a WaitQueueBlocker and
setting the thread state to Blocked. If a thread was preempted between
those steps, someone else might try to wake the wait queue and find an
unblocked thread in a wait queue, which is not sane.
2019-12-01 12:47:33 +01:00
Andreas Kling
f067730f6b Kernel: Add a WaitQueue for Thread queueing/waking and use it for Lock
The kernel's Lock class now uses a proper wait queue internally instead
of just having everyone wake up regularly to try to acquire the lock.

We also keep the donation mechanism, so that whenever someone tries to
take the lock and fails, that thread donates the remainder of its
timeslice to the current lock holder.

After unlocking a Lock, the unlocking thread calls WaitQueue::wake_one,
which unblocks the next thread in queue.
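
A grossly simplified sketch of the shape (the real Lock also handles recursion, closes the wake/wait race, and performs the donation described above):

    void Lock::lock()
    {
        for (;;) {
            if (m_locked.exchange(true) == false)
                return;                          // acquired the lock
            // Held by someone else: sleep until the holder wakes us,
            // instead of spinning and retrying every timeslice.
            Thread::current->wait_on(m_wait_queue);
        }
    }

    void Lock::unlock()
    {
        m_locked.store(false);
        m_wait_queue.wake_one(); // hand off to the next queued thread
    }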
2019-12-01 12:07:43 +01:00
Andreas Kling
5b8cf2ee23 Kernel: Make syscall counters and page fault counters per-thread
Now that we show individual threads in SystemMonitor and "top",
it's also very nice to have individual counters for the threads. :^)
2019-11-26 21:37:38 +01:00
Andrew Kaster
618aebdd8a Kernel+LibPthread: pthread_create handles pthread_attr_t
Add an initial implementation of pthread attributes for:
  * detach state (joinable, detached)
  * schedule params (just priority)
  * guard page size (as skeleton) (requires kernel support maybe?)
  * stack size and user-provided stack location (4 or 8 MB only, must be aligned)

Add some tests too, to the thread test program.

Also, LibC: Move pthread declarations to sys/types.h, where they belong.
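
A standard POSIX usage example exercising the new attribute support (the 8 MB figure matches the constraint noted above):

    #include <pthread.h>
    #include <stdio.h>

    static void* worker(void*) { return nullptr; }

    int main()
    {
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setstacksize(&attr, 8 * 1024 * 1024); // 4 or 8 MB only
        pthread_t tid;
        if (pthread_create(&tid, &attr, worker, nullptr) != 0)
            perror("pthread_create");
        pthread_attr_destroy(&attr);
        pthread_join(tid, nullptr);
        return 0;
    }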
2019-11-18 09:04:32 +01:00
Andreas Kling
e34ed04d1e Kernel+LibPthread+LibC: Create secondary thread stacks in userspace
Have pthread_create() allocate a stack and pass it to the kernel
instead of this work happening in the kernel. The more of this we can
do in userspace, the better.

This patch also unexposes the raw create_thread() and exit_thread()
syscalls since they are now only used by LibPthread anyway.
2019-11-17 17:29:20 +01:00
Andreas Kling
73d6a69b3f Kernel: Release the big process lock while yielding in sys$yield()
Otherwise, a thread calling sched_yield() will prevent other threads
in that process from entering the kernel.
2019-11-16 12:18:59 +01:00
Andreas Kling
cb5021419e Kernel: Move Thread::m_joinee_exit_value into the JoinBlocker
There's no need for this to be a permanent Thread member. Just use a
reference in the JoinBlocker instead.
2019-11-14 21:04:34 +01:00
Andreas Kling
69efa3f630 Kernel+LibPthread: Implement pthread_join()
It's now possible to block until another thread in the same process has
exited. We can also retrieve its exit value, which is whatever value it
passed to pthread_exit(). :^)
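
Standard usage, for reference:

    #include <pthread.h>
    #include <stdio.h>

    static void* worker(void*)
    {
        pthread_exit((void*)42); // exit value retrieved by the joiner
    }

    int main()
    {
        pthread_t tid;
        pthread_create(&tid, nullptr, worker, nullptr);
        void* exit_value = nullptr;
        pthread_join(tid, &exit_value); // blocks until worker exits
        printf("worker exited with %d\n", (int)(size_t)exit_value);
        return 0;
    }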
2019-11-14 20:58:23 +01:00
Sergey Bugaev
1e1ddce9d8 Kernel: Unwind kernel stacks before dying
While executing in the kernel, a thread can acquire various resources
that need cleanup, such as locks and references to RefCounted objects.
This cleanup normally happens on the exit path, such as in destructors
for various RAII guards. But we weren't calling those exit paths when
killing threads that have been executing in the kernel, such as threads
blocked on reading or sleeping, thus causing leaks.

This commit changes how killing threads works. Now, instead of killing
a thread directly, one is supposed to call thread->set_should_die(),
which will unblock it and make it unwind the stack if it is blocked
in the kernel. Then, just before returning to userspace, the thread
will automatically die.
2019-11-14 20:05:58 +01:00
Andreas Kling
083c5f8b89 Kernel: Rework Process::Priority into ThreadPriority
Scheduling priority is now set at the thread level instead of at the
process level.

This is a step towards allowing processes to set different priorities
for threads. There's no userspace API for that yet, since only the main
thread's priority is affected by sched_setparam().
2019-11-06 16:30:06 +01:00
Drew Stratford
5efbb4ae95 Kernel: Fix bug in Thread::dispatch_signal().
dispatch_signal() expected a RegisterDump on the kernel stack. However,
in certain cases, like just after a clone, this was not the case, and
dispatch_signal() would instead write to an incorrect user stack pointer.

We now use the thread's TSS in situations where the RegisterDump may
not be valid, fixing the issue.
2019-11-04 10:12:59 +01:00
Drew Stratford
44f22c99ef Thread.cpp: add method get_RegisterDump_from_stack().
This refactors some of the RegisterDump code from dispatch_signal
into a stand-alone function, allowing for better reuse.
2019-11-04 10:12:59 +01:00