Use copy_{to,from}_user() in the various File::ioctl() implementations
instead of disabling SMAP wholesale in sys$ioctl().
This patch does not port IPv4Socket::ioctl() to those APIs since that
will be more involved. That function now creates a local SmapDisabler.
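Roughly, each ioctl implementation now follows this pattern (the device,
request constants and helpers below are made up for illustration; the
exact Serenity signatures may differ a bit):

    int SomeDevice::ioctl(FileDescription&, unsigned request, unsigned arg)
    {
        switch (request) {
        case SOMEDEV_GET_VALUE: {
            int value = current_value();
            // Copy out through the checked helper instead of poking at
            // the userspace pointer with SMAP disabled.
            if (!copy_to_user((void*)arg, &value, sizeof(value)))
                return -EFAULT;
            return 0;
        }
        case SOMEDEV_SET_VALUE: {
            int value = 0;
            if (!copy_from_user(&value, (const void*)arg, sizeof(value)))
                return -EFAULT;
            set_value(value);
            return 0;
        }
        default:
            return -EINVAL;
        }
    }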
This is something I've been meaning to do for a long time, and here we
finally go. This patch moves all sys$foo functions out of Process.cpp
and into files in Kernel/Syscalls/.
It's not exactly one syscall per file (it could be, but I got a bit
tired of the repetitive work here...)
This makes hacking on individual syscalls a lot less painful since you
don't have to rebuild nearly as much code every time. I'm also hopeful
that this makes it easier to understand individual syscalls. :^)
I decided to play around with trying to run Serenity in VirtualBox.
It crashed WindowServer with a beautiful array of multi-color
flashing letters :^)
Skipping over how I got side-tracked by seeing that it chose MBVGA in
the serial debug output and trying to figure out why that caused such
a display, I finally checked BXVGA.
find_framebuffer_address checks for VBoxVGA, but init_stage2 didn't.
Whoops!
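For context, detecting the adapter is just a PCI ID match; a made-up
helper to illustrate the idea (the real code is laid out differently):

    // QEMU/Bochs "BXVGA" is PCI 0x1234:0x1111; the VirtualBox graphics
    // adapter shows up as 0x80ee:0xbeef.
    static bool is_supported_bxvga_adapter(u16 vendor_id, u16 device_id)
    {
        bool is_bochs = vendor_id == 0x1234 && device_id == 0x1111;
        bool is_vbox = vendor_id == 0x80ee && device_id == 0xbeef;
        return is_bochs || is_vbox;
    }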
We were masking the fragment offset bits incorrectly in the IPv4 header
sent out with fragments. This worked up to ~32KB but after that, things
would get very confused. :^)
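For reference, the 16-bit "flags + fragment offset" field packs 3 flag
bits above a 13-bit offset counted in 8-byte blocks, so the offset must
be masked with 0x1fff (a narrower mask, e.g. 12 bits, would wrap at
4096 * 8 = 32KB, which matches the symptom). An illustrative helper,
not the literal kernel code:

    static u16 encode_flags_and_fragment_offset(bool more_fragments, size_t byte_offset)
    {
        u16 offset_in_blocks = (byte_offset / 8) & 0x1fff; // keep only 13 bits
        u16 flags = more_fragments ? 0x2000 : 0x0000;      // "More Fragments" is bit 13
        return flags | offset_in_blocks;
    }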
Because Thread::sleep is an internal interface, it's easy to check that there
are only a few callers: Process::sys$sleep, usleep, and nanosleep are happy
with this increased size, because now they support the entire range of their
arguments (assuming small-ish values for ticks_per_second()).
SyncTask doesn't care.
Note that the old behavior wasn't "cap out at 388 days", which would have been
reasonable. Instead, the code resulted in unsigned overflow, meaning that a
very long sleep would "on average" end after about 194 days, sometimes much
quicker.
Fixes #2871.
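To make the overflow concrete (ticks_per_second() == 128 is just an
assumed value here): a 32-bit tick count wraps after 2^32 / 128 seconds,
about 388 days, and longer requests wrap modulo that.

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        uint64_t ticks_per_second = 128;                    // assumed, for illustration
        uint64_t requested_seconds = 600ull * 24 * 60 * 60; // ask for a 600 day sleep
        uint32_t ticks = (uint32_t)(requested_seconds * ticks_per_second); // wraps
        printf("effective sleep: %.1f days\n",
            ticks / (double)ticks_per_second / 86400.0);    // ~211.6 days, not 600
        return 0;
    }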
Ignoring the 'securely generated bytes' constraint seems to
be fine for Linux, so it's probably fine for Serenity.
Note that there *might* be more bottlenecks down the road
if Serenity is started in a non-GUI way. Currently though,
loading the GUI seems to generate enough interrupts to
seed the entropy pool, even on my non-RDRAND setup. Yay! :^)
Add all 6 shortcuts even if the switch between VirtualConsoles is
currently not available in the graphical console.
Also make the case statement more compact.
It is possible to switch to VirtualConsoles 1 to 4 via the shortcut
ALT + [1-4]. Therefore the array of VirtualConsoles should be guaranteed
to be initialized.
Also add a constant for the maximum number of VirtualConsoles to
guarantee consistency.
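A rough sketch of the intent (these names are invented, not the actual
kernel code): one constant drives both the array size and the shortcut
range, so the two can't drift apart.

    static constexpr size_t s_max_virtual_consoles = 6;
    static VirtualConsole* s_consoles[s_max_virtual_consoles];
    // The keyboard shortcut handler indexes into the same range, so all
    // s_max_virtual_consoles entries must be initialized up front.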
For now, only the non-standard _SC_NPROCESSORS_CONF and
_SC_NPROCESSORS_ONLN are implemented.
Use them to make ninja pick a better default -j value.
While here, make the ninja package script not fail if
no other port has been built yet.
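Usage is the plain POSIX sysconf() interface, e.g.:

    #include <stdio.h>
    #include <unistd.h>

    int main()
    {
        long configured = sysconf(_SC_NPROCESSORS_CONF);
        long online = sysconf(_SC_NPROCESSORS_ONLN);
        if (online < 1)
            online = 1;
        // ninja derives its default job count from the online CPU count
        // (roughly N + 2 on multi-core machines).
        printf("configured: %ld, online: %ld\n", configured, online);
        return 0;
    }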
To be more in line with other parts of Serenity's procfs, the
"key: value" format of /proc/cpuinfo was replaced with JSON, namely
an array of objects (one for each core).
The available keys remain the same, though "features" has been changed
from a space-separated string to an array of strings.
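As an illustration only (the field names and values below are made up,
not the exact keys), the new shape is along these lines:

    [
        { "processor": 0, "family": 6, "features": ["pae", "sse", "tsc"] },
        { "processor": 1, "family": 6, "features": ["pae", "sse", "tsc"] }
    ]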
We need to halt the BSP briefly until all APs are ready for the
first context switch, but they can't all hold the same spinlock
while doing so. So, while the APs are waiting on each other they
need to release the scheduler lock, and then re-acquire it once
signaled. This should solve some timing-dependent hangs or crashes,
most easily observed using qemu with kvm disabled.
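A self-contained sketch of the rendezvous, with std::mutex/std::atomic
standing in for the scheduler lock and the AP bookkeeping (all names
here are made up):

    #include <atomic>
    #include <mutex>
    #include <thread>

    std::mutex scheduler_lock;      // stand-in for the global scheduler lock
    std::atomic<int> aps_ready{0};  // how many APs reached the rendezvous
    const int ap_count = 3;         // illustrative number of APs

    void ap_wait_for_first_context_switch()
    {
        std::unique_lock lock(scheduler_lock);
        lock.unlock();              // don't hold the lock while waiting on peers
        aps_ready.fetch_add(1, std::memory_order_acq_rel);
        while (aps_ready.load(std::memory_order_acquire) < ap_count)
            std::this_thread::yield();
        lock.lock();                // re-acquire once everyone is ready
        // ...perform the first context switch while holding the lock...
    }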
We now have BlockResult::WokeNormally and BlockResult::NotBlocked,
both of which indicate no error. We can no longer just check for
BlockResult::WokeNormally and assume anything else must be an
interruption.
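So call sites that used to treat "not WokeNormally" as an interruption
need something along these lines instead (a sketch, not the literal
code):

    // result is whatever the blocking call returned (a Thread::BlockResult)
    bool was_interrupted = result != Thread::BlockResult::WokeNormally
        && result != Thread::BlockResult::NotBlocked;
    if (was_interrupted)
        return -EINTR;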
The AT_* entries are placed after the environment variables, so that
they can be found by iterating until the end of the envp array, and then
going even further beyond :^)
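From userspace this is the usual "walk envp to its terminator" trick;
a sketch using the common elf.h auxv type:

    #include <elf.h>

    static Elf32_auxv_t* find_auxv(char** envp)
    {
        char** p = envp;
        while (*p)
            ++p;                   // step to the NULL that terminates envp
        ++p;                       // the AT_* entries start right past it
        return (Elf32_auxv_t*)p;
    }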
MemoryManager::quickmap_pd and MemoryManager::quickmap_pt can only
be called by one processor at a time anyway, since anything using
these must have the MM lock held. So, no need to inform the other
CPUs to flush their TLBs, we can just flush our own.
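In other words, a plain local invalidation is enough here; something
like the following rather than a full IPI-based shootdown (illustrative,
not the actual MM code):

    static inline void flush_tlb_local(void* vaddr)
    {
        // Drop the TLB entry for this page on the current CPU only.
        asm volatile("invlpg %0" ::"m"(*(char*)vaddr) : "memory");
    }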
We can now properly initialize all processors without
crashing by sending SMP IPI messages to synchronize memory
between processors.
We now initialize the APs once we have the scheduler running.
This is so that we can process IPI messages from the other
cores.
Also rework interrupt handling a bit so that it's more of a
1:1 mapping. We need to allocate non-sharable interrupts for
IPIs.
This also fixes the occasional hang/crash because all
CPUs now synchronize memory with each other.
The short-circuit path added for waiting on a queue that already had a
pending wake was able to return with interrupts disabled, which breaks
the API contract of wait_on() always returning with IF=1.
Fix this by adding a way to override the restored IF in ScopedCritical.
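Roughly, the early-return path now needs to do something like this (the
override helper's name is invented; the real API may be shaped
differently):

    ScopedCritical critical;
    if (m_pending_wake) {
        m_pending_wake = false;
        // Ask the ScopedCritical to leave interrupts enabled on unwind
        // instead of restoring the disabled state captured on entry.
        critical.set_interrupt_flag_on_destruction(true);
        return BlockResult::NotBlocked;
    }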
If WaitQueue::wake_all, WaitQueue::wake_one, or WaitQueue::wake_n
is called but nobody is currently waiting, we should remember that
fact and prevent someone from waiting after such a request. This
solves a race condition where the Finalizer thread is notified
to finalize a thread, but it is not (yet) waiting on this queue.
Fixes #2693
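A self-contained userspace analogy of the fix, with a mutex and
condition variable standing in for the kernel primitives (names made
up; spurious wakeups ignored for brevity):

    #include <condition_variable>
    #include <mutex>

    class PendingWakeQueue {
    public:
        void wake_one()
        {
            std::lock_guard lock(m_mutex);
            if (m_waiters == 0) {
                m_pending_wake = true;  // nobody is waiting yet; remember the wake
                return;
            }
            m_cv.notify_one();
        }

        void wait()
        {
            std::unique_lock lock(m_mutex);
            if (m_pending_wake) {       // a wake arrived before we started waiting
                m_pending_wake = false;
                return;
            }
            ++m_waiters;
            m_cv.wait(lock);
            --m_waiters;
        }

    private:
        std::mutex m_mutex;
        std::condition_variable m_cv;
        bool m_pending_wake { false };
        int m_waiters { 0 };
    };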
These changes solve a number of problems with the software
context switching:
* The scheduler lock really should be held throughout context switches
* Transitioning from the initial (idle) thread to another needs to
hold the scheduler lock
* Transitioning from a dying thread to another also needs to hold
the scheduler lock
* Dying threads cannot necessarily be finalized if they haven't
been switched out of yet, so flag them as active while a processor
is running them (the Running state may be switched to Dying while
the thread is actually still running)
We need to be able to prevent a WaitQueue from being
modified by another CPU. So, add a SpinLock to it.
Because this pushes some other class over the 64 byte
limit, we also need to add another 128-byte bucket to
the slab allocator.
The Lock class is still allowed to pass no reason, but everything else
must now pass a reason to Thread::wait_on. This makes it
easier to diagnose why a Thread is in Queued state.
FileBackedFileSystem is one that's backed by (mounted from) a file, in other
words one that has a "source" of the mount; that doesn't mean it deals in
blocks. The hierarchy now becomes:
* FS
  * ProcFS
  * DevPtsFS
  * TmpFS
  * FileBackedFS
    * (future) Plan9FS
    * BlockBasedFS
      * Ext2FS
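In code terms the split is roughly (bodies elided; only the inheritance
relationships matter here):

    class FS { /* ... */ };
    class ProcFS final : public FS { };
    class DevPtsFS final : public FS { };
    class TmpFS final : public FS { };
    class FileBackedFS : public FS { };            // mounted from a "source" file
    class BlockBasedFS : public FileBackedFS { };  // additionally does block-granular I/O
    class Ext2FS final : public BlockBasedFS { };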