The kernel now bills processes for time spent in kernelspace and userspace
separately. The accounting is forwarded to the parent process in reap().
This makes the "time" builtin in bash work.
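A minimal sketch of the bookkeeping, assuming per-process tick counters
(all names here are illustrative, not the exact ones in the tree):

    // Bumped from the timer interrupt: bill the current process one tick,
    // attributed to kernel or user time depending on where we interrupted.
    void Process::account_tick(bool was_in_kernel)
    {
        if (was_in_kernel)
            ++m_ticks_in_kernel;
        else
            ++m_ticks_in_user;
    }

    // In reap(): fold the dead child's counters into the parent so that
    // wait() (and thus bash's "time" builtin) can report them.
    void Process::reap_child_times(Process& child)
    {
        m_ticks_in_kernel_for_dead_children += child.m_ticks_in_kernel;
        m_ticks_in_user_for_dead_children += child.m_ticks_in_user;
    }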
Clang demands that the size argument to the various operator new()s
be exactly whatever it thinks std::size_t is.
Since we can't include STL headers, this little trick will do.
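The trick boils down to deriving the type from sizeof itself, something
like this (a sketch of the idea, not necessarily the exact code):

    // sizeof(...) yields std::size_t by definition, so decltype() recovers
    // exactly the type Clang expects, without including <new> or <cstddef>.
    using new_size_t = decltype(sizeof(0));

    void* operator new(new_size_t size);
    void* operator new[](new_size_t size);
    void operator delete(void* ptr);
    void operator delete[](void* ptr);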
This way the scheduler doesn't need to plumb the exit status into the waiter.
We still plumb the waitee pid through; I don't love it, but it can be fixed.
I feel like this concept might be useful in more places. It's very naive
right now and uses dynamically growing buffers. It should really use
fixed-size buffers and never kmalloc().
...by adding a new class called Ext2Inode that inherits CoreInode.
The idea is that a vnode will wrap a CoreInode rather than InodeIdentifier.
Each CoreInode subclass can keep whatever caches it likes.
Right now, Ext2Inode caches the list of block indices since it can be very
expensive to retrieve.
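Roughly the shape of it (a sketch; member and method names are
assumptions):

    class CoreInode : public Retainable<CoreInode> {
    public:
        virtual ~CoreInode() { }
        InodeIdentifier identifier() const { return m_identifier; }
        virtual ByteBuffer read_entire_contents() = 0;
    protected:
        explicit CoreInode(InodeIdentifier identifier) : m_identifier(identifier) { }
        InodeIdentifier m_identifier;
    };

    class Ext2Inode final : public CoreInode {
    public:
        virtual ByteBuffer read_entire_contents() override;
    private:
        // Populated lazily; walking the indirect blocks is expensive,
        // so the block list is only computed once per inode.
        Vector<unsigned> m_block_list;
    };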
I was surprised to find that dup()'ed fds don't share the close-on-exec flag.
That means it has to be stored separately from the FileDescriptor object.
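So the per-process fd table stores the flag next to the descriptor
rather than inside it, roughly like this (sketch; names are assumptions):

    // dup()'ed fds share the underlying FileDescriptor, but each numeric fd
    // carries its own close-on-exec flag, checked at exec() time.
    struct FileDescriptorAndFlags {
        RetainPtr<FileDescriptor> descriptor;
        bool close_on_exec { false };
    };

    Vector<FileDescriptorAndFlags> m_fds;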
Pass the file name in a stack-allocated buffer instead of using an AK::String
when iterating directories. This dramatically reduces the number of cycles
spent traversing the filesystem.
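Conceptually the traversal now hands out entries backed by a fixed-size,
stack-allocated buffer instead of building an AK::String per name
(sketch; the struct and callback are illustrative):

    struct DirectoryEntry {
        // The name lives in a buffer on the iterator's stack; callers that
        // want to keep it past the callback must copy it out themselves.
        char name[256];
        size_t name_length { 0 };
        InodeIdentifier inode;
    };

    // Invoked once per entry; returning false stops the traversal.
    void for_each_entry(Function<bool(const DirectoryEntry&)> callback);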
- Process::exec() needs to restore the original paging scope when called
on a non-current process.
- Add missing InterruptDisabler guards around g_processes access
  (see the sketch below.)
- Only flush the TLB when modifying the active page tables.
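For the second item, the guard pattern looks roughly like this
(InterruptDisabler is the kernel's RAII interrupt guard; the traversal
helper itself is illustrative):

    void for_each_process(Function<void(Process&)> callback)
    {
        // g_processes is also touched from the timer interrupt, so any
        // traversal from process context must run with interrupts disabled.
        InterruptDisabler disabler;
        for (auto* process = g_processes->head(); process; process = process->next())
            callback(*process);
    }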
First of all, change sys$mmap to take a struct SC_mmap_params since our
syscall calling convention can't handle more than 3 arguments.
This exposed a bug in Syscall::invoke(): its inline assembly needs proper
clobber lists. It was a bit confusing to debug. :^)
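The params struct bundles all six mmap arguments behind a single pointer,
roughly like this (field layout and the userspace wrapper are assumptions
for illustration; dword is the 32-bit unsigned typedef):

    struct SC_mmap_params {
        dword addr;
        dword size;
        int32_t prot;
        int32_t flags;
        int32_t fd;
        int32_t offset;
    };

    void* mmap(void* addr, size_t size, int prot, int flags, int fd, off_t offset)
    {
        SC_mmap_params params { (dword)addr, (dword)size, prot, flags, fd, (int32_t)offset };
        // Only one register argument is needed, which fits the 3-argument
        // syscall convention; the kernel copies the struct from userspace.
        int rc = Syscall::invoke(Syscall::SC_mmap, (dword)&params);
        return (void*)rc;
    }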
sys$fork() now clones all writable regions with per-page COW bits.
The pages are then mapped read-only and we handle a PF by COWing the pages.
This is quite delightful. Obviously there's lots of work to do still,
and it needs better data structures, but the general concept works.
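The page-fault side of it looks conceptually like this (a sketch of the
mechanism; names are illustrative):

    PageFaultResponse MemoryManager::handle_page_fault(const PageFault& fault)
    {
        auto* region = region_containing(current_process(), fault.address());
        if (!region)
            return PageFaultResponse::ShouldCrash;

        auto page_index = region->page_index_from_address(fault.address());
        if (fault.is_write() && region->cow_map.get(page_index)) {
            // Copy the shared physical page into a fresh one, clear this
            // page's COW bit, and remap it read-write for the faulting process.
            copy_physical_page_for_write(*region, page_index);
            region->cow_map.set(page_index, false);
            remap_region_page(*region, page_index, /* writable: */ true);
            return PageFaultResponse::Continue;
        }
        return PageFaultResponse::ShouldCrash;
    }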
This was the fix:
-process.m_page_directory[0] = m_kernel_page_directory[0];
-process.m_page_directory[1] = m_kernel_page_directory[1];
+process.m_page_directory->entries[0] = m_kernel_page_directory->entries[0];
+process.m_page_directory->entries[1] = m_kernel_page_directory->entries[1];
I spent a good two hours scratching my head, not being able to figure out why
user process page directories felt they had ownership of page tables in the
kernel page directory.
It was because I was copying the entire damn kernel page directory into
the process instead of only sharing the first two PDEs. Dang!
The SpinLock was all backwards and didn't actually work. Fixing it exposed
how wrong most of the locking in this code is.
I need to come up with better lock granularity.
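For reference, the shape a working spinlock should have (a generic sketch
using compiler builtins, not the kernel's exact code):

    class SpinLock {
    public:
        void acquire()
        {
            // Spin until we are the one who flips the flag from 0 to 1.
            while (__sync_lock_test_and_set(&m_lock, 1))
                ;
        }
        void release()
        {
            __sync_lock_release(&m_lock);
        }
    private:
        volatile int m_lock { 0 };
    };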
This shows some info about the MM. Right now it's just the zone count
and the number of free physical pages. Lots more can be added.
Also added "exit" to sh so we can nest shells and exit from them.
I also noticed that we were leaking all the physical pages, so I fixed that.
I also added a generator cache to FileHandle. This way, reads from a
generated file (e.g. in a synthfs) can span multiple calls to read()
without the contents changing between calls.
The cache is discarded at EOF (or when the FileHandle is destroyed.)
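Conceptually, a read from a generated file goes through the cache like
this (sketch; member names and the ByteBuffer details are assumptions):

    ssize_t FileHandle::read_from_generator_cache(byte* buffer, size_t count)
    {
        // Generate the contents once, then serve every read() from the cached
        // snapshot so the data can't change between calls on the same handle.
        if (m_generator_cache.is_empty())
            m_generator_cache = m_vnode->generate_contents();

        size_t remaining = m_generator_cache.size() - m_current_offset;
        size_t nread = min(count, remaining);
        memcpy(buffer, m_generator_cache.pointer() + m_current_offset, nread);
        m_current_offset += nread;

        if (nread == 0)
            m_generator_cache = ByteBuffer();   // EOF: discard the cache
        return nread;
    }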
It's implemented as a separate process. How cute is that.
Tasks now have a current working directory. Spawned tasks inherit their
parent task's working directory.
Currently everyone just uses "/" as there's no way to chdir().
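Inheriting the cwd is just copying the parent's path when the task is
created (sketch; names are assumptions):

    // In Task:
    String m_cwd;

    // At spawn time the child starts out in its parent's directory,
    // falling back to "/" for tasks without a parent.
    if (parent)
        child->m_cwd = parent->m_cwd;
    else
        child->m_cwd = "/";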
I added a dead-simple malloc that only allows allocations < 4096 bytes.
It just forwards the request to mmap() every time.
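In other words, roughly this (a sketch of the idea using the usual mmap()
flags, not the exact code):

    #include <stddef.h>
    #include <sys/mman.h>

    void* malloc(size_t size)
    {
        // Only allocations smaller than a page are supported for now.
        if (size >= 4096)
            return NULL;
        // Grab a fresh anonymous page for every allocation; wasteful, but
        // dead simple until there's a real allocator.
        return mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
    }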
I also added simplified versions of opendir() and readdir().