This is obviously not ideal, and it would be better to teach it how to
allocate more pages, etc. But since the physical page allocator itself
currently uses SlabAllocator, it's a little bit tricky :^)
Make sure we don't move accepted sockets to the Completed setup state
until we've actually constructed a FileDescription for them.
This is important, since this state transition will trigger connect()
to unblock on the client side, and the client may try writing to the
socket right away.
This makes DNS lookups way more reliable since we don't just fail to
write() right after connect()ing to LookupServer sometimes. :^)
Moving to the Completed setup state too early was causing connect() to
unblock immediately for local sockets, since that state is exactly what
ConnectBlocker checks for.
Instead, just move to SetupState::Completed when it's accept()ed.
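Roughly, the ordering in accept() should be (a sketch; the queue and
factory names here are illustrative, not the exact kernel API):
    // Build the FileDescription *before* flipping the state, so a
    // connect()ing client can't observe Completed with nothing to
    // talk to yet.
    auto client = m_pending_clients.take_first();
    auto description = FileDescription::create(*client);
    client->set_setup_state(Socket::SetupState::Completed); // connect() may now unblock
    return description;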
The Cache entries found in `DiskBackedFileSystem` are now stored in a
`KBuffer` object instead of relying on `kmalloc_eternal`. The cache had
grown larger than the number of bytes set aside for `kmalloc_eternal`,
which in turn caused `mount()` to fail epically when called.
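Sketchwise, with a hypothetical CacheEntry struct (treat the
KBuffer::create_with_size() factory as an assumption too):
    // One anonymous KBuffer holds all cache entries; nothing is
    // allocated from the kmalloc_eternal pool anymore.
    m_cached_block_data = KBuffer::create_with_size(m_entry_count * m_block_size);
    m_entries = KBuffer::create_with_size(m_entry_count * sizeof(CacheEntry));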
Now programs can catch the SIGSEGV signal when they segfault.
This commit also introduced the send_urgent_signal_to_self method,
which is needed to send signals to a thread when handling exceptions
caused by the same thread.
Added the exception_code field to RegisterDump, removing the need
for RegisterDumpWithExceptionCode. To accomplish this, I had to
push a dummy exception code during some interrupt entries to properly
pad out the RegisterDump. Note that we also needed to change some code
in sys$sigreturn to deal with the new RegisterDump layout.
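For vectors where the CPU doesn't push an error code of its own (page
faults do, divide errors don't), the entry stub now pushes a dummy
value first. An illustrative stub, not the exact kernel asm:
    asm(
        ".globl divide_error_asm_entry\n"
        "divide_error_asm_entry:\n"
        "    pushl $0x0\n" // dummy exception_code, pads out RegisterDump
        "    pusha\n"      // GPRs land on top of it, matching the struct
        "    call divide_error_handler\n");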
Also added a script to handle creation of a GPT-partitioned disk (with
a GRUB config file). The block limit will be used to disallow potential
access to other partitions.
This patch adds three separate per-process fault counters (sketched
below):
- Inode faults
An inode fault happens when we've memory-mapped a file from disk
and we end up having to load 1 page (4KB) of the file into memory.
- Zero faults
Memory returned by mmap() is lazily zeroed out. Every time we have
to zero out 1 page, we count a zero fault.
- CoW faults
VM objects can be shared by multiple mappings that make their own
unique copy iff they want to modify it. The typical reason here is
memory shared between a parent and child process.
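A minimal sketch of where they get bumped, assuming hypothetical
Process::did_*_fault() helpers inside Region::handle_fault():
    if (should_cow(page_index_in_region)) {
        process().did_cow_fault();   // ++m_cow_faults
        return handle_cow_fault(page_index_in_region);
    }
    if (vmobject().is_inode()) {
        process().did_inode_fault(); // ++m_inode_faults
        return handle_inode_fault(page_index_in_region);
    }
    process().did_zero_fault();      // ++m_zero_faults
    return handle_zero_fault(page_index_in_region);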
If we didn't find anything else that wants to run, we don't need to
update the current thread's TSS since we're just gonna return to the
same thread anyway.
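In scheduler terms the early-out is just (sketch):
    // Nothing else wants to run: keep the current thread and skip the
    // TSS update / context switch entirely.
    if (next_thread == current)
        return false;
    return context_switch(*next_thread); // only now touch the TSS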
This patch overrides Inode::is_directory() with a faster version that
doesn't require instantiating the whole InodeMetadata.
If you have an Ext2FSInode&, calling is_directory() should be instant
since we can just look directly at the raw inode bits.
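A sketch of that fast path, assuming the raw on-disk inode is cached
as m_raw_inode:
    bool Ext2FSInode::is_directory() const
    {
        // S_IFDIR check straight on the ext2 mode bits; no
        // InodeMetadata instantiation required.
        return (m_raw_inode.i_mode & 0170000) == 0040000;
    }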
We need these for PhysicalPage objects. Ultimately I'd like to get rid
of these objects entirely, but while we still have to deal with them,
let's at least handle large demand a bit better.
Instead of allocating and populating a Copy-on-Write bitmap for each
Region up front, wait until we actually clone the Region for sharing
with another process.
In most cases, we never need any CoW bits and we save ourselves a lot
of kmalloc() memory and time.
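Sketchwise, with an OwnPtr<Bitmap> m_cow_map member (names assumed):
    Bitmap& Region::ensure_cow_map()
    {
        // First clone: every page starts out shared, i.e. CoW.
        if (!m_cow_map)
            m_cow_map = make<Bitmap>(page_count(), true);
        return *m_cow_map;
    }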
When splitting a Region that's already the result of an earlier split,
we have to take the Region's offset-in-VMObject into account since it
may be non-zero.
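The arithmetic that was missing the base offset, roughly:
    // Both halves inherit the parent's offset into the VMObject. For a
    // region produced by an earlier split, that base is non-zero, and
    // dropping it makes the new regions point at the wrong pages.
    size_t left_offset_in_vmobject = m_offset_in_vmobject;
    size_t right_offset_in_vmobject = m_offset_in_vmobject + split_offset;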
If we want to make a new entry in the disk cache when it's completely
full of dirty blocks, we'll now synchronously flush the writes at that
point. Maybe it's not ideal, but at least we can keep going.
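Sketch of the eviction path with the synchronous fallback (helper names
hypothetical):
    CacheEntry& DiskCache::get(u32 block_index)
    {
        if (auto* entry = find_reusable_entry())
            return *entry;
        // Every slot is dirty: write them all back now so we can evict.
        m_fs.flush_writes();
        return *find_reusable_entry(); // can't fail after the flush
    }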
This way clients are not required to have instantiated ByteBuffers
and can choose whatever memory scheme works best for them.
Also converted some of the Ext2FS code to use stack buffers instead.
Instead of having DiskBackedFS allocate a ByteBuffer, leave it to each
client to provide buffer space.
This is significantly faster in many cases where we can use a stack
buffer and avoid heap allocation entirely.
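The shape of the change, with an illustrative signature:
    // The FS reads into memory the caller owns; a stack buffer avoids
    // the heap entirely.
    u8 buffer[4096];
    bool success = fs().read_block(block_index, buffer);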
The hashmap cache was ridiculously slow and hurt us more than it helped
us. This patch replaces it with a flat memory cache that keeps up to
10'000 blocks in cache with a simple dirty bit.
The syncd task will wake up periodically and call flush_writes() on all
file systems, which now causes us to traverse the cache and write all
dirty blocks to disk.
There's a ton of room for improvement here, but this itself is already
drastically better when doing repeated GCC invocations.
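Roughly what a flat cache slot looks like (field names illustrative):
    struct CacheEntry {
        u32 block_index { 0 };
        bool has_data { false };
        bool is_dirty { false };
        // Payload lives at a fixed slot in one big flat buffer.
    };
    // flush_writes() walks all entries, writes back the dirty ones,
    // and clears their dirty bits.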
This is a simple command that can be used to display HTML from a given
file, or from the standard input, in an HtmlView. It replaces the `tho`
(test HTML output) command.
It turns out some BIOS vendors don't support non-BCD date/time mode, but
we were relying on it being available. We no longer do this, but instead
check whether the BIOS claims to provide BCD or regular binary values for
its date/time data.
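Concretely, bit 2 of CMOS Status Register B ("DM") tells us the
encoding; a sketch, assuming a CMOS::read(index) helper:
    static bool rtc_reports_bcd()
    {
        // Status Register B, bit 2: set = binary, clear = BCD.
        return !(CMOS::read(0x0b) & 0x04);
    }
    static u8 decode_rtc_value(u8 raw)
    {
        return rtc_reports_bcd() ? ((raw >> 4) * 10 + (raw & 0x0f)) : raw;
    }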
We were just blindly trusting that the bootloader would only give us
page-aligned memory regions. This is apparently not always the case,
so now we can try to repair those regions.
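Repairing means clamping to 4 KiB boundaries, roughly:
    // For each reported region: round the base up and the end down;
    // drop the region if nothing page-aligned survives.
    static constexpr uintptr_t page_mask = 0xfff;
    uintptr_t new_base = (base + page_mask) & ~page_mask;
    uintptr_t new_end = (base + length) & ~page_mask;
    if (new_end <= new_base)
        continue; // too small to salvage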
Fixes #601
We were always returning the full VM range of the partially-unmapped
Region to the range allocator. This caused us to re-use those addresses
for subsequent VM allocations.
This patch also skips creating a new VMObject in partial munmap().
Instead we just make split regions that point into the same VMObject.
This fixes the mysterious GCC ICE on large C++ programs.
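The intended shape, with illustrative names:
    // Carve the hole out of the region; both remainders keep pointing
    // at the original VMObject (no copy), and only the hole itself
    // goes back to the range allocator.
    auto remainders = region.split_around(hole_range);
    range_allocator().deallocate(hole_range); // NOT region.range()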
This simplifies the ownership model and makes Region easier to reason
about. Userspace Regions are now primarily kept by Process::m_regions.
Kernel Regions are kept in various OwnPtr<Region>s.
Regions now only ever get unmapped when they are destroyed.
If we can't already read when we enter recvfrom() on a LocalSocket,
we'll now block the current thread until we can.
Also added a buffer_for(FileDescription&) helper so that the client
and server can share some of the code. :^)
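The helper from the commit, with illustrative internals:
    DoubleBuffer& LocalSocket::buffer_for(FileDescription& description)
    {
        // Each side reads from the buffer the other side writes into.
        if (&description == m_accept_side_description)
            return m_for_server;
        return m_for_client;
    }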
1) Off-by-one in block allocation when block size != 1 KB
Due to a quirk in the on-disk layout of ext2, file systems with a block
size of 1 KB have block #1 as their first block, while all others start
on block #0.
2) We had no fallback mechanism when the preferred group was full
We now allocate blocks from the preferred block group for as long as
that's possible, and fall back to a simple scan through all groups when
the preferred one is full. Both fixes are sketched below.
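With illustrative helper names:
    // 1) 1 KiB file systems number their first block 1, not 0.
    unsigned first_block = (block_size() == 1024) ? 1 : 0;
    // 2) Preferred group first, then a simple scan over the rest.
    auto block = allocate_block_in_group(preferred_group); // Optional<unsigned> here
    for (unsigned group = 0; !block.has_value() && group < group_count(); ++group) {
        if (group == preferred_group)
            continue;
        block = allocate_block_in_group(group);
    }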
Put one unused page on each side of VM allocations to make invalid
accesses more likely to generate crashes.
Note that we will not add this guard padding for mmap() at a specific
memory address, only to "mmap it anywhere" requests.
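The allocator-side trick, roughly:
    // Ask for two extra pages, hand out only the middle. The guard
    // pages stay unmapped, so off-by-one accesses fault immediately.
    size_t effective_size = size + 2 * PAGE_SIZE;
    auto padded = allocate_range_anywhere(effective_size);
    return Range { padded.base().offset(PAGE_SIZE), size };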
Made getsockopt() and setsockopt() virtual so we can handle them in the
various Socket subclasses. The subclasses map kinda nicely to "levels".
This will allow us to implement things like "traceroute", although...
I spent some time trying to do that, but then hit a wall when it turned
out that the user-mode networking in QEMU doesn't preserve TTL in the
ICMP packets passing through.
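The dispatch shape this enables, as a sketch (the IP_TTL handling here
is illustrative):
    KResult IPv4Socket::setsockopt(int level, int option, const void* value, socklen_t value_size)
    {
        if (level != IPPROTO_IP)
            return Socket::setsockopt(level, option, value, value_size); // base handles SOL_SOCKET
        switch (option) {
        case IP_TTL:
            if (value_size < sizeof(int))
                return KResult(-EINVAL);
            m_ttl = *(const int*)value;
            return KSuccess;
        default:
            return KResult(-ENOPROTOOPT);
        }
    }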