After improving the minefield generation method, fields with more than
50% mines no longer take too long to generate, so the mine limit for a
given board size can be raised to its maximum possible value.
In the reset() function, the icons of the labels in the game area were
initially set to either the mine icon or null, and only the number icon
was assigned after generation. With the old field generation algorithm
this did not cause a very visible issue (the mine icons displayed on the
game over screen came from a previously generated game field, which was
only slightly wrong).
However, the newer field generation caused no mine icons to be shown on
the game over screen at all. To fix that, the label icon is now set to
null initially, and then set to either a mine or a number bitmap.
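A purely illustrative sketch of that ordering follows; the `Square` and
`Bitmap` types below are stand-ins, not Minesweeper's real classes:
```cpp
#include <cstddef>
#include <vector>

struct Bitmap { };
struct Square {
    bool has_mine { false };
    size_t number { 0 };
    Bitmap const* icon { nullptr };
};

// Clear every icon before (re)generating the field, then assign the real
// bitmap (mine or number) only once the new field is known.
void reset(std::vector<Square>& squares, Bitmap const& mine_bitmap,
    std::vector<Bitmap> const& number_bitmaps)
{
    for (auto& square : squares)
        square.icon = nullptr;

    // ... field generation would happen here ...

    for (auto& square : squares)
        square.icon = square.has_mine ? &mine_bitmap : &number_bitmaps[square.number];
}
```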
The existing method simply generated random fields until one fit our
criteria. While this worked OK in most cases, the run time grew
dramatically on boards whose mine count to board size ratio was too big.
The new approach simply generates every possible mine location, shuffles
the array, and picks mines from its head. This uses more memory (which
shouldn't be a big deal, since Minesweeper boards are generally
minuscule) but runs much quicker. The generation could still use some
improvement (regarding error handling), though :^)
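A minimal sketch of the idea, written with standard-library helpers
rather than whatever the game actually uses (function and parameter
names are made up):
```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// Enumerate every cell index, shuffle, and keep the first `mine_count`
// entries as mine locations. Runtime no longer depends on mine density.
std::vector<size_t> generate_mines(size_t rows, size_t columns, size_t mine_count)
{
    std::vector<size_t> cells(rows * columns);
    for (size_t i = 0; i < cells.size(); ++i)
        cells[i] = i;

    static std::mt19937 rng { std::random_device {}() };
    std::shuffle(cells.begin(), cells.end(), rng);

    cells.resize(std::min(mine_count, cells.size())); // keep the head of the shuffled array
    return cells;
}
```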
This allows us to either pass a reference, which keeps compatibility
with old code, or to pass a NonnullOwnPtr, which allows us to
comfortably chain streams as usual.
This essentially wraps a `NonnullOwnPtr` or a reference, allowing us to
either have a stream own a dependent stream that it uses or to just hold
a reference if a stream is already owned by somebody else and we just
want to use it temporarily.
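Roughly, such a wrapper might look like the sketch below; the names and
details are illustrative, not the exact AK class:
```cpp
#include <AK/NonnullOwnPtr.h>
#include <AK/Variant.h>

// Either owns the wrapped object outright or merely borrows it; in both
// cases the user just dereferences the handle.
template<typename T>
class MaybeOwned {
public:
    MaybeOwned(NonnullOwnPtr<T> owned)
        : m_handle(move(owned))
    {
    }

    MaybeOwned(T& borrowed)
        : m_handle(&borrowed)
    {
    }

    T* ptr()
    {
        return m_handle.visit(
            [](NonnullOwnPtr<T>& owned) { return owned.ptr(); },
            [](T* borrowed) { return borrowed; });
    }

    T* operator->() { return ptr(); }
    T& operator*() { return *ptr(); }

private:
    Variant<NonnullOwnPtr<T>, T*> m_handle;
};
```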
We were previously using a 32-bit unsigned integer for this, which
caused us to start truncating region sizes when multiplied by
`PAGE_SIZE` on hardware with a lot of memory.
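To illustrate the failure mode (assuming 4 KiB pages; the values here
are made up):
```cpp
#include <cstdint>
#include <cstdio>

static constexpr uint32_t PAGE_SIZE = 4096; // assuming 4 KiB pages

int main()
{
    uint32_t page_count = 0x100000; // 1 Mi pages, i.e. a 4 GiB region

    // The multiplication happens in 32 bits and wraps around before being
    // widened, silently truncating the region size to 0 here.
    uint64_t truncated = page_count * PAGE_SIZE;
    uint64_t correct = static_cast<uint64_t>(page_count) * PAGE_SIZE;

    printf("truncated: %llu, correct: %llu\n",
        (unsigned long long)truncated, (unsigned long long)correct);
    return 0;
}
```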
Implement insertion sort in AK. The cutoff value 7 is a magic number
here; values in [5, 15] should work well. The main idea of the cutoff is
to reduce the recursion performed by quicksort, speeding up the sorting
of small partitions.
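A simplified sketch of the cutoff idea (not the AK implementation; the
partition scheme here is a plain Lomuto one):
```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Sort v[lo..hi] (inclusive) by shifting each element left into place.
template<typename T>
void insertion_sort(std::vector<T>& v, size_t lo, size_t hi)
{
    for (size_t i = lo + 1; i <= hi; ++i) {
        T key = std::move(v[i]);
        size_t j = i;
        for (; j > lo && key < v[j - 1]; --j)
            v[j] = std::move(v[j - 1]);
        v[j] = std::move(key);
    }
}

template<typename T>
void quick_sort(std::vector<T>& v, size_t lo, size_t hi)
{
    if (hi <= lo)
        return;

    // Cutoff: small partitions are cheaper to finish with insertion sort
    // than with further recursion.
    if (hi - lo < 7) {
        insertion_sort(v, lo, hi);
        return;
    }

    T pivot = v[hi];
    size_t i = lo;
    for (size_t j = lo; j < hi; ++j) {
        if (v[j] < pivot)
            std::swap(v[i++], v[j]);
    }
    std::swap(v[i], v[hi]);

    if (i > lo)
        quick_sort(v, lo, i - 1);
    quick_sort(v, i + 1, hi);
}
```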
This currently doesn't work when running Serenity through QEMU, as QEMU
doesn't pass the side button events through to Serenity due to some bug
or missing feature.
This would previously crash with a heap UAF when storing the result of
`yield 1` into `e` on the second `next` call:
```js
function* a() { const e = yield 1; }
b = a();
b.next();
gc();
b.next();
```
This generally seems like a better name, especially if we somehow also
need a better name for "read the entire buffer, but not the entire file"
somewhere down the line.
Next to functions like `is_eof`, these were really confusing to use, and
the `read`/`write` functions should fail anyway if a stream is not
readable/writable.
:yakkie:
The build process for the Zig compiler is more involved than most of
the other ports, because the Zig compiler is mostly self-hosting. In
order to build it, the zig-bootstrap build system is used, which does
the following:
1) Build LLVM for the host OS;
2) Build Zig for the host OS with the SerenityOS target enabled;
3) Build zlib, zstd and LLVM for SerenityOS using `zig cc` as the C/C++
compiler;
4) Build Zig for SerenityOS using the host Zig.
A few hacks are required in order to tell `zig cc` and Zig what
Serenity's libc looks like during the build process, but other than that
it's fairly straightforward. All of the patches included with this
commit are ready to go upstream to Zig once the LLVM patches are
upstreamed.
When using `cmake --build`, CMake will look for this environment
variable to enable parallelism. The Zig port, for example, uses
`cmake --build` and would otherwise use a single core if CMake selects
Make as the build system. This should help with all ports that use
`cmake --build`.
Some programs explicitly ask for a different initial stack size than
what the OS provides. In ELF, this is expressed via a PT_GNU_STACK
program header whose p_memsz is set to the amount the program requires.
This commit implements that policy by reading the header's p_memsz and
setting the main thread's stack size to that value.
ELF::Image::validate_program_headers ensures that the size attribute is
a reasonable value.
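A rough sketch of that policy (not the actual kernel code; the default
size and the "take the larger of the two" choice are assumptions):
```cpp
#include <algorithm>
#include <cstddef>
#include <elf.h>

static constexpr size_t default_stack_size = 1 * 1024 * 1024; // assumed default

// Walk the program headers and let a PT_GNU_STACK entry with a non-zero
// p_memsz override the default main-thread stack size.
size_t main_thread_stack_size(Elf64_Phdr const* phdrs, size_t phdr_count)
{
    size_t stack_size = default_stack_size;
    for (size_t i = 0; i < phdr_count; ++i) {
        if (phdrs[i].p_type == PT_GNU_STACK && phdrs[i].p_memsz != 0)
            stack_size = std::max(stack_size, static_cast<size_t>(phdrs[i].p_memsz));
    }
    return stack_size;
}
```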
This commit makes it possible for a process to downgrade a file lock it
holds from a write (exclusive) lock to a read (shared) lock. For this,
the process must point to the exact range of the flock, and must be the
owner of the lock.
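In terms of the POSIX fcntl() interface, a downgrade looks roughly like
this (the file name and byte range are made up):
```cpp
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    int fd = open("/tmp/example.dat", O_RDWR | O_CREAT, 0644); // hypothetical file
    if (fd < 0)
        return 1;

    struct flock region {};
    region.l_type = F_WRLCK; // exclusive (write) lock
    region.l_whence = SEEK_SET;
    region.l_start = 0;
    region.l_len = 4096;
    if (fcntl(fd, F_SETLK, &region) < 0)
        perror("acquire");

    // Downgrade: same owner, same exact range, just a weaker lock type.
    region.l_type = F_RDLCK; // shared (read) lock
    if (fcntl(fd, F_SETLK, &region) < 0)
        perror("downgrade");

    close(fd);
    return 0;
}
```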
This replaces all state-related variables with a single ThreadState.
These are simplified over what the Kernel has, but capture all
userspace-available thread state.
Locking the state behind an atomic and using proper atomic operations
also gets rid of quite some deadlocks and race conditions that have
existed around m_tid and others beforehand.
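A hypothetical sketch of the mechanism (the state names and the
detach() policy below are illustrative, not the real LibThreading code):
```cpp
#include <atomic>
#include <cstdlib>

enum class ThreadState {
    Startable,
    Running,
    Detached,
    Exited,
    Joined,
};

class Thread {
public:
    void detach()
    {
        // A single compare-exchange decides the race between detach() and
        // thread exit; if the thread is no longer running, crash loudly
        // instead of returning an error code that would likely be ignored.
        auto expected = ThreadState::Running;
        if (!m_state.compare_exchange_strong(expected, ThreadState::Detached))
            abort();
    }

private:
    std::atomic<ThreadState> m_state { ThreadState::Startable };
};
```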
In terms of behavior, this introduces the following changes:
- All thread state mishandling (e.g. joining a detached thread) crashes
the program. Mishandling thread state is a severe kind of concurrency
bug that might also be nondeterministic, so letting it silently
disappear with the return value of pthread_ APIs is a bad idea. The
thread state can always be checked beforehand to ensure that no crash
happens.
- Destructing a still-running thread will crash in AK/Function, so the
Thread destructor issues its own warning for debugging purposes.
- Thread issues warnings before crashes in many places to aid
concurrency debugging (the most difficult kind of debugging).
- Joining dead but not detached threads is legal, as per POSIX APIs.
- The thread ID is never reset to 0 after the thread has been started
and subsequently been assigned a valid thread ID. The thread's exit
state is still obtainable.
- Detaching threads that are about to exit is considered a programming
bug and will often (not always, as we can't catch all execution
sequences involved in such a situation) crash the program on purpose.
If you want to detach a thread that will definitely exit on its own,
you have to prevent it from exiting before detach() was called (e.g.
with an "exit requested" flag).
Joining dead threads is allowed for two main reasons:
- Thread join behavior should not be racy when a thread is joined and
exiting at roughly the same time. This is common behavior when threads
are given a signal to end (meaning they are going to exit ASAP) and
then joined.
- POSIX requires that exited threads are joinable (at least, there is no
language in the specification forbidding it).
The behavior is still well-defined; e.g. it doesn't allow a dead
detached thread to be joined or a thread to be joined more than once.