Before 6490529ef7, all programs in the
SPECIAL_TARGETS list were built because they didn't have an
EXCLUDE_FROM_ALL property set.
That commit set the property for all targets, but because of this, the
minimal "Required" build configuration no longer built, as CMake failed
to rename every special target, even the ones that were not built.
This commit makes the rename action run only if the executable exists,
which lets us build Serenity without the `install` utility, and also
with the minimal configuration set. :^)
Previously the tool removed the Root directory after running the CMake
configure step, which broke the build, as some CMake rules put
necessary files there.
Moving the CMake command so it runs last makes it regenerate those
files automatically. :^)
This filesystem is based on the code of the long-lived TmpFS. It
differs from that filesystem in one key point: its root inode doesn't
have a sticky bit on it.
Therefore, we mount it on /dev, to ensure only root can modify files in
that directory. In addition to that, /tmp is mounted directly in the
SystemServer main (start) code, so it's no longer specified in the
fstab file. We ensure that /tmp has the sticky bit set and 0777
permissions on its root directory, which is certainly a special case
when using RAM-backed (and in general other) filesystems.
Because of these two changes, there's no longer a need to maintain
TmpFS as a separate filesystem, so it's removed (renamed to RAMFS).
The name RAMFS represents the purpose of this filesystem in a much
better way: it relies on being backed by RAM "storage", so it's easy to
conclude that it's temporary and volatile, and that its contents are
gone on either system shutdown or unmounting of the filesystem.
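
As a rough illustration of the /tmp handling described above, here is a
minimal sketch. It assumes a POSIX-like chmod() and a hypothetical
mount_ramfs_at() helper; it is not the actual SystemServer code.

    #include <sys/stat.h>

    // Hypothetical helper standing in for SystemServer's real mount logic.
    int mount_ramfs_at(char const* target);

    static int set_up_tmp()
    {
        if (mount_ramfs_at("/tmp") < 0)
            return -1;
        // 01777: rwx for user, group and others, plus the sticky bit, so
        // anyone can create files in /tmp but only the owner (or root)
        // can remove them.
        return chmod("/tmp", 01777);
    }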
Explaining custom build directories first, and only then following that
up with an explainer that Meta/serenity.sh is the easiest way to get
the browser up and running to try it out, was not very ergonomic.
Also reorganize some of the per-distro documentation to put the compiler
requirements front and center.
While the clipping logic was correct (current vs. new clipping path),
the clipping path contents weren't. This commit fixes that.
We calculate the clipping path in two places: when we set it to be the
whole page at graphics state creation time, and when we perform clipping
path intersection to calculate a new clipping path. The clipping path is
then used to limit painting by passing it to the painter (more
precisely, by passing its bounding box to the painter, as the latter
doesn't support arbitrary path clipping). For this last step, the
clipping path must be in device coordinates.
There was however a mix of coordinate systems involved in the creation,
update and usage of the clipping path:
* The initial values of the path (i.e., the whole page) were in user
coordinates.
* Clipping path intersection was performed against m_current_path,
which is in device coordinates.
* To perform the clipping operation, the current clipping path was
assumed to be in user coordinates.
This mix resulted in the clipping not working correctly depending on the
zoom level at which one visualised a page.
This commit fixes the issue by always keeping track of the clipping path
in device coordinates. This means that the initial full-page path is
now converted to device coordinates before putting it in the graphics
state, and that no mapping is performed when applying the clipping to
the painter.
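
As a minimal sketch of the coordinate handling above, here is a plain
affine transform mapping a user-space rectangle into device space. All
names are illustrative; the real code works on full paths via LibPDF
and LibGfx types.

    #include <algorithm>
    #include <cmath>

    // x' = a*x + c*y + e,  y' = b*x + d*y + f
    struct Transform {
        float a, b, c, d, e, f;
    };

    struct Rect {
        float x, y, width, height;
    };

    // Map an axis-aligned user-space rect into device space. Mapping two
    // opposite corners is enough for this sketch (no rotation); the real
    // code maps a full path, not just a rectangle.
    static Rect map_to_device(Transform const& ctm, Rect const& user_rect)
    {
        float x0 = ctm.a * user_rect.x + ctm.c * user_rect.y + ctm.e;
        float y0 = ctm.b * user_rect.x + ctm.d * user_rect.y + ctm.f;
        float x1 = ctm.a * (user_rect.x + user_rect.width)
            + ctm.c * (user_rect.y + user_rect.height) + ctm.e;
        float y1 = ctm.b * (user_rect.x + user_rect.width)
            + ctm.d * (user_rect.y + user_rect.height) + ctm.f;
        return { std::min(x0, x1), std::min(y0, y1),
            std::abs(x1 - x0), std::abs(y1 - y0) };
    }

    // At graphics state creation time:
    //   clip_rect = map_to_device(ctm, whole_page_in_user_space);
    // When painting, the clip is passed to the painter as-is, with no
    // further mapping, since it is already in device coordinates.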
Draw pieces around 80% of the size of a square, instead of 100%, so that
there is a nice gap around them. This feels more comfy, and makes it
actually possible to read the coordinates while a piece is on its
square.
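
For example, the inset piece rectangle could be computed like this (a
sketch with made-up names, not the actual ChessWidget code):

    // Center the piece at 80% of the square's size, leaving a margin on
    // every side so the square's coordinate label stays readable.
    struct SquareRect {
        int x, y, size;
    };

    static SquareRect piece_rect_for(SquareRect const& square)
    {
        int piece_size = square.size * 80 / 100;
        int margin = (square.size - piece_size) / 2;
        return { square.x + margin, square.y + margin, piece_size };
    }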
Also, the Tree object was only ever used by the TreeMapWidget, so its
creation can happen inside `analyze()`, fixing the memory leak issue.
Plus, it doesn't have to be RefCounted.
This is a first pass at implementing CRC2D.createPattern() and the
associated CanvasPattern object. This implementation only works for a
few of the required image sources [like CRC2D.drawImage()], and does
not yet support transforms. Other than that, it supports everything
else (which is mainly the various repeat modes).
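
As a hint of what the repeat modes amount to, here is a hedged sketch
of how the four repetition keywords translate into tiling decisions
(illustrative names only, not the LibWeb implementation):

    // Map CanvasPattern repetition keywords to the axes we tile along.
    enum class Repetition {
        Repeat,   // "repeat": tile in both directions
        RepeatX,  // "repeat-x": tile horizontally only
        RepeatY,  // "repeat-y": tile vertically only
        NoRepeat, // "no-repeat": draw the source image once
    };

    struct TileAxes {
        bool horizontally { false };
        bool vertically { false };
    };

    static TileAxes tile_axes_for(Repetition repetition)
    {
        switch (repetition) {
        case Repetition::Repeat:
            return { true, true };
        case Repetition::RepeatX:
            return { true, false };
        case Repetition::RepeatY:
            return { false, true };
        case Repetition::NoRepeat:
            return { false, false };
        }
        return {};
    }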
SQLClient exists as a wrapper around SQL IPC to provide a somewhat
friendlier interface for clients to deal with. Though right now, it
mostly forwards values as-is from IPC to the clients. This makes it a
bit verbose to add values to IPC responses, as we then have to add them
to the callbacks used by all clients. It's also a bit confusing seeing
a sea of "auto" as the parameter types for these callbacks.
This patch moves these response values to named structures instead. This
will allow adding values without needing to simultaneously update all
clients. We can then separately handle the new values in interested
clients only.
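
A hedged sketch of the shape this takes; the struct, its fields and the
callback name are illustrative, not the actual SQLClient declarations:

    #include <cstddef>
    #include <cstdint>
    #include <functional>

    // Before: callbacks took a growing list of loose values, so every new
    // response field meant touching every client:
    //   std::function<void(uint64_t, bool, size_t)> on_execution_success;
    //
    // After: one named structure per response; new fields can be added
    // here without breaking existing callback signatures.
    struct ExecutionSuccess {
        uint64_t statement_id { 0 };
        bool has_results { false };
        size_t rows_updated { 0 };
    };

    struct SQLClientSketch {
        std::function<void(ExecutionSuccess const&)> on_execution_success;
    };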
To do this properly, we also build the device node names as formatted
Strings, taking formatting errors into account.
Also, we use the LibCore System mknod method instead of raw LibC
functions, so we can propagate errors from these calls too.
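
A rough sketch of the pattern; the /dev/hd naming and this helper are
illustrative, while String::formatted(), TRY() and Core::System::mknod()
are the real AK/LibCore interfaces referred to above:

    #include <AK/Error.h>
    #include <AK/String.h>
    #include <AK/Try.h>
    #include <LibCore/System.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    static ErrorOr<void> create_block_device_node(unsigned index, dev_t device)
    {
        // String::formatted() returns ErrorOr<String>, so formatting and
        // allocation failures propagate instead of being ignored.
        auto path = TRY(String::formatted("/dev/hd{}", index));
        // Core::System::mknod() wraps the syscall and turns errno into an
        // Error, which TRY() propagates to the caller.
        TRY(Core::System::mknod(path.bytes_as_string_view(),
            S_IFBLK | 0600, device));
        return {};
    }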
Quick select is an algorithm that is able to find the median of a
Vector without fully sorting it.
This replaces the old, very naive implementation of
`AK::Statistics::median()` with `AK::quickselect_inline`.
This adds the quick select algorithm, which finds the kth smallest
element of any collection.
Whilst doing so, it also partially sorts the collection.
I have also included the option to use different pivoting functions,
including median of medians, which gives the quick select a truly
linear time complexity at the cost of enormous overhead, so it is only
really useful for really large datasets.
The name was chosen to reflect the fact that it modifies the collection
in place during the selection process.
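
A compact, self-contained version of the idea in standard C++ (not the
AK implementation; AK::quickselect_inline additionally lets callers
choose the pivot function):

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Returns the k-th smallest element (0-based) of a non-empty vector,
    // partially sorting it in place. Expects k < values.size().
    template<typename T>
    T quickselect(std::vector<T>& values, size_t k)
    {
        size_t lo = 0;
        size_t hi = values.size() - 1;
        while (lo != hi) {
            // Lomuto partition around the last element as pivot.
            T pivot = values[hi];
            size_t store = lo;
            for (size_t i = lo; i < hi; ++i) {
                if (values[i] < pivot)
                    std::swap(values[i], values[store++]);
            }
            std::swap(values[store], values[hi]);
            if (k == store)
                return values[store];
            if (k < store)
                hi = store - 1;
            else
                lo = store + 1;
        }
        return values[lo];
    }

    // Median of an odd-sized data set: the element at index size / 2.
    //   double median = quickselect(samples, samples.size() / 2);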
When doing PT_SETREGS, we want to verify that the debugged thread is
executing in usermode.
b2f7ccf refactored things and flipped the relevant check around, which
broke things that use PT_SETREGS (for example, stepping over
breakpoints with sdb).
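
The intent of the check, in sketch form (illustrative names and error
handling, not the actual Kernel code):

    enum class PreviousMode {
        Kernel,
        User,
    };

    static bool may_apply_pt_setregs(PreviousMode tracee_previous_mode)
    {
        // Only allow overwriting the tracee's registers if they were saved
        // while the thread was executing in usermode; the regression
        // flipped this test around.
        return tracee_previous_mode == PreviousMode::User;
    }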