People seem to easily miss the "Setting up build tools" section, so I
have moved that step above the filesystem information and linked
directly to BuildInstructions.md to hopefully make it harder to skip.
Also, added mention of `\\wsl$` since that regularly comes up in
Discord.
I was building serenity on a fairly fresh NixOS system, and it turns
out `unzip` and `qemu` were missing from this nix expression, both of
which are needed to compile and run serenity.
The default template argument is only used in one place, and it
looks like it was probably just an oversight. The rest of the Kernel
code all uses u8 as the type. So let's make that the default and remove
the unused template argument, as there doesn't seem to be a reason to
allow the size to be customizable.
This is a combination of the efforts of multiple people and hours of
pain trying to configure VSCode properly. This should give a good
baseline for anyone trying to develop Serenity with VSCode.
Co-authored-by: Hendiadyoin1 <leon2002.la@gmail.com>
Serenity build tooling autodetects gcc 10, so update-alternatives
is not necessary. Also, switching apt repositories on the fly can
cause dependency issues and package downgrades, and can leave the
system in a broken state.
After discussing on Discord about how to speed up the build time, I
received lots of helpful advice on measuring the build time, but none of
it was easily accessible. I thought other people might find it useful,
so I've written it down! :^)
Thanks goes to @bgianfo and @nico.
GCC and Clang allow us to inject a call to a function named
__sanitizer_cov_trace_pc on every edge. This function has to be defined
by us. By noting down the caller in that function we can trace the code
we have encountered during execution. Such information is used by
coverage guided fuzzers like AFL and LibFuzzer to determine if a new
input resulted in a new code path. This makes fuzzing much more
effective.
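Roughly, such a hook just records the caller's program counter. The sketch
below is illustrative only, not the actual Serenity implementation;
record_pc() is a hypothetical helper:

```cpp
#include <stdint.h>

// Hypothetical bookkeeping helper (illustration only, not real Serenity code).
void record_pc(uintptr_t pc);

// With -fsanitize-coverage=trace-pc, the compiler emits a call to this
// function on every edge; we note down the caller's return address.
extern "C" void __sanitizer_cov_trace_pc()
{
    record_pc(reinterpret_cast<uintptr_t>(__builtin_return_address(0)));
}
```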
Additionally, this adds a basic KCOV implementation. KCOV is an API that
allows user space to request that the kernel start collecting coverage
information for a given user space thread. KCOV then exposes
the collected program counters via a BlockDevice which can
be mmapped from user space.
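For reference, user-space usage then follows the familiar KCOV pattern
sketched below; the device path and ioctl request names are placeholders
for illustration, not necessarily what this commit introduces:

```cpp
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

// Placeholder request codes for illustration only.
#define KCOV_ENABLE  0
#define KCOV_DISABLE 1

int main()
{
    int fd = open("/dev/kcov0", O_RDWR); // hypothetical device node
    size_t size = 64 * 1024 * sizeof(uint64_t);
    auto* cover = (uint64_t*)mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    ioctl(fd, KCOV_ENABLE, 0);  // start collecting PCs for this thread
    // ... exercise the kernel code under test ...
    ioctl(fd, KCOV_DISABLE, 0); // stop collecting

    // Following the Linux KCOV convention, cover[0] holds the number of
    // collected PCs, followed by the PCs themselves; Serenity's layout may differ.
    for (uint64_t i = 0; i < cover[0]; ++i)
        printf("%#llx\n", (unsigned long long)cover[i + 1]);

    munmap(cover, size);
    close(fd);
    return 0;
}
```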
This work is required to add effective support for fuzzing SerenityOS to
the Syzkaller syscall fuzzer. :^) :^)
This removes the '$' character so that it is easier to copy commands
directly from the build instructions and execute them without
first having to remove the '$' character.
The WASM spec tests caused a stack overflow when generated with wat2wasm
version 1.0.23, which ships with Homebrew. To give feature parity,
manually download the same version from GitHub packages for Ubuntu.
Document the dependencies of the WASM spec tests option, as well.
This is no longer relevant for most users: due to an
unrelated change to Meta/run.sh, the default display backend is now
SDL, which does not exhibit this problem.
The x86_64 QEMU binary supports both i386 and x86_64 guests.
By using the x86_64 binary users won't have to change anything when
switching between i386 and x86_64 builds.
People are commonly asking about a package manager in
serenity. This patch adds an answer to the FAQ, explaining
why there is no need for packages as well as different
possible ways to add or remove software installed on the
system.
This workaround disables the in-kernel interrupt controller.
This impacts the VM performance and should probably be removed
when the workaround is no longer needed.
This workaround was proposed by stelar7.
See #7523
Mention the "Open Project Wizard" where you can set
the CMake options before making the cache.
Remind users to use the "Default" build type
and to build the Toolchain so CMake does not complain.
Because of the added complexity of *non-throwing* `new`, helper methods
for correctly constructing smart pointers were added in a previous
commit. This commit changes the documentation to recommend using these,
and adds examples to aid in correctly determining when to use
non-throwing new when manually creating smart pointers.
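For illustration, manually constructing a smart pointer from a non-throwing
`new` looks roughly like the sketch below (it assumes AK's OwnPtr and an
adopt_own_if_nonnull()-style helper; the exact names may differ from what
the updated documentation recommends):

```cpp
#include <AK/OwnPtr.h>
#include <new>

struct Thing {
    int value { 0 };
};

OwnPtr<Thing> try_create_thing()
{
    // new (std::nothrow) returns nullptr on allocation failure instead of
    // throwing; the helper turns that into a null OwnPtr the caller must check.
    return adopt_own_if_nonnull(new (std::nothrow) Thing);
}
```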
Components are a group of build targets that can be built and installed
separately. Whether a component should be built can be configured with
CMake arguments: -DBUILD_<NAME>=ON|OFF, where <NAME> is the name of the
component (in all caps).
Components can be marked as REQUIRED if they're necessary for a
minimally functional base system or they can be marked as RECOMMENDED
if they're not strictly necessary but are useful for most users.
A component can have an optional description which isn't used by the
build system but may be useful for a configuration UI.
Components specify the TARGETS which should be built when the component
is enabled. They can also specify other components which they depend on
(with DEPENDS).
This also adds the BUILD_EVERYTHING CMake variable which lets the user
build all optional components. For now this defaults to ON to make the
transition to the components-based build system easier.
The list of components is exported as an INI file in the build directory
(e.g. Build/i686/components.ini).
Fixes #8048.
PR #7970 added a line clarifying the requirement for QEMU 5.
Unfortunately, the location where this line was added changed the meaning
of the following line, which references the availability of GCC in Ubuntu
20.04.
QEMU 5 is not available in Ubuntu 20.04, so this change is incorrect,
as well as misleading.
These are pretty common on older LGA1366 & LGA1150 motherboards.
NOTE: Since the register datasheets for all versions of the chip
besides versions 1-3 are still under NDAs, I had to collect
several "magical vendor constants" from the *BSD and Linux
drivers that I was not able to name verbosely, and as such
these are labeled with the comment "vendor magic values".
This is a fairly small change; it removes the statement "Pointer and
reference types in C++ code", as it does not provide any additional
knowledge beyond what contributors are or will be aware of after
reading the "Pointers and References" section. It seems
unnecessary and redundant given the sentence adjacent to it.
Unfortunately we cannot enforce this with clang-format yet, as that
feature is not available. Until then, let's try to write new code
with this in mind, and convert old code as we go.
This option replaces the use of ENABLE_ALL_THE_DEBUG_MACROS in CI runs,
and enables all debug options that are only used in specific debugging
situations and might otherwise be broken by developers unintentionally.
When debugging kernel code, it's necessary to set extra flags. Normal
advice is to set -ggdb3. Sometimes that still doesn't provide enough
debugging information for complex functions that still get optimized.
Compiling with -Og gives the best optimizations for debugging, but can
sometimes be broken by changes that are innocuous when the compiler gets
more of a chance to look at them. The new CMake option enables both
compile options for kernel code.
This only tests "can it be parsed", but the goal of this commit is to
provide a test framework that can be built upon :)
The conformance tests are downloaded, compiled* and installed only if
the INCLUDE_WASM_SPEC_TESTS CMake option is enabled.
(*) Since we do not yet have a wast parser, the compilation is delegated
to an external tool from binaryen, `wasm-as`, which is required for the
test suite download/install to succeed.
This *does* run the tests in CI, but it currently does not include the
spec conformance tests.
This was hiding on the serenityos.org website previously, where not
many people found it. Let's put it in a more natural location, and
also make sure to link to it from the README.
The current ProtocolServer was really only used for requests, and with
the recent introduction of the WebSocket service, long-lasting
connections with another server are not part of it. To better reflect
this, this commit renames it to RequestServer.
This commit also changes the existing 'protocol' portal to 'request',
the existing 'protocol' user and group to 'request', and most mentions
of the 'download' aspect of the request to 'request' when relevant, to
make everything consistent across the system.
Note that LibProtocol still exists as-is, but the more generic Client
class and the more specific Download class have both been renamed to a
more accurate RequestClient and Request to match the new names.
This commit only changes names, not behavior.
SPDX License Identifiers are a more compact / standardized
way of representing file license information.
See: https://spdx.dev/resources/use/#identifiers
This was done with the `ambr` search and replace tool.
ambr --no-parent-ignore --key-from-file --rep-from-file key.txt rep.txt *
This command line flag can be used to disable VirtIO support on
certain configurations (native Windows) where interfacing with
VirtIO devices can cause QEMU to freeze.
Helps with bare metal debugging, as we can't be sure our implementation
will work with a given machine.
As reported by someone on Discord, their machine hangs when we attempt
the dummy transfer.
We do support AHCI now, but the implementation could be incomplete for
some chipsets.
Also, we should write the acronym "Non-volatile Memory Express" as
NVMe, not NVME.
If you don't need/want to use Fuse+ext2, then half of the existing
install command is unnecessary, and it's hard to pick out which parts you
do and don't need to, for example, build Lagom. This makes it clear
which commands you can skip if you don't need ext2 support.
- Fix headings
- Consistent & more accurate code block language specifiers
- Add some newlines where appropriate
- Remove the strange "run ninja but actually you don't have to run ninja
as ninja install takes care of that" part
- Don't repeat specific build commands in "Ports" section
- Reword "Keymap" section to more generic "Customize disk image"
QEMU's `--accel hvf` option was recently enabled in the `run.sh`
script, but it sadly doesn't work on macOS Big Sur: you need to first
sign your code by adding an `entitlements.xml` file and running a
simple command.
Including 'Build/' is unfortunate, but this seems to be what everyone does,
short of creating a symlink/hardlink from /AK/Debug.h to /Build/AK/Debug.h.
This feels like a crutch, but it's a better crutch than having a workaround
that could easily break or corrupt commits (i.e., the symlinks).
I just ran through successfully building and running SerenityOS under
macOS. I ran into two main things that I struggled with, which were
- properly enabling osxfuse (through System Preferences)
- running the suggested command about compiler versions in such a way
that would be compatible with Ninja (as it turns out, I just needed
to add `-G Ninja` to the command)
This commit clarifies those things for anyone who may follow.
Bitmap::load_from_file("foo.png", 2) will now look for "foo-2x.png" and
try to load that as a bitmap with scale factor 2 if it exists. If it
doesn't, it falls back to the 1x bitmap as normal.
Only places that know that they'll draw the bitmap to a 2x painter
should pass "2" for the second argument.
Use this new API in WindowServer for loading window buttons and
cursors.
As a testing aid, ctrl-shift-super-i can force HighDPI icons off in
HighDPI mode. Toggling between low-res and high-res icons makes it easy
to see if the high-res version of an icon looks right: It should look
like the low-res version, just less jaggy.
We'll likely have to grow a better API for loading scaled resources, but
for now this suffices.
Things to check:
- `chres 640 480` followed by `chres 640 480 2` followed by
`chres 640 480`
- window buttons in window context menu (in task bar and on title bar)
still have low-res icons
- ctrl-shift-super-i in high-res mode toggles sharpness of window
buttons and of arrow cursor
- arrow cursor hotspot is still where you'd expect
Gfx::Bitmap can now store its scale factor. Normally it's 1, but
in high dpi mode it can be 2.
If a Bitmap with a scale factor of 2 is blitted to a Painter with
scale factor of 2, the pixels can be copied over without any resampling.
(When blitting a Bitmap with a scale factor of 1 to a Painter with scale
factor of 2, the Bitmap is painted at twice its width and height at
paint time. Blitting a Bitmap with a scale factor of 2 to a Painter with
scale factor 1 is not supported.)
A Bitmap with scale factor of 2 reports the same width() and height() as
one with scale factor 1. That's important because many places in the
codebase use a bitmap's width() and height() to layout Widgets, and all
widget coordinates are in logical coordinates as well, per
Documentation/HighDPI.md.
Bitmap grows physical_width() / physical_height() to access the actual
pixel size. Update a few callers that work with pixels to call this
instead.
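Concretely, the split looks roughly like this (sketch only; accessor names
as described above):

```cpp
#include <LibGfx/Bitmap.h>

// Sketch: for a bitmap with scale factor 2 backed by 200x100 physical pixels.
void example(Gfx::Bitmap const& bitmap)
{
    int logical_width  = bitmap.width();            // 100 -- used for widget layout
    int logical_height = bitmap.height();           // 50
    int pixel_width    = bitmap.physical_width();   // 200 -- actual pixel data size
    int pixel_height   = bitmap.physical_height();  // 100
}
```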
Make Painter's constructor take its scale factor from the target bitmap
that's passed in, and update its various blit() methods to handle
blitting a 2x bitmap to a 2x painter. This allows removing some gnarly
code in Compositor. (In return, put some new gnarly code in
LibGfxScaleDemo to preserve behavior there.)
No intended behavior change.
For now, only support 1x and 2x scale.
I tried doing something "smarter" first where the UI would try
to keep the physical resolution constant when toggling between
1x and 2x, but many of the smaller 1x resolutions map to 2x
logical resolutions that Compositor rejects (e.g. 1024x768 becomes
512x384, which is less than the minimum 640x480 that Compositor
wants) and it felt complicated and overly magical.
So this instead just gives you a 1x/2x toggle and a dropdown
with logical (!) resolutions. That is, 800x600 @ 2x gives you
a physical resolution of 1600x1200.
If we don't like this after trying it for a while, we can change
the UI then.
Almost all logic stays in "logical" (unscaled coordinates), which
means the patch is small and things like DnD, window moving and
resizing, menu handling, menuapplets, etc all work without changes.
Screen knows about physical coordinates and mouse handling internally is
in physical coordinates (so that two 1 pixel movements in succession can
translate to one 1 logical coordinate mouse movement -- only a single
event is sent in this case, on the 2nd moved pixel).
Compositor also knows about physical pixels for its backbuffers. This is
a temporary state -- in a follow-up, I'll try to let Bitmaps know about
their intrinsic scale, then Compositor won't have to know about pixels
any longer. Most of Compositor's logic stays in view units, just
blitting to and from back buffers and the cursor save buffer has to be
done in pixels. The back buffer Painter gets a scale applied which
transparently handles all drawing. (But since the backbuffer and cursor
save buffer are also HighDPI, they currently need to be drawn using a
hacky temporary unscaled Painter object. This will also go away once
Bitmaps know about their intrinsic scale.)
With this, editing WindowServer.ini to say
Width=800
Height=600
ScaleFactor=2
and booting brings up a fully-functional HighDPI UI.
(Except for minimizing windows, which will crash the window server
until #4932 is merged. And I didn't test the window switcher since the
win-tab shortcut doesn't work on my system.) It's all pixel-scaled,
but it looks pretty decent :^)
This adds a scale factor to Painter, which will be used for HighDPI
support. It's also a step towards general affine transforms on Painters.
All of Painter's public API takes logical coordinates, while some
internals deal with physical coordinates now. If scale == 1, logical
and physical coordinates are the same. For scale == 2, a 200x100 bitmap
would be covered by a logical {0, 0, 100, 50} rect, while its physical
size would be {0, 0, 200, 100}.
Most of Painter's functions just assert that scale() == 1 for now,
but most functions called by WindowServer are updated to handle
arbitrary (integer) scale.
Also add a new Demo "LibGfxScaleDemo" that covers the converted
functions and that can be used to iteratively add scaling support
to more functions.
To make Painter's interface deal with logical coordinates only,
make translation() and clip_rect() non-public.
I have a local branch that gets us past implementation stage 1
in this doc, and it seems like a useful enough checkpoint to
upstream it and then iterate in tree.