Instead of first doubling the required size for the determined inode
count and then _also_ tripling the sum of that and the determined disk
size, let's be a bit more reasonable and just double the sum of inode
count * size and disk size.
This results in a 1.4GB _disk_image, instead of the 2GB from before
(for < 800MB worth of files).
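Spelled out (an illustrative reading of the description above, with $I$ =
bytes for the determined inode count and $D$ = the determined disk size):

$$\text{size}_{\text{old}} = 3(2I + D) \quad\rightarrow\quad \text{size}_{\text{new}} = 2(I + D)$$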
By providing SERENITY_DISK_SIZE_BYTES as an environment variable, the
calculation of a default value suited to the size of the files and the
number of inodes that will be included can be sidestepped.
Also removes mrsh from the list of ports missing descriptions. I tried
to be descriptive about the patches, but as I picked this port up from
someone else, I'm not 100% sure how best to explain them.
This lets us test the VMware SVGA adapter easily. We already use the
standard VGA device (which bochs-display is compatible with, apart from
its lack of VGA support) on the i440FX QEMU machine, so we keep testing
it there too, and on the Q35 machine we use a bochs-display device as a
secondary display.
Add a job to the Azure pipelines to run tests with coverage enabled, and
aggregate the test results into a folder of HTML pages showing the
coverage results overall and per-file.
Future work is needed to take the published pipeline artifact for the
coverage results and display them somewhere interesting.
The analyze-qemu-coverage.sh script cracks open the _disk_image for the
given SERENITY_ARCH and SERENITY_TOOLCHAIN and extracts LLVM profile
data into a local directory owned by the current user. It then calls a
coverage artifact script from LLVM to create a nice HTML report for all
the source files referenced by the profile data files.
We currently grab that script from LLVM via wget. In the future, a
custom script that calls llvm-cov and llvm-profdata should probably be
used.
This option sets -fprofile-instr-generate -fcoverage-mapping for Clang
builds only on almost all of Userland. Loader and LibTimeZone are
exempt. This can be used for generating code coverage reports, or even
PGO in the future.
Previously, we wouldn't enable virtualization on Windows unless
SERENITY_VIRTUALIZATION_SUPPORT was set explicitly. As far as we know,
there's no automatic way of detecting whether WHPX is enabled or not. So
we'll just enable virtualization on Windows by default, and if that
doesn't work the user can still disable it manually with
SERENITY_VIRTUALIZATION_SUPPORT=0.
Various Clang binaries are now considered when choosing the compiler for
Lagom.
The selection precedence is as follows:
1. Use the compiler set via CC/CXX if it's a supported version
2. Use newest available GCC if it's supported
3. Use newest available Clang if it's supported
Note that Apple Clang is still not supported, as its versioning scheme
and the fact that it masquerades as both GCC and Clang would complicate
this logic even more.
Fixes #12253
The LLVM patch has been broken up into smaller commits and moved to a
separate directory. CI should look at this new location to determine if
the toolchain needs to be rebuilt.
Besides a version bump, the following changes have been made to our
toolchain infrastructure:
- LLVM/Clang is now built with -march=native if the host compiler
supports it. An exception to this is CI, as the toolchain cache is
shared among many different machines there.
- The LLVM tarball is not re-extracted if the hash of the applied
patches doesn't differ.
- The patches have been split up into atomic chunks.
- Port-specific patches have been integrated into the main patches,
which will aid in the work towards self-hosting.
- <sysroot>/usr/local/lib is now appended to the linker's search path by
default.
- --pack-dyn-relocs=relr is appended to the linker command line by
default, meaning ports take advantage of RELR relocations without any
patches or additional compiler flags.
The formatting of the LLVM port's package.sh has been bothering me, so
I also indented the arguments to the CMake invocation.
Previously, we were sending Buffers to the server whenever we had new
audio data for it. This meant that for every audio enqueue action, we
needed to create a new shared memory anonymous buffer, send that
buffer's file descriptor over IPC (+recvfd on the other side) and then
map the buffer into the audio server's memory to be able to play it.
This was fine for sending large chunks of audio data, like when playing
existing audio files. However, in the future we want to move to
real-time audio in some applications like Piano. This means that the
size of buffers that are sent need to be very small, as just the size of
a buffer itself is part of the audio latency. If we were to try
real-time audio with the existing system, we would run into problems
really quickly. Dealing with a continuous stream of new anonymous files
like the current audio system is rather expensive, as we need Kernel
help in multiple places. Additionally, every enqueue incurs an IPC call,
which are not optimized for >1000 calls/second (which would be needed
for real-time audio with buffer sizes of ~40 samples). So a fundamental
change in how we handle audio sending in userspace is necessary.
This commit moves the audio sending system onto a shared single producer
circular queue (SSPCQ) (introduced with one of the previous commits).
This queue is intended to live in shared memory and be accessed by
multiple processes at the same time. It was specifically written to
support the audio sending case, so e.g. it only supports a single
producer (the audio client). Now, audio sending follows these general
steps:
- The audio client connects to the audio server.
- The audio client creates a SSPCQ in shared memory.
- The audio client sends the SSPCQ's file descriptor to the audio server
with the set_buffer() IPC call.
- The audio server receives the SSPCQ and maps it.
- The audio client signals start of playback with start_playback().
- At the same time:
  - The audio client writes its audio data into the shared-memory queue.
  - The audio server reads audio data from the shared-memory queue(s).
  Both sides have additional before-queue/after-queue buffers, depending
  on the exact application.
- Pausing playback is just an IPC call, nothing happens to the buffer
except that the server stops reading from it until playback is
resumed.
- Muting has nothing to do with whether audio data is read or not.
- When the connection closes, the queues are unmapped on both sides.
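As a rough sketch in code (the queue type and helper names here are
illustrative; set_buffer() and start_playback() are the IPC calls named
above):

```c++
// Illustrative client-side flow; SSPCQ stands in for the actual queue class.
auto client = TRY(Audio::ClientConnection::try_create());
auto queue = TRY(SSPCQ::try_create());  // lives in shared memory
client->set_buffer(queue);              // sends the queue's fd once
client->start_playback();
while (auto chunk = next_sample_chunk())
    queue.enqueue(*chunk);              // no IPC, no new fds per enqueue
```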
This should already improve audio playback performance in a bunch of
places.
Implementation & commit notes:
- Audio loaders don't create LegacyBuffers anymore. LegacyBuffer is kept
for WavLoader, see previous commit message.
- Most intra-process audio data passing is done with FixedArray<Sample>
or Vector<Sample>.
- Improvements to most audio-enqueuing applications. (If necessary I can
try to extract some of the aplay improvements.)
- New APIs on LibAudio/ClientConnection which allow non-realtime
applications to enqueue audio in big chunks like before.
- Removal of status APIs from the audio server connection for
information that can be directly obtained from the shared queue.
- Split the pause playback API into two APIs with more intuitive names.
I know this is a large commit, and you can kinda tell from the commit
message. It's basically impossible to break this up without hacks, so
please forgive me. These are some of the best changes to the audio
subsystem, and I hope that makes up for this :yaktangle: commit.
:yakring:
This new class with an admittedly long OOP-y name provides a circular
queue in shared memory. The queue is a lock-free synchronous queue
implemented with atomics, and its implementation is significantly
simplified by only accounting for one producer (and multiple consumers).
It is intended to be used as a producer-consumer communication
data structure across processes. The original motivation behind this
class is efficient short-period transfer of audio data in userspace.
This class includes formal proofs of several correctness properties of
the main queue operations `enqueue` and `dequeue`. These proofs are not
100% complete in their existing form as the invariants they depend on
are "handwaved". This seems fine to me right now, as any proof is better
than no proof :^). Anyways, the proofs should build confidence that the
implemented algorithms, which are only roughly based on existing work,
operate correctly in even the worst-case concurrency scenarios.
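For illustration, here is a minimal self-contained sketch of the core
algorithm with one producer and CAS-coordinated consumers; the real
class additionally lives in shared memory and carries the invariants
discussed above:

```c++
#include <atomic>
#include <cstddef>
#include <optional>

// Capacity must be a power of two so index wrapping stays correct when the
// monotonic counters overflow.
template<typename T, size_t Capacity>
class SingleProducerCircularQueue {
public:
    // Only the single producer may call this.
    bool enqueue(T value)
    {
        size_t tail = m_tail.load(std::memory_order_relaxed);
        if (tail - m_head.load(std::memory_order_acquire) == Capacity)
            return false; // queue is full
        m_storage[tail % Capacity] = value;
        m_tail.store(tail + 1, std::memory_order_release);
        return true;
    }

    // Multiple consumers may race here; the CAS hands each slot to exactly
    // one of them. A stale read is discarded because its CAS must fail.
    std::optional<T> dequeue()
    {
        size_t head = m_head.load(std::memory_order_relaxed);
        for (;;) {
            if (head == m_tail.load(std::memory_order_acquire))
                return {}; // queue is empty
            T value = m_storage[head % Capacity];
            if (m_head.compare_exchange_weak(head, head + 1,
                    std::memory_order_acq_rel, std::memory_order_relaxed))
                return value;
            // Lost the race: head was reloaded by the failed CAS; retry.
        }
    }

private:
    T m_storage[Capacity] {};
    std::atomic<size_t> m_head { 0 };
    std::atomic<size_t> m_tail { 0 };
};
```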
Each LibGL test can now be tested against a reference QOI image.
Initially, these images can be generated by setting `SAVE_OUTPUT` to
`true`, which will save a bunch of QOI images to `/home/anon`.
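Roughly, each test then boils down to the following (helper names are
hypothetical):

```c++
// Hypothetical sketch; SAVE_OUTPUT is the toggle mentioned above.
auto actual = render_test_scene();
if constexpr (SAVE_OUTPUT)
    TRY(save_qoi(actual, "/home/anon/scene.qoi"sv)); // generate the reference
else
    EXPECT(bitmaps_equal(actual, TRY(load_qoi("scene.qoi"sv))));
```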
Similar reasoning to making Core::Stream::read() return Bytes, except
that every user of read_line() creates a StringView from the result, so
let's just return one right away.
A mistake I've repeatedly made is along these lines:
```c++
auto nread = TRY(source_file->read(buffer));
// Oops: this writes the whole buffer, not just the `nread` bytes read.
TRY(destination_file->write(buffer));
```
It's a little clunky to have to create a Bytes or StringView from the
buffer's data pointer and the nread, and easy to forget and just use
the buffer. So, this patch changes the read() function to return a
Bytes of the data that were just read.
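With read() returning Bytes, the same snippet becomes hard to get wrong:

```c++
auto bytes_read = TRY(source_file->read(buffer));
// bytes_read spans only the data that was actually read.
TRY(destination_file->write(bytes_read));
```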
The other read_foo() methods will be modified in the same way in
subsequent commits.
Fixes #13687
Alias values are represented by "alias-name=real-name".
We have a lot of repetitive code for converting between ValueID and
property-specific enums. Let's see if we can generate it. :^)
This first step just produces the enums from a JSON file. The values in
there duplicate what's in Properties.json, but eventually those will go
away.
The goal here is to move the parser-internal classes into this namespace
so they can have more convenient names without causing collisions. The
Parser itself won't collide, and would be more convenient to just
remain `CSS::Parser`, but having a namespace and a class with the same
name makes C++ unhappy.
The fnmatch patch that was added in 6de6dff is reverted because it is
not clear why it is necessary, as discussed in #9206.
This also removes diffutils from the list of ports missing descriptions
as it no longer has any patches.
Instead of downloading nearly 20 files individually, we can download a
single .zip file similar to how we download a single CLDR .zip. This is
to reduce the number of connections/downloads to/from unicode.org.
This adds the is_primitive() method as described in the Web IDL
specification. is_primitive() returns true if the type is a bigint,
boolean or numeric type.
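A sketch of what that check might look like (name() and is_numeric()
are assumed helpers, not necessarily the real ones):

```c++
// Follows the spec definition quoted above.
bool Type::is_primitive() const
{
    return name() == "bigint" || name() == "boolean" || is_numeric();
}
```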
This is in Tests/LibTTF instead of Tests/LibGfx because Tests/LibGfx
depends on Serenity's file system layout and can't run in Lagom,
but this new test runs just fine in Lagom.
Now that clang-format-14 Ubuntu packages are available, it's time to
finally upgrade our clang-format version. This version brings with it
a bunch of useful features, const-placement being the most notable.
These will be enabled in the following commits.
4K logical blocks are better for block devices in QEMU, as they align
with the underlying filesystem, which typically has 4K logical blocks
(such as our ext2 filesystem).
This is an editorial change in the Intl spec. See:
https://github.com/tc39/ecma402/commit/087995c
https://github.com/tc39/ecma402/commit/233d29c
This also adds a missing spec link for the sanctioned units and fixes a
broken spec link for IsSanctionedSingleUnitIdentifier. In LibUnicode,
the NumberFormat generator is updated to use the constexpr helper to
retrieve sanctioned units.
WASM_SPEC_TEST_TAR_PATH actually refers to a tarball that has already
been decompressed with gzip, so running `tar -xzf` on it fails.
I introduced this mistake in 66582a875f.
There is no need to keep an intermediary plain .tar file around, we can
pass the WASM_SPEC_TEST_GZ_PATH .tar.gz directly to `tar -xzf`.
Currently this can parse XML and resolve external resources/references,
and read a DTD (but not apply or verify its rules).
That's good enough for _most_ XHTML documents as the HTML 5 spec
enforces its own rules about document well-formedness, and does not make
use of XML DTDs (aside from a list of predefined entities).
An accompanying `xml` utility is provided that can read and dump XML
documents, and can also run the XML conformance test suite.
This does a few things in total:
* Ports the IPC-compiler to LibMain
* Extract some compiler steps into separate functions
* Minify some appends to use appendln (or appendff in the case of
StringBuilder)
This reduces clang-tidy's maximum cognitive-complexity score for
this file from 325 to under 100.
While GNU tar automatically detects the used compression algorithm,
POSIX requires that we specify -z if the tarball is compressed with
gzip.
Fixes a build error on OpenBSD.
We're now able to detect all the regular CPUID feature flags from
ECX/EDX for EAX=1 :^)
None of the new ones are being used for anything yet, but they will show
up in /proc/cpuinfo and subsequently lscpu and SystemMonitor.
Note that I replaced the periods in the SSE 4.1 and 4.2 feature names
with underscores, which matches the internal enum names, Linux's
/proc/cpuinfo and the general pattern of replacing special characters
with underscores to limit feature names to [a-z0-9_].
The enum member stringification has been moved to a new function for
better re-usability and to avoid cluttering up Processor.cpp.
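For reference, the userspace mechanics of reading those bits look
roughly like this (not the kernel's actual code):

```c++
#include <cpuid.h>
#include <cstdio>

int main()
{
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;
    // For EAX=1, SSE 4.1/4.2 are ECX bits 19 and 20; note the underscore
    // naming, matching /proc/cpuinfo.
    printf("sse4_1: %u\n", (ecx >> 19) & 1u);
    printf("sse4_2: %u\n", (ecx >> 20) & 1u);
    return 0;
}
```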
We did already have range checking for the `<integer>` and `<number>`
types, but this patch adds this functionality to all numeric types
(dimensions and percentages).
The syntax in Properties.json is taken from the spec:
https://www.w3.org/TR/css-values-3/#numeric-ranges
E.g., `length [0,∞]` defines that a Length is allowed as long as its
value is non-negative.
The implementation here allows for any number to be the positive or
negative limit, even though only 0 and positive/negative infinity are
meaningful values without a unit.
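Conceptually, each declared range just becomes a min/max pair that any
parsed value is checked against; a trivial sketch:

```c++
#include <cmath>

// Illustrative only: "length [0,∞]" would map to { 0.0f, INFINITY }.
struct ValueRange {
    float min;
    float max;
};

static bool is_within_range(float value, ValueRange const& range)
{
    return value >= range.min && value <= range.max;
}
```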
This appears to be a remnant from the earlier HeaderCheck revisions,
where CMakeLists.txt was automatically generated.
Now that a (static) copy of CMakeLists.txt is checked in, this file
doesn't have any effect anymore.
This is a bit strange in the IDL syntax, but e.g., in HTMLSelectElement,
we have (simplified):
undefined add(optional (HTMLElement or long)? before = null)
This could instead become:
undefined add(optional (HTMLElement or long) before)
This change generates code for the former as if it were the latter.
I came across some websites that change an element's CSS "opacity" in
their :hover selectors. That caused us to relayout on hover, which we'd
like to avoid.
With this patch, we now check if a property only affects the stacking
context tree, and if nothing layout-affecting has changed, we only
invalidate the stacking context tree, causing it to be rebuilt on next
paint or hit test.
This makes :hover { opacity: ... } rules much faster. :^)
run.sh builds i686 by default, and the aarch64 port of serenity
isn't very far along yet.
Without this change, `run.sh` without arguments unceremoniously
fails with:
[0/1] cd .../serenity/Build/i686 && /usr...
ENITY_ARCH=i686 /home/thakis/src/serenity/Meta/run.sh
qemu-system-i386: invalid accelerator kvm
That's because /dev/kvm exists, but that's no good on a non-Intel host.
When building on an ARM host system, char defaults to unsigned,
leading to errors such as:
serenity/AK/StringBuilder.cpp:198:20:
error: comparison is always true due to limited range of data type
[-Werror=type-limits]
198 | if (ch >= 0 && ch <= 0x1f)
|
Building with -fsigned-char makes things work like on Intel, and
it's what we already do in Kernel/CMakeLists.txt for the same reasons.
We have seen some cases where the build fails for folks because they
are missing unzip/tar/gzip etc. We can catch some of these in CMake
itself, so let's make sure to handle that uniformly across the build
system.
The REQUIRED flag to `find_program` was only added in CMake 3.18, so we
can't rely on that to actually halt the program execution.
With regular builds, the generated IPC headers exist inside the Build
directory. The path Userland/Services under the build directory is
added to the include path.
For in-system builds, the IPC headers are installed at /usr/include/.
To support this, we add /usr/include/Userland/Services to the include
path when building from Hack Studio.
Co-Authored-By: Andrew Kaster <akaster@serenityos.org>
Day and month name constants are defined in numerous places. This
pulls them together into a single place and eliminates the
duplication. It also ensures they are `constexpr`.
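A self-contained sketch of the shape (std types here; the real constants
and their namespace may differ):

```c++
#include <array>
#include <string_view>

using namespace std::literals;

inline constexpr std::array long_day_names {
    "Sunday"sv, "Monday"sv, "Tuesday"sv, "Wednesday"sv,
    "Thursday"sv, "Friday"sv, "Saturday"sv,
};
```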
This patch adds CSS::property_affects_layout(PropertyID) which tells us
whether a CSS property would affect layout if it were changed.
This will be used to avoid unnecessary relayout work when something
changes that really only requires us to repaint the page.
To mark a property as not affecting layout, set "affects-layout" to
false in the corresponding Properties.json entry. Note that all
properties affect layout by default.
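A hedged sketch of the intended call site (the surrounding method names
are illustrative):

```c++
// Skip relayout when the changed property cannot affect layout.
void Element::did_change_style_property(CSS::PropertyID property_id)
{
    if (CSS::property_affects_layout(property_id))
        set_needs_layout();
    else
        set_needs_repaint(); // e.g. a property marked "affects-layout": false
}
```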
...and let LibWasm do the validation instead of removing the test when a
module is invalid.
Also, one of the tests has an integer literal starting with zero, so
account for this to make it not fail :^)
Since we want to store an initial value for every CSS::PropertyID,
it's pretty silly to use a HashMap when we can use an Array.
This takes the function from ~2.8% when mousing around on GitHub all the
way down to ~0.6%. :^)
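The gist of the change, sketched with a hypothetical __Count sentinel:

```c++
// Before: a hash lookup on every query.
// HashMap<CSS::PropertyID, NonnullRefPtr<StyleValue>> s_initial_values;

// After: direct O(1) indexing by PropertyID.
static Array<RefPtr<StyleValue>, to_underlying(CSS::PropertyID::__Count)> s_initial_values;

RefPtr<StyleValue> initial_value(CSS::PropertyID id)
{
    return s_initial_values[to_underlying(id)];
}
```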
As there is no need for a Prekernel on aarch64, the Prekernel code was
moved into Kernel itself. The functionality remains the same.
SERENITY_KERNEL_AND_INITRD in run.sh specifies a kernel and an initial
ramdisk to be used by the emulator. This is needed because aarch64 does
not need a Prekernel while the other architectures do.
These work differently from how we validate StyleValues. There, we parse
a StyleValue from the CSS, and then see if it is allowed in the
property. That causes problems when the syntax is ambiguous - for
example, `0` can be a number or a Length.
Here instead, we ask what kinds of value are allowed for a
media-feature, and then only attempt to parse those kinds of value.
This makes the ambiguity problem go away. :^)
Each media-feature in the spec only accepts one type of value, and/or
some identifiers. This makes the switch statements for the type a bit
excessive, but the spec does not *require* that only one type is
allowed, so this is more future-proof.
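Sketched with hypothetical names, the type-directed approach looks like:

```c++
// Ask which kind of value this media feature accepts, then parse only that.
switch (media_feature_value_type(media_feature_id)) {
case MediaFeatureValueType::Length:
    return parse_length(tokens); // `0` is unambiguously a length here
case MediaFeatureValueType::Ratio:
    return parse_ratio(tokens);
case MediaFeatureValueType::Integer:
    return parse_integer(tokens);
default:
    return parse_identifier(tokens);
}
```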
This works largely the same as the PropertyID and ValueID generators,
but using LibMain, Core::Stream, and TRY().
Rather than have a MediaFeatureID::Invalid, I decided to return an
Optional. We'll see if that turns out better or not. :^)
This patch adds NodeIterator (created via Document.createNodeIterator())
which allows you to iterate through all the nodes in a subtree while
filtering with a provided NodeFilter callback along the way.
This first cut implements the full API, but does not yet handle nodes
being removed from the document while referenced by the iterator. That
will be done in a subsequent patch.
This directory has to be writable if we want to install ports that have
been built inside Serenity. It's owned by root anyway, so having it be
read-only does not provide many security benefits.
Because of Ninja's default behavior of using all processors, this gave
the correct behavior when MAKEJOBS was empty. However, this meant that
the processor count was printed to stderr when building.
This initial version lays down the basic foundation of IDL overload
resolution, but much of it will have to be replaced with the actual IDL
overload resolution algorithms once we start implementing more complex
IDL overloading scenarios.
The microvm machine type is a modern tool for kernel and firmware
developers to test their software against features like FDTs, a second
IOAPIC, the lack of legacy devices by default, the ability to use PCIe
without PCI x86 IO ports, etc.
We can boot into such a machine, but the functionality we currently
support for this type of virtual machine is limited.
The ISA-PC machine type provides no PCI bus support, no IOAPIC support,
and none of the other modern PC features of our generation.
This is mainly a good environment for testing abstractions in the kernel
space, and can help with improving on them for the sake of porting the
OS to other chipsets and CPU architectures.
We use the environment variable SERENITY_SOURCE_DIR to resolve and check
icon links. This is a bit inconvenient as SERENITY_SOURCE_DIR needs to
be set correctly before invoking the markdown checker, but as we use it
through the check-markdown script anyways, I think it's not a problem.