The main incentive is much better performance. We could have gone a bit
further in optimizing the Skia painter to blit glyphs produced by LibGfx
more efficiently from the glyph atlas, but eventually we also want to
use Skia to improve correctness.
This change does not completely replace LibGfx in text handling. It's
still used at all stages, including layout, up until display list
replaying.
It turns out we were already generating all the necessary include
statements, and we can simply remove all this goofy code soup that
uses the C preprocessor to speculatively look for include files.
This is `counter(name, style?)` or `counters(name, separator, style?)`.
The difference is that `counter()` matches only the nearest level
(e.g. "1"), while `counters()` combines all the levels in the tree
(e.g. "3.4.1").
This change removes the wrappers inherited from Gfx::Typeface for WOFF
and WOFF2 fonts. The only purpose they served was owning the TTF
ByteBuffer produced by decoding a WOFF/WOFF2 font. A new FontData class
is now responsible for holding that ByteBuffer whenever a font is
constructed from memory that is not externally owned.
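A minimal sketch of that ownership split, using std types and made-up
names rather than the real AK/LibGfx interfaces:

```cpp
#include <cstdint>
#include <span>
#include <utility>
#include <variant>
#include <vector>

// Hypothetical stand-in for the FontData idea: either own the decoded
// TTF bytes (the WOFF/WOFF2 case) or merely reference externally owned
// memory.
class FontBytes {
public:
    static FontBytes adopt(std::vector<uint8_t> decoded_ttf)
    {
        return FontBytes { std::move(decoded_ttf) };
    }

    static FontBytes reference(std::span<uint8_t const> externally_owned)
    {
        return FontBytes { externally_owned };
    }

    std::span<uint8_t const> bytes() const
    {
        if (auto const* owned = std::get_if<std::vector<uint8_t>>(&m_storage))
            return { owned->data(), owned->size() };
        return std::get<std::span<uint8_t const>>(m_storage);
    }

private:
    using Storage = std::variant<std::vector<uint8_t>, std::span<uint8_t const>>;

    explicit FontBytes(Storage storage)
        : m_storage(std::move(storage))
    {
    }

    Storage m_storage;
};
```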
Override the vcpkg/scripts/detect_compiler behavior of always pulling
$CC and $CXX at the time that `vcpkg install` is determined to be
needed, by forcing $ENV{CC} and $ENV{CXX} to our CMake-determined
compiler.
This prevents strange behavior such as running the following:
    ./Meta/ladybird.sh run
    (make some changes...)
    ninja -C Build/ladybird
Where the second build step would be run without CC or CXX set in the
environment, causing a total cache miss from vcpkg and a full rebuild.
It also helps prevent full rebuilds when an IDE passes a slightly
different compiler to the build step than ladybird.sh.
AK will depend on some vcpkg dependencies, so the Lagom tools build will
need to know how to use vcpkg. We can do this by sym-linking vcpkg.json
to Meta/Lagom (as vcpkg.json has to be in the CMake source directory).
We also need a CMakePresets.json in the source directory, which can just
include the root file. The root CMakePresets then needs to define paths
relative to ${fileDir} rather than ${sourceDir}.
- `-fstack-protector-strong` enables stack canaries for functions where
addresses of local variables are taken or arrays/structures
containing arrays are allocated on the stack.
- `-fstrict-flex-arrays=2` causes the compiler to only treat arrays with
unknown bounds (`[]`) or zero-length arrays (`[0]`) as *flexible array
members*, allowing the sanitizers to emit bounds checks for structs
with proper arrays as their last member.
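A rough C++ illustration of what these two flags affect (all names
below are invented for this sketch):

```cpp
#include <cstddef>

// -fstack-protector-strong: read_into_buffer() gets a stack canary
// because the address of a local array escapes to another function.
void fill(char* out, size_t size)
{
    for (size_t i = 0; i < size; ++i)
        out[i] = 0;
}

void read_into_buffer()
{
    char buffer[16];
    fill(buffer, sizeof(buffer));
}

// -fstrict-flex-arrays=2: only the first two structs below end in a
// flexible array member; the third has a proper trailing array, so the
// array-bounds sanitizer can now flag out-of-bounds indexing into it.
struct UnknownBounds {
    size_t length;
    char data[]; // flexible array member (compiler extension in C++)
};

struct ZeroLength {
    size_t length;
    char data[0]; // GNU extension; still treated as flexible at level 2
};

struct FixedTail {
    size_t length;
    char data[4]; // a real array: data[4] and beyond can be diagnosed
};
```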
More rigorous options (such as AArch64 pointer authentication, Control
Flow Integrity, and _FORTIFY_SOURCE) should be investigated in the
future; however, this is a good baseline.
If no header includes the prototype of a function, then it cannot be
used from outside the translation unit it was defined in. In that case,
it should be marked as `static` to avoid possible ODR problems and
unnecessary exported symbols, and to let the compiler optimize it more
aggressively.
If this warning triggers for a function defined in a header, `inline`
needs to be added; otherwise, if the header is included in more than one
TU, linking will fail with a duplicate definition error.
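For example (file and function names here are made up for
illustration):

```cpp
// Helper.cpp: only used in this translation unit, so mark it static.
// There is no exported symbol, no ODR hazard, and the compiler is free
// to inline or drop it entirely.
static int local_helper(int x)
{
    return x * 2;
}

// Helper.h: defined in a header, so it must be inline. Otherwise every
// translation unit that includes the header emits its own definition
// and linking fails with a duplicate definition error.
inline int shared_helper(int x)
{
    return x + 1;
}
```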
The reason this diff got so big is that Lagom-only code wasn't built
with this flag even in Serenity times.
These used to be enabled in `serenity_compile_options.cmake` for
Serenity builds and were removed in 9b05fb98. This is a slightly more
conservative subset of those, with ones that are enabled by default
omitted.
This should prevent our code quality regressing in the long run.
This change updates the Meta/check-debug-flags.sh script to avoid an
apparent Bash 3.2 parser bug. Specifically, it takes a comment and some
code out of a process substitution and moves them into a separate
function.
Otherwise, without this change, trying to run the check-debug-flags.sh
script with Bash 3.2 fails with the following error:
line 39: bad substitution: no closing `)' in <(
...apparently because Bash 3.2 chokes on the comment (and doesn’t choke
if the comment is completely removed).
Relates to https://github.com/LadybirdBrowser/ladybird/issues/283
This change makes all the pre-commit CI scripts runnable under Bash
3.2, by replacing the “mapfile” invocations in them with code that first
explicitly creates an array, and then uses a while loop to populate it.
Otherwise, without this change, the scripts all fail to run under Bash
3.2 — due to lack of support for “mapfile”.
Fixes https://github.com/LadybirdBrowser/ladybird/issues/283
This also drops bash from the list of Homebrew dependencies in the
build instructions, because with this change Homebrew bash (v4) is no
longer needed; things will now work with the Apple-provided bash (v3.2).
We no longer have multiple locations including AK (e.g. LibC). So let's
avoid awkwardly defining the AK library across multiple CMake files.
This is to allow more easily adding third-party dependencies to AK in
the future.
Since we support the multi-memory proposal, we should skip tests that
validate that we have only one memory. Once multi-memory gets included
in the main WebAssembly specification (and the testsuite is updated), we
can revert this commit.
Because `nan:arithmetic` and `nan:canonical` aren't bound to a single
bit pattern, we cannot check a float-containing SIMD vector against a
single value in the tests. Now, we represent `v128`s as
`TypedArray`s in `testjs` (as opposed to using `BigInt`s), allowing us
to properly check `NaN` bit patterns.
Some spec-tests check the bit pattern of a returned `NaN` (i.e.
`nan:canonical`, `nan:arithmetic`, or something like `nan:0x200000`).
Previously, we just accepted any `NaN`.
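As a rough sketch of the f32 case (plain C++ for illustration, not the
actual test harness code): a canonical NaN carries exactly the quiet
bit in its payload, while an arithmetic NaN only requires the quiet bit
to be set.

```cpp
#include <bit>
#include <cstdint>

// nan:canonical for f32 is 0x7fc00000 (ignoring the sign bit);
// nan:arithmetic accepts any NaN whose most significant payload bit is
// set.
bool is_canonical_nan(float value)
{
    return (std::bit_cast<uint32_t>(value) & 0x7fffffffu) == 0x7fc00000u;
}

bool is_arithmetic_nan(float value)
{
    auto bits = std::bit_cast<uint32_t>(value);
    bool is_nan = (bits & 0x7f800000u) == 0x7f800000u
        && (bits & 0x007fffffu) != 0;
    return is_nan && (bits & 0x00400000u) != 0;
}
```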
`linking.wast` has an unusual pattern for invoke commands, which is now
accounted for. Also, special Unicode characters are now properly
serialized in JavaScript as string literals (this is only relevant for
`names.wast`).
Trying to build VulkanLoader from source is a giant headache of
unnecessary packages. Every modern distro has Vulkan packages; let's
depend on those instead of trying to build something for both Wayland
and X11.
We need to avoid using vcpkg's pkg-config on non-x86_64 platforms,
because they do very strange things to the default paths.
On Asahi Linux and other 16 KiB page distros, the user must also provide
a properly compiled version of GN.
This commit replaces all TLS connection code with wolfssl.
The certificate parsing code has to remain for now, as wolfssl does not
seem to have any exposed API for that.
Following the rules in the algorithm from
https://webidl.spec.whatwg.org/#js-platform-objects, "To Internally
create a new object implementing the interface interface", this change
incorporates the steps to load a prototype from new.target, and write
it to the created instance returned from constructor_impl(). This
mirrors the code for generate_html_constructor(), which incorporates
additional steps needed by Custom Elements.
Bug #334
Skia now uses GPU-accelerated painting on Linux if Vulkan is available.
Most of the performance gain is currently negated by reading the GPU
backend's output back into RAM to pass it to the Browser process. In
the future,
this could be improved by sharing GPU-allocated memory across the
Browser and WebContent processes.
The GPU painter that uses AccelGfx is slower and far less complete than
both the default Gfx::Painter and the Skia painter. It does not make
much sense to keep it, considering the Skia painter already uses the
Metal backend on macOS by default and there is an option to enable a
GPU-accelerated backend on Linux.
This allows developers on macOS to open Ladybird.app in Instruments.
Add some documentation for how to use the command as well. It is enabled
automatically when CMAKE_BUILD_TYPE is not Release or RelWithDebInfo.
As recommended by the CMake docs, let's tolerate systems or setups that
don't have backtrace(3) in the `<execinfo.h>` header file, such as those
using libbacktrace directly.
This large commit also refactors LibWebView's process handling to use
a top-level Application class that uses a new WebView::Process class to
encapsulate the IPC-centric nature of each helper process.
Typeface is a more widely used name for the data represented by the
class previously named VectorFont.
Now:
- Typeface represents a decoded font that is not yet ready for rendering
- ScaledFont represents the combination of a typeface and a size for
rendering
We currently build debug and release versions of vcpkg dependencies. We
will most commonly only need the release version, so let's default to
that to approximately halve our dependency build time.
The changes to tests are due to LibTimeZone incorrectly interpreting
time stamps in the TZDB. The TZDB will list zone transitions in either
UTC or the zone's local time (which is then subject to DST offsets).
LibTimeZone did not handle the latter at all.
For example, the following rule is in effect until November 18, 1883,
6PM UTC:

    America/Chicago -5:50:36 - LMT 1883 Nov 18 18:00u

The following rule is in effect until March 1, 1936, 2AM in Chicago
time. But at that time, a DST transition occurs, so the local time is
actually 3AM:

    America/Chicago -6:00 Chicago C%sT 1936 Mar 1 2:00
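A simplified sketch of the interpretation this fixes (made-up types,
not LibTimeZone's actual representation):

```cpp
#include <cstdint>

struct ZoneTransition {
    int64_t until_seconds;   // the "until" timestamp as written in the TZDB
    bool until_is_utc;       // true for entries suffixed with 'u'
    int64_t standard_offset; // the zone's offset from UTC, in seconds
    int64_t dst_offset;      // the DST offset in effect at the transition
};

// A timestamp without the 'u' suffix is local ("wall clock") time, so
// both the standard offset and any active DST offset must be subtracted
// to get UTC.
int64_t transition_time_in_utc(ZoneTransition const& transition)
{
    if (transition.until_is_utc)
        return transition.until_seconds;
    return transition.until_seconds
        - (transition.standard_offset + transition.dst_offset);
}
```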
Multiple APIs have moved from the DOM Parsing and Serialization spec to
HTML.
Updates spec URLs and comments.
Delete InnerHTML file:
- Make parse_fragment a member of Element, matching serialize_fragment
on Node.
- Move inner_html_setter inline into Element and ShadowRoot as per the
spec.
Add FIXME to Range.idl for Trusted Types createContextualFragment
H.264 in Matroska can have blocks with unordered timestamps. Without
passing these as the presentation timestamp into the FFmpeg decoder,
the frames will not be returned in chronological order.
VideoFrame will now include a timestamp that is used by the
PlaybackManager, rather than assuming that it is the same timestamp
returned by the demuxer.
VP9 continues to function, but this also allows AV1 to be decoded. With
this commit, H.264 is still non-functional, as the decoder requires
some extra initial data from the track definition in the Matroska file.
It is not needed by code generators anymore, so it is not needed in the
Lagom tools build. And it is not needed as an object library anymore; it
was created this way so it could be included in Serenity's LibC.
LibLocale was split off from LibUnicode a couple years ago to reduce the
number of applications on SerenityOS that depend on CLDR data. Now that
we use ICU, both LibUnicode and LibLocale are actually linking in this
data. And since vcpkg gives us static libraries, both libraries are over
30MB in size.
This patch reverts the separation and merges LibLocale into LibUnicode
again. We now have just one library that includes the ICU data.
Further, this will let LibUnicode share the locale cache that previously
would only exist in LibLocale.
We only need LibCoreMinimal for the lagom-tools build. In particular, by
removing LibUnicode, we remove the lagom-tools dependence on the system
ICU package, as we do not have vcpkg hooked into this build. (We could
probably add vcpkg here, but since these libraries aren't even needed,
we don't need to bother.)
This implements most of the CloseWatcher API from the HTML spec.
AbortSignal support is unimplemented.
Integration with dialogs and popovers is also unimplemented.
For SerenityOS, we parse emoji metadata from the UCD to learn emoji
groups, subgroups, names, etc. We used this information only in the
emoji picker dialog. It is entirely unused within Ladybird.
This removes our dependence on the UCD emoji file, as we no longer
need any of its information. All we need to know is the file path to
our custom emoji, which we get from Meta/emoji-file-list.txt.
There are a couple of differences here due to using ICU:
1. Titlecasing behaves slightly differently. We previously transformed
"123dollars" to "123Dollars", as we would use word segmentation to
split a string into words, then transform the first cased character
to titlecase. ICU doesn't go quite that far, and leaves the string
as "123dollars". While this is a behavior change, the only user of
this API is the `text-transform: capitalize;` CSS rule, and we now
match the behavior of other browsers.
2. There isn't an API to compare strings with case insensitivity without
allocating case-folded strings for both the left- and right-hand-side
strings. Our implementation was previously allocation-free; however,
in a benchmark, ICU is still ~1.4x faster.
This adds a motion preference to the browser UI similar to the existing
ones for color scheme and contrast.
Both the AppKit UI and the Qt UI have this new preference.
The auto value is currently the same as NoPreference; follow-ups can
address wiring that up to the actual OS preference.
This changes the Sanitizer configs to build all the vcpkg dependencies
with our specified CFLAGS and CXXFLAGS for ASAN and UBSAN.
Unfortunately, we can't yet enable actually compiling them with
sanitizers enabled, because this causes test failures that need to be
investigated.
This uses ICU for all of the Intl.PluralRules prototypes, which lets us
remove all data from our plural rules generator.
Plural rules depend directly on internal data from the number formatter,
so rather than creating a separate Locale::PluralRules class (which would
make accessing that data awkward), this adds plural rules APIs to the
existing Locale::NumberFormat.
...and a shadow tree with a TextNode for the "value" attribute is
created. This means InlineFormattingContext is used, and the button's
text now respects CSS text-decoration properties and unicode-ranges.
This uses ICU for the Intl.DateTimeFormat `format`, `formatToParts`,
`formatRange`, and `formatRangeToParts` prototypes.
This lets us remove most data from our date-time format generator. All
that remains are the time zone data and locale week info, which other
interfaces still rely upon. They will be removed in a future patch.
Note: All of the changes to the test files in this patch are now aligned
with other browsers. This includes:
* Some very incorrect formatting of Japanese symbols. (Looking at the
old results now, it's very obvious they were wrong.)
* Old FIXMEs regarding range formatting not including the start/end date
when only time fields were requested, but the dates differ.
* Day period inconsistencies.
Methods and attributes marked with [FIXME] are now implemented as
direct properties with the value `undefined` and are marked with the
[[Unimplemented]] attribute. This allows accesses to these properties
to be reported, while having no other side-effects.
This fixes an issue where [FIXME] methods broke feature detection on
some sites.
This uses ICU for the Intl.NumberFormat `format` and `formatToParts`
prototypes. It does not yet port the range formatter prototypes.
Most of the new code in LibLocale/NumberFormat is simply mapping from
ECMA-402 types to ICU types. Beyond that, the only algorithmic change is
that we have to mutate the output from ICU for `formatToParts` to match
what is expected by ECMA-402. This is explained in NumberFormat.cpp in
`flatten_partitions`.
This lets us remove most data from our number format generator. All
that remains are the numbering system digits and symbols, which other
interfaces (e.g. Intl.DateTimeFormat) still rely upon. They will be
removed in a future patch.
Note: All of the changes to the test files in this patch are now aligned
with both Chrome and Safari.
Note: We keep locale parsing and syntactic validation as-is. ECMA-402
places additional restrictions on locales above what is required by the
Unicode spec. ICU doesn't provide methods that let us easily check those
restrictions, whereas LibLocale does. Other browsers also implement
their own validators here.
This introduces a locale cache to re-use parsed locale data and various
related structures (not doing so has a non-negligible performance impact
on Intl tests).
The existing APIs for canonicalization and display names are pretty
intertwined, so they must both be adapted at once here. The results of
canonicalization are slightly different on some edge cases. But the
changed results are actually now aligned with Chrome and Safari.
If we get a suggestion from fontconfig, we try those fonts first, before
falling back on the hard coded list of known suitable fonts for each
generic family.
This saves us the trouble of maintaining our own implementation,
and instantly brings us to full WOFF2 feature parity with others.
Co-Authored-By: Andrew Kaster <akaster@serenityos.org>
Also add a vcpkg command to ladybird.sh to ensure that vcpkg is setup,
and use a local binary cache for vcpkg build and install media to
avoid cluttering $XDG_CACHE_HOME.
We were already linking librt to LibCore for shm_open and friends.
Now that we build the code that uses POSIX shm into LibCoreMinimal, we
need to link librt into that as well.
And hook it into ladybird.sh for convenience. The script will set up
PATH and other environment variables automatically.
On CI, vcpkg is theoretically already installed on Linux machines, but
not with the right environment variables, and not on macOS. So this also
makes CI use this script to bootstrap vcpkg.
The Encoding specification maps ISO-8859-1 to windows-1252 and expects
the windows-1252 translation table to be used, which differs from
ISO-8859-1 for 0x80-0x9F.
Other contexts expect to get the actual ISO-8859-1 encoding, with 1-to-1
mapping to U+0000-U+00FF, when requesting it.
`decoder_for_exact_name` is introduced, which skips the mapping from
aliases to the encoding name done by `get_standardized_encoding`.
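To make the 0x80-0x9F difference concrete (a standalone sketch, not
LibTextCodec code):

```cpp
#include <cstdio>

int main()
{
    unsigned byte = 0x93;

    // True ISO-8859-1: every byte maps 1:1 onto U+0000..U+00FF.
    unsigned iso_8859_1_code_point = byte; // U+0093, a C1 control

    // The windows-1252 table the Encoding spec expects for that label.
    unsigned windows_1252_code_point = 0x201C; // LEFT DOUBLE QUOTATION MARK

    std::printf("0x%02X -> ISO-8859-1: U+%04X, windows-1252: U+%04X\n",
                byte, iso_8859_1_code_point, windows_1252_code_point);
}
```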
This was used to convert markdown into HTML for display in the browser,
but no other browser behaves this way, so let's simplify things by
removing it.
(Yes, we could implement all kinds of "convert to HTML and display" for
every file format out there, but that's far outside the scope of a
browser engine.)
For some reason, WebContent fails to load simple sites like xkcd.com
without the Qt event loop, even when using RequestServer instead of the
Qt networking stack. The CMake build on Linux has the same issue if we
skip installing the Qt event loop. It's not clear why this is - whether
something depends on the Qt event loop, or if there's a bug in the Unix
event loop implementation.
This is, however, also needed to use the --enable-qt-networking feature.
Implements `table.get`, `table.set`, `elem.drop`, `table.size`,
and `table.grow`. Also fixes a few issues when generating ref-related
spectests. Also changes the `TableInstance` type to use
`Vector<Reference>` instead of `Vector<Optional<Reference>>`, because
the ability to be null is already encoded in the `Reference` type.
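A simplified illustration of the redundancy removed by that last
change (types invented here, not the real LibWasm definitions):

```cpp
#include <cstddef>
#include <optional>
#include <variant>
#include <vector>

struct FunctionAddress { size_t value; };
struct ExternAddress { size_t value; };
struct NullReference { };

// The reference type already has a null alternative...
using Reference = std::variant<FunctionAddress, ExternAddress, NullReference>;

// ...so wrapping it in Optional encoded "null" twice:
using TableElementsBefore = std::vector<std::optional<Reference>>;

// Now a null table slot is simply a NullReference:
using TableElementsAfter = std::vector<Reference>;
```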
On my Mac system with Homebrew SDL + self-built Clang, SDL2's include
directory is not in the library search path by default. Add it to
unbreak the build.
This allows searching for text with case-insensitivity. As this is
probably what most users expect, the default behavior is changed to
perform case-insensitive lookups. Chromes may add UI to change the
behavior as they see fit.
GC-allocated objects should never have JS::SafeFunction/JS::Handle
fields.
For now the plugin only emits warnings here, as there are many cases
of this occurring in the codebase that aren't trivial to fix. It is also
behind a CMake flag since it is a _very_ loud warning.
EventSource allows opening a persistent HTTP connection to a server over
which events are continuously streamed.
Unfortunately, our test infrastructure does not allow for automating any
tests of this feature yet. It only works with HTTP connections.
Supporting unbuffered fetches is actually part of the fetch spec in its
HTTP-network-fetch algorithm. We had previously implemented this method
in a very ad-hoc manner as a simple wrapper around ResourceLoader. This
is still the case, but we now implement a good amount of these steps
according to spec, using ResourceLoader's unbuffered API. The response
data is forwarded through to the fetch response using streams.
This will eventually let us remove the use of ResourceLoader's buffered
API, as all responses should just be streamed this way. The streams spec
then supplies ways to wait for completion, thus allowing fully buffered
responses. However, we have more work to do to make the other parts of
our fetch implementation (namely, Body::fully_read) use streams before
we can do this.
Download files to a temporary location, then only move the downloaded
file to the real location once the download is complete. This prevents
CMake from being confused about partially-downloaded files, e.g. if
someone presses ctrl+c in the middle of a download.
Note the GN build already behaves this way.