Gfx::Color is always 4 bytes (it's just a wrapper over u32), so it's
less work to just pass the color directly.
This also updates IPCCompiler to prevent it from generating
Gfx::Color const &, which makes replacement easier.
This will make it easier to support both string types at the same time
while we convert code, and to track down remaining uses.
One big exception is Value::to_string() in LibJS, where the name is
dictated by the ToString AO.
We have a new, improved string type coming up in AK (OOM aware, no null
state), and while it's going to use UTF-8, the name UTF8String is a
mouthful - so let's free up the String name by renaming the existing
class.
Making the old one have an annoying name will hopefully also help with
quick adoption :^)
The runtime environment of the WASM REPL does not have time zone
information; the file system is virtual (does not have /etc/localtime),
and the TZ environment variable is not set. This causes LibTimeZone to
always fall back to UTC.
Instead, we can get the time zone from the user's browser before we
enter this limited environment. The REPL website will pass the time zone
into the WASM REPL.
Without this, we were in a weird state where LibTimeZone believed it had
TZDB data, but that data wasn't actually available to it. This caused
functions like JS::get_named_time_zone_offset_nanoseconds() to trip an
assertion when entering "new Date();" into the REPL.
This now prepares all the needed (fallible) components before actually
constructing a LoaderPlugin object, so we are no longer filling them in
at an arbitrary later point in time.
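As a rough sketch of the pattern (names here are made up, not the
actual LibAudio interface): the fallible setup lives in a static
create() helper, and the constructor only adopts already-prepared
parts.

static ErrorOr<NonnullOwnPtr<ExamplePlugin>> ExamplePlugin::create(StringView path)
{
    // All fallible work happens up front (these helpers are hypothetical)...
    auto stream = TRY(open_stream_for(path));
    auto header = TRY(parse_header(*stream));
    // ...so the constructor itself cannot fail and just takes ownership.
    return adopt_nonnull_own_or_enomem(new (nothrow) ExamplePlugin(move(stream), move(header)));
}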
The two major changes noticeable on the SerenityOS codebase are:
- Much improved support for const placement, clang-format-14 ignored
our east-const configuration in various places
- Different formatting for requires clauses, now breaking them onto
their own line, which helps with readability a bit
Current versions of CLion also ship LLVM 15, so the built-in formatting
now matches CI formatting again :^)
These are used by esvu, and it is sad that we don't have macOS binaries
available for consumption by esvu users. Add a matrix job to handle this
separately from the test262 results.
This was missed in the LibWeb GC conversion.
This caused some brand checks to fail in skribbl.io's JavaScript and
thus caused unexpected exceptions.
This *also* got missed in the gcc-12 update, because we weren't
installing an explicit gcc version before. Hopefully that's the last of
the long tail of issues from that migration!
The Demuxer class was changed to return errors for more functions so
that all of the underlying reading can be done lazily. Other than that,
the demuxer interface is unchanged, and only the underlying reader was
modified.
The MatroskaDocument class is no more, and MatroskaReader's getter
functions replace it. Every MatroskaReader getter beyond the Segment
element's position is parsed lazily from the file as needed. This means
that all getter functions can return DecoderErrors which must be
handled by callers.
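For illustration only (exact names and signatures may differ), the
getters now have roughly this shape, and callers TRY them:

// Parsed lazily from the file on first use, so the lookup itself can fail.
DecoderErrorOr<SegmentInformation> segment_information();

// Caller side:
auto segment_information = TRY(reader.segment_information());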
As new demuxers are added, this will get quite full of files, so it'll
be good to have a separate folder for these.
To avoid too many chained namespaces, the Containers subdirectory is
not also a namespace, but the Matroska folder is, to keep the multiple
classes holding parsed information from cluttering the Video namespace.
We now attempt to look up the path of e2fsck before checking whether it
can be found at any of the predefined locations. This fixes e2fsck not
being found on some "special" distros like NixOS.
Related to #13754
This adds command line flags for WebDriver to pass its IPC socket path
(if running on Serenity) or its FD passing socket (if running elsewhere)
for the headless-browser to connect to.
Hand-picking the smallest index type that fits a particular generated
array started with commit 3ad159537e. This
was to reduce the size of the generated library.
Since then, the number of types using UniqueStorage has grown a ton,
creating a long list of types for which index types are manually picked.
When a new UCD/CLDR/TZDB is released, and the current index type no
longer fits the generated data, we fail to generate. Tracking down which
index caused the failure is a pretty annoying process.
Instead, we can just use size_t while in the generators themselves, then
automatically pick the size needed for the generated code.
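Conceptually, the generators can now do something like this (a
simplified sketch, not the actual generator code; the helper name is
made up):

static StringView smallest_index_type(size_t max_index)
{
    // Emit the narrowest unsigned type that can still hold every index.
    if (max_index <= NumericLimits<u8>::max())
        return "u8"sv;
    if (max_index <= NumericLimits<u16>::max())
        return "u16"sv;
    if (max_index <= NumericLimits<u32>::max())
        return "u32"sv;
    return "u64"sv;
}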
When an IPC message returns a single value, we generate a class with a
constructor that is something like:
class MessageResponse {
public:
    MessageResponse(SingleReturnType value)
        : m_value(move(value))
    {
    }

private:
    SingleReturnType m_value;
};
If that IPC message wants to return a value that SingleReturnType is
constructible from, you have to wrap that return call with braces:
return { value_that_could_construct_single_return_type };
That isn't really an issue except for when we want to mix TRY semantics
with the return type. If SingleReturnType is constructible from an Error
type (i.e. something similar to ErrorOr), the following doesn't work:
TRY(fallible_function());
Because MessageResponse would not be constructible from Error. Instead,
we must do some workaround with a custom TRY macro, as in 31bb792.
This patch generates a constructor that makes TRY usable as-is without
any custom macros. We perform a very similar trick in ThrowCompletionOr
inside LibJS. This constructor will allow you to create MessageResponse
from any type that SingleReturnType is constructible from.
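Roughly, the additional generated constructor looks like this
(simplified sketch, mirroring the ThrowCompletionOr trick):

// Accept any value that SingleReturnType can be constructed from, so that
// both `return value;` and a TRY() that propagates an Error work unchanged.
template<typename V>
MessageResponse(V value)
    : m_value(SingleReturnType { move(value) })
{
}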
Previously each emoji had its own symbol in the library which was then
referred to by another symbol. This caused thousands of avoidable data
relocations at load time.
This saves about 122kB RAM for each process which uses LibUnicode.
Previously the s_decomposition_mappings variable would refer to other
data in s_decomposition_mappings_data. This would cause thousands of
avoidable relocations at load time.
This saves about 128kB RAM for each process which uses LibUnicode.
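The general idea, as a small illustration (not the actual generated
tables):

static int const s_data[] = { 10, 20, 30, 40 };

// Before: every entry is a pointer, which the dynamic loader must patch
// (relocate) at load time, dirtying the pages that hold the table.
static int const* const s_entries[] = { &s_data[0], &s_data[2] };

// After: plain integer offsets into s_data; no relocations, so the table
// stays in read-only, shareable memory exactly as stored on disk.
static size_t const s_entry_offsets[] = { 0, 2 };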
Previously we'd fail to execute the resize2fs tool which then results
in us recreating the image from scratch:
resizing disk image...
Image resized.
line 132: /usr/sbin/resize2fs: No such file or directory
failed, not using existing image
done
Otherwise, we end up propagating those dependencies into targets that
link against that library, which creates unnecessary link-time
dependencies.
Also included are changes to re-add now-missing dependencies to tools
that actually need them.
The shared parts are now firmly compiled into LibC instead of being
defined as a static library and then being copied over manually.
The non-shared ("local") parts are kept as a static library that is
linked into each binary on demand.
This finally allows us to support linking with the -fstack-protector
flag, which now replaces the `ssp` target being linked into each binary
accidentally via CMake.
Even though the toolchain implicitly links against -lc, it does not know
where it should get LibC from except for the sysroot. In the case of
Clang this causes it to pick up the LibC stub instead, which might be
slightly outdated and missing some symbols.
This is currently not an issue that manifests because we pass through
the dependency on LibC and other libraries by accident, which causes
CMake to link against the LibC target (instead of just the library),
and thus points the linker at the build output directory.
Since we are looking to fix that in the upcoming commits, let's make
sure that everything will still be able to find the proper LibC first.
Currently, if the script fails, it simply runs "exit 1". This exits the
script, but keeps the VM running, so CI hangs until it times out.
Instead of exiting, write a failure status to an error log and shut down.
CI can then read that error log and fail the run if needed.
This file will be the basis for abstracting away the out-of-thread or
later out-of-process decoding from applications displaying videos. For
now, the demuxer is hardcoded to be MatroskaParser, since that is all
we support so far. The demuxer should later be selected based on the
file header.
The playback and decoding are currently all done on one thread using
timers. The design of the code is such that adding threading should
be trivial, at least based on an earlier version of the code. For now,
though, it's better that this runs in one thread, as the multithreaded
approach causes the Video Player to lock up permanently after a few
frames are decoded.
The class is virtual and has one subclass, SubsampledYUVFrame, which
is used by the VP9 decoder to return a single frame. The
output_to_bitmap(Bitmap&) function can be used to set pixels on an
existing bitmap of the correct size to the RGB values that
should be displayed. The to_bitmap() function will allocate a new bitmap
and fill it using output_to_bitmap.
This new class also implements bilinear scaling of the subsampled U and
V planes so that subsampled videos' colors will appear smoother.
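A rough outline of the interface described above (simplified; the real
signatures may differ):

class VideoFrame {
public:
    virtual ~VideoFrame() = default;

    // Convert this frame to RGB and write it into an existing bitmap of
    // the correct size.
    virtual void output_to_bitmap(Gfx::Bitmap&) = 0;

    // Allocate a new bitmap and fill it via output_to_bitmap().
    ErrorOr<NonnullRefPtr<Gfx::Bitmap>> to_bitmap();
};

class SubsampledYUVFrame final : public VideoFrame {
public:
    // Upscales the subsampled U and V planes with bilinear filtering while
    // converting to RGB.
    virtual void output_to_bitmap(Gfx::Bitmap&) override;
};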
We currently have two build-time parsers for the UCD's emoji-test.txt
file. To prepare for future changes, this removes the Bash parser and
moves its functionality to the newer C++ parser.
So far we've gotten away with using GCC 11 for Lagom and to compile the
toolchain, but via #15795 we discovered a compiler bug that has been
fixed in the latest version but breaks the build with CI's GCC 11.
Time for an upgrade :^)
We already use ubuntu-22.04 images in most places, so this is pretty
straightforward. The only exception is Idan's self-hosted runner, which
uses Ubuntu Focal. LibJS should build fine with GCC 11, still.
There were some notable changes to the CLDR JSON format and data in this
release.
The patterns for a date at a specific time, i.e. "{date} at {time}", now
appear under the "atTime" attribute of the "dateTimeFormats" object.
Locale specific changes that affected test-js:
All locales:
* In many patterns, the code points U+00A0 (NO-BREAK SPACE) and U+202F
(NARROW NO-BREAK SPACE) are now used in place of an ASCII space. For
example, before the "dayPeriod" fields AM and PM.
* Separators such as U+2013 (EN DASH) are now surrounded by U+2009 (THIN
SPACE) in place of an ASCII space character.
Locale "en":
* Narrow localizations of time formats are even more narrow. For
example, the abbreviation "wk." for "week" is now just "wk".
Locale "ar":
* The code point U+060C (ARABIC COMMA) is now used in place of an ASCII
comma.
* The code point U+200F (RIGHT-TO-LEFT MARK) now appears at the
beginning of many localizations.
* When the "latn" numbering system is used for currency formatting, the
currency symbol is now more consistently placed at the end of the pattern.
Locale "he":
* The "many" plural rules category has been removed.
Locales "zh" and "es-419":
* Several display-name localizations were changed.
This ensures that the toolchain building scripts will use the host
toolchain that has been found manually, and that they won't fall back to
`cc` (which in the worst case may not even be installed).
Homebrew does not add upstream LLVM's install location to $PATH so as
not to conflict with XCode tools, so we need to run `brew --prefix llvm`
to figure out its install path.
Due to the way we lazily construct prototypes and constructors for web
platform interfaces, it's possible for nested GC allocation to occur
while GC objects have been allocated but not fully constructed.
If the garbage collector ends up running in this state, it may attempt
to call JS::Cell::visit_edges() on an object whose vtable pointer hasn't
been set up yet.
This patch works around the issue by deferring GC while intrinsics are
being brought up. Furthermore, we also create a dummy global object for
the internal realm, and populate it with intrinsics. This works around
the same issue happening when allocating something (like the default UA
stylesheets) in the internal realm.
These solutions are pretty hacky and sad, so I've left FIXMEs about
finding a nicer way.
We have logic for serenity_generated_sources which works well for source
files that are specified in GENERATED_SOURCES prior to calling
serenity_lib or serenity_bin. However, code generated with
invoke_generator, and the LibWeb generators do not always follow the
pattern of the IDL and GML files.
For the LibWeb generators, we can just add_dependencies to LibWeb at the
time we declare the generate_Foo custom target. However for LibLocale,
LibTimeZone, and LibUnicode, we don't have the name of the target
available, so export the name in a variable to set into
GENERATED_SOURCES.
To make this work for Lagom, we need to make sure that lagom_lib and
serenity_bin in Lagom/CMakeLists.txt call serenity_generated_sources on
the target.
This enables the Xcode generator on macOS hosts, at least for Lagom.
This matches serenity_lib, and consolidates the logic to strip Lib from
the front of the library name for the Lagom export name into one place
at the top of lagom_lib.
Back when adding support for `pls` as a `sudo` replacement,
`build-image-qemu` dynamically set the `SUDO` variable depending on what
system it was running on and used the contents of that variable to call
the correct elevation utility.
During recent changes to the elevation error message, that usage of the
variable was replicated across all of our scripts, but without also
replicating the logic to set that variable in the first place.
Add back the variable setting logic to all the other scripts to keep
them from running into unset variables.
Also do this for Shell.
This greatly simplifies the CMakeLists in Lagom, replacing many glob
patterns with a big list of libraries. There are still a few special
libraries that need some help to conform to the pattern, like LibELF and
LibWebView.
It also lets us remove essentially all of the Serenity or Lagom binary
directory detection logic from code generators, as now both projects'
directories enter the generator logic from the same place.
By deferring to the CMakeLists in each of these libraries' directories,
we can get rid of a lot of curious GLOB patterns and list removals in
the Lagom CMakeLists.
Before this change, the Meta/build-image-*.sh scripts would error out
with the message "this script needs to run as root" while the actual
error was unrelated, which caused a lot of confusion in #build-problems
on Discord.
WebDriver aims to implement the WebDriver specification found at
https://w3c.github.io/webdriver/webdriver-spec.html . It's an HTTP
server that can create Browser sessions and control them.
Co-authored-by: Florent Castelli <florent.castelli@gmail.com>
This file implements the POSIX APIs from <regex.h>, and is not suitable
for inclusion in a Lagom build. If we do include it, it will override
the host's regex functions and wreak havoc if it's resolved before the
host's implementation.
Get rid of the bespoke NavigatorObject class and use the modern IDL
strategies for creating platform objects to re-implement Navigator and
its associated mixin interfaces. While we're here, implement it in a
way that brings WorkerNavigator up to spec :^)
This new code generator takes all the .idl files in LibWeb, looks for
each top level interface in there with an [Exposed=Foo] attribute, and
adds code to add the constructor and prototype for each of those exposed
interfaces to the realm of the relevant global object we're initializing.
It will soon replace WindowObjectHelper as the way that web interfaces
are added to the Window object, and will be used in the future for
creating proper WorkerGlobalScope objects for dedicated and shared
workers.
Instead, create a tree of Parsers all pointing to a top-level Parser.
All module imports and interfaces are stored at the top level, instead
of in a static map. This allows creating multiple IDL::Parsers in the
same process without them stepping on each other's toes.
An "inherit attribute" calls an ancestor's getter with the same name,
but defines its own setter. Since a parent class's public methods are
exposed to child classes, we don't have to do any special handling here
to call the parent's methods, it just works. :^)
The mappings are exposed via `Unicode::code_point_decomposition(u32)`
and `Unicode::code_point_decompositions()`, the latter being useful for
reverse searching a code point from its decomposition.
The normalization code does not make use of `Quick_Check` props (https://www.unicode.org/reports/tr44/#Decompositions_and_Normalization),
meaning no quick check optimizations.
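Usage is along these lines (the field names here are assumptions for
illustration, not necessarily the exact generated struct):

// Look up the decomposition mapping of a single code point, if any.
auto mapping = Unicode::code_point_decomposition(0x00C5); // LATIN CAPITAL LETTER A WITH RING ABOVE
if (mapping.has_value()) {
    for (u32 code_point : mapping->decomposition)
        dbgln("decomposes to U+{:04X}", code_point);
}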
Doesn't use them in libc headers so that those don't have to pull in
AK/Platform.h.
AK_COMPILER_GCC is set _only_ for gcc, not for clang too. (__GNUC__ is
defined in clang builds as well.) Using AK_COMPILER_GCC simplifies
things some.
AK_COMPILER_CLANG isn't as much of a win, other than that it's
consistent with AK_COMPILER_GCC.
This includes punting on the actual file picker implementation all the
way out to the PageClient. It's likely that some of the real details
should be implemented somewhere closer, like the BrowsingContext or the
Page, but we'll get there.
For now, this allows https://copy.sh/v86 to load the emulation of the
preselected images all the way until it hits a call to
URL.createObjectURL.
Without this, GenerateUnicodeData crashes when run during the build.
With this, `serenity.sh run` brings up a running SerenityOS.
Since GenerateUnicodeData doesn't take a lot of time to run, just
disable optimizations to work around the problem for now.
Works around #15449.
This is a preparation to check if our users find noticeable bugs in the
x86-64 target, before we can decide if we want to remove the i686 target
for good.
Instead of trying to create a Window and a Document, and use those to
create a ParsingContext, just use the JS::Realm only constructor to make
sure that bindings are stashed on the main thread VM's realm.
In this generator change, we introduce a new factory method for bound
LibWeb objects that takes a JS::Realm instead of Web::HTML::Window.
The two methods are allowed to co-exist at this point, but the option to
take an HTML::Window will be removed once all classes are converted to
the new API.
We also start using the new Bindings::ensure_web_[prototype/constructor]
helpers from the Bindings/Intrinsics class so that we can eventually
remove the helpers from Window.h for the same.
This matches how LibMain is used within Serenity. This commit makes it
possible to build Lagom with LTO. Previously, `serenity_main` functions
would be dead-stripped away, as the linker could prove that nothing from
the executable ever called them.
This is the initial port of Lagom to win32. This will enable developers
to use Lagom as an alternative to vanilla STL/StandardC++Library - which
gives a much richer environment (think QtCore - but modern).
My main incentive is to have a native Windows Ladybird working.
I am starting with AK, which does not yet fully compile (on mingw). When
AK is compiling (currently fails building StringBuffer.cpp) - I will
continue to LibCore and then the rest of the user space libraries
(excluding the GUI, which will be another different effort).
Most of the code is happily stolen from Andrew Kaster's fork - he
deserves the credit.
Co-authored-by: Andrew Kaster <akaster@serenityos.org>
Some time zones, like "Asia/Shanghai", use a set of DST rules that end
before present day. In these cases, we should fall back to the last
possible RULE entry from the TZDB. The time zone compiler published by IANA (zic)
performs the same fallback starting with version 2 of the time zone file
format.
IDL dictionary members are nullable by default (unless marked as
`required`) and should not get any value assigned unless the userland
code provided one that isn't undefined, or the member has a default
value.
This is so that we can use Optional<T> in the internal representation
and check for "is present" via Optional::has_value().
The SourceGenerator's @else@ mapping is only set in the second iteration
of the loop, causing the generated return for unrecognized values to not
be guarded by an else statement.
We can simply use a hardcoded 'else' here; @else@ only exists to create
the first comparison as a plain 'if' and subsequent ones as 'else if'.
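The generated code is just an if/else-if chain, roughly like this
(illustrative):

Example value;
if (string_value == "first"sv)
    value = Example::First;
else if (string_value == "second"sv)
    value = Example::Second;
else
    // Previously the 'else' above was missing, so this return for
    // unrecognized values was not guarded.
    return Error::from_string_literal("unrecognized value");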
Without this, the generated DOMExceptionConstructor does not refer to
the WebIDL::DOMException with its fully qualified name. This caused an
ambiguity error on my machine.
Let's stop putting generic types and AOs from the Web IDL spec into
the Bindings namespace and directory in LibWeb, and instead follow our
usual naming rules of 'directory = namespace = spec name'. The IDL
namespace is already used by LibIDL, so Web::WebIDL seems like a good
choice.
SafeFunction automatically registers its closure memory area in a place
where the JS garbage collector can find it.
This means that you can capture JS::Value and arbitrary pointers into
the GC heap in closures, as long as you're using a SafeFunction, and the
GC will not zap those values!
There's probably some performance impact from this, and there's a lot of
things that could be nicer/smarter about it, but let's build something
that ensures safety first, and we can worry about performance later. :^)
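For example (illustrative; some_js_value is a stand-in):

// The GC can see the captures inside a SafeFunction, so this JS::Value
// won't be collected or zapped while the callback is alive.
JS::SafeFunction<void()> on_complete = [result = some_js_value]() {
    dbgln("Result: {}", result.to_string_without_side_effects());
};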
We won't be able to use local servers on Android without some serious
Android work to create background tasks, so just disable this for now,
as it currently relies on Core::Account to take over from SystemServer.
Some systems don't have /usr/bin/time available, and during most runs
of lint-ci we don't actually care that much about the exact timing.
Therefore, let's just remove it. It's easy enough to add back in, if
someone wants to investigate an issue.
This code generator no longer creates JS wrappers for platform objects
in the old sense, instead they're JS objects internally themselves.
Most of what we generate now are prototypes - which can be seen as
bindings for the internal C++ methods implementing getters, setters, and
methods - as well as object constructors, i.e. bindings for the internal
create_with_global_object() method.
Also tweak the naming of various CMake glue code existing around this.
This name more accurately reflects what we are checking. Also add an
explanatory note that only a hand-curated subset of platform object
types is checked in the absence of a full generated list.
With this device being added, we can now boot into graphics mode on
these platforms too. For the ISA-PC machine this is basically the only
viable option to use, but in the future, we should remove this device
for the microvm machine type as it should allow us to determine better
options and detect them by using a given device tree blob.
I totally overlooked that /usr/bin/time is not universal, which broke
some systems. This commit instead calls 'time', allowing either a shell
built-in to kick in, or a (potentially different) binary to be found
anywhere in the PATH.
This speeds up the script from about 90ms down to about 10ms, for
reasonably common changesets.
80ms may not feel like much, but it adds up quickly, especially since
we run a dozen scripts during pre-commit.
This speeds up the script from about 140ms down to <10ms, even for
changesets that touch a handful of different GML files.
130ms may not feel like much, but it adds up quickly, especially since
we run a dozen scripts during pre-commit.
This speeds up the script from about 120ms down to about 20ms for
reasonably common changesets.
100ms may not feel like much, but it adds up quickly, especially since
we run a dozen scripts during pre-commit.
This speeds up the script from about 170ms down to about 80ms for
changes in Debug.h.in or similarly "DEBUG"-rich files, down to <10ms for
more common changesets.
160ms may not feel like much, but it adds up quickly, especially since
we run a dozen scripts during pre-commit.
This fixes an issue on Twitter where they were instantiating an
IntersectionObserver with a null root. The root IDL type is
`(Element or Document)?` so null needs to be allowed.
Rather than invoking AK::Time::from_timestamp at runtime, we can do so
at compile time. This reduces invoking TimeZone::get_time_zone_offset
100,000 times in a loop from about 7 seconds to 30 milliseconds.
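In other words, the generated tables can now contain something like
this (illustrative; assumes from_timestamp is usable in a constant
expression after this change):

// Evaluated by the compiler, so no AK::Time conversion happens at runtime.
static constexpr auto s_example_transition = AK::Time::from_timestamp(2022, 11, 6, 2, 0, 0, 0);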
We compute the effective overload sets for each argument count at build
time, to save us having to do it every time a function with overloads
is called.
As part of this, I've moved a couple of methods for checking for
null/undefined from UnionType to Type, and filled in more of their
steps.
This now detects more, and so causes us to hit a `TODO()` which is too
big for me to go after right now, so I've replaced that assertion with
a log message.
Track the kind of Type it is, and use that to provide some convenient
`is_foo()` / `as_foo()` methods. While I was at it, made these all
classes instead of structs and made their data private.
IDL function overload resolution requires knowing each IDL function's
parameters and their types at runtime. The simplest way to do that is
just to make the types the generator uses available to the runtime.
Parsing has moved to LibIDL, but code generation has not, since that is
very specific to WrapperGenerator.
This script is useful when wanting to install lagom libraries for
projects using Lagom via FetchContent, but trips over itself if the
project links other non-Lagom imported targets to itself. So, let's just
skip them.
This mainly changes two aspects:
- The category can now be a single letter, such as 'w' to indicate the
file Utilities/w.cpp
- Spaces in the category (or list) are no longer allowed. This follows
the lived practice of writing category lists as "Foo+Bar: Quux"
Closes #15243.
This was apparently never used by anyone except me, and currently
fails silently.
The script originally allowed easy inspection of potentially missing
resources, but that seems no longer useful. Even after restoring the
script to a working state, I found nothing with it.
A somewhat usable version might be available at
https://github.com/BenWiederhake/serenity/tree/historic/lint-missing-resources.sh
However, there seems to be no interest in the script, so it is better to
remove it.
This was apparently never used by anyone except me, and currently
fails silently.
The script originally allowed easy inspection of the difference between:
1. The list of declared syscalls according to Kernel/API/Syscall.h
2. The list of syscalls implemented by the UserspaceEmulator according
to Userland/DevTools/UserspaceEmulator/Emulator_syscalls.cpp
3. The list of syscalls documented in Base/usr/share/man/man2/.
Here's how the script could have been updated:
SYSCALLS_KERNEL="$(echo 'Kernel syscalls'; echo; \
grep -Eo '^ +S\(.*, NeedsBigProcessLock::' Kernel/API/Syscall.h | \
sed -Ee 's-^ +S\((.+), .*-\1-' | sort)"
SYSCALLS_UE="$(echo 'Implemented in UserspaceEmulator'; echo; \
grep -Eo '^ +case SC_.*:$' \
Userland/DevTools/UserspaceEmulator/Emulator_syscalls.cpp | \
sed -Ee 's,^ +case SC_(.*):$,\1,' | sort)"
SYSCALLS_MAN2="$(echo 'Documented syscalls'; echo; \
find Base/usr/share/man/man2/ ! -type d -printf '%f\n' | \
sed -Ee 's,\.md,,' | sort)"
diff --color=always \
<(echo "${SYSCALLS_KERNEL}") <(echo "${SYSCALLS_UE}")
diff --color=always \
<(echo "${SYSCALLS_KERNEL}") <(echo "${SYSCALLS_MAN2}")
A more readable version might be available at
https://github.com/BenWiederhake/serenity/tree/historic/syscall-linting
However, there seems to be no interest in the script, so it is better to
remove it.
basename from GNU coreutils 8.32 (the default on Debian Bullseye, as
bash 5.2.0 does not include basename as a built-in command) does not
accept multiple arguments. This causes the command to fail with no
output, and for some reason the error code is not propagated. This
means that the loop never gets executed, and thus the check never
actually does anything. This commit fixes that behavior by calling
'basename' multiple times.
Parse emoji from emoji-serenity.txt to allow displaying their names and
grouping them together in the EmojiInputDialog.
This also adds an "Unknown" value to the EmojiGroup enum. This will be
useful for emoji that aren't found in the UCD, or for when UCD downloads
are disabled.
This allows us to find emoji data for files such as /res/emoji/U+A9.png.
U+00A9 is not fully-qualified (its full form is U+00A9 U+FE0F). But the
UCD has unqualified data for this code point; generating it allows us to
categorize these emoji appropriately in the EmojiInputDialog.
For now this is a Lagom-only application as it is not compatible with
serenity in its current state.
The only change is that it is released under a different license with
permission from all the authors.
Newer CMake versions have built-in functions to decompress files. These
functions work on pure Windows as well as Linux. This eliminates the
need to search for external tools (tar, gzip, zip) and helps with
fixing #9866.
In order to finally fix #9866 we need to decide to bump the CMake
version requirement and remove the checks. If we demand a newer CMake
version, we will lose Ubuntu 20.04 as a build target, as it ships
with CMake 3.16.
For now we keep compatibility with CMake 3.16, and only if CMake 3.18
has been found do we use its new functionality.
Remove the Corrosion dependency, and use the now-builtin
add_jakt_executable function from the Jakt install rules to build our
example application.
By using find_package(Jakt), we now have to set ENABLE_JAKT manually on
both serenity and Lagom at the same time, so the preferred method to do
this for now is:
cmake -B Build/superbuild<arch><toolchain> \
-S Meta/CMake/Superbuild \
-DENABLE_JAKT=ON \
-DJAKT_SOURCE_DIR=/path/to/jakt
Where omitting JAKT_SOURCE_DIR will still pull from the main branch of
SerenityOS/jakt. This can be done after running Meta/serenity.sh run.
According to TR #51, the "best definition of the full set [of emojis] is
in the emoji-test.txt file". This defines not only the emoji themselves,
but the order in which they should be displayed, and what "group" of
emojis they belong to.
There are still some remaining cases where generated code depends on the
existence of FooWrapper => Web::NS::Foo mappings. Fixing those will
require figuring out the appropriate namespace for all IDL types, not
just the currently parsed interface.
Unlike ensure_web_prototype<T>(), the cached version doesn't require the
prototype type to be fully formed, so we can use it without including
the FooPrototype.h header. It's also a bit less verbose. :^)
This is a monster patch that turns all EventTargets into GC-allocated
PlatformObjects. Their C++ wrapper classes are removed, and the LibJS
garbage collector is now responsible for their lifetimes.
There's a fair amount of hacks and band-aids in this patch, and we'll
have a lot of cleanup to do after this.
This patch moves the following things to being GC-allocated:
- Bindings::CallbackType
- HTML::EventHandler
- DOM::IDLEventListener
- DOM::DOMEventListener
- DOM::NodeFilter
Note that we only use PlatformObject for things that might be exposed
to web content. Anything that is only used internally inherits directly
from JS::Cell instead, making them a bit more lightweight.
This tells the wrapper generator that there is no separate wrapper class
for this interface, and it should refer directly to the C++ "Foo" object
instead of "FooWrapper".
To enable incremental movement towards the removal of DOM object
instance wrappers, this patch adds a NO_INSTANCE argument that can be
passed to libweb_js_wrapper().
The UCD only cares about a few locales for special casing rules (az, lt,
and tr). Unfortunately, LibUnicode cannot use LibLocale once the
libraries are separate because LibLocale will need to use LibUnicode for
many more things; thus there would be a circular dependency. Instead,
just generate the small enum needed for this one use case.
When LibLocale is placed in the Locale namespace, this will conflict
with the Locale structure in each CLDR generator. Rename this to
"LocaleData", and rename its parent UnicodeLocaleData to just "CLDR"
to avoid confusion between LocaleData and UnicodeLocaleData.
Currently, LibUnicodeData contains the generated UCD and CLDR data. Move
the UCD data to the main LibUnicode library, and rename LibUnicodeData
to LibLocaleData. This is another preparatory change to migrate to
LibLocale.
To prepare for placing all CLDR generated data in a new library,
LibLocale, this moves the code generators for the CLDR data to the
LibLocale subfolder.
The FLAC "spec tests", or rather the test suite by xiph that exercises
weird FLAC features and edge cases, can be found at
https://github.com/ietf-wg-cellar/flac-test-files and is a good
challenge for our FLAC decoder to become more spec compliant. Running
these tests is similar to LibWasm spec tests, you need to pass
INCLUDE_FLAC_SPEC_TESTS to CMake.
As of integrating these tests, 23 out of 63 fail. :yakplus:
We don't need this AHCI controller to be present as we already have one
in the Q35 machine. This will help us use the correct boot device in GRUB
setups later on.
The current lookup code and emoji.txt generator expects codepoints to
not include leading zeros. This may change in the future, but it's worth
enforcing the current convention until then.
Since LibUnicode depends on this data, it used to include
Intl/AbstractOperations which in turn includes a number of other LibJS
headers. By moving this to its own header with minimal includes we can
save on rebuilding LibUnicode for unrelated LibJS header changes.
Intrinsics, i.e. mostly constructor and prototype objects, but also
things like empty and new object shape now live on a new heap-allocated
JS::Intrinsics object, thus completing the long journey of taking all
the magic away from the global object.
This represents the Realm's [[Intrinsics]] slot in the spec and matches
its existing [[GlobalObject]] / [[GlobalEnv]] slots in terms of
architecture.
In the majority of cases it should now be possible to fully allocate a
regular object without the global object existing, and in fact that's
what we do now - the realm is allocated before the global object, and
the intrinsics between both :^)
The primary motivation for this is to make `generate-emoji-txt.sh` more
useful for generating a compact list of new emoji being added (e.g.
for use in commit messages / PRs) if it's run with an emoji image
directory that contains only the new emojis.
It currently takes upwards of 40 minutes to download the ccache on macOS
and often errors out near the end. Change the cache version to bust it
so we can start anew. Reduce its max size to 2 GB (a clean build is ~0.9
GB, so this allows just over 2 clean builds to be cached).
- Prefer VM::current_realm() over GlobalObject::associated_realm()
- Prefer VM::heap() over GlobalObject::heap()
- Prefer Cell::vm() over Cell::global_object()
- Prefer Wrapper::vm() over Wrapper::global_object()
- Inline Realm::global_object() calls used to access intrinsics as they
will later perform a direct lookup without going through the global
object
This is needed so that the allocated NativeFunction receives the correct
realm, usually forwarded from the Object's initialize() function, rather
than using the current realm.
Global object initialization is tightly coupled to realm creation, so
simply pass it to the function instead of relying on the non-standard
'associated realm' concept, which I'd like to remove later.
This works essentially the same way as regular Object::initialize() now.
Additionally this allows us to forward the realm to GlobalObject's
add_constructor() / initialize_constructor() helpers, so they set the
correct realm on the allocated constructor function object.