- Add support for placement of abspos items into the track formed by the
last line and the padding edge of the grid container
- Correctly handle auto-positioned abspos items by placing them between
the padding edges of the grid container
Fixes crashing on https://wpt.live/css/css-grid/abspos/positioned-grid-descendants-001.html
We have support for using (shift+)tab to move focus to the next/previous
element on the page. However, there were several ways for this to crash
as written. This updates our implementation to check if we did not find
a node to move focus to, and to reset focus to the first/last node in
the document.
This doesn't seem to work when wrapping around from the first to the
last node. A FIXME has been added for that, as this would already not
work before this patch (the main focus here is not crashing).
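A minimal sketch of the intended fallback, using a plain vector of
focusable nodes rather than the real DOM traversal (all names here are
illustrative):

    #include <cstddef>
    #include <optional>
    #include <vector>

    struct Node { };

    // Illustrative: when no next/previous focus candidate is found, wrap
    // around to the first/last focusable node instead of crashing.
    std::optional<size_t> next_focus_index(std::vector<Node> const& focusable,
        std::optional<size_t> current, bool reverse)
    {
        if (focusable.empty())
            return std::nullopt; // nothing to focus at all
        if (!current.has_value())
            return reverse ? focusable.size() - 1 : 0;
        if (!reverse && *current + 1 < focusable.size())
            return *current + 1;
        if (reverse && *current > 0)
            return *current - 1;
        // We did not find a node to move focus to: reset to the
        // first/last node in the document.
        return reverse ? focusable.size() - 1 : 0;
    }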
The spec says we don't need to await navigations if we navigate to the
same URL that we are already on, but at least in our implementation, we
should still await the page load. Otherwise, we will invoke WebDriver
endpoints on the wrong page.
This is necessary when we add more ServiceWorker capabilities that
actually check this value. The more this spoof functionality is used,
the more we'll need to actually support serving test files over https.
Our handling of left vs. right modifier keys (shift, ctrl, etc.) was
largely not to spec. This patch adds explicit UIEvents::KeyCode values
for these keys, and updates the UI to match native key events to these
keys (as best as we are able).
...traversal. We've already fixed steps 3 and 9 to not filter out
non-positioned stacking contexts, because modern CSS has more ways to
create a stacking context besides being positioned with z-index (such
as using the "transform", "filter" or "clip-path" properties).
See the following spec issue for more details:
https://github.com/w3c/csswg-drafts/issues/2717
Visual improvement on https://basecamp.com/
Prior to this change, SVGs were following the CSS painting order, which
means SVG boxes could have established a stacking context and been
sorted by z-index. There is a section in the spec that defines what
kinds of SVG boxes should create a stacking context:
https://www.w3.org/TR/SVG2/render.html#EstablishingStackingContex
However, that spec is marked as a draft, and the rendering order it
describes does not match what other engines do.
This spec issue comment has a good summary of what other engines
actually do regarding painting order:
https://github.com/w3c/svgwg/issues/264#issuecomment-246432360
"as long as you're relying solely on the default z-index (which SVG1
does, by definition), nothing ever changes order when you apply
opacity/filter/etc".
This change aligns our implementation with other engines by forbidding
SVG boxes from creating a stacking context and painting them in the
order they are defined in the tree.
When the TokenStream code was originally written, there was no such
concept in the CSS Syntax spec. But since then, it's been officially
added (https://drafts.csswg.org/css-syntax/#css-token-stream), and the
parsing algorithms are described in terms of it. This patch brings our
implementation in line with the spec. A few deprecated TokenStream
methods are left around until their users are also updated to match the
newer spec.
There are a few differences:
- They name things differently. The main confusing one is that we had
`next_token()` which consumed a token and returned it, but the spec
has a `next_token()` which peeks the next token (see the sketch after
this list). The spec names are honestly better than what I'd come up
with. (`discard_a_token()` is a nice addition too!)
- We used to store the index of the token that was just consumed, and
they instead store the index of the token that will be consumed next.
This is a perfect breeding ground for off-by-one errors, so I've
finally added a test suite for TokenStream itself.
- We use a transaction system for rewinding, and the spec uses a stack
of "marks", which can be manually rewound to. These should be able to
coexist as long as we stick with marks in the parser spec algorithms,
and stick with transactions elsewhere.
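A rough sketch of the spec-shaped interface, using simplified
standalone types rather than the real implementation:

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Simplified stand-in for a CSS token.
    struct Token { int value { 0 }; };

    class TokenStream {
    public:
        explicit TokenStream(std::vector<Token> tokens)
            : m_tokens(std::move(tokens))
        {
        }

        // Spec "next token": peek without consuming.
        Token const& next_token() const
        {
            static Token const eof_token {};
            return m_index < m_tokens.size() ? m_tokens[m_index] : eof_token;
        }

        // Spec "consume a token": return the next token and advance.
        Token const& consume_a_token()
        {
            Token const& token = next_token();
            discard_a_token();
            return token;
        }

        // Spec "discard a token": advance without returning anything.
        void discard_a_token()
        {
            if (m_index < m_tokens.size())
                ++m_index;
        }

        // Spec "mark" / "restore a mark": a stack of indices to rewind to.
        void mark() { m_marks.push_back(m_index); }
        void restore_a_mark()
        {
            m_index = m_marks.back();
            m_marks.pop_back();
        }
        void discard_a_mark() { m_marks.pop_back(); }

    private:
        std::vector<Token> m_tokens;
        // Index of the token that will be consumed *next*, per the spec,
        // rather than the token that was just consumed.
        size_t m_index { 0 };
        std::vector<size_t> m_marks;
    };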
We have a bit of forgiveness around allowing tests to pass with varying
trailing newlines. Only write a rebaselined test to disk if it would not
have passed under those conditions.
Before this change, we transferred the input element's line-height to
both the editable text *and* the placeholder. This caused some strange
doubling of the effective line-height when the editable text was empty,
pushing down the placeholder.
These were used to provide a layer of abstraction between ResourceLoader
and the networking backend. Now that we only have RequestServer, we can
remove these adapters to make the code a bit easier to follow.
Now that we use libcurl, there's no reason to keep Qt networking around.
Further, it doesn't support all the features we need anyway, such as
non-buffered request handling for SSE.
The spec expects `postMessage()` to act as if it is invoked
immediately. Since `postMessage()` isn't actually invoked immediately,
keep tasks with source `PostedMessage` in the task queue, so that these
tasks are processed. Fixes a hang when `WorkerGlobalScope.close()` is
called immediately after `postMessage()`.
https://www.w3.org/TR/event-timing/#sec-performance-event-timing
Add IDL, header, and stubs for the PerformanceEventTiming interface.
Two missing `PerformanceEntry` types that have come up in issues
are the `first-input` and the `event` entryTypes. Both of those are
this interface.
Also, because both of those are this same interface, the static
methods from the parent class are difficult to implement because
of instance-specific details. Might either need subclasses or to
edit the parent and also everything that inherits from it :/
We don't create a ChromeProcess in headless-browser, so it is currently
not increasing its open file limit. This is causing crashes on macOS,
which has a very low default limit.
Before this change, the viewport was allowed to be scrolled whenever it
had scrollable overflow, which is not correct when overflow is
specified to be hidden.
Partially reverting a3149c1ce9
Spinning the event loop was causing a crash on:
https://wpt.live/url/percent-encoding.window.html
As it was turning what is meant to be a synchronous operation into an
asynchronous one.
The sequence demonstrated by the reproducing test is as follows:
* A src attribute is changed for the iframe
* process_the_iframe_attributes entered with valid content navigable
* Event loop is spun, allowing the queued iframe removal to execute
* process_the_iframe_attributes continues with null content navigable
* 💥
This reverts commit 556a0936dd.
This was causing a large slow down in WPT, and a crash on macOS during
session shutdown when running WebDriver manually.
Transitions are currently not implemented for pseudo-elements, which
causes the transition to be applied to the "real"/"parent" element.
When a transition adjusts width/height on a pseudo-element, this causes
the real element's layout to break.
As a quick fix, we just skip doing transitions when they target
pseudo-elements.
We were overly aggressive in clipping SVG roots, which effectively made
them behave as if they always had `overflow: hidden`.
This fixes incorrect clipping of the logo on https://basecamp.com/
This fixes the following WPT test, which was failing due to issues
stemming from all of the windows that had been opened:
https://wpt.live/url/failure.html
This will give us 1205 new subtests passing in WPT.
We currently create a single WebView and run all 1400+ LibWeb tests in
serial over that WebView. Instead, let's create as many WebViews as
there are processors on the system, and run LibWeb tests concurrently
over those views.
To do this performantly requires that we never block the main thread of
the headless-browser process once the tests are running. Doing so will
effectively pause execution of all other tests. So test execution is now
Promise-based.
On my machine (with a hardware concurrency of 32), this reduces the run
time of LibWeb tests from 31.382s to 3.640s. CPU utilization increases
from 5% to 67%.
Instead, just log the view's current URL when the WebView crashes. It
won't make any sense to track the executing test this way once there are
many tests running concurrently.
We currently run all tests in a single WebView instance. That instance
owns the process-wide RequestClient / ImageDecoderClient, so if we were
to create a second instance, we'd run into trouble.
This migrates ownership of these services to the Application class, and
makes the Application own the WebView. In the future, this will let the
Application own a list of views.
We currently pass around the individual fields of the Application class
to a bunch of free functions. This makes adding a new field, and
passing it all the way to e.g. run_dump_test, pretty annoying, as we
have to go
through about 5 function calls.
This will get much worse in an upcoming patch to run LibWeb tests
concurrently. There, we will have to further pass these flags around as
async lambda value captures.
To make this nicer, just access the flags from Application::the(), which
is how the "real" UIs access their application objects as well.
If, by the time we need to schedule rendering of the next frame, the
previous one is still not processed, we could skip it instead of
growing the task queue.
Should help with https://github.com/LadybirdBrowser/ladybird/issues/1647
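A sketch of that guard (names are illustrative, not the actual
scheduler):

    #include <utility>

    // Illustrative: drop a frame instead of queueing another repaint task
    // while the previous one is still being processed.
    struct FrameScheduler {
        bool frame_in_flight { false };

        template<typename PaintFunction>
        void maybe_schedule_frame(PaintFunction&& paint)
        {
            if (frame_in_flight)
                return; // previous frame not processed yet; skip this one
            frame_in_flight = true;
            std::forward<PaintFunction>(paint)();
        }

        void on_frame_processed() { frame_in_flight = false; }
    };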
When the flex container is sized under a min-content constraint in the
main axis, any flex items with a percentage main size should collapse
to zero width, not take up their own intrinsic min-content size.
This is not in the spec, but matches how other browsers behave.
Fixes an issue where the cartoons on https://basecamp.com/ were way
too large. :^)
The static position of a box is defined by the type of formatting
context it belongs to, so let's define this algorithm separately for
each FC instead of assuming
FormattingContext::calculate_static_position_rect() understands how to
handle all of them.
Also, with this change, calculate_static_position_rect() is no longer a
virtual function.
At least on my Mac, clock_gettime only provides millisecond resolution.
So if many WebContent processes are opened at once, it is not unlikely
that they will all create their backing stores within the same ms. When
that happens, all but the first will fail (and crash).
To prevent this, generate the shared memory file name based on the PID
and a static counter.
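Roughly the idea, as a sketch (the name format and helper here are
illustrative):

    #include <string>
    #include <unistd.h>

    // Illustrative: combine the PID with a per-process counter so that two
    // WebContent processes creating backing stores in the same millisecond
    // can never pick the same shared memory name.
    static std::string next_backing_store_name()
    {
        static unsigned counter = 0;
        return "/backing-store-" + std::to_string(static_cast<long>(getpid()))
            + "-" + std::to_string(counter++);
    }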
We are currently trying to access the current parent and top-level
browsing contexts from the current BC itself. However, if the current BC
is closed, its association to the parent and top-level BCs is lost, and
we are no longer able to handle WebDriver endpoints involving those BCs.
Instead, let's store the parent and top-level BCs separately, and update
them in accordance with the spec.
We were already allowing intrinsic height layout to see definite widths,
and I can't think of a reason *not* to allow it the other way around.
More importantly, this fixes an issue where things with an aspect ratio
didn't have a height to resolve against before.
Makes the logo show up on https://basecamp.com/ :^)
If we end up in a situation where the navigable no longer has an active
window, we can't perform navigation or many other navigable operations.
These are all ad-hoc, since the navigables spec is basically all written
as if there's always an active window. Unfortunately, the active window
comes from the active document's browsing context, which is a nullable
concept even in the spec, so we do need to deal with null here.
This removes all the locally reproducible crashes when running WPT over
the legacy Japanese encoding directory on my computer.
Yes, this is a bit of a monkey patch, but it should be harmless since
we're (as I understand it) dealing with navigables that are still
hanging around with related tasks queued on them. Once all these tasks
have been completed, the navigables will go away anyway.
Because of the previous awkward factoring of Origin, we had two
implementations of Origin serialization and creation. Move the
implementation of DOMURL::url_origin into URL::origin, and
instead use the implementation of URL::Origin::serialize for
serialization (replacing URL::serialize_origin).
This happens to fix 8 URL subtests, as the two implementations had
diverged, and URL::serialize_origin was previously missing the spec
changes of: whatwg/url@eee49fd and whatwg/url@fff33c3
This more closely matches the specification, and removes any dependency
on LibWeb in the implementation of DOMURL::url_origin.
It is also one step closer to moving BlobURLRegistry to a singleton
process to match LibWeb's multiprocess Worker architecture.
While Origin is defined in the HTML spec, this leaves us with quite an
awkward relationship, as the URL spec makes use of AOs from what is
defined in the HTML spec.
To simplify this factoring, relocate Origin into LibURL.
Before this change we were serializing them in a bogus 8-digit hex color
format that isn't actually recognized by HTML.
This code will need more work when we start supporting color spaces
other than sRGB.
Now we can register jobs and they will be executed on the event loop
"later". This doesn't feel like the right place to execute them, but
the spec needs some updates in this regard anyway.
Implements https://github.com/whatwg/html/pull/10007, which basically
moves style, layout, and painting from the HTML processing task into an
HTML task with the "rendering" source.
The biggest difference is that now we no longer schedule HTML event loop
processing whenever we might need a repaint, but instead queue a global
rendering task 60 times per second that will check if any documents
need a style/layout/paint update.
That is a great simplification of our repaint scheduling model. Before,
we had:
- An optional timer that schedules animation updates at 60 Hz
- An optional timer that schedules rAF updates
- A PaintWhenReady state to schedule a paint if the navigable doesn't
have a rendering opportunity on the last event loop iteration
Now all of that is gone and replaced with a single timer that drives
repainting at 60 Hz, and we don't have to worry about excessive
repaints.
In the future, the hard-coded 60 Hz refresh interval could be replaced
with CADisplayLink on macOS and a similar API on Linux to drive
repainting in synchronization with the display's refresh rate.
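Conceptually, the new model boils down to something like the following
sketch (made-up types; the real code lives in the HTML event loop):

    #include <vector>

    struct Document {
        bool needs_style_update { false };
        bool needs_layout_update { false };
        bool needs_repaint { false };

        void update_style() { needs_style_update = false; }
        void update_layout() { needs_layout_update = false; }
        void paint() { needs_repaint = false; }
    };

    // Illustrative: a single timer fires roughly every 1000 / 60 ms and
    // runs this task; documents that need no work are simply skipped, so
    // idle pages no longer cause repaints.
    void rendering_task(std::vector<Document*> const& documents)
    {
        for (auto* document : documents) {
            if (document->needs_style_update)
                document->update_style();
            if (document->needs_layout_update)
                document->update_layout();
            if (document->needs_repaint)
                document->paint();
        }
    }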
update_layout() needs to be invoked before checking if a layout node is
present, because layout not having been updated might be the reason the
layout node doesn't exist yet.
...otherwise animated style invalidation will be skipped.
This change is a preparation before applying the latest HTML event loop
processing spec changes, to avoid regressing our tests.
Fixes at least one WPT test that was previously timing out:
- html/semantics/document-metadata/the-base-element/base_target_does_not_affect_iframe_src_navigation.html
The horizontal scrollbar has to leave space at the right edge for the
vertical scrollbar to fully extend from the top edge of the viewport to
the bottom. Before, this was done by just moving it leftward beyond the
edge of the viewport. Now, it gets scaled down appropriately to fit
between the left edge of the viewport and the vertical scrollbar
without clipping.
The existing rebaseline script is a bit limiting in that it can only
rebaseline a single test at a time. When making sweeping changes, this
patch will let us rebaseline any number of tests at once.
For example, in the following abbreviated test HTML:
<span>some text</span>
<script>println("whf")</script>
We would have to craft the expectation file to include the "some text"
segment, usually with some leading whitespace. This is a bit annoying,
and makes it difficult to manually craft expectation files.
So instead of comparing the expectation against the entire DOM inner
text, we now send the inner text of just the <pre> element containing
the test output when we invoke `internals.signalTextTestIsDone`.
There is an issue where GIFs with many frames cannot be loaded, as each
bitmap is sent over IPC using a separate file descriptor, and there is
a limit on the maximum number of descriptors per IPC message. Thus,
trying to load GIFs with more than 64 frames (the current limit) causes
the image decoder process to die.
This commit introduces the BitmapSequence class, which is a thin wrapper
around the type Vector<Optional<NonnullRefPtr<Gfx::Bitmap>>> and
provides an IPC encode/decode routine that collates all bitmap data into
a single buffer so that only a single file descriptor is required per
IPC transfer, even if multiple frames are being sent.
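The encode side of that idea, sketched with simplified types (the real
class wraps Gfx::Bitmap and plugs into the IPC encoder; here a frame is
just a byte blob):

    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <vector>

    using FrameData = std::vector<uint8_t>;

    // Illustrative: collate every frame's bitmap data into one contiguous
    // buffer plus a per-frame size table, so the whole sequence needs only
    // a single file descriptor per IPC transfer instead of one per frame.
    struct EncodedBitmapSequence {
        std::vector<std::optional<size_t>> frame_sizes; // nullopt = missing frame
        FrameData collated_data;
    };

    EncodedBitmapSequence encode(std::vector<std::optional<FrameData>> const& frames)
    {
        EncodedBitmapSequence encoded;
        for (auto const& frame : frames) {
            if (!frame.has_value()) {
                encoded.frame_sizes.push_back(std::nullopt);
                continue;
            }
            encoded.frame_sizes.push_back(frame->size());
            encoded.collated_data.insert(encoded.collated_data.end(),
                frame->begin(), frame->end());
        }
        return encoded;
    }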
Previously, if you ran with a relative path, then everything would run
fine, except that the test name would be blank in the output, e.g.:
1/1234:
instead of:
1/1234: Text/input/canvas/export.html
Multiple font properties are either the `normal` keyword or some
non-keyword value, so this lets us avoid some boilerplate for those, at
the cost of the existing `none` users having marginally more verbose
code.
This is a special form of `<string>`, so it doesn't need its own style
value type. It's used in a couple of font-related properties. For
completeness, it's included in ValueType.
Two font properties, font-feature-settings and font-variation-settings,
contain a list of values that are an `<opentype-tag>` followed by a
single value. This class is intended to fill both roles.
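A sketch of the shape such a class might take (names and member types
here are illustrative, not the actual style value class):

    #include <string>
    #include <utility>

    // Illustrative: an <opentype-tag> (a 4-character tag such as "liga" or
    // "wght") paired with the single value that follows it; this one shape
    // covers entries of both font-feature-settings and
    // font-variation-settings.
    class OpenTypeTaggedValue {
    public:
        OpenTypeTaggedValue(std::string tag, double value)
            : m_tag(std::move(tag))
            , m_value(value)
        {
        }

        std::string const& tag() const { return m_tag; }
        double value() const { return m_value; }

    private:
        std::string m_tag; // always exactly 4 ASCII characters
        double m_value;    // an integer for feature settings, a number for variations
    };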
StyleComputer is responsible for assigning animation targets, so we
have to make sure there are no pending style updates before querying
animations of an element.
This change also introduces a version of getAnimations() that does not
check style updates and is used by StyleComputer to avoid mutual
recursion.
The HTML tokenizer specification says that we're supposed to do this
when leaving the attribute name state or when emitting the token, as
appropriate.
Hopefully 'as appropriate' can mean only when emitting the token, as
that's the easiest place to insert this logic without complicating the
tokenizer any more.
Similar to script execution, this spins the WebDriver process until the
action is complete (rather than spinning the WebContent process, which
we've seen result in deadlocks).
This implements execution of the pointer up, pointer down, and pointer
move actions.
This isn't 100% complete. Pointer move actions are supposed to break
the move into iterations over the specified duration, which we currently
do not do.
In particular, we need to convert web element references to the actual
element. The AO isn't fully implemented because we will need to work out
mixing JsonValue types with JS value types, which currently isn't very
straightforward with our JSON clone algorithm.
This is only used for finding font directories for now, but having a
convenient function for it means that if anyone needs to use
XDG_DATA_DIRS in the future, they're less likely to implement it
themselves and miss the case of it being present but empty.
We also now canonicalize the data directory paths, as we do for the
other standard paths.
The XDG spec repeatedly says, for example:
> If $XDG_DATA_HOME is either not set or empty, a default equal to
$HOME/.local/share should be used.
- https://specifications.freedesktop.org/basedir-spec/latest/index.html
It's rare in practice, but does happen, for example in #1507 where we
would fail to find any system fonts if `XDG_DATA_DIRS` was blank.
This code now treats whitespace-only variables as empty too, which may
be overkill, but seems better to me than not doing so.
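The gist of the lookup, sketched with the standard library rather than
the helpers the codebase actually uses (names are illustrative):

    #include <cctype>
    #include <cstdlib>
    #include <string>

    // Illustrative: treat an unset, empty, or whitespace-only variable the
    // same way, falling back to the default the basedir spec documents.
    static std::string env_or_default(char const* name, std::string fallback)
    {
        char const* value = std::getenv(name);
        if (value == nullptr)
            return fallback;
        std::string result = value;
        for (unsigned char c : result) {
            if (!std::isspace(c))
                return result; // found real content
        }
        return fallback; // empty or whitespace-only
    }

    // For example (default value taken from the basedir spec):
    //   auto data_dirs = env_or_default("XDG_DATA_DIRS", "/usr/local/share/:/usr/share/");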
We had numerous NIH-based implementations of audio formats and metadata
that we now no longer need, because we either don't make use of the
code, or we replaced its implementation with FFmpeg.
This loader supports whatever format libavformat and libavcodec can
handle. Currently only seekable streams are supported, and we still have
some limitations as to the number of channels and sample format.
Plays all non-streaming audio files at:
https://tools.woolyss.com/html5-audio-video-tester/
Previously, we set the "needs style update" flag to false at the
beginning of recomputing the style. This meant that if any code within
the cascade set this flag to true, then we would end style computation
thinking the element still needed its style updating. This could occur
when starting a transition, and would make TreeBuilder crash.
By ensuring that we always set the flag to false at the very end of
style computation, this is avoided, along with any similar issues. I
noticed a comment in `Animation::cancel()` which sounds like a
workaround was needed for a similar problem previously.
This has no visible effect, but internally it's also highlighting any
CSS and JS embedded in the page, which will be made use of later. We'll
also be able to use this code for highlighting CSS or JS files directly
in the future.
It's not a perfect fit: the syntax highlighters give specific styles to
their spans, which we then ignore and just use their data integer to
figure out which CSS class to give to the span. It feels cleaner to me
to produce HTML styled that way, instead of every token having
`style="color: ...; font-weight: ...; text-decoration: ...;"` set on
it.
Most of this new `to_html_string()` code is adapted from Serenity's
`TextEditor::paint_event()`, so it should be pretty solid.
The code previously ensured that JS/CSS tokens did not share values with
the HTML tokens, but still let them share values with each other. The
numbers chosen (1000 and 2000) are somewhat arbitrary, but give us
plenty of room to avoid overlaps.
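For example, something along these lines (the enum name is made up; the
offsets are the ones mentioned above):

    // Illustrative: give each embedded language its own range of token
    // values so HTML, JS, and CSS tokens can never collide with each other.
    enum HighlightTokenBase : int {
        HtmlTokenBase = 0,
        JavaScriptTokenBase = 1000,
        CssTokenBase = 2000,
    };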
Fixes crashing on https://playbiolab.com/ in
VERIFY(page.client().is_ready_to_paint()) caused by attempting to start
the next repaint before the ongoing repaint is done.
This is an ad-hoc implementation that resolves the ready() promise once
the document and all fonts collected by the style computer are done
loading. A spec-compliant implementation would include creating a proxy
CSS::FontFace for each @font-face and correctly implementing the
specification steps for font fetching, but we are far from there yet.
This hackish implementation should yield good WPT progress because it
will actually start waiting for the Ahem font to load before capturing
layout measurements. For example, it makes
https://wpt.live/css/css-grid/abspos/positioned-grid-descendants-001.html
go from 0/100 to 36/100 passing subtests.
We were always generating click events using the primary mouse button
instead of the provided button, and with the buttons field set to that
provided button.
After closing a window, it is the client's job to switch to another
window before executing any other command. Currently, if that did not
happen, we will crash when we try to send an IPC message to a window
handle that we no longer hold. This patch makes us return a "no such
window" error
instead.
The exceptions to this new check are the "Switch to Window" and "Get
Window Handles" commands.
This is what the spec tells us to do:
The root element’s display type is always blockified,
and its principal box always establishes an independent
formatting context.
Additionally, a display of contents computes to block
on the root element.
Spec link: https://drafts.csswg.org/css-display/#root
Fixes #1562
CSS Fonts level 4 renames font-stretch to font-width, with font-stretch
being left as a legacy alias. Unfortunately the other specs have not yet
been updated, so both terms are used in different places.
It's possible to resolve a box's height without doing inner layout when
the computed value is not auto. Doing that fixes height resolution when
a box with a percentage height has a containing block with a percentage
height.
Before:
- resolve used width
- layout box's content
- resolve height
After:
- resolve used width
- resolve height if treated as not auto
- layout box's content
- resolve height if treated as auto
When a property is a "legacy name alias", any time it is used in CSS or
via the CSSOM its aliased name is used instead.
(See https://drafts.csswg.org/css-cascade-5/#legacy-name-alias)
This means we only care about the alias when parsing a string as a
PropertyID, and we can just return the PropertyID it is an alias for.
No need for a distinct PropertyID for it, and no need for LibWeb to
care about it at all.
Previously, we had a bunch of these properties, which misused our code
for "logical aliases", some of which I've discovered were not even
fully implemented. But with this change, all that code can go away, and
making a legacy alias is just a case of putting it in the JSON. This
also shrinks `StyleProperties` as it doesn't need to contain data for
these aliases, and removes a whole load of `-webkit-*` spam from the
style inspector.
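In practice, alias resolution then happens once, at parse time, roughly
like this (the lookup tables and enum here are hypothetical stand-ins
for the generated code):

    #include <optional>
    #include <string>
    #include <unordered_map>

    // Hypothetical stand-in for the generated property list.
    enum class PropertyID { Color, FontWidth /* , ... */ };

    static std::unordered_map<std::string, PropertyID> const properties {
        { "color", PropertyID::Color },
        { "font-width", PropertyID::FontWidth },
    };

    // Legacy name aliases resolve straight to the PropertyID they alias;
    // no distinct PropertyID is ever created for them.
    static std::unordered_map<std::string, PropertyID> const legacy_aliases {
        { "font-stretch", PropertyID::FontWidth },
    };

    std::optional<PropertyID> property_id_from_string(std::string const& name)
    {
        if (auto it = properties.find(name); it != properties.end())
            return it->second;
        if (auto it = legacy_aliases.find(name); it != legacy_aliases.end())
            return it->second;
        return std::nullopt;
    }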
We now use the "report an exception" AO when a script has an execution
error. This has mostly replaced the older "report the exception" AO in
various specifications. Using this newer AO ensures that
`window.onerror` is invoked when a script has an execution error.
Rather than checking the avcodec version in CMake, check it using the
avcodec version macros in the only source file that needs to know about
the AVFrame API/ABI change in version 59.24.100. This is friendlier to
other build systems that would rather avoid configure time checks.
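The version gate can then live entirely in that source file, along
these lines (the code on either branch is elided):

    extern "C" {
    #include <libavcodec/version.h>
    }

    // Compile-time check using the avcodec version macros, replacing the
    // previous configure-time check in CMake.
    #if LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(59, 24, 100)
    //  ... code path for the newer AVFrame API ...
    #else
    //  ... code path for the older AVFrame API ...
    #endif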
We are currently returning a JSON object of the form:
    {
        "name": "element-6066-11e4-a52e-4f735466cecf",
        "value": "foo"
    }
Instead, we are expected to return an object of the form:
    {
        "element-6066-11e4-a52e-4f735466cecf": "foo"
    }
Very similar to commit e5877cda61.
By sending as much data as we can in a single write, we see a massive
performance improvement on WPT tests that hammer WebDriver with errors.
On my Linux machine, this reduces the runtime of:
/webdriver/tests/classic/perform_actions/invalid.py
from 45-60s down to 3-4s.
We must send a Cache-Control header, which then also requires that we
respond with an HTTP/1.1 response (the Pragma cache option is HTTP/1.0).
We should also send the Content-Type header using the same casing as is
written in the WebDriver spec (lowercase).
Both of these are explicitly tested by WPT.
Instead of creating a unique new prototype shape every time a function
object is instantiated, we now keep one cached with the intrinsics.
This avoids a whole lot of shape allocations, reducing GC pressure.
Instead of converting images to alpha masks on the CPU, we now delegate
that work to the GPU if possible, by way of SkSL shaders.
This noticeably speeds up https://vercel.com/, which has a ton of SVG
masking going on. The old implementation used 15% of CPU time when
loading the page; this one uses basically none.
I originally believed that this could never receive a null URL and the
spec was inaccurate, but it seems like it can indeed.
I don't have a distilled test, but this makes logging in with GitHub
work on https://v0.dev/
The spec allows us to either treat them as part of the UA origin, or as
their own origin before author styles. This second behaviour turns out
to
be what we are currently doing, which is nice!
Funnily enough this was clarified in the spec barely a month after this
original comment was written. :^)
`revert` is supposed to revert to the previous cascade origin, but we
previously had it reverting to the previous layer. To support both,
track them separately during the cascade.
As part of this, we make `set_property_expanding_shorthands()` fall back
to `initial` if it can't find a previous value to revert to. Previously
we would just shrug and do nothing if that happened, which only works
if the value you want to revert to is whatever is currently in `style`.
That's no longer the case, because `revert` should skip over any layer
styles that have been applied since the previous origin.
It's difficult to know what we need to implement if we silently ignore
these endpoints. Let's log the endpoints and their parameters, and clean
up the wall of FIXME comments to be easier to grok.
WPT uses Python's http.client.HTTPConnection to send/receive WebDriver
messages. For some reason, on Linux, we see an ~0.04s delay between the
WPT server receiving the WebDriver response headers and its body. There
are tests which make north of 1100 of these requests, which adds up to
~44s.
These connections are almost always going to be over localhost and able
to be sent in a single write. So let's send the response all at once.
On my Linux machine, this reduces the runtime of /cookies/name/name.html
from 45-60s down to 3-4s.
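The shape of the change, sketched with a raw POSIX socket write rather
than the actual stream classes (names are illustrative):

    #include <string>
    #include <unistd.h>

    // Illustrative: build the complete response (status line, headers, and
    // body) in one buffer and hand it to the socket in as few writes as
    // possible, rather than writing the headers and body separately.
    static bool send_response(int socket_fd, std::string const& headers, std::string const& body)
    {
        std::string response = headers + body;
        size_t written = 0;
        while (written < response.size()) {
            ssize_t n = write(socket_fd, response.data() + written, response.size() - written);
            if (n < 0)
                return false;
            written += static_cast<size_t>(n);
        }
        return true;
    }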
If we don't recognize a given transition-property value as a known CSS
property (one that we know about, not necessarily an invalid one),
we should not extrapolate the other transition-foo values for it.
Fixes #1480
https://github.com/whatwg/console/pull/240 is an editorial change to use
the term "implementation-defined" more consistently. This seems to be
the only instance in the spec text which we quote verbatim.
This mainly uses forward declarations as appropriate for input element
related files. This reduces the number of targets being built when we
change HTMLInputElement.h from 430 to 44.
Previously, we would crash when attempting to establish a web socket
connection from inside a worker, as we were assuming that the ESO's
global object was a `Window`.
Previously, some otherwise unimplemented WebDriver endpoints were
indicating that they had executed successfully, which was causing a
large number of Web Platform Tests to time out when they should have
failed.
The thread pool test is currently flaky and takes over 2 minutes to run
on CI. It also currently has no users now that RequestServer uses curl,
so let's just remove it for now. If we need it in the future, we can
revive it from git history.