CSS fragmentation implies a 1:N relationship between layout nodes and
paintables. This change is preparation for implementing inline
fragmentation, where InlinePaintable will be replaced with a
PaintableWithLines corresponding to each line.
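As a rough illustration of that 1:N shape (hypothetical types, not the
actual Ladybird classes):
```
#include <memory>
#include <vector>

struct Paintable {
    virtual ~Paintable() = default;
};
struct PaintableWithLines : Paintable { /* one per line box */ };

struct LayoutNode {
    // Previously effectively one paintable per layout node; with
    // fragmentation, a single node may produce several paintables.
    std::vector<std::unique_ptr<Paintable>> paintables;
};
```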
Always create a new formatting context for <math> elements. Previously
that didn't happen if they only had inline children, e.g. mtable.
This fixes a crash in the WPT MathML test
mathml/crashtests/children-with-negative-block-sizes.html
There are a couple of parts to this:
- Store the source text for Declarations of custom properties.
- Then save that in the UnresolvedStyleValue.
- Serialize UnresolvedStyleValue using the saved source when available -
that is, for custom properties but not for regular properties that
include var() or attr().
This is in a weird position: the spec tells us to discard the
comments, but then we have to preserve the original source text, which
may include comments. As a compromise, I'm treating each comment as a
whitespace token - comments are functionally equivalent to whitespace,
so this should not have any behaviour changes beyond preserving the
original text.
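A minimal sketch of that compromise (hypothetical code, not the actual
tokenizer): each comment is emitted as a whitespace token that
remembers its original text, so serializing from source can reproduce
it verbatim.
```
#include <string>
#include <string_view>
#include <vector>

enum class TokenType { Whitespace, Other };

struct Token {
    TokenType type;
    std::string original_text; // exact source slice, kept for serialization
};

std::vector<Token> tokenize(std::string_view source)
{
    std::vector<Token> tokens;
    size_t i = 0;
    while (i < source.size()) {
        if (source.substr(i, 2) == "/*") {
            auto end = source.find("*/", i + 2);
            end = (end == std::string_view::npos) ? source.size() : end + 2;
            // Functionally a comment is just whitespace, but keeping the
            // original text lets custom properties round-trip verbatim.
            tokens.push_back({ TokenType::Whitespace, std::string(source.substr(i, end - i)) });
            i = end;
        } else {
            // ... the real tokenizer handles idents, numbers, etc. here ...
            tokens.push_back({ TokenType::Other, std::string(1, source[i]) });
            ++i;
        }
    }
    return tokens;
}
```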
Previously we would serialize these as the empty string, e.g. this:
```
<div style="grid-auto-columns: auto"></div>
```
would have a computed `grid-auto-columns` value of ``.
In order to know whether `calc(2.5)` is a number or an integer, we have
to see what the property will accept. So, add that knowledge to
`Parser::expand_unresolved_values()`.
This makes `counter-increment: foo calc(2 * var(--n));` work correctly
in a test I'm working on.
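A rough sketch of the idea (hypothetical names, not the real parser
API): the resolved type of a calc() result depends on what the
destination property accepts, and where an <integer> is expected the
value is rounded to the nearest integer.
```
#include <cmath>

enum class NumericKind { Integer, Number };

// Stand-in: the real code would consult per-property metadata.
NumericKind accepted_kind_for_property(int property_id)
{
    return property_id == 0 ? NumericKind::Integer : NumericKind::Number;
}

double resolve_calc_result(double raw, NumericKind wanted)
{
    // A calc() used where an <integer> is required resolves to the
    // nearest integer; otherwise it stays a plain number, so calc(2.5)
    // can legitimately be 2.5.
    if (wanted == NumericKind::Integer)
        return std::round(raw);
    return raw;
}
```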
Selectors and at-rules both made assumptions about their indentation
level, which made it difficult to read the dump output. It'll become
even worse once rules can be further nested within each other, so let's
fix it now. :^)
This will be the first step in making better use of system libraries
like fontconfig and CoreText to load system fonts for use by the UI
process and the CSS style computer.
This reverts 6d25bf3aac
Invalidating the style here means that transitions can cause an element
to leave style computation with its "needs style update" flag set to
true. This then causes a VERIFY to fail in the TreeBuilder.
This invalidation does not otherwise seem to have any effect. The
original commit suggests this was to fix a bug, but it's not clear what
bug that was. If it reappears, we can try to solve the issue in a
different way.
I had made a stab at implementing this to determine whether it could
assist in fixing an issue where scroll_to_the_fragment was not getting
called at the appropriate time. It did not fix that issue, and actually
ended up breaking one of our in-tree tests. In the meantime, factor out
this method into a standalone function.
These don't have to worry about the input not being valid UTF-8 and
so can be infallible (and can even return self if no changes are
needed). We use this instead of Infra::to_ascii_{upper,lower}_case in
LibWeb.
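A sketch of why the fast path can be infallible (assumed shape, not
the actual AK API): ASCII case mapping can never produce invalid
UTF-8, so there is no error to propagate, and an unchanged input could
be returned as-is.
```
#include <string>

std::string to_ascii_lowercase(std::string s)
{
    bool changed = false;
    for (auto& c : s) {
        if (c >= 'A' && c <= 'Z') {
            c = static_cast<char>(c + ('a' - 'A'));
            changed = true;
        }
    }
    // With an immutable, ref-counted string type, one could return the
    // original string when `changed` is false and skip the allocation.
    (void)changed;
    return s;
}
```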
- Include vertical border spacing in the row group offset calculation
  so that row groups are axis-aligned with their child row/cell
  elements. This prevents horizontal and vertical overflow caused by
  those children.
- Include horizontal border spacing in tr width calculations, so that
  tr elements no longer overflow when there are multiple columns.
- Apply the vertical caption offset to the row group top offset.
- Don't double-count top padding when calculating the vertical offset
  for tr elements and row groups.
Instead of re-symbolicating entire stacks from scratch every time
we want a JS VM backtrace, we now use the ExecutionContext object as
cache storage via a new CachedSourceRange object.
This means that once a stack frame has been symbolicated, we don't
have to re-symbolicate it (unless the program counter moves within
that stack frame).
This drastically reduces time spent in symbolication in some WPT tests.
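A minimal sketch of the caching shape (hypothetical types): the cached
range remembers the program counter it was computed for, and is only
recomputed when the PC no longer matches.
```
#include <cstdint>
#include <memory>
#include <string>

struct SourceRange {
    std::string filename;
    unsigned line { 0 };
};

struct CachedSourceRange {
    uintptr_t program_counter { 0 };
    SourceRange range;
};

// Stand-in for the expensive symbolication lookup.
SourceRange symbolicate(uintptr_t) { return {}; }

struct ExecutionContext {
    std::shared_ptr<CachedSourceRange> cached_source_range;

    SourceRange const& source_range(uintptr_t current_pc)
    {
        // Reuse the cached result unless the PC moved within this frame.
        if (!cached_source_range || cached_source_range->program_counter != current_pc) {
            cached_source_range = std::make_shared<CachedSourceRange>(
                CachedSourceRange { current_pc, symbolicate(current_pc) });
        }
        return cached_source_range->range;
    }
};
```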
Cookies have a minimum expiry resolution of 1 second. So to test cookie
expiration, the test had to idle for at least a second, which is quite a
noticeable delay now that LibWeb tests are parallelized.
Instead, we can add an internal API to expire cookies with a time offset
to avoid this idle delay.
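A sketch of what such an internal hook could look like (hypothetical
signature, not the actual API): expiry is evaluated against now plus
an offset, so a test can observe expiry without sleeping.
```
#include <chrono>
#include <vector>

struct Cookie {
    std::chrono::system_clock::time_point expiry_time;
};

// Hypothetical internal test API: treat "now" as now + offset when
// purging, so a cookie with a 1-second expiry can be expired without
// the test idling for a full second.
void expire_cookies_with_time_offset(std::vector<Cookie>& jar, std::chrono::seconds offset)
{
    auto effective_now = std::chrono::system_clock::now() + offset;
    std::erase_if(jar, [&](Cookie const& cookie) { return cookie.expiry_time <= effective_now; });
}
```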
CSS Syntax 3 (https://drafts.csswg.org/css-syntax) has changed
significantly since we implemented it a couple of years ago. Just about
every parsing algorithm has been rewritten in terms of the new token
stream concept, and to support nested styles. As all of those
algorithms call into each other, this is an unfortunately chonky diff.
As part of this, the transitory types (Declaration, Function, AtRule...)
have been rewritten. That's both because we have new requirements of
what they should be and contain, and also because the spec asks us to
create and then gradually modify them in place, which is easier if they
are plain structs.
This is an ad-hoc change to account for the fact that we may run
arbitrary code while waiting for the tasks in this function to complete.
I don't have a way to reproduce it, but I've seen trouble caused by
navigables disappearing, which causes the history step numbers to be
disturbed.
Previously, Selection.extend() used only the relative node order to
decide in which direction to extend the selection. This led to
incorrect behaviour if both the existing and new boundary points were
within the same DOM node and the selection direction was reversed.
This change fixes all the failing subtests in the WPT extend-* test
suites.
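A sketch of the corrected comparison (illustrative types): the
direction is decided by full boundary points, with offsets breaking
the tie when both points are in the same node.
```
#include <cstddef>

struct BoundaryPoint {
    void const* node { nullptr }; // stand-in for a DOM node pointer
    std::size_t offset { 0 };
};

// Stand-in for tree-order comparison between distinct nodes.
bool node_precedes(void const* a, void const* b) { return a < b; }

bool boundary_point_precedes(BoundaryPoint const& a, BoundaryPoint const& b)
{
    // Relative node order alone is not enough: within the same node,
    // only the offsets can tell a forward selection from a reversed one.
    if (a.node == b.node)
        return a.offset < b.offset;
    return node_precedes(a.node, b.node);
}
```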
Loading Ladybird on GitHub results in 37 debug logs about being unable
to parse an empty Date string. This log is intended to catch Date
formats we do not support to detect web compatibility problems, which
makes this case not particularly useful to log.
Instead of trying to parse all of the different date formats and
logging that the string is not valid, let's just return NAN immediately.
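Roughly, as an illustrative sketch (not the exact LibJS code):
```
#include <cmath>
#include <string_view>

double parse_date_string(std::string_view date_string)
{
    // An empty string can never match any supported format, and it is
    // not a web-compat signal worth logging, so bail out immediately.
    if (date_string.empty())
        return NAN;
    // ... try the simplified ISO 8601 format, then the legacy
    // fallbacks, logging a debug message if all of them fail ...
    return NAN;
}
```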
This was previously negated due to a misread of
https://url.spec.whatwg.org/#concept-url-equals. This change fixes a
bunch of WPT crashes such as
"/html/browsers/history/the-history-interface/001".
There was no need to use FlyString for error messages, and it just
caused a bunch of churn since these strings typically only existed
during the lifetime of the error.
If the image data to decode is incomplete, e.g. a corrupt image missing
its last scanlines, the decoder would previously keep looping forever.
By breaking out of the loop when no more scanlines are produced, we can
at least display the partial image up to that point.
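The fix amounts to a forward-progress check, roughly (illustrative
sketch, not the actual decoder):
```
#include <cstddef>

struct ScanlineDecoder {
    std::size_t decoded_scanlines { 0 };
    std::size_t expected_scanlines { 0 };
    void decode_available_scanlines() { /* decodes what the input allows */ }
};

void decode_image(ScanlineDecoder& decoder)
{
    while (decoder.decoded_scanlines < decoder.expected_scanlines) {
        auto before = decoder.decoded_scanlines;
        decoder.decode_available_scanlines();
        // No forward progress means the remaining data is missing or
        // corrupt; stop looping and let the caller show the partial image.
        if (decoder.decoded_scanlines == before)
            break;
    }
}
```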
Contrary to the spec, the Set Timeouts endpoint should update the
existing timeouts configuration in place, rather than replacing it. WPT
expects this, and other browsers already implement this endpoint this
way.
This is a method defined in the WebDriver spec, but requires access to a
bunch of private fields in these classes, so this is implemented in the
same manner as the reset algorithm.
The "isCollapsed" attribute on a selection must "return true if and only
if the anchor and focus are the same".
In addition to checking that the anchor and focus belonged to the same
DOM node, we now also check that they refer to the same position within
the node.
With this change Ladybird passes all the subtests in the "isCollapsed"
WPT suite.
https://wpt.live/selection/isCollapsed.html
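The check boils down to comparing full positions (illustrative types):
```
#include <cstddef>

struct Position {
    void const* node { nullptr }; // stand-in for a DOM node pointer
    std::size_t offset { 0 };
};

bool is_collapsed(Position const& anchor, Position const& focus)
{
    // Same node alone is not enough; the offsets within it must match too.
    return anchor.node == focus.node && anchor.offset == focus.offset;
}
```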
This way, we don't have to allocate a separate vector containing both
scroll and sticky frames for use by the display list player (scroll
and sticky frames share an id pool, so the player can access an offset
by frame id).
No behavior change.
...and add a display list item that performs translation instead. By
doing that, we no longer need to map each coordinate in the display
list by the translation in the recorder state.
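Conceptually (hypothetical item types, not the actual display list
implementation): a single translation item is recorded up front, and
later items keep their local coordinates.
```
#include <variant>
#include <vector>

struct Translate { float dx { 0 }, dy { 0 }; };
struct FillRect { float x { 0 }, y { 0 }, width { 0 }, height { 0 }; };
using DisplayListItem = std::variant<Translate, FillRect>;

// The recorder emits items in local coordinates; the player applies
// the offset when it replays the Translate item, instead of the
// recorder rewriting each coordinate at record time.
std::vector<DisplayListItem> make_list()
{
    return { Translate { 10, 20 }, FillRect { 0, 0, 100, 50 } };
}
```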
Set the connection timeout, which limits only the connection phase of
the request.
Previously, CURLOPT_TIMEOUT would apply to all transfer operations,
which could result in legitimate upload or download operations being
cancelled.
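These are real libcurl options; a sketch of the distinction:
```
#include <curl/curl.h>

void configure_timeouts(CURL* easy, long connect_timeout_seconds)
{
    // Bounds only the connection phase (DNS, TCP/TLS handshake).
    curl_easy_setopt(easy, CURLOPT_CONNECTTIMEOUT, connect_timeout_seconds);
    // Deliberately not set: CURLOPT_TIMEOUT caps the entire transfer
    // and would cancel slow but legitimate uploads/downloads.
}
```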
Printing the whole array causes the WPT test
console/console-log-large-array.any.html to crash.
This limits logged arrays to 100 elements and truncates the rest
with `...`.
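The cap itself is straightforward (illustrative sketch):
```
#include <cstddef>
#include <cstdio>
#include <vector>

void log_array(std::vector<int> const& values)
{
    constexpr std::size_t max_logged_elements = 100;
    auto count = values.size() < max_logged_elements ? values.size() : max_logged_elements;
    std::printf("[");
    for (std::size_t i = 0; i < count; ++i)
        std::printf("%s%d", i == 0 ? "" : ", ", values[i]);
    // Everything past the cap collapses into a single ellipsis.
    if (values.size() > max_logged_elements)
        std::printf(", ...");
    std::printf("]\n");
}
```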
The headless-browser source is getting a bit unwieldy. The ordering of
class and method definitions is fragile; e.g. the application and web
view classes each require full definitions of each other. So it has
reached the point where it makes sense to give headless-browser some
better file structure.
To prepare for that, this patch simply moves its source to live
alongside the other browser chromes. This location is a bit better
prepared
for creating more files, as the Utilities folder doesn't even have its
own CMakeLists.txt.
- Add support for placement of abspos items into the track formed by
  the last line and the padding edge of the grid container
- Correctly handle auto-positioned abspos items by placing them
  between the padding edges of the grid container
Fixes a crash on https://wpt.live/css/css-grid/abspos/positioned-grid-descendants-001.html
We have support for using (shift+)tab to move focus to the next/previous
element on the page. However, there were several ways for this to crash
as written. This updates our implementation to check if we did not find
a node to move focus to, and to reset focus to the first/last node in
the document.
This doesn't seem to work when wrapping around from the first to the
last node. A FIXME has been added for that, as this would already not
work before this patch (the main focus here is not crashing).
The spec says we don't need to await navigations if we navigate to the
same URL that we are already on, but at least in our implementation, we
should still await the page load. Otherwise, we will invoke WebDriver
endpoints on the wrong page.
This is necessary when we add more ServiceWorker capabilities that
actually check this value. The more this spoof functionality is used,
the more we'll need to actually support serving test files over https.
Our handling of left vs. right modifier keys (shift, ctrl, etc.) was
largely not to spec. This patch adds explicit UIEvents::KeyCode values
for these keys, and updates the UI to match native key events to these
keys (as best we are able).
...traversal. We've already fixed steps 3 and 9 to not filter out
non-positioned stacking contexts, because modern CSS has more ways to
create a stacking context besides being positioned with z-index (such
as the "transform", "filter" or "clip-path" properties).
See the following spec issue for more details:
https://github.com/w3c/csswg-drafts/issues/2717
Visual improvement on https://basecamp.com/
Prior to this change, SVGs were following the CSS painting order, which
means SVG boxes could establish a stacking context and be sorted by
z-index. There is a section in the spec that defines what kinds of SVG
boxes should create a stacking context:
https://www.w3.org/TR/SVG2/render.html#EstablishingStackingContex
However, that spec is marked as a draft, and the rendering order it
describes does not match what other engines do.
This spec issue comment has a good summary of what other engines
actually do regarding painting order:
https://github.com/w3c/svgwg/issues/264#issuecomment-246432360
"as long as you're relying solely on the default z-index (which SVG1
does, by definition), nothing ever changes order when you apply
opacity/filter/etc".
This change aligns our implementation with other engines by forbidding
SVG boxes from creating a formatting context and painting them in the
order they are defined in the tree.
When the TokenStream code was originally written, there was no such
concept in the CSS Syntax spec. But since then, it's been officially
added (https://drafts.csswg.org/css-syntax/#css-token-stream), and the
parsing algorithms are described in terms of it. This patch brings our
implementation in line with the spec. A few deprecated TokenStream
methods are left around until their users are also updated to match
the newer spec.
There are a few differences (a sketch of the spec-shaped interface
follows this list):
- They name things differently. The main confusing one is we had
`next_token()` which consumed a token and returned it, but the spec
has a `next_token()` which peeks the next token. The spec names are
honestly better than what I'd come up with. (`discard_a_token()` is a
nice addition too!)
- We used to store the index of the token that was just consumed, and
they instead store the index of the token that will be consumed next.
This is a perfect breeding ground for off-by-one errors, so I've
finally added a test suite for TokenStream itself.
- We use a transaction system for rewinding, and the spec uses a stack
of "marks", which can be manually rewound to. These should be able to
coexist as long as we stick with marks in the parser spec algorithms,
and stick with transactions elsewhere.
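A compact sketch of the spec-shaped interface (simplified; not the
actual class):
```
#include <cstddef>
#include <vector>

struct Token { bool is_eof { false }; };

class TokenStream {
public:
    explicit TokenStream(std::vector<Token> tokens)
        : m_tokens(std::move(tokens)) {}

    // Spec naming: next_token() peeks without advancing.
    Token const& next_token() const
    {
        static Token const eof_token { true };
        return m_index < m_tokens.size() ? m_tokens[m_index] : eof_token;
    }

    Token const& consume_a_token()
    {
        auto const& token = next_token();
        ++m_index;
        return token;
    }

    void discard_a_token() { ++m_index; } // consume and ignore

    // The spec's stack of marks for rewinding.
    void mark() { m_marks.push_back(m_index); }
    void restore_a_mark() { m_index = m_marks.back(); m_marks.pop_back(); }
    void discard_a_mark() { m_marks.pop_back(); }

private:
    std::vector<Token> m_tokens;
    // Index of the token that will be consumed next (spec convention),
    // not of the token just consumed.
    std::size_t m_index { 0 };
    std::vector<std::size_t> m_marks;
};
```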
We have a bit of forgiveness around allowing tests to pass with varying
trailing newlines. Only write a rebaselined test to disk if it would not
have passed under those conditions.
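Roughly, as an illustrative sketch:
```
#include <string_view>

std::string_view strip_trailing_newlines(std::string_view text)
{
    while (!text.empty() && (text.back() == '\n' || text.back() == '\r'))
        text.remove_suffix(1);
    return text;
}

bool should_write_rebaselined_test(std::string_view expected, std::string_view actual)
{
    // The comparison forgives differing trailing newlines, so don't
    // churn files on disk for a difference that would still pass.
    return strip_trailing_newlines(expected) != strip_trailing_newlines(actual);
}
```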