It's possible to resolve a box's height without doing inner layout when
the computed value is not auto. Doing so fixes height resolution when a
box with a percentage height has a containing block whose height is also
a percentage.
Before:
- resolve used width
- layout box's content
- resolve height
After:
- resolve used width
- resolve height if treated as not auto
- layout box's content
- resolve height if treated as auto
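A minimal web-facing reduction of the kind of case this fixes (an assumed
example, not taken from the commit):

    // A percentage height resolved against a containing block whose own
    // height is also a percentage (non-auto).
    document.documentElement.style.height = "100%";
    document.body.style.height = "100%";
    const outer = document.createElement("div");
    outer.style.height = "50%"; // containing block for `inner`, itself percentage-sized
    const inner = document.createElement("div");
    inner.style.height = "50%"; // should resolve against outer's resolved height
    outer.append(inner);
    document.body.append(outer);
    // With outer's height resolved before its inner layout, this is 25% of
    // the body's height rather than behaving like `auto`.
    console.log(inner.getBoundingClientRect().height);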
When a property is a "legacy name alias", any time it is used in CSS or
via the CSSOM its aliased name is used instead.
(See https://drafts.csswg.org/css-cascade-5/#legacy-name-alias)
This means we only care about the alias when parsing a string as a
PropertyID - and we can just return the PropertyID it is an alias for.
No need for a distinct PropertyID for it, and no need for LibWeb to
care about it at all.
Previously, we had a bunch of these properties, which misused our code
for "logical aliases", some of which I've discovered were not even
fully implemented. But with this change, all that code can go away, and
making a legacy alias is just a case of putting it in the JSON. This
also shrinks `StyleProperties` as it doesn't need to contain data for
these aliases, and removes a whole load of `-webkit-*` spam from the
style inspector.
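As an illustration (using `word-wrap`, which CSS Text defines as a legacy
name alias of `overflow-wrap`), the alias only matters when a string is
parsed into a PropertyID:

    const el = document.createElement("div");
    document.body.append(el);
    el.style.setProperty("word-wrap", "break-word"); // parses to the PropertyID of overflow-wrap
    console.log(getComputedStyle(el).overflowWrap);  // "break-word"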
We now use the "report an exception" AO when a script has an execution
error. This has mostly replaced the older "report the exception" AO in
various specifications. Using this newer AO ensures that
`window.onerror` is invoked when a script has an execution error.
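A sketch of the observable difference (the error text is arbitrary):

    window.onerror = (message) => {
        console.log("onerror saw:", message);
        return true; // suppress default error reporting
    };
    const script = document.createElement("script");
    script.textContent = "throw new Error('execution error');";
    document.head.append(script); // "report an exception" now routes this to window.onerror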
Rather than checking the avcodec version in CMake, check it using the
avcodec version macros in the only source file that needs to know about
the AVFrame API/ABI change in version 59.24.100. This is friendlier to
other build systems that would rather avoid configure time checks.
We are currently returning a JSON object of the form:
{
  "name": "element-6066-11e4-a52e-4f735466cecf",
  "value": "foo"
}
Instead, we are expected to return an object of the form:
{
  "element-6066-11e4-a52e-4f735466cecf": "foo"
}
Very similar to commit e5877cda61.
By sending as much data as we can in a single write, we see a massive
performance improvement on WPT tests that hammer WebDriver with errors.
On my Linux machine, this reduces the runtime of:
/webdriver/tests/classic/perform_actions/invalid.py
from 45-60s down to 3-4s.
We must send a Cache-Control header, which then also requires that we
respond with an HTTP/1.1 response (the Pragma cache option is HTTP/1.0).
We should also send the Content-Type header using the same casing as is
written in the WebDriver spec (lowercase).
Both of these are explicitly tested by WPT.
Instead of creating a unique new prototype shape every time a function
object is instantiated, we now keep one cached with the intrinsics.
This avoids a whole lot of shape allocations, reducing GC pressure.
Instead of converting images to alpha masks on the CPU, we now delegate
that work to the GPU if possible, by way of SkSL shaders.
This noticeably speeds up https://vercel.com/ which has a ton of SVG
masking going on. The old implementation used 15% of CPU time when
loading the page; this one uses basically none.
I originally believed that this could never receive a null URL and the
spec was inaccurate, but it seems like it can indeed.
I don't have a distilled test, but this makes logging in with GitHub
work on https://v0.dev/
The spec allows us to either treat them as part of the UA origin, or as
its own origin before author styles. This second behaviour turns out to
be what we are currently doing, which is nice!
Funnily enough this was clarified in the spec barely a month after this
original comment was written. :^)
`revert` is supposed to revert to the previous cascade origin, but we
previously had it reverting to the previous layer. To support both,
track them separately during the cascade.
As part of this, we make `set_property_expanding_shorthands()` fall back
to `initial` if it can't find a previous value to revert to. Previously
we would just shrug and do nothing if that happened, which only works
if the value you want to revert to is whatever is currently in `style`.
That's no longer the case, because `revert` should skip over any layer
styles that have been applied since the previous origin.
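For example (an assumed minimal reduction; `#probe` is an arbitrary
name), the `revert` in layer `b` must roll back past layer `a` to the
user-agent origin:

    const style = document.createElement("style");
    style.textContent = `
        @layer a { #probe { display: flex; } }
        @layer b { #probe { display: revert; } }
    `;
    const probe = document.createElement("div");
    probe.id = "probe";
    document.head.append(style);
    document.body.append(probe);
    // Later layers win, so layer b's `revert` applies; it skips layer a
    // entirely and falls back to the UA default for <div>.
    console.log(getComputedStyle(probe).display); // "block", not "flex"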
It's difficult to know what we need to implement if we silently ignore
these endpoints. Let's log the endpoints and their parameters, and clean
up the wall of FIXME comments to be easier to grok.
WPT uses Python's http.client.HTTPConnection to send/receive WebDriver
messages. For some reason, on Linux, we see an ~0.04s delay between the
WPT server receiving the WebDriver response headers and its body. There
are tests which make north of 1100 of these requests, which adds up to
~44s.
These connections are almost always going to be over localhost and able
to be sent in a single write. So let's send the response all at once.
On my Linux machine, this reduces the runtime of /cookies/name/name.html
from 45-60s down to 3-4s.
If we don't recognize a given transition-property value as a known CSS
property (one that we know about, not necessarily an invalid one),
we should not extrapolate the other transition-foo values for it.
Fixes #1480
https://github.com/whatwg/console/pull/240 is an editorial change to use
the term "implementation-defined" more consistently. This seems to be
the only instance in the spec text which we quote verbatim.
This mainly uses forward declarations as appropriate for input element
related files. This reduces the number of targets rebuilt when
HTMLInputElement.h changes from 430 to 44.
Previously, we would crash when attempting to establish a web socket
connection from inside a worker, as we were assuming that the ESO's
global object was a `Window`.
Previously, some otherwise unimplemented WebDriver endpoints were
indicating that they had executed successfully; this was causing a large
number of Web Platform Tests to time out when they should have failed.
The thread pool test is currently flakey and takes over 2 minutes to run
on CI. It also currently has no users now that RequestServer uses curl,
so let's just remove it for now. If we need it in the future, we can
revive it from git history.
If we decide to fetch another linked resource, we don't care about the
earlier fetch and can safely abort it.
This fixes an issue on GitHub where we'd load the same style sheet
multiple times and invalidate style for the entire document every time
it finished fetching.
By aborting the ongoing fetch, no excess invalidation happens.
As useful as they may be to web developers, :has() selectors complicate
the style invalidation process quite a lot.
Let's have StyleComputer keep track of whether they are present at all
in the current set of active style sheets. This will allow us to
implement fast-path optimizations when there are no :has() selectors.
When an element is invalidated, it's possible for any subsequent sibling
or any of their descendants to also need invalidation. (Due to the CSS
sibling combinators, `+` and `~`)
For DOM node insertion/removal, we must also invalidate preceding
siblings, since they could be affected by :first-child, :last-child or
:nth-child() selectors.
This increases the amount of invalidation we do, but it's more correct.
In the future, we will implement optimizations that drastically reduce
the number of elements invalidated.
The expensive part of creating a segmenter is doing the locale and UCD
data lookups at creation time. Instead of doing this once per text node,
cache the segmenters on the document, and clone them as needed (cloning
is much, much cheaper).
On a profile loading Ladybird's GitHub repo, the following hot methods
changed as follows:
ChunkIterator ctor: 6.08% -> 0.21%
Segmenter factory: 5.86% -> 0%
Segmenter clone: N/A -> 0.09%
We now expand shorthands into their respective longhand values when
assigning to a shorthand named property on a CSSStyleDeclaration.
We also make sure that shorthands can be round-tripped by correctly
routing named property access through the getPropertyValue() AO,
and expanding it to handle shorthands as well.
A lot of WPT tests for CSS parsing rely on these mechanisms and should
now start working. :^)
Note that multi-level recursive shorthands like `border` don't work
100% correctly yet. We're going to need a bunch more logic to properly
serialize e.g `border-width` or `border` itself.
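For example, shorthand round-tripping on an inline style declaration now
works (using `margin`, which avoids the multi-level `border` caveat
above):

    const el = document.createElement("div");
    el.style.margin = "10px 20px";
    console.log(el.style.getPropertyValue("margin-top"));  // "10px" (expanded longhand)
    console.log(el.style.getPropertyValue("margin-left")); // "20px"
    console.log(el.style.getPropertyValue("margin"));      // "10px 20px" (round-tripped)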
The algorithm for starting a transition requires us to examine the
before-change and after-change values of the property, without taking
any current animations into account.
...and delay static position calculation in IFC until trailing
whitespace is removed, because otherwise it's not possible to correctly
calculate the x offset.
The containing block for abspos grid items depends on their grid
placement:
- if element has definite grid position, then corresponding grid area
should be used as a containing block
- if element does not have definite grid position, then padding edge of
grid container should be used as a containing block
So the offset should be adjusted for padding only for boxes without a
definite grid position.
The web server for WPT has a tendency to just disconnect after sending
us a resource. This makes curl think an error occurred, but it's
actually still recoverable and we have the data.
So instead of just bailing, do what we already do for other kinds of
resources and try to parse the data we got. If it works out, great!
It would be nice to solve this in the networking layer instead, but
I'll leave that as an exercise for our future selves.
This fixes an issue where document.write() with only text input would
leave all the character data as unflushed text in the parser.
This fixes many of the WPT tests for document.write().
...instead of directly mutating Gfx::Bitmap.
This change is preparation for using a GPU backend for canvas painting,
where direct mutation of the backing storage that bypasses the painter
is no longer possible.
Our current text iterator is not aware of multi-code point graphemes.
Instead of simply incrementing an iterator one code point at a time, use
our Unicode grapheme segmenter to break text into fragments.
Instead of trying to locate the relevant StyleSheetList on style element
removal from the DOM, we now simply keep a pointer to the list instead.
This fixes an issue where using attachShadow() on an element that had
a declarative shadow DOM would cause any style elements present to use
the wrong StyleSheetList when removing themselves from the tree.
When a block container has `clear` set and some clearance is applied,
that clearance prevents margins from adjoining and therefore resets
the margin state. But when a floating box has `clear` set, that
clearance only goes between floating boxes so should not reset margin
state. BlockFormattingContexts already do that correctly, and this PR
changes InlineFormattingContext to do the same.
Fixes #1462; adds reduced input from that issue as a test.
This is not that easy to use for test developers, as forgetting to set
the URL back to its original state after testing your specific API will
cause future navigations to fail in inexplicable ways.
This patch implements `Range::getClientRects` and
`Range::getBoundingClientRect`. Since the rects returned by invoking
getClientRects can be accessed without adding them to the Selection,
`ViewportPaintable::recompute_selection_states` has been updated to
accept a Range as a parameter, rather than acquiring it through the
Document's Selection.
With this change, the following tests now pass:
- wpt[css/cssom-view/range-bounding-client-rect-with-nested-text.html]
- wpt[css/cssom-view/DOMRectList.html]
Note: The test
"css/cssom-view/range-bounding-client-rect-with-display-contents.html"
still fails due to an issue with Element::getClientRects, which will
be addressed in a future commit.
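Basic usage that this enables (illustrative):

    const range = document.createRange();
    range.selectNodeContents(document.body);
    const rects = range.getClientRects();         // one DOMRect per box/text fragment in the range
    const bounds = range.getBoundingClientRect(); // bounding box of those rects
    console.log(rects.length, bounds.width, bounds.height);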
The current min/max zoom levels are supposed to be 30% and 500%.
Before, due to floating-point error accumulating as the zoom step was
repeatedly added to the zoom level, one extra zoom step was allowed,
letting the user zoom from 20% to 510%.
Now, using rounding, the intermediate zoom-level values stay as close to
the theoretical value as FP32 can represent, e.g. a zoom level of 70%
(theoretical multiplier 0.7) is stored as 0.69....
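The drift is easy to reproduce in any IEEE-754 arithmetic (a sketch; JS
numbers are 64-bit doubles, but FP32 drifts the same way, and the 10%
step is an assumption for illustration):

    let zoom = 0.3;                            // 30% minimum
    for (let i = 0; i < 47; ++i)
        zoom += 0.1;                           // 47 steps of 10% should land exactly on 500%
    console.log(zoom);                         // very likely not exactly 5 due to accumulated error,
                                               // so a bounds check against 5.0 can be off by one step
    console.log(Math.round(zoom * 100) / 100); // 5: rounding snaps back to the theoretical value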
The IPCs to request a page's text, layout tree, etc. are currently all
synchronous. This can result in a deadlock when WebContent also makes
a synchronous IPC call, as both ends will be waiting on each other.
This replaces the page info IPCs with a single, asynchronous IPC. This
new IPC is promise-based, much like our screenshot IPC.
If we already destroyed our timer during destruction, and then curl
tries to flush its timeouts when we tear down the multi, we can just
ignore the timer callbacks.
DOM nodes that didn't have a layout node before being removed from the
DOM are not going to change the shape of the layout tree after being
removed.
Observing this, we can avoid a full layout tree rebuild on some DOM node
removals.
This avoids a bunch of tree building work when loading https://x.com/
This makes a big difference on macOS, where the default buffer size
for local sockets is 8 KiB. With bigger buffers, we don't have to
block on IPC nearly as often.
To prevent deadlocks when both IPC peers are trying to send to each
other but both sides have too much in their buffer already, we now
move the send operation to a secondary thread where it can block until
the peer is able to handle it.
Attributes have a max value length of 1024. So we theoretically need to
support values in the range -${"9".repeat(1023)} to ${"9".repeat(1024)}.
These obviously do not fit in an i64, so we were previously failing to
parse the attribute.
We will now cap the parsed value to the numeric limits of an i64, after
ensuring that the attribute value is indeed a number.
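A sketch of the clamping idea (hypothetical helper, not the actual parser
code):

    const I64_MIN = -(2n ** 63n);
    const I64_MAX = 2n ** 63n - 1n;

    function parseClampedToI64(value: string): bigint | null {
        const trimmed = value.trim();
        if (!/^-?\d+$/.test(trimmed))
            return null;                      // not a number: parsing still fails
        const parsed = BigInt(trimmed);
        if (parsed < I64_MIN) return I64_MIN; // cap to the numeric limits of an i64
        if (parsed > I64_MAX) return I64_MAX;
        return parsed;
    }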
We were only looking at the current top-level navigable and its children
when searching for the specified window handle. We need to search *all*
known navigables if the handle belongs to a window not in the current
tree.
We very much assume that the SQL storage backend runs in a singleton
process. When this is not the case, and multiple UI processes try to
write to the database at the same time, one of them will fail.
Since --force-new-process is a testing mode flag, let's just disable the
SQL backend when that flag is present.
Previously, attempting to get the computed value for a
grid-template-rows or grid-template-columns property would cause a crash
if the element had no associated paintable.
Computing the "contained text auto directionality" is now its own
algorithm, with an extra parameter, and is additionally called from
step 2.1.3.2 instead of calling "auto directionality".
...because calculate_inner_width() assumes layout state has resolved
paddings that could be used to account for "box-sizing: border-box".
Fixes regression introduced in 5f74da6ae8
The function is defined as `round(<rounding-strategy>?, A, B?)`.
With this change, the resolved type is `typeof(resolve(A))` instead of
`typeof(A)`.
For example, `round(up, 20%, 1px)` with a 200px percentage basis is now
correctly resolved to 40px instead of 40%.
Progress on https://www.notion.so/ landing page.
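Observable effect, illustrated (assuming an engine with CSS round()
support; the 200px parent provides the percentage basis):

    const parent = document.createElement("div");
    parent.style.width = "200px";
    const child = document.createElement("div");
    child.style.width = "round(up, 20%, 1px)"; // 20% of 200px = 40px, rounded up to a 1px step
    parent.append(child);
    document.body.append(parent);
    console.log(getComputedStyle(child).width); // "40px" (a length), not "40%"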
Before this change, each BFC child that established an FC root was laid
out at least twice: the first time to perform a normal layout, and the
second time to perform an intrinsic layout to determine the automatic
content height. With this change, we avoid the second run by querying
the formatting context for the height it used after performing the
normal layout.
The `calculate_inner_width()` and `calculate_inner_height()` functions resolve
percentage paddings using the width returned by
`containing_block_width_for()`. However, this function does not account
for grids where the containing block is defined by the grid area to
which an item belongs.
This change fixes the issue by modifying `calculate_inner_width()` and
`calculate_inner_height()` to use the already resolved paddings from the
layout state. Corresponding changes ensure that paddings are resolved
and saved in the state before box-sizing is handled.
As a side effect, this change also improves abspos layout for BFC:
paddings are now resolved against the padding box of the containing
block instead of its content box.
Fixes yet another GFC bug where Node::containing_block() should not be
used for grid items, because their containing block is the grid area,
which is not represented in the layout tree.
We currently implement the official cookie RFC, which was last updated
in 2011. Unfortunately, web reality conflicts with the RFC. For example,
all of the major browsers allow nameless cookies, which the RFC forbids.
There has since been draft versions of the RFC published to address such
issues. This patch implements the latest draft.
Major differences include:
* Allowing nameless or valueless (but not both) cookies
* Formal cookie length limits
* Formal same-site rules (not fully implemented here)
* More rules around cookie domains
This is one of the few endpoints that does not ensure a top-level BC is
open. It's a bit of an implementation-defined endpoint, so let's protect
against a non-existent BC explicitly.
Reftest screenshots are now captured using the dimensions specified in
the "draw a bounding box from the framebuffer" AO defined in the
WebDriver specification.
Although the parameter is named "available size," it is always supposed
to represent the containing block size whenever it has a definite value.
Therefore, it is possible to simply use this value instead of performing
a containing block lookup.
This change actually improves correctness for grid items whose
containing block is defined by the grid area, as
`Node::containing_block()` does not account for this.
Our abspos layout code assumes that available space is containing block
size, so this change aligns us with the spec by using grid area for this
value.
This change does not have an attached test because it is a prerequisite
for an upcoming fix in calculate_inner_height() that will reveal the
problem.
compute_width() could never be invoked for abspos boxes because they
are skipped during normal layout and processed in
parent_context_did_dimension_child_root_box().
In all places where text shaping happens, the callback is used simply to
append a glyph to the end of a glyph vector. This change removes the
callback parameter and makes the text shaping function return a glyph
run.
When we create a WebDriverConnection object, we currently hand it the
page client for which it was opened, and perform all actions on that
client. However, some WebDriver endpoints change the browsing context
(and therefore page client) on which future commands should be executed.
For example, the switch-frame endpoint will switch the current browsing
context to a frame/iframe context.
This patch implements the current browsing context (and current top-
level browsing context) concepts. They are initialized to that of the
original page. Most of this patch is making sure we execute actions on
the correct context.
We currently spin the platform event loop while awaiting scripts to
complete. This causes WebContent to hang if another component is also
spinning the event loop. The particular example that instigated this
patch was the navigable's navigation loop (which spins until the fetch
process is complete), triggered by a form submission to an iframe.
So instead of spinning, we now return immediately from the script
executors, after setting up listeners for either the script's promise to
be resolved or for a timeout. The HTTP request to WebDriver must finish
synchronously though, so now the WebDriver process spins its event loop
until WebContent signals that the script completed. This should be ok -
the WebDriver process isn't expected to be doing anything else in the
meantime.
Also, as a consequence of these changes, we now actually handle
timeouts. We were previously creating the timeout timer, but not
starting it.
Use this cached pointer to the containing block's used values when
obviously possible. This avoids a hash lookup each time, and these
hash lookups do show up in profiles.
Change try_compute_width() to check whether min-width/max-width or width
is auto instead of always using `computed_values.width()`.
The `grid/min-max-content.html` test is affected, but it's a progression.
UI event handlers currently return a boolean where false means the event
was cancelled by a script on the page, or otherwise dropped. It has been
a point of confusion for some time now, as it's not particularly clear
what should be returned in some special cases, or how the UI process
should handle the response.
This adds an enumeration with a few states that indicate exactly how the
WebContent process handled the event. This should remove all ambiguity,
and let us properly handle these states going forward.
There should be no behavior change with this patch. It's meant to only
introduce the enum, not change any of our decisions based on the result.
Logging a parse error when the attribute is not present is not useful,
but it does fill the debug log with errors that hide any real parsing
errors. This patch introduces an early-out in this situation to prevent
this spam.
The following spec algorithms had changed since we implemented them:
- "parse a sizes attribute"
- "update the source set"
- "create a source set"
This commit brings them up to date, as well as adding some additional
logging when parsing the sizes attribute fails in some way.
Before this change, a formatting context was responsible for layout of
absolutely positioned boxes whose FC root box was their parent (either
directly or indirectly). This only worked correctly when the containing
block of the absolutely positioned child did not escape the FC root.
This is because the width and height of an absolutely positioned box are
resolved based on the size of its containing block, so we needed to
ensure that the containing block's layout was completed before laying
out an absolutely positioned box.
With this change, the layout of absolutely positioned boxes is delayed
until the FC responsible for the containing block's layout is complete.
This has affected the way we calculate the static position. It is no
longer possible to ask the FC for a box's static position, as this FC's
state might be gone by the time the layout for absolutely positioned
elements occurs. Instead, the "static position rectangle" (a concept
from the spec) is saved in the layout state, along with information on
how to align the box within this rectangle when its width and height are
resolved.
Absolutely positioned boxes do not affect the size of the formatting
context box they belong to, so it's safe to skip their layout entirely
when calculating intrinsic size.
FormattingContext::run() does not allow reentrancy, so it's safe to save
the layout mode in the FC object and access it from there. This avoids
the need to drill it through the methods of a formatting context and
makes it clear that this value can never change after FC construction.
The root formatting context box is passed into the constructor and saved
in the FC, so it can be accessed from there instead of being passed into
run() again.
This change takes all existing WebIDL files in the repo that had
definition lines without four leading spaces, and fixes them so they
have four leading spaces.
This change ensures that the value sanitization algorithm is run and
the text cursor is set to the correct position when the type attribute
of an input is changed.
Instead of throwing all pseudo-element rules in one bucket, let's have
one bucket per pseudo-element.
This means we only run ::before rules for ::before pseudo-elements,
only ::after rules for ::after, etc.
Average style update time on https://tailwindcss.com/ 250ms -> 215ms.
Once we know the final value of the `content` property for a
pseudo-element, we can bail early if the value is `none` or `normal`
(note that `normal` only applies to ::before and ::after).
In those cases, no pseudo-element will be generated, so everything
that follows in StyleComputer would be wasted work.
This noticeably improves performance on many pages, such as
https://tailwindcss.com/ where style updates go from 360ms -> 250ms.
This makes the way we've implemented the CSS `revert` keyword a lot less
expensive.
Until now, we were making a deep copy of all property values at the
start of each cascade origin. (Those are the values that `revert` would
bring us back to if encountered.)
With this patch, the revert property set becomes a shallow copy, and we
only clone the property set if the cascade ends up writing something.
This knocks a 5% profile item down to 1.3% on https://tailwindcss.com
Instead of asking Skia for the family name every time we're called,
just cache the string once and make subsequent calls fast.
This knocks a 3.2% item off the profile entirely on
https://tailwindcss.com (at least on macOS with the CoreText backend)
This brings back the optimization we had in the old OpenType
implementation where we cache lookup tables for code point / glyph ID
mappings.
This noticeably improves performance on https://tailwindcss.com/ by
knocking an 8% profile item down to 0.2%. :^)
Skia is more permissive about font loading than our own OpenType
implementation, which it has superseded: parsing an invalid TTF does not
result in an error, but rather produces a font that is displayed
incorrectly. This change updates the FontLoader to address this behavior
and to stop attempting to parse a font as a last resort when format
detection has failed.
Fixes a regression introduced in a9d5a99568 where text on x.com was not
displayed.
Also, remove blank lines. (https://w3c.github.io/aria/#ARIAMixin source
doesn’t have any blank lines, and it’s not clear that the blank lines in
ours follow any intended structure/logic.)
This change adds the [CEReactions] attributes to all ARIA attributes in
the ARIAMixin WebIDL — as required by the WebIDL in the current spec at
https://w3c.github.io/aria/#ARIAMixin, and by the WPT test case at
http://wpt.live/custom-elements/reactions/AriaMixin-string-attributes.html,
and as implemented in other existing engines.
Otherwise, without this change, Ladybird doesn’t conform to the current
spec, fails all those tests, and isn’t interoperable with other engines.
If grid-template-rows or grid-template-columns is queried for a box that
is not a grid container, the result should be the computed value instead
of null.
Fixes crashing in the Inspector.
Instead of only bucketing these by class name, let's also bucket by
tag name and ID.
Reduces the number of selectors evaluated on https://tailwindcss.com/
from 2.9% to 1.9%.
By filtering first, we end up allocating much less vector space
most of the time.
This is mostly helpful in pathological cases where there's a huge number
of rules present, but most of them get rejected early.
Previously, there was a bug in the specification that would cause an
assertion failure, due to the abort event being fired before all
dependent signals were aborted.
That's awkward, but getComputedStyle needs to return used track values
for the gridTemplateColumns and gridTemplateRows properties. This change
implements that by saving style values with used values into the layout
state, so they can be assigned to paintables during LayoutState::commit()
and later accessed by style_value_for_property().
I haven't seen it used in the wild, but WPT grid tests extensively use
it. For example this change helps to go from 0/10 to 8/10 on this test:
https://wpt.live/css/css-grid/layout-algorithm/grid-fit-content-percentage.html
Before this change, the ancestor filter would only reject rules that
required a certain set of descendant strings (class, ID or tag name)
to be present in the current element's ancestor chain.
An immediate child is also a descendant, so we can include this
relationship in the ancestor filter as well.
This substantially improves the efficiency of the ancestor filter on
websites using Tailwind CSS.
For example, https://tailwindcss.com/ itself goes from full style
updates taking ~1400ms to ~350ms. Still *way* too long, but a huge
improvement nonetheless.
By bucketing these selectors by class or ID, we can avoid running them
in more cases.
Before, we were only avoiding them if the context element wasn't a div.
Now we avoid them for any element that doesn't have that specific class
or ID.
This reduces the number of selectors run on https://vercel.com by a bit
more, from 1.90% to 1.65%.
These are just roundabout ways of writing .foo, so we can still put them
in the rules-by-class bucket and skip running them when the element
doesn't have that class.
Note that :is(.foo .bar) is also bucketed as a class rule, since the
context element must have the `bar` class for the selector to match.
This is a massive speedup on https://vercel.com/ as it cuts the number
of selectors we actually evaluate from 7.0% to 1.9%.
Fixes implementation of the following line from the spec:
"However, limit the growth of any fit-content() tracks by their
fit-content() argument."
Now we correctly apply the limit to the increased growth limit rather
than to the planned increase.
The change in "Tests/LibWeb/Layout/input/grid/fit-content-2.html" is a
progression: "Item as wide as the content." is now actually as wide as
its content.
Fixes a bug where "'Await' expression is not allowed in formal
parameters of an async function" was thrown for an "await" encountered
in a function definition assigned to a default function parameter.
Fixes loading of https://excalidraw.com/
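A reduction of the pattern (an assumed example, not taken from the site;
the `await` lives in the body of a nested async function used as a
default parameter value, so it must parse cleanly):

    async function outer(
        callback = async function () { return await Promise.resolve(1); },
    ) {
        return callback();
    }
    outer().then(console.log); // 1; previously the definition itself threw the SyntaxError above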
Added the following Routes, IPC definitions, and boilerplate for the
missing endpoints:
- Switch To Frame
- Switch To Parent Frame
- Element Clear
- Element Send Keys
Getting the first font in a font cascade list with an empty font list
results in an out-of-bounds index error. If the font list is empty, the
last resort font should be returned instead.
This didn't make any sense, and was already handled by pushing a new
execution context anyway.
By simply removing these bogus lines of code, we fix a bug where
throwing inside a function whose bytecode was shorter than the calling
function would crash trying to generate an Error stack trace (because
the bytecode offset we were trying to symbolicate was actually from
the longer caller function, and not valid in the callee function.)
This makes --log-all-js-exceptions less crash prone and more helpful.
This permits the user to use shift and the arrow/home/end keys to mutate
the document selection. Arrow key presses on non-editable text without
the shift key held have no effect.
This behavior differs browser-to-browser. The behavior here most closely
matches Firefox, though all browsers support this to some degree.
When the user clicks on a text node, the event handler sets the cursor
position to the location that was clicked. But it would then be set back
to 0 in the DOM node's focus handler. Leave the cursor alone, unless the
the DOM node was never set as the cursor position node (which will occur
when the user clicks on the DOM node, but outside the shadow text node).
In that case, move the cursor to the end of the text node.
The end result here is that the cursor is placed where the user clicked,
or set to the end of node if the user clicked outside of the shadow text
node.
This allows rendering the elements with a dark color in dark mode. We
must also assign a `fill` color to the <select> element's chevron SVG
to match the text color.
When setting an element attribute to the value it already had, we don't
need to update style or invalidate anything that depends on the DOM
version counter.
This was a source of much pointless busywork.
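An illustrative no-op of the kind this short-circuits:

    const el = document.createElement("div");
    el.setAttribute("data-state", "open");
    el.setAttribute("data-state", "open"); // same value again: no style update or invalidation needed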
You can now build with STYLE_INVALIDATION_DEBUG and get a debug stream
of reasons why style invalidations are happening and where.
I've rewritten this code many times, so instead of throwing it away once
again, I figured we should at least have it behind a flag.
It would be nice if we could somehow move this work to the GPU, but even
with some basic local optimization (mostly coalescing bounds checks and
inlining pixel data access), this knocks a 13% item down to 9% in a
profile of loading https://vercel.com/
When accessed on the root/document element, the following properties are
derived from the viewport, not layout-dependent metrics:
- scrollLeft
- scrollTop
- scrollWidth
- scrollHeight
We now avoid synchronous layout in such cases. This was causing some
unnecessary layout work when loading https://vercel.com/
Before this change, we were cascading custom properties for each layer,
and then replacing any previously cascaded properties for the element
with only the set from this latest layer.
The patch fixes the issue by making each pass of the custom property
cascade add to the same set, and then finally assigning that set of
properties to the element.
Looking at the spec it doesn't seem like there's a chance for a service
worker client to be an environment but not an environment settings
object. If that changes in the implementation, we can move it.
This API is a relic from the time when it was important for objects to
have easy access to the Window, and to know it was the global object.
We now have more spec-related concepts like relevant_global_object and
current_global_object to pull the Window out of thin air.
This adds a storage tab which contains just a cookie viewer for now. In
the future, storage like Local Storage and Indexed DB can be added here
as well.
In this patch, the cookie table is read-only.
Cookies are typically deleted by setting their expiry time to an ancient
time stamp (i.e. this is how WebDriver is required to delete cookies).
Previously, we would update the cookie in the cookie jar, which would
mark the cookie as dirty. We would then purge expired cookies from the
jar's transient storage, which removed the cookie from the dirty list.
If the cookie was also in the persisted storage, it would never become
expired there as it was no longer in the dirty list when the timer for
synchronization fired.
Now, we don't remove any cookies from the transient dirty list when we
purge expired cookies. We hold onto the dirty cookie until sync time,
where we now update the cookie in the persisted storage *before* we
delete expired cookies.
It's getting a bit unwieldy to maintain as an inlined string. Move it to
its own file so it can be edited with syntax highlighting and other IDE
features.
This change makes the aria-relevant content attribute get reflected by
the ariaRelevant IDL/DOM attribute, which makes the Ladybird behavior
interoperable with the implemented behavior in other existing engines.
Otherwise, without this change, Ladybird fails the relevant test case in
https://wpt.fyi/results/html/dom/aria-attribute-reflection.html — which
other existing engines all pass.
Before this change, we were only checking for actual glyph containment
in a font if unicode ranges were specified. However that is not
sufficient for emoji support, where we want to continue searching for
a font until one containing emojis is found.
This change updates `ExecuteScript::execute_script()` and its async
counterpart to bring their behavior in line with each other and the
current specification text.
Instances of the variable `timeout` have also been renamed to
`timeout_ms`, for clarity.
I didn't want to add another set of boilerplatey tree-walking methods,
so here's a general-purpose one. :^)
`for_each_effective_rule()` walks the tree of effective style rules, and
runs the callback on each one, in either pre- or postorder. The
previous `for_each_effective_style/keyframes_rule()` methods of
`CSSStyleSheet` are then reimplemented in terms of
`for_each_effective_rule()`, and we can get rid of their equivalents
elsewhere.
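A rough TypeScript analogue of the traversal, in its pre-order form (a
sketch only, not the LibWeb API):

    function forEachEffectiveRule(rules: CSSRuleList, callback: (rule: CSSRule) => void): void {
        for (const rule of Array.from(rules)) {
            callback(rule); // visit before descending (pre-order)
            if (rule instanceof CSSImportRule && rule.styleSheet)
                forEachEffectiveRule(rule.styleSheet.cssRules, callback);
            else if (rule instanceof CSSGroupingRule)
                forEachEffectiveRule(rule.cssRules, callback); // @media, @supports, @layer blocks, ...
        }
    }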
Depending on usage, `@layer` has two forms, with two different CSSOM
types. One simply lists layer names and the other defines a layer with
its contained rules.