This includes a protocol for creating LibGC Heap-allocated Swift
objects. Pay no attention to the Unmanaged shenanigans; they are
all behind the curtain.
This will allow us to use the GC to manage the lifetime of objects
that are not C++ objects, such as Swift objects. In the future we
could expand this cursed FFI to other languages as well.
While we don't want arbitrary callers deferring GC, we do want
deferral to be available to Swift. In order for Swift to
understand the RAII nature of DeferGC, we need to mark it as
non-copyable.
CDATASection inherits from Text, and so it was incorrect for these
nodes to claim not to be Text nodes.
This fixes at least two WPT subtests. :^)
It also exposed a bug in the DOM Parsing and Serialization spec,
where we're not told how to serialize CDATASection nodes.
Spec bug: https://github.com/w3c/DOM-Parsing/issues/38
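For context, the relevant slice of the DOM interface hierarchy looks
roughly like this (a minimal Python sketch of the spec's inheritance
chain, not the actual LibWeb C++ classes):

```python
# Minimal sketch of the DOM inheritance chain relevant here; names mirror
# the DOM spec, not the LibWeb implementation.
class Node: ...
class CharacterData(Node): ...
class Text(CharacterData): ...
class CDATASection(Text): ...

def is_text(node: Node) -> bool:
    # Because CDATASection derives from Text, this must be true for
    # CDATASection nodes as well.
    return isinstance(node, Text)

assert is_text(Text())
assert is_text(CDATASection())
```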
Instead of checking the header in ZlibDecompressor::create(), we now
check it in read_some() when it is called for the first time. This
resolves a FIXME in the new DecompressionStream implementation.
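A rough sketch of the new shape (hypothetical names and error
handling, not the actual LibCompress API):

```python
# Hypothetical sketch: header validation is deferred from construction to
# the first read_some() call, so create() can no longer fail on bad input.
class LazyZlibDecompressor:
    def __init__(self, stream):
        self._stream = stream
        self._header_checked = False

    def read_some(self, size: int) -> bytes:
        if not self._header_checked:
            header = self._stream.read(2)
            # RFC 1950: a zlib stream starts with CMF/FLG bytes, and
            # (CMF * 256 + FLG) must be a multiple of 31.
            if len(header) < 2 or (header[0] * 256 + header[1]) % 31 != 0:
                raise ValueError("invalid zlib header")
            self._header_checked = True
        return self._inflate_some(size)

    def _inflate_some(self, size: int) -> bytes:
        raise NotImplementedError  # actual DEFLATE decoding elided
```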
We don't need nanosecond precision here anyway, as we only display
millisecond resolution.
This uses our simple duration formatter from AK, which is updated
here to accept a Duration. That method did not have any users after
the move from Serenity.
The Duration record no longer exists in Temporal. Implement it according
to the DurationFormat spec to prepare for its removal from our Temporal
implementation.
We also implement the DurationSign AO here, as the Temporal
implementation will now require a Temporal.Duration JS object.
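For reference, DurationSign is a small AO; a sketch mirroring the
spec text:

```python
# Sketch of the DurationSign abstract operation: the sign of a duration is
# the sign of its first non-zero component, checked from largest to smallest.
def duration_sign(years, months, weeks, days, hours, minutes,
                  seconds, milliseconds, microseconds, nanoseconds) -> int:
    for value in (years, months, weeks, days, hours, minutes,
                  seconds, milliseconds, microseconds, nanoseconds):
        if value < 0:
            return -1
        if value > 0:
            return 1
    return 0
```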
Implemented by reusing the AddMask display list item that was
initially added for the `background-clip` property.
Progress on the flashlight effect on https://null.com/games/athena-crisis
For a while we used the wider Paintable type for stacking contexts,
because they could be created by both InlinePaintable and
PaintableBox. Now that the InlinePaintable type is gone, we can use
the more specific PaintableBox type for stacking contexts.
I believe this is an error in the UI Events spec, and it should be
updated to match the HTML spec (which uses WindowProxy everywhere).
This fixes a bunch of issues already covered by existing WPT tests.
Spec bug: https://github.com/w3c/uievents/issues/388
Note that WebKit has been using WindowProxy instead of Window in
UI Events IDL since 2018:
816158b4aa
These compressors will be used by w3c's CompressionStream, which can
run arbitrary JS and thus may never reach their "finish" steps. Let's
not crash the WebContent process if that happens.
GzipCompressor is currently written assuming that its write_some
method is only called once. When we use this class for LibWeb, we may
very well receive data to compress in small chunks. So this patch
makes us write the gzip header and footer only once, which makes this
class resemble the zlib and deflate compressors.
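The streaming shape we are after looks roughly like this (a sketch
using Python's zlib module purely to illustrate the "header once,
footer only at finish" behavior; it is not the LibCompress API):

```python
import zlib

# Chunked gzip compression: the gzip header goes out with the first output,
# and the footer (CRC32 + size) is only produced by finish().
# wbits=31 asks zlib for a gzip (RFC 1952) wrapper around the deflate data.
class StreamingGzipSketch:
    def __init__(self):
        self._compressor = zlib.compressobj(wbits=31)

    def write_some(self, chunk: bytes) -> bytes:
        # May return b"" while zlib is still buffering input.
        return self._compressor.compress(chunk)

    def finish(self) -> bytes:
        return self._compressor.flush()

gz = StreamingGzipSketch()
out = b"".join([gz.write_some(b"hello, "), gz.write_some(b"world"), gz.finish()])
assert zlib.decompress(out, wbits=31) == b"hello, world"
```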
TL;DR: There are two available sets of coefficients for the conversion
matrices from XYZ to sRGB. We switched from one set to the other, which
is what the WPT tests are expecting.
All RGB color spaces, like display-p3 or rec2020, are defined by
their three color chromaticities and a white point. This is also the
case for the video color space Rec. 709, from which the sRGB color
space is derived. The sRGB specification is, however, a bit different.
In 1996, when formalizing the sRGB spec, the authors published a
draft that is still available here [1]. In this document, they also
provide the matrix to convert from the XYZ color space to sRGB. This
matrix can be verified quite easily using the usual math equations.
But hold on, here comes the plot twist: at the time of publication,
the spec contained a different matrix than the one in the draft (the
spec is obviously behind a paywall, but the numbers are also reported
in this official document [2]). This official matrix is, at first
glance, simply a wrongly rounded version of the one in the draft
publication. It however has an interesting property: it can be
inverted twice (so a roundtrip) in 8 bits without suffering any
errors from the calculations.
So, we are here with two versions of the XYZ -> sRGB matrix. The one
from the spec is:
- better for computations in 8 bits,
- and official. This is the one that, by authority, we should use.
And the second version, found in the draft, which:
- makes sense, as it is directly derived from the chromaticities,
- is publicly available,
- and (thus?) used in most places.
The old coefficients were the ones from the spec; this commit changes
them to the ones derived from the mathematical formulae. The Python
script to compute these values is available at the end of the commit
description.
More details about this subject can be found here [3].
[1] https://www.w3.org/Graphics/Color/sRGB.html
[2] https://color.org/chardata/rgb/sRGB.pdf
[3] https://photosauce.net/blog/post/making-a-minimal-srgb-icc-profile-part-3-choose-your-colors-carefully
The Python script:
```python
# http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html
from numpy.typing import NDArray
import numpy as np

### sRGB chromaticities
# https://www.w3.org/TR/css-color-4/#predefined-sRGB
srgb_r_chromaticity = np.array([0.640, 0.330])
srgb_g_chromaticity = np.array([0.300, 0.600])
srgb_b_chromaticity = np.array([0.150, 0.060])

## White points
white_point_d50 = np.array([0.345700, 0.358500])
white_point_d65 = np.array([0.312700, 0.329000])

r_chromaticity = srgb_r_chromaticity
g_chromaticity = srgb_g_chromaticity
b_chromaticity = srgb_b_chromaticity
white_point = white_point_d65


def tristimulus_vector(chromaticity: NDArray) -> NDArray:
    # Convert xy chromaticity coordinates to an XYZ vector with Y = 1.
    return np.array([
        chromaticity[0] / chromaticity[1],
        1,
        (1 - chromaticity[0] - chromaticity[1]) / chromaticity[1],
    ])


tristimulus_matrix = np.hstack((
    tristimulus_vector(r_chromaticity).reshape(3, 1),
    tristimulus_vector(g_chromaticity).reshape(3, 1),
    tristimulus_vector(b_chromaticity).reshape(3, 1),
))

scaling_factors = (np.linalg.inv(tristimulus_matrix)
                   @ tristimulus_vector(white_point))

M = tristimulus_matrix * scaling_factors

np.set_printoptions(formatter={'float_kind': '{:.6f}'.format})
xyz_65_to_srgb = np.linalg.inv(M)

# http://www.brucelindbloom.com/index.html?Eqn_ChromAdapt.html
# Let's convert from D50 to D65 using the Bradford method.
m_a = np.array([
    [0.8951000, 0.2664000, -0.1614000],
    [-0.7502000, 1.7135000, 0.0367000],
    [0.0389000, -0.0685000, 1.0296000]
])

cone_response_source = m_a @ tristimulus_vector(white_point_d50)
cone_response_destination = m_a @ tristimulus_vector(white_point_d65)
cone_response_ratio = cone_response_destination / cone_response_source
m = np.linalg.inv(m_a) @ np.diagflat(cone_response_ratio) @ m_a

D50_to_D65 = m
xyz_50_to_srgb = xyz_65_to_srgb @ D50_to_D65

print(xyz_50_to_srgb)
print(xyz_65_to_srgb)
```
fe46b2c141 added the reset-temp-inverse flag, but set it up so all
tempinverse ops were negated at the start of the next op; this commit
makes these flags actually persist for one op instead of zero.
Fixes #2296.
This change ensures that:
- if an element for which an accessible name otherwise wouldn’t be
computed is referenced in an aria-labelledby value, the accessible
name for the element will be computed as expected.
- if an element has both an aria-label value and an
aria-labelledby value, the text from the aria-label value gets
included in the computation of the element’s accessible name.
Otherwise, without this change, some elements with aria-labelledby
values will unexpectedly end up without accessible names, and some
elements with aria-label values will unexpectedly not have that
aria-label value included in the element’s accessible name.
This color space is often used as a reference in WPT tests;
supporting it makes us pass the following new tests:
- css/css-color/rec2020-001.html
- css/css-color/rec2020-002.html
- css/css-color/rec2020-003.html
- css/css-color/rec2020-004.html
- css/css-color/rec2020-005.html
- css/css-color/predefined-011.html
- css/css-color/predefined-012.html
That makes us pass the following WPT tests:
- css/css-color/prophoto-rgb-001.html
- css/css-color/prophoto-rgb-002.html
- css/css-color/prophoto-rgb-003.html
- css/css-color/prophoto-rgb-004.html
- css/css-color/prophoto-rgb-005.html
- css/css-color/predefined-009.html
- css/css-color/predefined-010.html
This color space is often used as a reference in WPT tests;
supporting it makes us pass 15 new tests:
- css/css-color/display-p3-001.html
- css/css-color/display-p3-002.html
- css/css-color/display-p3-003.html
- css/css-color/display-p3-004.html
- css/css-color/display-p3-005.html
- css/css-color/display-p3-006.html
- css/css-color/lab-008.html
- css/css-color/lch-008.html
- css/css-color/oklab-008.html
- css/css-color/oklch-008.html
- css/css-color/predefined-005.html
- css/css-color/predefined-006.html
- css/css-color/xyz-005.html
- css/css-color/xyz-d50-005.html
- css/css-color/xyz-d65-005.html
This makes us pass the following WPT tests:
- css/css-color/a98rgb-001.html
- css/css-color/a98rgb-002.html
- css/css-color/a98rgb-003.html
- css/css-color/a98rgb-004.html
- css/css-color/predefined-007.html
- css/css-color/predefined-008.html
In commit 1b82cb43c2 I accidentally
removed the paint transformation altogether. The result was that
zoomed-in SVGs, or SVG elements with a transformation applied, could
have their gradient coordinates misplaced significantly.
This was also exposed in the `svg-text-effects` test by way of a slight
visual difference. Add a new test that very clearly exposes the fixed
issue by rotating the gradient coordinates by 45 degrees.
In #1537, determine_the_origin() was changed to take
`Optional<URL::URL> const&` as its first parameter, but it's passed
`Web::Fetch::Infrastructure::Response::url()`, which returns
`Optional<URL::URL const&>`. Ladybird does not have
SerenityOS/serenity#22870 (yet?), so this mismatch silently creates
a copy.
Change determine_the_origin() to take `Optional<URL::URL const&>`
instead. No behavior change, saves a copy, and is probably what
was originally intended.
The WebSocket spec tells us to queue tasks instead of firing events
synchronously at WebSockets, so this commit does exactly that.
The way we've implemented web sockets means that the work is spread
across multiple libraries and even processes, which is why the code
doesn't follow the spec text verbatim.
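Conceptually, the change is the difference between these two shapes
(a Python sketch of the pattern only; the real task queue lives in
LibWeb's event loop):

```python
from collections import deque

# Illustration of "fire synchronously" vs. "queue a task": the spec wants
# the event dispatched later, from the event loop, not mid-algorithm.
task_queue: deque = deque()

def fire_event_synchronously(websocket, event):
    websocket.dispatch_event(event)  # runs immediately, inside the algorithm

def queue_a_task_to_fire(websocket, event):
    task_queue.append(lambda: websocket.dispatch_event(event))

def run_queued_tasks():
    while task_queue:
        task_queue.popleft()()  # runs later, from the event loop
```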
That makes us pass the following WPT tests:
- css/css-color/srgb-linear-001.html
- css/css-color/srgb-linear-002.html
- css/css-color/srgb-linear-003.html
Previously, we only pushed the module context around the call that
captures the module execution context. This is incorrect, as the
capture occurs upon function construction. This resulted in capturing
the execution context that execute_module() was called from, instead
of the newly created module_context.
f87041bf3a/Libraries/LibJS/Runtime/ECMAScriptFunctionObject.cpp (L92)
This can be demonstrated with the following setup:
index.html:
```html
<script>
var foo = 1;
</script>
<script type="module">
import {test} from "./scriptA.mjs";
</script>
```
scriptA.mjs:
```js
function foo() {
    return {a: "b"};
}
export let test = await foo();
```
Before this fix, this would throw:
```
[TypeError] 1 is not a function (evaluated from 'foo')
at module code with top-level await
at module code with top-level await
at <unknown>
at <unknown>
```
Fixes #2245.
Fixes a bug where https://wpt.live/css/CSS2/positioning/abspos-001.xht,
saved as a local file, failed because we incorrectly recognized its
MIME type as HTML, leading to incorrect self-closing tag handling and
thus incorrect rendering.
The MessagePort one in particular is required by Cloudflare Turnstile,
as the method it takes to run JS in a worker is to `eval` the contents
of `MessageEvent.data`. However, it will only do this if
`MessageEvent.isTrusted` is true, `MessageEvent.origin` is the empty
string and `MessageEvent.source` is `null`.
The Window version is a quick fix whilst in the vicinity, as its
MessageEvent should also be trusted.
Resulting in a massive rename across almost the entire codebase!
Alongside the namespace change, we now have the following names:
* JS::NonnullGCPtr -> GC::Ref
* JS::GCPtr -> GC::Ptr
* JS::HeapFunction -> GC::Function
* JS::CellImpl -> GC::Cell
* JS::Handle -> GC::Root
The spec just says to follow "most backwards-compatible, then shortest"
when serializing these (and it does so in a very hand-wavy fashion).
By omitting some keywords when they are implied, we end up matching
other engines and passing a bunch of WPT tests.
This shouldn't just be a simple reflection of the label attribute.
It also needs to fall back to the HTMLOptionElement.text property
when the label attribute is absent.
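In other words, the getter should behave roughly like this (a hedged
sketch of the intended logic, not the generated bindings):

```python
# label getter: reflect the label attribute when present, otherwise fall
# back to the option's text.
def option_label(label_attribute: str | None, text: str) -> str:
    return label_attribute if label_attribute is not None else text

assert option_label(None, "Banana") == "Banana"
assert option_label("Fruit", "Banana") == "Fruit"
```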
None of the algorithms actually set the `extractable` internal slot in
their implementations, and looking at `SubtleCrypto::import_key()` it
seems likely that a step is missing here.
Fix the function signatures of Canvas.toDataURL() and Canvas.toBlob()
and make both functions accept non-numbers as the quality parameter,
in which case the default quality is used instead of raising an
exception.
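Roughly, the quality handling becomes the following (a sketch of the
intent; the in-range check mirrors the HTML spec's wording that only
a number in [0, 1] is treated as a quality level):

```python
import math

DEFAULT_QUALITY = None  # "let the encoder pick" in this sketch

def resolve_quality(quality) -> float | None:
    # Only a finite number in [0, 1] is used as the quality level; anything
    # else (including non-numbers) falls back to the default instead of
    # throwing.
    if isinstance(quality, (int, float)) and not isinstance(quality, bool):
        if math.isfinite(quality) and 0.0 <= quality <= 1.0:
            return float(quality)
    return DEFAULT_QUALITY

assert resolve_quality(0.8) == 0.8
assert resolve_quality("high") is DEFAULT_QUALITY
```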
This makes toDataURL.arguments.1.html, toDataURL.arguments.2.html and
toDataURL.jpeg.quality.notnumber.html in
wpt/html/semantics/embedded-content/the-canvas-element pass :^)
This means that an `<input type=password>` will show the correct number
of *s in it when non-ASCII characters are entered.
We also don't need to perform text-transform on these as that doesn't
affect the output length, so I've moved it earlier.
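For illustration of why non-ASCII input trips this up, contrast the
code point count with the UTF-8 byte count (the mask presumably needs
one * per typed code point):

```python
# One masking character per code point typed, regardless of how many bytes
# those code points take up in UTF-8.
secret = "pässwörd"
assert len(secret) == 8                   # 8 code points -> 8 asterisks
assert len(secret.encode("utf-8")) == 10  # byte count would over-mask
print("*" * len(secret))
```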
After we absolutize the contents of :has(), we check that those child
selectors don't contain anything that :has() rejects.
This is a separate path from the checks inside the parser, which is
unfortunate.
Fixes a WPT ref test. :^)
It's possible for absolutizing a selector to return an invalid
selector (e.g., it could produce `:has()` inside `:has()`), so we need
to be able to express that.
The CSSOM spec tells us to potentially add up to three different IDL
attributes to CSSStyleDeclaration for every CSS property we support:
- A camelCased attribute, where a dash indicates the next character
should be uppercase
- A camelCased attribute for every -webkit- prefixed property, with the
first letter always being lowercase
- A dashed attribute for every property with a dash in it.
Additionally, every attribute must have the CEReactions and
LegacyNullToEmptyString extended attributes specified on it.
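The naming rules above boil down to CSSOM's "CSS property to IDL
attribute" algorithm; a small Python sketch of it:

```python
# Sketch of CSSOM's "CSS property to IDL attribute" algorithm: dashes are
# dropped and the following character is uppercased; with lowercase_first
# (used for -webkit- prefixed properties) the leading dash is removed first
# so the result starts lowercase.
def css_property_to_idl_attribute(property_name: str, lowercase_first: bool = False) -> str:
    output = ""
    uppercase_next = False
    if lowercase_first:
        property_name = property_name[1:]  # drop the leading "-"
    for ch in property_name:
        if ch == "-":
            uppercase_next = True
        elif uppercase_next:
            output += ch.upper()
            uppercase_next = False
        else:
            output += ch
    return output

assert css_property_to_idl_attribute("background-color") == "backgroundColor"
assert css_property_to_idl_attribute("-webkit-text-stroke", lowercase_first=True) == "webkitTextStroke"
```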
Since we specify every property we support in Properties.json, we can
use that file to generate the IDL file and its implementation.
We import it from the Build directory with the help of multiple
import base paths. Then, we add it to CSSStyleDeclaration via the
mixin functionality, by having CSSStyleDeclaration inherit from the
generated class.
This allows us to specify multiple base paths in which to look for
imported IDL files. This will let us import IDL files both from the
source tree and from the Build directory (i.e. generated IDL files).