This includes a protocol for creating LibGC Heap-allocated Swift
objects. Pay no attention to the Unmanaged shenanigans; they are
all behind the curtain.
This will allow us to use the GC to manage the lifetime of objects
that are not C++ objects, such as Swift objects. In the future we
could expand this cursed FFI to other languages as well.
While we don't want arbitrary callers deferring GC, we do want
deferral to be available to Swift. In order for Swift to
understand the RAII nature of DeferGC, we need to mark it as
non-copyable.
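For illustration, here is a minimal sketch of the RAII shape Swift has
to model; the Heap type and method names below are stand-ins, not the
actual LibGC API:

```cpp
#include <cstddef>

// Illustrative stand-in for the GC heap; not the real LibGC interface.
struct Heap {
    size_t defer_count { 0 };
    void defer_gc() { ++defer_count; }
    void undefer_gc() { --defer_count; }
};

// Deferral begins in the constructor and ends in the destructor, so the
// guard must be non-copyable; that is the property Swift needs to see
// in order to treat it as a ~Copyable (RAII-style) type.
class DeferGC {
public:
    explicit DeferGC(Heap& heap)
        : m_heap(heap)
    {
        m_heap.defer_gc();
    }
    ~DeferGC() { m_heap.undefer_gc(); }

    DeferGC(DeferGC const&) = delete;
    DeferGC& operator=(DeferGC const&) = delete;

private:
    Heap& m_heap;
};
```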
CDATASection inherits from Text, so it was incorrect for CDATASection
nodes to claim not to be Text nodes.
This fixes at least two WPT subtests. :^)
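To illustrate the inheritance point, a toy version of the hierarchy
(not the actual LibWeb classes) looks like this:

```cpp
// Toy version of the relevant hierarchy; not the actual LibWeb classes.
struct Node {
    virtual ~Node() = default;
    virtual bool is_text() const { return false; }
};

struct Text : Node {
    bool is_text() const override { return true; }
};

// CDATASection derives from Text, so it must not override is_text() to
// return false; it simply inherits the Text behavior.
struct CDATASection final : Text { };
```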
It also exposed a bug in the DOM Parsing and Serialization spec,
where we're not told how to serialize CDATASection nodes.
Spec bug: https://github.com/w3c/DOM-Parsing/issues/38
Instead of checking the header in ZlibDecompressor::create(), we now
check it in read_some() when it is called for the first time. This
resolves a FIXME in the new DecompressionStream implementation.
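Roughly, the shape of the change looks like this (class and method
names here are illustrative, not the exact LibCompress API):

```cpp
#include <cstddef>
#include <cstdint>
#include <span>
#include <stdexcept>

// Sketch: validate the zlib header lazily, on the first read_some()
// call, instead of up front in create().
class LazyZlibDecompressor {
public:
    explicit LazyZlibDecompressor(std::span<uint8_t const> input)
        : m_input(input)
    {
    }

    size_t read_some(std::span<uint8_t> output)
    {
        if (!m_header_checked) {
            // CMF/FLG: compression method must be 8 (deflate), and the
            // two header bytes read as a big-endian u16 must be
            // divisible by 31.
            if (m_input.size() < 2)
                throw std::runtime_error("truncated zlib header");
            auto header = static_cast<uint16_t>((m_input[0] << 8) | m_input[1]);
            if ((m_input[0] & 0x0f) != 8 || header % 31 != 0)
                throw std::runtime_error("invalid zlib header");
            m_header_checked = true;
        }
        // ... inflate the remaining input into `output` here ...
        (void)output;
        return 0;
    }

private:
    std::span<uint8_t const> m_input;
    bool m_header_checked { false };
};
```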
We don't need nanosecond precision here anyway, as we only display
millisecond resolution.
This uses our simple duration formatter from AK, which is updated to
accept a Duration here. This method did not have any users after the
move from Serenity.
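As a rough sketch of what millisecond-resolution formatting looks like
(the AK helper's actual name and signature may differ):

```cpp
#include <chrono>
#include <cstdio>
#include <string>

// Sketch only: format a duration as seconds with millisecond precision.
// Negative durations are not handled here.
static std::string format_duration(std::chrono::milliseconds duration)
{
    auto total_ms = duration.count();
    auto seconds = total_ms / 1000;
    auto milliseconds = total_ms % 1000;
    char buffer[32];
    std::snprintf(buffer, sizeof(buffer), "%lld.%03llds",
        static_cast<long long>(seconds),
        static_cast<long long>(milliseconds));
    return buffer;
}
```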
The Duration record no longer exists in Temporal. Implement it according
to the DurationFormat spec to prepare for its removal from our Temporal
implementation.
We also implement the DurationSign AO here, as the Temporal
implementation will now require a Temporal.Duration JS object.
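The AO itself is small; here is a sketch of the logic, operating on a
plain record for illustration, whereas the real implementation will
read the fields off a Temporal.Duration object:

```cpp
#include <array>

// Illustrative duration record; the real code reads these fields from
// a Temporal.Duration JS object.
struct DurationRecord {
    double years { 0 }, months { 0 }, weeks { 0 }, days { 0 };
    double hours { 0 }, minutes { 0 }, seconds { 0 };
    double milliseconds { 0 }, microseconds { 0 }, nanoseconds { 0 };
};

// DurationSign: the sign of the first non-zero field, checked from the
// largest unit down to the smallest; an all-zero duration has sign 0.
static int duration_sign(DurationRecord const& d)
{
    std::array fields { d.years, d.months, d.weeks, d.days, d.hours,
        d.minutes, d.seconds, d.milliseconds, d.microseconds, d.nanoseconds };
    for (auto value : fields) {
        if (value < 0)
            return -1;
        if (value > 0)
            return 1;
    }
    return 0;
}
```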
Implemented by reusing the AddMask display list item that was
initially added for the `background-clip` property.
Progress on flashlight effect on https://null.com/games/athena-crisis
This was causing issues for my Ubuntu 24.04 build when building
the Distribution preset, so just stash this constant config in
the CMake cache so we don't have to worry about it anymore.
For a while we used the wider Paintable type for stacking contexts,
because a stacking context was allowed to be created by both
InlinePaintable and PaintableBox. Now that the InlinePaintable type is
gone, we can use the more specific PaintableBox type for stacking
contexts.
I believe this is an error in the UI Events spec, and it should be
updated to match the HTML spec (which uses WindowProxy everywhere).
This fixes a bunch of issues already covered by existing WPT tests.
Spec bug: https://github.com/w3c/uievents/issues/388
Note that WebKit has been using WindowProxy instead of Window in
UI Events IDL since 2018:
816158b4aa
These compressors will be used by the W3C's CompressionStream, which
can run arbitrary JS, so the compressors may never reach their "finish"
steps. Let's not crash the WebContent process if that happens.
GzipCompressor is currently written assuming that its write_some
method is only called once. When we use this class for LibWeb, we may
very well receive data to compress in small chunks. So this patch makes
us write the gzip header and footer only once, which makes it resemble
the zlib and deflate compressors.
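The resulting structure looks roughly like this (names are
illustrative, not the exact LibCompress API):

```cpp
#include <cstddef>
#include <cstdint>
#include <span>

// Sketch: emit the gzip header on the first write and the footer only
// once, when finishing, so write_some() can be called many times.
class ChunkedGzipCompressor {
public:
    void write_some(std::span<uint8_t const> bytes)
    {
        if (!m_wrote_header) {
            write_header();
            m_wrote_header = true;
        }
        // ... feed `bytes` to the underlying DEFLATE compressor ...
        m_uncompressed_size += bytes.size();
    }

    void finish()
    {
        // Footer (CRC32 of the input, then total size mod 2^32) is
        // written exactly once, at the very end.
        write_footer();
    }

private:
    void write_header() { /* magic, method, flags, mtime, OS byte */ }
    void write_footer() { /* CRC32 + ISIZE */ }

    bool m_wrote_header { false };
    uint64_t m_uncompressed_size { 0 };
};
```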
Some LibCompress API changes for LibWeb will make these utilities a bit
difficult to keep up to date. Given that these are unused anyway, let's
just not bother.
TL;DR: There are two available sets of coefficients for the conversion
matrices from XYZ to sRGB. We switched from one set to the other, which
is what the WPT tests are expecting.
All RGB color spaces, like display-p3 or rec2020, are defined by their
three color chromaticities and a white point. This is also the case for
the video color space Rec. 709, from which the sRGB color space is
derived. The sRGB specification is however a bit different.
In 1996, when formalizing the sRGB spec, the authors published a draft
that is still available here [1]. In this document, they also provide
the matrix to convert from the XYZ color space to sRGB. This matrix can
be verified quite easily using the usual math equations. But hold on,
here comes the plot twist: at the time of publication, the spec
contained a different matrix than the one in the draft (the spec is
obviously behind a paywall, but the numbers are also reported in this
official document [2]). This official matrix is, at first glance,
simply a wrongly rounded version of the one in the draft publication.
It does however have an interesting property: it can be inverted twice
(so a round trip) in 8 bits without suffering any error from the
calculations.
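For reference, the "usual math equations" mentioned above are the
standard chromaticity-to-matrix derivation; the Python script at the
end of this message computes exactly this:

```latex
% For each primary i in {r, g, b} with chromaticity (x_i, y_i), build a
% tristimulus column v_i, scale the columns so that RGB = (1, 1, 1)
% maps to the white point's tristimulus column v_w, then invert.
\begin{gather*}
  v_i = \begin{pmatrix} x_i / y_i \\ 1 \\ (1 - x_i - y_i) / y_i \end{pmatrix},
  \qquad
  V = \begin{pmatrix} v_r & v_g & v_b \end{pmatrix},
  \qquad
  s = V^{-1} v_w, \\
  M_{\mathrm{RGB} \to \mathrm{XYZ}} = V \operatorname{diag}(s),
  \qquad
  M_{\mathrm{XYZ} \to \mathrm{sRGB}} = M_{\mathrm{RGB} \to \mathrm{XYZ}}^{-1}.
\end{gather*}
```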
So here we are with two versions of the XYZ -> sRGB matrix. The one
from the spec, which is:
- better for computations in 8 bits,
- and official. This is the one that, by authority, we should use.
And a second version, found in the draft, which:
- makes sense, as it is directly derived from the chromaticities,
- is publicly available,
- and is (thus?) used in most places.
The old coefficients were the ones from the spec; this commit changes
them to the ones derived from the mathematical formulae. The Python
script used to compute these values is available at the end of the
commit description.
More details about this subject can be found here [3].
[1] https://www.w3.org/Graphics/Color/sRGB.html
[2] https://color.org/chardata/rgb/sRGB.pdf
[3] https://photosauce.net/blog/post/making-a-minimal-srgb-icc-profile-part-3-choose-your-colors-carefully
The Python script:
```python
# http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html
from numpy.typing import NDArray
import numpy as np
### sRGB
# https://www.w3.org/TR/css-color-4/#predefined-sRGB
srgb_r_chromacity = np.array([0.640, 0.330])
srgb_g_chromacity = np.array([0.300, 0.600])
srgb_b_chromacity = np.array([0.150, 0.060])
##
## White points
white_point_d50 = np.array([0.345700, 0.358500])
white_point_d65 = np.array([0.312700, 0.329000])
#
r_chromacity = srgb_r_chromacity
g_chromacity = srgb_g_chromacity
b_chromacity = srgb_b_chromacity
white_point = white_point_d65
def tristmimulus_vector(chromacity: NDArray) -> NDArray:
    return np.array([
        chromacity[0] / chromacity[1],
        1,
        (1 - chromacity[0] - chromacity[1]) / chromacity[1],
    ])
tristmimulus_matrix = np.hstack((
tristmimulus_vector(r_chromacity).reshape(3, 1),
tristmimulus_vector(g_chromacity).reshape(3, 1),
tristmimulus_vector(b_chromacity).reshape(3, 1),
))
scaling_factors = (np.linalg.inv(tristmimulus_matrix) @
tristmimulus_vector(white_point))
M = tristmimulus_matrix * scaling_factors
np.set_printoptions(formatter={'float_kind':'{:.6f}'.format})
xyz_65_to_srgb = np.linalg.inv(M)
# http://www.brucelindbloom.com/index.html?Eqn_ChromAdapt.html
# Let's convert from D50 to D65 using the Bradford method.
m_a = np.array([
[0.8951000, 0.2664000, -0.1614000],
[-0.7502000, 1.7135000, 0.0367000],
[0.0389000, -0.0685000, 1.0296000]
])
cone_response_source = m_a @ tristmimulus_vector(white_point_d50)
cone_response_destination = m_a @ tristmimulus_vector(white_point_d65)
cone_response_ratio = cone_response_destination / cone_response_source
m = np.linalg.inv(m_a) @ np.diagflat(cone_response_ratio) @ m_a
D50_to_D65 = m
xyz_50_to_srgb = xyz_65_to_srgb @ D50_to_D65
print(xyz_50_to_srgb)
print(xyz_65_to_srgb)
```