H.264 in Matroska can have blocks with unordered timestamps. Without
passing these as the presentation timestamp into the FFmpeg decoder,
the frames will not be returned in chronological order.
VideoFrame now includes a timestamp that is used by the
PlaybackManager, rather than the PlaybackManager assuming it is the
same as the timestamp returned by the demuxer.
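A minimal sketch of the idea against the FFmpeg API (illustrative
names, not Ladybird's actual FFmpegVideoDecoder): tag each packet with
the container's presentation timestamp and read the timestamp back off
the decoded frame, instead of assuming decode order matches
presentation order.

    #include <cstdint>
    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    static int decode_one(AVCodecContext* codec_context, AVPacket* packet,
        int64_t presentation_timestamp, AVFrame* out_frame, int64_t& out_timestamp)
    {
        // Use the timestamp from the Matroska block, not a running frame counter.
        packet->pts = presentation_timestamp;
        if (int rc = avcodec_send_packet(codec_context, packet); rc < 0)
            return rc;

        int rc = avcodec_receive_frame(codec_context, out_frame);
        if (rc == 0) {
            // With B-frames the decoder reorders output, so this timestamp belongs
            // to the returned frame, not necessarily to the packet we just sent.
            out_timestamp = out_frame->best_effort_timestamp;
        }
        return rc;
    }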
LibLocale was split off from LibUnicode a couple years ago to reduce the
number of applications on SerenityOS that depend on CLDR data. Now that
we use ICU, both LibUnicode and LibLocale are actually linking in this
data. And since vcpkg gives us static libraries, both libraries are over
30MB in size.
This patch reverts the separation and merges LibLocale into LibUnicode
again. We now have just one library that includes the ICU data.
Further, this will let LibUnicode share the locale cache that previously
would only exist in LibLocale.
If a flex item has a preferred aspect ratio and the flex basis is not
definite, we were falling back to using stretch-fit for the main size,
since that appeared to match other browsers.
However, we missed the case where we actually have a definite cross size
through which the preferred aspect ratio can be naturally resolved.
To get away from the ancient (and buggy) text layout code in
Gfx::Painter, we want to remove all the uses of Painter::draw_text().
As a first step towards this, we implement the draw_text() display list
command in terms of the draw_text_run() command by performing the very
simple necessary layout in draw_text() beforehand.
Our current segmenter implementation lives in LibUnicode, and is not
locale-aware. We will need such awareness for ECMA-402, and so LibLocale
will be the new home for text segmentation.
The tests here are ported directly from LibUnicode/TestSegmentation.cpp.
There are a couple of differences here due to using ICU:
1. Titlecasing behaves slightly differently. We previously transformed
"123dollars" to "123Dollars", as we would use word segmentation to
split a string into words, then transform the first cased character
to titlecase. ICU doesn't go quite that far, and leaves the string
as "123dollars". While this is a behavior change, the only user of
this API is the `text-transform: capitalize;` CSS rule, and we now
match the behavior of other browsers.
2. There isn't an API to compare strings with case insensitivity without
allocating case-folded strings for both the left- and right-hand-side
strings. Our implementation was previously allocation-free; however,
in a benchmark, ICU is still ~1.4x faster.
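For reference, a sketch of the case-folding approach this implies,
using ICU's UnicodeString API directly (not LibUnicode's actual
wrapper):

    #include <unicode/unistr.h>

    // Case-insensitive equality by case-folding both sides; both folds allocate,
    // which is why this is no longer allocation-free.
    static bool equals_ignoring_case(icu::UnicodeString const& lhs, icu::UnicodeString const& rhs)
    {
        icu::UnicodeString folded_lhs(lhs);
        icu::UnicodeString folded_rhs(rhs);
        folded_lhs.foldCase();
        folded_rhs.foldCase();
        return folded_lhs == folded_rhs;
    }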
Add the missing `print` function to the spectest namespace. Also, spec
externs cannot be re-used because operations that modify "memory", for
example, will carry over into subsequent spec test runs. This can be
remedied in the future by implementing some sort of garbage collector
for allocations in the store.
The values aren't that complex, so it doesn't make much sense to have a
dedicated generator for them. Parsing them manually also allows us to
have much more control over the produced values, so as a result of this
change, EasingStyleValue becomes much more ergonomic.
...and a shadow tree with a TextNode for the "value" attribute is
created. This means InlineFormattingContext is used, and the button's
text now respects CSS text-decoration properties and unicode-ranges.
This uses ICU for the Intl.DateTimeFormat `format`, `formatToParts`,
`formatRange`, and `formatRangeToParts`.
This lets us remove most data from our date-time format generator. All
that remains are time zone data and locale week info, which are still
relied upon by other interfaces; they will be removed in a future
patch.
Note: All of the changes to the test files in this patch are now aligned
with other browsers. This includes:
* Some very incorrect formatting of Japanese symbols. (Looking at the
old results now, it's very obvious they were wrong.)
* Old FIXMEs regarding range formatting not including the start/end date
when only time fields were requested, but the dates differ.
* Day period inconsistencies.
The [UTF-8](https://datatracker.ietf.org/doc/html/rfc3629#page-5)
standard says to reject strings with upper or lower surrogates. However,
in many standards, ECMAScript included, unpaired surrogates (and
therefore UTF-8 surrogates) are allowed in strings. So, this commit
extends the UTF-8 validation API with an `AllowSurrogates` option; when
it is not set, upper and lower surrogate characters are still rejected.
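For reference, a hedged sketch of what a UTF-8-encoded surrogate looks
like at the byte level (this helper is illustrative, not the actual AK
validator): a code point in [U+D800, U+DFFF] always encodes as the
3-byte sequence ED A0..BF 80..BF.

    #include <cstddef>
    #include <cstdint>

    static bool starts_with_utf8_encoded_surrogate(uint8_t const* bytes, size_t length)
    {
        return length >= 3
            && bytes[0] == 0xED
            && bytes[1] >= 0xA0 && bytes[1] <= 0xBF
            && (bytes[2] & 0xC0) == 0x80;
    }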
Note: We keep locale parsing and syntactic validation as-is. ECMA-402
places additional restrictions on locales above what is required by the
Unicode spec. ICU doesn't provide methods that let us easily check those
restrictions, whereas LibLocale does. Other browsers also implement
their own validators here.
This introduces a locale cache to re-use parsed locale data and various
related structures (not doing so has a non-negligible performance impact
on Intl tests).
The existing APIs for canonicalization and display names are pretty
intertwined, so they must both be adapted at once here. The results of
canonicalization are slightly different on some edge cases. But the
changed results are actually now aligned with Chrome and Safari.
Instead of attempting a stack use-after-free by reading an out-of-scope
object's data member, let's keep a flag that records whether the
destructor was called in the outer scope.
Fixes #64
We don't need intrinsic scale factors for Gfx::Bitmap in Ladybird,
as everything flows through the CSS / device pixel ratio mechanism.
This patch also removes various unused functions instead of adapting
them to the change.
The main intention of this change is to have a consistent look and
behavior across all scrollbars, including elements with
`overflow: scroll` and `overflow: auto`, iframes, and a page.
Before:
- Page's scrollbar is painted by Browser (Qt/AppKit) using the
corresponding UI framework style.
- Both WebContent and Browser know the scroll position offset.
- WebContent uses did_request_scroll_to() IPC call to send updates.
- Browser uses set_viewport_rect() to send updates.
After:
- Page's scrollbar is painted on WebContent side using the same style as
currently used for elements with `overflow: scroll` and
`overflow: auto`. A nice side effect: scrollbars are now painted for
iframes, and the page's scrollbar respects the scrollbar-width CSS
property.
- Only WebContent knows scroll position offset.
- did_request_scroll_to() is no longer used.
- set_viewport_rect() is changed to set_viewport_size().
This was used to convert markdown into HTML for display in the browser,
but no other browser behaves this way, so let's simplify things by
removing it.
(Yes, we could implement all kinds of "convert to HTML and display" for
every file format out there, but that's far outside the scope of a
browser engine.)
The DocumentTimeline constructor used the current millisecond time to
initialize its currentTime, but that means that a newly created timeline
would always have a different time value than other timelines that have
been through the update_animations_and_send_events function.
Implements `table.get`, `table.set`, `elem.drop`, `table.size`,
and `table.grow`. Also fixes a few issues when generating ref-related
spectests. Also changes the `TableInstance` type to use
`Vector<Reference>` instead of `Vector<Optional<Reference>>`, because
the ability to be null is already encoded in the `Reference` type.
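For reference, a standalone sketch of the `table.grow` semantics
implemented here (the types are illustrative, not LibWasm's actual
TableInstance):

    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <vector>

    // Placeholder for a funcref/externref; as noted above, the Reference type
    // itself encodes the null case.
    struct Reference { uintptr_t value { 0 }; };

    struct Table {
        std::vector<Reference> elements;
        std::optional<uint32_t> max;

        // Spec semantics of `table.grow`: append `count` copies of `init` and return
        // the old size, or -1 (as an i32) if the new size would exceed the limits.
        int32_t grow(uint32_t count, Reference init)
        {
            uint64_t new_size = uint64_t(elements.size()) + count;
            if (new_size > UINT32_MAX || (max.has_value() && new_size > *max))
                return -1;
            auto previous_size = static_cast<int32_t>(elements.size());
            elements.resize(static_cast<size_t>(new_size), init);
            return previous_size;
        }
    };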
The color indexing transform shouldn't make single-channel images
larger (by needlessly writing a palette). If there are <= 16 colors
in the single channel, it should make the image smaller.
GC-allocated objects should never have JS::SafeFunction/JS::Handle
fields.
For now the plugin only emits warnings here, as there are many cases
of this occurring in the codebase that aren't trivial to fix. It is also
behind a CMake flag since it is a _very_ loud warning.
`Painting::paint_all_borders()` only uses `.draw_line()` for simple
borders and `.fill_path()` for more complex cases. These are both
already supported by the `RecordingPainter` so removing this command
simplifies the painting API.
Two test changes:
css-background-clip-text: Borders are now drawn via the AA painter
(which makes them closer to how they appear in other browsers).
corner-clip-inside-scrollable: Borders removed due to imperceptible
sub-pixel changes (this does not change what the test covers).
Previously, we always cast to an HTMLInputElement when getting the value
of an auto directionality form associated element. This caused
undefined behavior when determining the directionality of an element
that wasn't an HTMLInputElement.
Previously, we assumed that all label control paintables were of type
`LabelablePaintable`. This caused a crash when clicking on a label with
a text input control.
This initializes 'arrayBuffer' to a Uint8Array so that we can
manipulate the contents of 'arrayBuffer'. The test verifies that the
internal buffer is an ArrayBuffer.
Also:
* Correct comparison in test so that we compare arrayBuffer to
arrayClone, not to itself.
* Remove FIXME, this outputs [object ArrayBuffer] in Firefox and Chrome
too.
All painting commands except SetClipRect are shifted by scroll offset
before command list execution. This change removes scroll offset
translation for sample/blit corner commands in
`PaintableWithLines::paint` so it is only applied once in
`CommandList::apply_scroll_offsets()`.
...and use a different color name until a (relatively harmless) bug
writing fully-opaque frames to an animation that also has transparent
frames is fixed. (I've had a local fix for that for a while, but
I'm waiting for #24397 to land.)
The if statement in the dispatch implies we are in the idle state, so of
course the active time will always be undefined. If this was cancelled
via a call to cancel(), we can save the time at that point. Otherwise,
just send 0.
This patch fixes two issues:
- Animation events that should go to the target element now do
(some were previously being dispatched on the animation itself.)
- We update the "previous phase" and "previous iteration" fields of
animation effects, so that we can actually detect phase changes.
This means we stop thinking animations always just started,
something that caused each animation to send 60 animationstart
events every second (to the wrong target!)
Now that the lambda capture plugin isn't full of false-positives, we can
make the jump and start halting builds for these errors. It also allows
these plugins to be useful in CI.
Instead of being opt-out with NOESCAPE, it is now opt-in with ESCAPING.
Opt-out is ideal, but unfortunately this was extremely noisy when
compiling the entire codebase. Escaping functions are rarer than non-
escaping ones, so let's just go with that for now.
This also allows us to gradually add heuristics for detecting missing
ESCAPING annotations and emitting them as errors. It also nicely matches
the spelling that Swift uses (@escaping), which is where this idea
originally came from.
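A hedged sketch of what the opt-in annotation looks like in practice
(the macro body and the Timer class are illustrative, not AK's real
definitions):

    #include <functional>
    #include <utility>

    #ifndef ESCAPING
    #    define ESCAPING [[clang::annotate("escaping")]]
    #endif

    class Timer {
    public:
        // The callback is stored beyond this call, so the parameter is marked
        // ESCAPING and the plugin can flag lambdas that capture stack references.
        void set_callback(ESCAPING std::function<void()> callback) { m_callback = std::move(callback); }

    private:
        std::function<void()> m_callback;
    };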
This change adds the `width` and `height` properties to
`HTMLVideoElement` and `HTMLSourceElement`. These properties reflect
their respective content attribute values.
This returns the secondary target of a mouse event. For `onmouseenter`
and `onmouseover` events, this is the EventTarget the mouse exited
from. For `onmouseleave` and `onmouseout` events, this is the
EventTarget the mouse entered.
For now, this slot is always 0 (the default value per spec). But
once we start actually processing audio streams this internal slot
should be changed correspondingly.
To determine the palette of colors we use the median cut algorithm.
While the implementation is correct, there is obvious room for
improvement in both the median cut algorithm itself and on the encoding
side.
This is useful to find the best matching color palette from an existing
bitmap. It can be used in PixelPaint but also in encoders of old image
formats that only support indexed colors, e.g. GIF.
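For illustration, a compact standalone median-cut sketch (plain structs
and std containers, not the actual Gfx implementation): repeatedly
split the bucket whose widest channel spans the largest range at its
median, then average each bucket into one palette entry.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    struct RGB { uint8_t r, g, b; };

    static std::vector<RGB> median_cut(std::vector<RGB> pixels, size_t palette_size)
    {
        std::vector<std::vector<RGB>> buckets;
        buckets.push_back(std::move(pixels));
        while (buckets.size() < palette_size) {
            // Pick the bucket whose widest channel spans the largest range.
            size_t best_bucket = 0;
            int best_range = -1;
            int best_channel = 0;
            for (size_t i = 0; i < buckets.size(); ++i) {
                if (buckets[i].size() < 2)
                    continue;
                for (int c = 0; c < 3; ++c) {
                    auto channel = [c](RGB const& p) { return c == 0 ? p.r : c == 1 ? p.g : p.b; };
                    auto [lo, hi] = std::minmax_element(buckets[i].begin(), buckets[i].end(),
                        [&](RGB const& a, RGB const& b) { return channel(a) < channel(b); });
                    int range = channel(*hi) - channel(*lo);
                    if (range > best_range) {
                        best_range = range;
                        best_bucket = i;
                        best_channel = c;
                    }
                }
            }
            if (best_range <= 0)
                break; // Nothing left to split; every remaining bucket is one color.
            // Split the chosen bucket at the median of its widest channel.
            auto& bucket = buckets[best_bucket];
            auto channel = [best_channel](RGB const& p) {
                return best_channel == 0 ? p.r : best_channel == 1 ? p.g : p.b;
            };
            std::sort(bucket.begin(), bucket.end(),
                [&](RGB const& a, RGB const& b) { return channel(a) < channel(b); });
            std::vector<RGB> upper_half(bucket.begin() + bucket.size() / 2, bucket.end());
            bucket.resize(bucket.size() / 2);
            buckets.push_back(std::move(upper_half));
        }
        // Average each bucket to produce one palette entry per bucket.
        std::vector<RGB> palette;
        for (auto const& bucket : buckets) {
            uint64_t r = 0, g = 0, b = 0;
            for (auto const& p : bucket) { r += p.r; g += p.g; b += p.b; }
            auto n = std::max<size_t>(bucket.size(), 1);
            palette.push_back({ uint8_t(r / n), uint8_t(g / n), uint8_t(b / n) });
        }
        return palette;
    }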
For example, for 7z7c.gif, we now store one 500x500 frame and then
a 94x78 frame at (196, 208) and a 91x78 frame at (198, 208).
This reduces how much data we have to store.
We currently store all pixels in the rect with changed pixels.
We could in the future store pixels that are equal in that rect
as transparent pixels. When inputs are gif files, this would
guarantee that new frames only have at most 256 distinct colors
(since GIFs require that), which would help a future color indexing
transform. For now, we don't do that though.
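A hedged sketch of computing that rect (assuming a Bitmap with
width(), height() and get_pixel(); this is not the actual
AnimationWriter code):

    #include <algorithm>
    #include <optional>

    struct IntRect { int x { 0 }; int y { 0 }; int width { 0 }; int height { 0 }; };

    // Compute the bounding rectangle of pixels that differ between two same-sized
    // frames; only this rect (and its offset) is stored for the incremental frame.
    template<typename Bitmap>
    static std::optional<IntRect> changed_rect(Bitmap const& previous, Bitmap const& current)
    {
        int min_x = current.width(), min_y = current.height();
        int max_x = -1, max_y = -1;
        for (int y = 0; y < current.height(); ++y) {
            for (int x = 0; x < current.width(); ++x) {
                if (previous.get_pixel(x, y) == current.get_pixel(x, y))
                    continue;
                min_x = std::min(min_x, x);
                min_y = std::min(min_y, y);
                max_x = std::max(max_x, x);
                max_y = std::max(max_y, y);
            }
        }
        if (max_x < 0)
            return {}; // Identical frames: nothing to write.
        return IntRect { min_x, min_y, max_x - min_x + 1, max_y - min_y + 1 };
    }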
The API I'm adding here is a bit ugly:
* WebPs can only store x/y offsets that are a multiple of 2. This
currently leaks into the AnimationWriter base class.
(Since we potentially have to make a webp frame 1 pixel wider
and higher due to this, it's possible to have a frame that has
<= 256 colors in a gif input but > 256 colors in the webp,
if we do the technique above.)
* Every client writing animations has to have logic to track
previous frames, decide which of the two functions to call, etc.
This also adds an opt-out flag to `animation`, because:
1. Some clients apparently assume the size of the last VP8L
chunk is the size of the image
(see https://github.com/discord/lilliput/issues/159).
2. Having incremental frames is good for filesize and for
playing the animation start-to-end, but it makes it hard
to extract arbitrary frames (have to extract all frames
from start to target frame) -- but this is meant to be a
delivery codec, not an editing codec. It's also more vulnerable to
corrupted bytes in the middle of the file -- but transport
protocols are good these days.
(It'd also be an idea to write a full frame every N frames.)
For https://giphy.com/gifs/XT9HMdwmpHqqOu1f1a (a 184K gif),
output webp size goes from 21M to 11M.
For 7z7c.gif (an 11K gif), output webp size goes from 2.1M to 775K.
(The webp image data still isn't compressed at all.)
Truncating the value is mathematically incorrect; this error made the
conversion to grayscale unstable. In other words, calling `to_grayscale`
on a gray value would return a different value. As an example,
`Color::from_string("#686868ff"sv).to_grayscale()` used to return
#676767ff.
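A minimal sketch of the fix (the coefficients are illustrative, not
necessarily the ones Gfx::Color uses): round the weighted sum to the
nearest integer instead of truncating it, so converting an already-gray
color yields the same value back.

    #include <cmath>
    #include <cstdint>

    static uint8_t to_gray(uint8_t r, uint8_t g, uint8_t b)
    {
        float luma = 0.299f * r + 0.587f * g + 0.114f * b;
        return static_cast<uint8_t>(std::lround(luma)); // previously: static_cast<uint8_t>(luma)
    }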
We are often trying to click the image before it has finished loading.
This results in us trying to click a 0x0 rect. Instead, wait until the
image load event.
This fixes a flake with form-image-submission.html often seen on CI.
Two bugs:
1. Correctly set bits in VP8X header.
Turns out these were set in the wrong order.
2. Correctly set the `has_alpha` flag.
Also add a test for writing webp files with icc data. With the
additional checks in other commits in this PR, this test catches
the bug in WebPWriter.
Rearrange some existing functions to make it easier to write this test:
* Extract encode_bitmap() from get_roundtrip_bitmap().
encode_bitmap() allows passing extra_args that the test uses to pass
in ICC data.
* Extract expect_bitmaps_equal() from test_roundtrip()
If this turns out to be too strict in practice, we can replace it with
a `dbgln("VP8X and VP8L headers disagree about alpha; ignoring VP8X");`
instead.
Also update catdog-alert-13-alpha-used-false.webp to not trigger this.
I had manually changed the VP8L alpha flag at offset 0x2a in
da48238fbd to clear it, but I hadn't changed the VP8X flag.
This changes the byte at offset 0x14 from 0x10 (has_alpha) to 0x00
(no alpha) as well, to match.
Explicit template arguments must be wrapped in parens when they appear
inside macro invocations, else the comma in the template argument list
confuses the preprocessor.
Add the parens instead of avoiding the use of explicit template
arguments.
No behavior change.
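For illustration, a minimal reproduction of the problem (CHECK and
convert are made-up names, not the affected code):

    #define CHECK(expr) ((void)(expr))

    template<typename A, typename B>
    static int convert(int x) { return x; }

    static void example()
    {
        // CHECK(convert<int, float>(42));  // error: macro "CHECK" passed 2 arguments, but takes just 1
        CHECK((convert<int, float>(42)));   // OK: the parens hide the comma from the preprocessor
    }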
Bilevel images are not required to have a BitsPerSample or a
SamplesPerPixel tag; while this is unusual, these images are still
valid.
The test case has been generated by first making a copy of
ccitt3_1d_fill.tiff and then using `tiffset` to remove both tags:

    tiffset -u 258 ccitt3_no_tags.tiff
    tiffset -u 277 ccitt3_no_tags.tiff
The spec isn't _super_ clear on how this is meant to be done, but the
way I understand this is that we should simply clamp the returned
'current value' between 'min' and 'max'.
Firefox does not appear to do this clamping, but Chrome does.
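A minimal sketch of that interpretation (the names are illustrative):

    #include <algorithm>

    static double clamped_current_value(double raw_value, double minimum, double maximum)
    {
        return std::clamp(raw_value, minimum, maximum);
    }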
This is a simple getter and setter of the OscillatorType enum, with
error checking to not allow 'custom', as that should only be changed
through 'setPeriodicWave()'.
This is still missing a bunch of spec steps to construct the
audio node based on the parameters of the OscillatorNode, but it is at
least enough to construct an object to be able to add a basic test which
can get built upon as more is implemented.
Previously, we would apply any adopted style sheet to the document if
its alternate flag was not set. This meant that all adopted style
sheets would be applied, since constructed style sheets never have this
flag set.
...and add a test case that shows why it's incorrect.
If one dimension is 2^n + 1 and the other side is just 1, then the
topmost node will have 2^n x 1 and 1 x 1 children. The first child will
have n levels of children. The 1 x 1 child could end immediately, or it
could require that it also has n levels of (all 1 x 1) children. The
spec isn't clear on which of the two alternatives should happen. We
currently have n levels of 1 x 1 blocks.
This test case shows that a VERIFY we had was incorrect, so remove it.
The alternative implementation is to keep the VERIFY and to add

    if (x_count == 1 && y_count == 1)
        level = 0;

to the top of TagTreeNode::create(). Then we don't have multiple levels
of 1 x 1 nodes, and we need to read fewer bits.
The images in the spec suggest that all nodes should have the same
number of levels, so go with that interpretation for now. Once we can
actually decode images, we'll hopefully see which of the two
interpretations is correct.
(The removed VERIFY() is hit when decoding
Tests/LibGfx/test-inputs/jpeg2000/buggie-gray.jpf in a local branch that
has some image decoding implemented. That file contains a packet with
1x3 code-blocks, which hits this case.)
The main difference was that our implementation was writing
the final line of a series of repeated lines, whereas the
spec says "The second and succeeding copies of repeated adjacent
input lines shall not be written."
Additionally, there was a mistake in the -f flag implementation
causing the number of fields skipped to be one greater than
required.
Flags that rely on counting lines (-c and -d) were
producing results that were off by one. This is fixed
by initializing the `count` variable to 1, which is
consistent with behavior in the main loop, where it
is reset to 1 when lines don't match.
Calls to `read_line` are replaced with `read_line_with_resize`
and `swap`s of StringViews, which assume a consistent location
of the underlying ByteBuffers, are replaced. A test file has
been added for uniq, which includes a test case for long lines.
`ceil_div(-1, 2)` used to return -1.
Now it returns 0, which is the correct ceil(-0.5).
(C++ integer division truncates toward zero, which matches floor for
positive quotients but ceil for negative quotients.)
This will be important for the JPEG2000 decoder eventually.
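A minimal sketch of a sign-correct ceil_div (not necessarily AK's exact
implementation): bump the truncated quotient only when the division is
inexact and the operands have the same sign, i.e. the exact quotient is
positive.

    template<typename T>
    constexpr T ceil_div(T a, T b)
    {
        T result = a / b;
        if ((a % b) != 0 && ((a < 0) == (b < 0)))
            ++result;
        return result;
    }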
Previously, [Global] interfaces were not excluded from the
`internal_own_property_keys()` call. This caused a crash when iterating
over the properties of the Window object.
Fixes a bug where a CSS transform was applied twice to the clip rect:
- While calculating absolute clip rectangles in `refresh_clip_state()`
- While executing the `PushStackingContext` painting command.
The duplicated transform was already removed for PaintableBox; this
change adds the same handling for InlinePaintable.
Previously we would look for a matching ID, and then for a matching
name. If there was an element in the collection which had a matching ID
as well as an element with a matching name, we would always return the
element with a matching ID irrespective of what order that element was
in.
Otherwise, the thread will continue to run and access the media data
buffer, which will have been freed.
The test here is a bit strange, but the issue would only consistently
repro after several GC runs.
Fixes crashing after the following steps:
1. Open https://github.com/SerenityOS/serenity
2. Click on "Pull requests" tab
The problem was a `navigable` null pointer dereference in
`decode_favicon()`; navigable is null because the document was created
by the `parseFromString()` DOMParser API.
With this change we skip fetching initiated by HTMLLinkElement if the
document does not have a browsing context:
- Favicon is not displayed for such documents, so there is no need to
fetch it.
- Stylesheet fetching won't affect such documents because style and
layout do not run for them.
The following command was used to clang-format these files:
clang-format-18 -i $(find . \
-not \( -path "./\.*" -prune \) \
-not \( -path "./Base/*" -prune \) \
-not \( -path "./Build/*" -prune \) \
-not \( -path "./Toolchain/*" -prune \) \
-not \( -path "./Ports/*" -prune \) \
-type f -name "*.cpp" -o -name "*.mm" -o -name "*.h")
There are a couple of weird cases where clang-format now thinks that a
pointer access in an initializer list, e.g. `m_member(ptr->foo)`, is a
lambda return statement, and it puts spaces around the `->`.
Whenever the floating-point values are in integer range, we can use the
various FCVT functions with static rounding mode to perform fast
rounding. I took this opportunity to clean up the architecture
differentiation for these functions, which allows us to use the software
rounding implementation for all extreme and unimplemented cases,
including AArch64.
Also adds more round & trunc tests.
The HTMLLinkElement caller is a bit hairy, so we shove an await() in
there temporarily. This is sure to cause fun times for anyone debugging
task/microtask execution order.
We have to unregister link element stylesheets from the old document's
StyleSheetList when moving them into a new document.
This makes it possible to load GitHub contributor graphs. :^)