We currently have 2 base64 coders: one in AK, another in LibWeb for a
"forgiving" implementation. ECMA-262 has an upcoming proposal which will
require a third implementation.
Instead, let's use the base64 implementation that is used by Node.js and
recommended by the upcoming proposal. It handles forgiving decoding as
well.
Our users of AK's implementation should be fine with the forgiving
implementation. The AK impl originally had naive forgiving behavior, but
that was removed solely for performance reasons.
Using http://mattmahoney.net/dc/enwik8.zip (100MB unzipped) as a test,
performance of our old home-grown implementations vs. the simdutf
implementation (on Linux x64):
                 Encode    Decode
AK base64        0.226s    0.169s
LibWeb base64    N/A       1.244s
simdutf          0.161s    0.047s
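For reference, a minimal sketch of how simdutf's base64 API can be used
in both directions (based on simdutf's documented API; exact signatures
and option parameters may vary between versions):

```cpp
#include <simdutf.h>
#include <string>
#include <vector>

// Encode a byte buffer to base64 using simdutf.
std::string to_base64(std::vector<char> const& bytes)
{
    std::string output(simdutf::base64_length_from_binary(bytes.size()), '\0');
    size_t written = simdutf::binary_to_base64(bytes.data(), bytes.size(), output.data());
    output.resize(written);
    return output;
}

// Decode base64 back to bytes; returns an empty buffer on invalid input.
std::vector<char> from_base64(std::string const& base64)
{
    std::vector<char> output(simdutf::maximal_binary_length_from_base64(base64.data(), base64.size()));
    auto result = simdutf::base64_to_binary(base64.data(), base64.size(), output.data());
    if (result.error != simdutf::error_code::SUCCESS)
        return {};
    output.resize(result.count);
    return output;
}
```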
When traversing the layout tree to find an appropriate box child to
derive the baseline from, only the child's margin and offset were being
applied. Now we sum each offset on the recursive call.
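A minimal sketch of the idea (hypothetical `Box` type, not the actual
LibWeb layout classes): accumulate every box's offset on the way down
instead of only applying the immediate child's offset.

```cpp
#include <optional>
#include <vector>

// Hypothetical minimal box type for illustration only; the real LibWeb
// layout tree types differ.
struct Box {
    std::optional<float> own_baseline; // baseline relative to this box, if any
    float offset_y { 0 };              // offset relative to the parent box
    std::vector<Box> children;
};

// Walk the subtree looking for a baseline, summing each box's offset on
// every recursive call instead of only applying the immediate child's
// margin and offset.
std::optional<float> baseline_of_subtree(Box const& box, float accumulated_offset = 0)
{
    if (box.own_baseline.has_value())
        return accumulated_offset + *box.own_baseline;
    for (auto const& child : box.children) {
        if (auto baseline = baseline_of_subtree(child, accumulated_offset + child.offset_y))
            return baseline;
    }
    return std::nullopt;
}
```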
The spec says to just call the XML serialization algorithm, but that
returns the "outer serialization", and we need the "inner" one. Let's
just concatenate the serializations of the children; the result is then
similar to what Blink and Gecko produce.
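A minimal sketch of the concatenation (hypothetical `Node` type, with a
`serialize_outer` callback standing in for the spec's XML serialization
algorithm):

```cpp
#include <functional>
#include <string>
#include <vector>

// Hypothetical node type for illustration only; the real DOM classes differ.
struct Node {
    std::vector<Node> children;
};

// "Inner" serialization: concatenate the outer (XML) serializations of
// the node's children instead of serializing the node itself.
std::string serialize_inner(Node const& node, std::function<std::string(Node const&)> const& serialize_outer)
{
    std::string result;
    for (auto const& child : node.children)
        result += serialize_outer(child);
    return result;
}
```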
This method puts the given node and all of its sub-tree into a
normalized form. A normalized sub-tree has no empty text nodes and no
adjacent text nodes.
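A minimal sketch of what normalization does (hypothetical tree
representation, not the actual DOM classes):

```cpp
#include <string>
#include <utility>
#include <vector>

// Hypothetical tree representation for illustration only; the real DOM
// classes differ.
struct Node {
    bool is_text { false };
    std::string text;          // only meaningful for text nodes
    std::vector<Node> children;
};

// Normalize a subtree: drop empty text nodes, merge runs of adjacent
// text nodes into one, and recurse into non-text children.
void normalize(Node& node)
{
    std::vector<Node> normalized;
    for (auto& child : node.children) {
        if (child.is_text) {
            if (child.text.empty())
                continue; // drop empty text nodes
            if (!normalized.empty() && normalized.back().is_text) {
                normalized.back().text += child.text; // merge adjacent text nodes
                continue;
            }
        } else {
            normalize(child);
        }
        normalized.push_back(std::move(child));
    }
    node.children = std::move(normalized);
}
```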
Because `nan:arithmetic` and `nan:canonical` aren't bound to a single
bit pattern, we cannot check a float-containing SIMD vector against a
single value in the tests. Now, we represent `v128`s as
`TypedArray`s in `testjs` (as opposed to using `BigInt`s), allowing us
to properly check `NaN` bit patterns.
Some spec-tests check the bit pattern of a returned `NaN` (i.e.
`nan:canonical`, `nan:arithmetic`, or something like `nan:0x200000`).
Previously, we just accepted any `NaN`.
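For f32 values, the bit patterns can now be checked directly from the
lane bytes. A minimal sketch of the checks, following the definitions in
the Wasm spec (f64 is analogous with 64-bit masks):

```cpp
#include <bit>
#include <cstdint>

// NaN classification for f32 lanes, following the Wasm spec definitions.
// The sign bit is ignored in both checks.
bool is_canonical_nan(float value)
{
    auto bits = std::bit_cast<uint32_t>(value);
    // Exponent all ones and only the most significant payload bit set.
    return (bits & 0x7fffffff) == 0x7fc00000;
}

bool is_arithmetic_nan(float value)
{
    auto bits = std::bit_cast<uint32_t>(value);
    // Exponent all ones and the most significant payload bit set; the
    // remaining payload bits may be anything.
    return (bits & 0x7fc00000) == 0x7fc00000;
}
```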
Previously the input element was displayed with value 0 when no value
was set in the HTML. Now it uses `value_sanitization_algorithm()`, which
calculates the default value.
In `value_sanitization_algorithm()` there was a logical mistake/typo.
The comment from the spec says "unless the maximum is less than the
minimum".
The added layout test would fail without the code changes.
Fixes #520
When the `min` option is given, the read is only fulfilled once there
are `min` or more elements available in the readable byte stream.
When the `min` option is not given, it defaults to 1.
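A minimal sketch of the fulfillment rule (hypothetical names, not the
actual Streams implementation):

```cpp
#include <cstddef>

// Hypothetical read-into bookkeeping, not the actual Streams implementation.
struct ReadIntoRequest {
    size_t bytes_filled { 0 };
    size_t element_size { 1 };
    size_t minimum_elements { 1 }; // the `min` option; defaults to 1
};

// A read(view, { min }) request is only fulfilled once at least `min`
// elements have been written into the destination view.
bool can_fulfill(ReadIntoRequest const& request)
{
    return request.bytes_filled >= request.minimum_elements * request.element_size;
}
```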
The first time Document learns its viewport size, we now suppress firing
of the resize event.
This fixes an issue on multiple websites that were not expecting resize
events to fire so early in the loading process.
Previously, setting CSS `line-height: 0` on an `input` element would
result in no text being displayed.
Other browsers handle this by setting the minimum height to the
"normal" value for single-line inputs.
From https://html.spec.whatwg.org/multipage/scripting.html#script-processing-model:
When a script element el that is not parser-inserted experiences one
of the events listed in the following list, the user agent must
immediately prepare the script element el:
- [...]
- The script element is connected and has a src attribute set where
previously the element had no such attribute.
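A minimal sketch of that rule (hypothetical types and flags, not the
actual LibWeb script element):

```cpp
// Hypothetical types and flags, not the actual LibWeb script element.
struct ScriptElement {
    bool parser_inserted { false };
    bool connected { false };
    bool has_src { false };

    void prepare() { /* run the "prepare the script element" algorithm */ }

    void set_src_attribute()
    {
        bool had_src_before = has_src;
        has_src = true;
        // Per the rule quoted above: a connected, non-parser-inserted
        // script element whose src attribute is set where it previously
        // had none must be prepared immediately.
        if (!parser_inserted && connected && !had_src_before)
            prepare();
    }
};
```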
Previously, when `WindowOrWorkerGlobalScope.reportError()` was called,
the `filename` property of the dispatched error event was blank. It is
now populated with the full path of the active script.
This commit replaces all TLS connection code with wolfssl.
The certificate parsing code has to remain for now, as wolfssl does not
seem to have any exposed API for that.
Like 1132c858e9, out-of-flow elements such as floats would get inserted
into block-level `::before` and `::after` pseudo-element nodes when they
should instead be inserted as siblings of the pseudo-element. This
change fixes that.
This fixes a few layout issues on the Swedish tax agency website
(skatteverket.se). :^)
This makes it so that `rebaseline-libweb-test` can be run on *nix or
macOS: the path to `headless-browser` is
`./bin/Ladybird.app/Contents/MacOS/headless-browser` on macOS, while it
is `./bin/headless-browser` on *nix.
Following the rules in the algorithm from
https://webidl.spec.whatwg.org/#js-platform-objects, "To Internally
create a new object implementing the interface interface", this change
incorporates the steps to load a prototype from new.target, and write
it to the created instance returned from constructor_impl(). This
mirrors the code for generate_html_constructor(), which incorporates
additional steps needed by Custom Elements.
Bug #334
This is a non-standard API that other browsers implement, which
highlights matching text in the current window.
This is just a thin wrapper around our find-in-page functionality; the
main motivation for adding this API is that it allows us to write tests
for our find-in-page implementation.
The changes to tests are due to LibTimeZone incorrectly interpreting
time stamps in the TZDB. The TZDB will list zone transitions in either
UTC or the zone's local time (which is then subject to DST offsets).
LibTimeZone did not handle the latter at all.
For example:
The following rule is in effect until November 18, 6PM UTC.
America/Chicago -5:50:36 - LMT 1883 Nov 18 18:00u
The following rule is in effect until March 1, 2AM in Chicago time. But
at that time, a DST transition occurs, so the local time is actually
3AM.
America/Chicago -6:00 Chicago C%sT 1936 Mar 1 2:00
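A minimal sketch of the conversion this implies (hypothetical types; the
real LibTimeZone structures differ): a timestamp with a 'u' suffix is
already in UTC, while a wall-clock timestamp must have both the zone's
UTC offset and any DST savings in effect subtracted.

```cpp
#include <cstdint>

// Hypothetical representation of a parsed TZDB "until" timestamp.
struct ZoneUntil {
    int64_t seconds { 0 }; // the raw "until" value as written in the TZDB
    bool is_utc { false }; // true if the timestamp carried a 'u' suffix
};

// Convert an "until" timestamp to UTC. standard_offset_seconds is the
// zone's UTC offset (e.g. -6 * 3600 for America/Chicago) and
// dst_offset_seconds is the DST savings in effect at that wall-clock
// time (e.g. 3600 during DST, 0 otherwise).
int64_t until_in_utc(ZoneUntil const& until, int64_t standard_offset_seconds, int64_t dst_offset_seconds)
{
    if (until.is_utc)
        return until.seconds;
    // Wall-clock time = UTC + standard offset + DST savings.
    return until.seconds - standard_offset_seconds - dst_offset_seconds;
}
```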
This required updating some LibJS spec steps to their latest versions,
as the data expected by the old steps does not quite match the APIs
available in ICU. The new spec steps are much more closely aligned with
those APIs.
Prior to this commit, the test runner was ignoring the result, which
meant that values that fit in less than 128 bits were shifted to remove
the zero words (which is obviously incorrect).
H.264 in Matroska can have blocks with unordered timestamps. Without
passing these as the presentation timestamp into the FFmpeg decoder,
the frames will not be returned in chronological order.
VideoFrame will now include a timestamp that is used by the
PlaybackManager, rather than the PlaybackManager assuming it is the same
as the timestamp returned by the demuxer.
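A minimal sketch of the idea using FFmpeg's decode API (not the actual
decoder code; error handling reduced to returning FFmpeg's error code):

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
}

// Feed one packet to the decoder with the container's presentation
// timestamp attached, so FFmpeg can resolve B-frame reordering, then
// read the timestamp back off the decoded frame instead of reusing the
// demuxer's block order. Returns 0 on success or a negative FFmpeg error.
int decode_one(AVCodecContext* codec_context, AVPacket* packet, AVFrame* frame, int64_t presentation_timestamp)
{
    packet->pts = presentation_timestamp; // timestamp from the Matroska block
    int result = avcodec_send_packet(codec_context, packet);
    if (result < 0)
        return result;
    result = avcodec_receive_frame(codec_context, frame);
    if (result < 0)
        return result;
    // frame->pts is now in presentation order, even though the input
    // blocks arrived in decode order.
    return 0;
}
```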
LibLocale was split off from LibUnicode a couple years ago to reduce the
number of applications on SerenityOS that depend on CLDR data. Now that
we use ICU, both LibUnicode and LibLocale are actually linking in this
data. And since vcpkg gives us static libraries, both libraries are over
30MB in size.
This patch reverts the separation and merges LibLocale into LibUnicode
again. We now have just one library that includes the ICU data.
Further, this will let LibUnicode share the locale cache that previously
would only exist in LibLocale.
If a flex item has a preferred aspect ratio and the flex basis is not
definite, we were falling back to using stretch-fit for the main size,
since that appeared to match other browsers.
However, we missed the case where we actually have a definite cross size
through which the preferred aspect ratio can be naturally resolved.
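A rough sketch of the resulting priority (hypothetical helper; assumes a
row flex container where the main size is the width and the preferred
aspect ratio is width / height):

```cpp
#include <optional>

// Resolve the main size through the aspect ratio when the cross size is
// definite, and only fall back to stretch-fit otherwise.
double main_size_for_item(std::optional<double> definite_cross_size, double preferred_aspect_ratio, double stretch_fit_main_size)
{
    if (definite_cross_size.has_value())
        return *definite_cross_size * preferred_aspect_ratio;
    return stretch_fit_main_size;
}
```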
To get away from the ancient (and buggy) text layout code in
Gfx::Painter, we want to remove all the uses of Painter::draw_text().
As a first step towards this, we implement the draw_text() display list
command in terms of the draw_text_run() command by performing the very
simple necessary layout in draw_text() beforehand.
Our current segmenter implementation lives in LibUnicode, and is not
locale-aware. We will need such awareness for ECMA-402, and so LibLocale
will be the new home for text segmentation.
The tests here are ported directly from LibUnicode/TestSegmentation.cpp.
There are a couple of differences here due to using ICU:
1. Titlecasing behaves slightly differently. We previously transformed
"123dollars" to "123Dollars", as we would use word segmentation to
split a string into words, then transform the first cased character
to titlecase. ICU doesn't go quite that far, and leaves the string
as "123dollars". While this is a behavior change, the only user of
this API is the `text-transform: capitalize;` CSS rule, and we now
match the behavior of other browsers.
2. There isn't an API to compare strings with case insensitivity without
allocating case-folded strings for both the left- and right-hand-side
strings. Our implementation was previously allocation-free; however,
in a benchmark, ICU is still ~1.4x faster.
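A minimal sketch of the allocating approach described in point 2, using
ICU's `UnicodeString` (not the actual LibUnicode code):

```cpp
#include <unicode/unistr.h>

// Compare two strings case-insensitively by allocating case-folded
// copies of both sides, as described above.
bool equals_ignoring_case(icu::UnicodeString const& lhs, icu::UnicodeString const& rhs)
{
    icu::UnicodeString folded_lhs(lhs);
    icu::UnicodeString folded_rhs(rhs);
    folded_lhs.foldCase();
    folded_rhs.foldCase();
    return folded_lhs == folded_rhs;
}
```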
Add the missing `print` function to the spectest namespace. Also, spec
externs cannot be re-used because operations that modify "memory", for
example, will carry over into subsequent spec test runs. This can be
remedied in the future by implementing some sort of garbage collector
for allocations in the store.