Checking if CSSPixels contains a finite value no longer makes sense
after converting to fixed-point arithmetic. Instead, there should be an
assertion that the used value is not saturated.
This patch implements "react to changes in the environment" from the
HTML spec and hooks HTMLImageElement up with viewport rect change
notifications (from the browsing context).
This fixes the issue where we'd load a low-resolution image and not
switch to a high-resolution image after resizing the window.
Note that we currently can't resolve calc() values without a layout
node, so when normalizing an image's source set, we'll flush any pending
layout updates and hope that gives us an up-to-date layout node.
I've left a FIXME about implementing this in a more elegant and less
layout-thrashy way, as that will require more architectural work.
A small workaround is needed here as <stop> elements don't create a
layout node, so we can't get the current color from value->to_color().
This fixes the gradients in the Atlassian logo and icons.
Because the "this" value cannot change during a function's execution, it
is safe to compute it once and then reuse it for subsequent accesses.
This optimization makes ai-astar.js run 8% faster.
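A minimal sketch of the caching idea, with hypothetical names (the
actual LibJS code is organized differently):

// `this` cannot change while the function executes, so resolving it at
// most once per call and reusing the cached value is safe.
Value Interpreter::this_value()
{
    if (!m_cached_this_value.has_value())
        m_cached_this_value = resolve_this_binding(); // hypothetical: full lookup
    return *m_cached_this_value;
}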
The LocaleData generator currently stores vectors of unique instances of
CLDR data (e.g. languages, currencies, etc.). For each CLDR file that we
parse, we linearly search through those vectors to decide if the current
field being parsed is unique. Given the size of the CLDR, this adds up
to quite a bit of time.
Augment these vectors with a hash map to store the index of each unique
instance in those vectors. This lets us quickly check whether a field
is unique and later look up those indices.
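Roughly, the pattern looks like this (a simplified sketch with
hypothetical names; the real generator stores more than plain strings):

// The vector keeps unique values in insertion order, while the hash map
// answers "have we seen this value, and at which index?" in O(1)
// instead of a linear scan.
size_t ensure_unique(Vector<String>& values,
    HashMap<String, size_t>& value_indices, String const& value)
{
    if (auto existing = value_indices.get(value); existing.has_value())
        return *existing;
    values.append(value);
    value_indices.set(value, values.size() - 1);
    return values.size() - 1;
}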
We do not apply this technique to every bit of CLDR data here. For
example, CLDR::character_orders contains only 2 entries. In that case,
it is quicker to search the vector than it is to hash a string key.
This reduces the runtime of GenerateLocaleData from 2.03s to 1.09s.
Similar to languages and currencies, extract the loop to collect the
unique set of date fields to a preprocessing function. This alone does
not yield any performance improvement, but combined with an upcoming
patch will make parse_locale_date_fields() a bit faster.
We currently parse each CLDR calendar, then decide based on its primary
key whether we want to skip it. Instead, we can decide to skip it based
on its file name.
This reduces the runtime of GenerateLocaleData from 2.03s to 1.97s.
The LocaleData generator has to read a few of the CLDR files more than
once, to e.g. prepare some data up front (for reasons why, see commits
c86f7a6 and 0b69e9f). This takes non-negligible time, especially for large
JSON files such as currencies.json. So in these cases, cache the parsed
JSON in a map.
This reduces the runtime of GenerateLocaleData from 2.32s to 2.03s.
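Conceptually, the caching looks like this (a simplified sketch;
parse_json_file and the types are placeholders, not the generator's
actual code):

// Parse each JSON file at most once and hand out the cached result on
// subsequent reads.
static HashMap<String, JsonValue> s_parsed_files;

JsonValue const& read_json_file(String const& path)
{
    if (auto it = s_parsed_files.find(path); it != s_parsed_files.end())
        return it->value;
    auto json = parse_json_file(path); // hypothetical helper: read + parse
    s_parsed_files.set(path, move(json));
    return s_parsed_files.find(path)->value;
}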
Since this is the block size that file system drivers *should* set,
let's name it the logical block size, just like most file systems such
as ext2 already do anyway.
This was never a logical block size; it was always a device-specific
block size. Ideally the block size would change in accordance with
whatever the driver wants to use, but that is a change for the future.
For now, let's get rid of this confusing naming.
This also makes it easier to understand and reference where these
(sometimes rather arbitrary) calculations come from.
This also fixes a bug where group_index_from_block_index assumed 1KiB
blocks.
Allow the left margin of a box which creates a block formatting context
to overlap with left floating boxes which are siblings in the document
tree.
Fixes #20233 and the comment layout on https://lobste.rs.
Required by Atlassian to continue with their authorization process.
Also used by the SerenityOS FAQ redirect on the website, the Bootstrap
documentation for going to older versions from the dropdown and
likely several other sites.
This change makes tree builder omit elements with "display: contents"
from the layout tree during construction. Their child elements are
instead directly appended to the parent element in the layout tree.
These were used to generate specialized tables. Now that those tables
have been migrated to general 2-stage lookup tables, these fields are
all unused.
Similar to commit 0652cc4, we now generate 2-stage lookup tables for
case conversion information. Only about 1500 code points are actually
cased. This means that case information is rather highly compressible,
as the blocks we break the code points into will generally all have no
casing information at all.
In total, this change:
* Does not change the size of libunicode.so (which is nice because,
generally, the 2-stage lookup tables are expected to trade a bit
of size for performance).
* Reduces the runtime of the new benchmark test case added here from
1.383s to 1.127s (about an 18.5% improvement).
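For reference, the lookup itself is a two-step index. This is a
simplified sketch with made-up table names and block size; the generated
code differs:

// Stage 1 maps a code point's block to a stage-2 block index; stage 2
// holds one entry per code point within that block. Since most blocks
// contain no cased code points, they can all share a single zero-filled
// stage-2 block, which is what makes the tables so compressible.
static constexpr u32 BLOCK_SIZE = 256;

CasingTable const& casing_for_code_point(u32 code_point)
{
    auto block = code_point / BLOCK_SIZE;
    auto offset = code_point % BLOCK_SIZE;
    return s_casing_stage2[s_casing_stage1[block] * BLOCK_SIZE + offset];
}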
There is no functional change here. This information will make up the
multistage casing tables introduced in an upcoming patch. Extract it to
its own struct to prepare for that.
There is no functional change here. This just adjusts the changes made
in commit 0652cc4 to be a bit more generic for code point casing tables.
We currently only generate property tables, which boil down to a vector
of booleans. Casing tables will be a struct of varying types, so this
generalizes some of the generator to prepare for that ahead of time, to
make the upcoming casing patch smaller / easier to grok.
When generating code point property tables, we currently binary search
the code point range lists for each property to decide if a code point
has that property. However, we are both iterating over the code points
and through the sorted properties in order. This means we do not need
to search code point ranges that are below the current code point at
all. We can even remove the code point ranges that fall below the
current code point, as we will not see a code point in those ranges
again.
On my machine, this reduces the run time of GenerateUnicodeData from
3.4 seconds to 1.2 seconds.
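The idea in simplified form (hypothetical types; the real generator
advances an index rather than erasing from the front of a Vector):

// Both the code points and the ranges are visited in ascending order,
// so a range that ends below the current code point can never match
// again and can be dropped instead of being searched repeatedly.
bool has_property(Vector<CodePointRange>& remaining_ranges, u32 code_point)
{
    while (!remaining_ranges.is_empty() && remaining_ranges.first().last < code_point)
        remaining_ranges.take_first();
    return !remaining_ranges.is_empty() && remaining_ranges.first().first <= code_point;
}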
Updated the version of Cave Story that is pulled from my repo.
The original port was missing game files that would've been extracted
on first boot, such as .pbm files and some .pxt files.
DisplaySettings uses the optional `screen_dpi` value before checking
if it is set, causing an assertion failure. This change moves the
usage into the block where it is known to be set.
One situation where this is known to occur is on real hardware when
using the MULTIBOOT_VIDEO_MODE multiboot flag to enable graphical
display output.
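The fix boils down to the usual Optional pattern (a simplified sketch,
not the exact DisplaySettings code):

if (screen_dpi.has_value()) {
    // Only read the value inside the branch where it is known to be set.
    auto dpi = screen_dpi.value();
    // ... use dpi ...
}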
Before this change, we would process each image as it finished
downloading. This often led to a situation where we'd decode 1 image,
schedule a layout, do the layout, then decode another image, schedule
a layout, do the layout, etc. Basically decoding and layouts would get
interleaved even though we had multiple images fetched and ready for
decoding.
This patch adds a simple BatchingDispatcher thingy that HTMLImageElement
uses to batch the handling of successful fetches.
With this, the number of layouts while loading https://shopify.com/ goes
from 48 to 6, and the page loads noticeably faster. :^)
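A rough sketch of the batching idea (illustrative only, not the actual
BatchingDispatcher API):

// Callbacks queued during one event-loop turn are flushed together from
// a single deferred invocation, so several finished image fetches lead
// to only one round of decoding and layout invalidation.
class BatchingDispatcher {
public:
    void enqueue(Function<void()> callback)
    {
        m_queue.append(move(callback));
        if (m_queue.size() == 1)
            Core::deferred_invoke([this] { flush(); });
    }

private:
    void flush()
    {
        auto batch = move(m_queue);
        for (auto& callback : batch)
            callback();
    }

    Vector<Function<void()>> m_queue;
};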
NetworkSettings normally filters out `loop` when populating its list of
adapters. However, when checking whether there are any adapters, it did
not take this into account. This causes it to crash later when trying
to set the selected index of an empty combo box.
This moves the check for no adapters to after the list is filtered, so
that it shows the error message and exits.
The idl file lists are used for two things:
1. As inputs for `generate_window_or_worker_interfaces`
2. In a loop in `generate_idl_bindings` and the loop variable
is passed to `rebase_path`
Both these cases can handle a normal fully-qualified GN path,
so there's no need for the "abspath".
No behavior change.
The conversion to AK::Stream makes everything much more straightforward
and understandable, because we remove the custom reader we had. Since
AK::Stream is better tested, the code should now be more robust against
bugs as well.
Due to overload resolution rules, this simple code provokes a crash:
ReadonlyBytes readonly_bytes{};
FixedMemoryStream stream{readonly_bytes};
ReadonlyBytes give_them_back{stream.bytes()};
// -> Panics on VERIFY(m_writing_enabled);
// but this is fine:
auto bytes = static_cast<FixedMemoryStream const&>(stream).bytes();
If we need to be explicit about it, let's rename the overload instead of
adding that `static_cast`.
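One possible shape for that rename (hypothetical names, assuming the
backing storage is a writable Bytes span; only meant to illustrate the
intent):

// The const accessor keeps the plain name, so it is chosen without a
// cast; the writable accessor gets an explicit name and keeps the
// VERIFY.
ReadonlyBytes bytes() const { return m_bytes; }

Bytes bytes_for_writing()
{
    VERIFY(m_writing_enabled);
    return m_bytes;
}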
Currently, the `isobmff` utility will only print the media file type
info from the FileTypeBox (major brand and compatible brands), as well
as the names and sizes of top-level boxes.
Transforms are a paint-level concept for us, so we should be okay to
only update the stacking context tree and repaint.
This makes a lot of CSS animations use way less CPU.
For malformed tables which only have cells with span greater than 1, the
content sizes for row and column aren't initialized to non-zero values.
Avoid undefined behavior in such cases, which sometimes show up on
Wikipedia.
This disables running benchmark test cases on Lagom in CI. When we run
unit tests on-board Serenity, run-tests sets this environment variable.
We do not use that utility to run unit tests on Lagom, so we've been
running benchmark tests unnecessarily.
Previously, this was reimplementing the same thing by removing all the
document text and then inserting the new text - which internally would
insert each code-point individually and fire change notifications for
each one. This made the "Reformat GML" button very slow, since it not
only had to recalculate the visual lines of the document each time, but
also rebuild the preview GUI.
The reason not to use `set_text()` is that it would throw away the undo
stack, since it always behaved as if the text were a new document. So,
let's add a parameter to disable that behaviour.
This takes the time for reformatting a ~200 line GML file from several
seconds, to basically instantaneous. :^)
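The added hook might look roughly like this (hypothetical parameter
name; the real signature may differ):

// Opting out of the "treat this as a brand new document" behaviour lets
// a full-text replacement keep the existing undo stack.
enum class AllowClearUndoStack {
    Yes,
    No,
};

void set_text(StringView text,
    AllowClearUndoStack allow_clear = AllowClearUndoStack::Yes);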
When markdown-check is built, it outputs hundreds of lines of "ignoring
this and that link because reasons". This is extremely unhelpful when
trying to figure out exactly which check failed on your commit. Also
remove the timing numbers from lint-ci.sh; these are just noise and also
don't help to figure out which pre-commit check failed. Ideally the
output on fail should be "[OK]: Check A" for all the passing checks and
"[FAIL] Check N" with the required context for the failed check.