The context can vary for every bit we read.
This does not affect the one use in the test which reuses the same
context for all bits, but it is necessary for future changes.
The API value of a <textarea> element is its raw value with normalized
newlines. This should be used in a couple of places where we currently
use the raw value.
Patch up existing style properties instead of using the regular style
invalidation path, which requires rule matching for each element in the
invalidated subtree.
- !important properties: this change introduces a flag used to skip the
update of animated properties overridden by !important.
- inherited animated properties: for now, these are invalidated by
traversing the animated element's subtree to propagate the update.
- StyleProperties has a separate array for animated properties that
allows the removal of animated properties after an animation has ended,
without requiring full style invalidation (see the sketch below).
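A minimal sketch of the shape of that storage (member names and the
number_of_properties constant are illustrative, not the exact LibWeb
declarations):

    Array<RefPtr<StyleValue const>, number_of_properties> m_property_values;
    Array<RefPtr<StyleValue const>, number_of_properties> m_animated_property_values;

    RefPtr<StyleValue const> property(PropertyID id) const
    {
        // Animated values win, unless the cascaded value is marked
        // !important (tracked by the flag mentioned above).
        if (auto const& animated = m_animated_property_values[to_underlying(id)])
            return animated;
        return m_property_values[to_underlying(id)];
    }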
Rather than adding a bunch of `get_*_from_mime_type` functions, add just
one to get the Core::MimeType instance. We will need multiple fields at
once in Browser.
I think the context normally changes for every bit. But this here
is enough to correctly decode the test bitstream in Annex H.2 in
the spec, which seems like a good checkpoint.
The internals of the decoder use spec naming, to make the code
look virtually identical to what's in the spec. (Even so, I managed
to put in several typos that took a while to track down.)
The behavior of Crypto::UnsignedBigInt::export_data unexpectedly
does not actually remove leading zero bytes when the corresponding
parameter is passed. The caller must manually adjust for the location
of the zero bytes.
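A sketch of the caller-side adjustment this forces (the parameter name
is an assumption):

    // export_data() reports how many bytes it wrote, but leading zero
    // bytes may still sit at the front of the output, so skip past
    // them manually.
    auto written = integer.export_data(buffer, /* remove_leading_zeros= */ true);
    size_t offset = 0;
    while (offset < written && buffer[offset] == 0)
        ++offset;
    auto significant_bytes = buffer.slice(offset, written - offset);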
EXTTEMPLATE=1 was added later and doesn't seem to be used much in
practice -- it doesn't appear in any simple generic region in any PDF
I tested so far, at least. Since the spec contradicts itself on what
to do with these as far as I can tell, error out on them for now and
then add support once we find actual files using this, so that we can
check our implementation actually works.
Deduplicate the data reading for the different cases, and
zero-initialize all adaptive template pixels to make that possible.
Other than prohibiting EXTTEMPLATE=1, no behavior change.
By following the spec more closely, we can actually make this function
a bit more efficient (by comparing the parent against the document
instead of looking for the first element child of the document).
If a selector must match a pseudo element, or must match the root
element, we now cache that information in the MatchingRule struct.
We also introduce separate buckets for these rules, so we can avoid
running them altogether if the current element can't possibly match.
This cuts the number of selectors evaluated by 32% when loading our
GitHub repo page https://github.com/SerenityOS/serenity
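A sketch of the bucketing (member and helper names are illustrative):

    Vector<MatchingRule> m_other_rules;
    Vector<MatchingRule> m_pseudo_element_rules; // must match a pseudo element
    Vector<MatchingRule> m_root_rules;           // must match the root element

    // During rule collection, whole buckets are skipped up front when
    // the current element can't possibly match them:
    collect_from(m_other_rules);
    if (pseudo_element.has_value())
        collect_from(m_pseudo_element_rules);
    if (element.is_document_element())
        collect_from(m_root_rules);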
We frequently end up matching hundreds or even thousands of rules. By
giving this vector some inline capacity, we avoid a lot of the
repetitive churn from dynamically growing it all the way from 0
capacity.
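For reference, AK::Vector takes an inline capacity as its second
template parameter (the 32 below is illustrative, not necessarily the
value chosen here):

    // The first 32 entries live inside the vector object itself, so
    // the common case never reallocates from zero capacity.
    Vector<MatchingRule, 32> matching_rules;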
This is required to upload files to GitHub. Unfortunately, this is not
currently testable with our test infrastructure. This path is only hit
from HTTP/S uploads, whereas all of our tests are limited to file://.
We were unconditionally creating new File objects for all Blob-type
values passed to `FormData.append`. We should only do so if the value is
not already a File object (or if the `filename` attribute is present).
We must also carry MIME type information forward from the underlying
Blob object.
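A sketch of the intended logic, paraphrasing the FormData.append() spec
steps (create_file() is an assumed helper, not the actual API):

    // Only wrap the value in a new File if it isn't one already, or
    // if an explicit filename was given; either way the new File
    // carries the underlying Blob's MIME type forward.
    if (!is<FileAPI::File>(*blob) || filename.has_value()) {
        auto name = filename.value_or("blob"_string);
        blob = TRY(create_file(realm, name, *blob));
    }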
This does not implement any of the IDL methods, but GitHub requires that
the interface exists to upload files via an <input type="file"> element.
Their JS handles uploads via this element and via drag-and-drop in one
function, and checks if the uploaded file is `instanceof DataTransfer`
to decide how to handle it.
The memmem() call passes `data.size() - 19 - sizeof(u32)` for big_len
(18 prefix bytes skipped, the flag byte, and the trailing u32), so the
buffer needs to be at least that large.
Should fix https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=67332
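A sketch of the added guard (surrounding names are illustrative):

    // data.size() - 19 - sizeof(u32) underflows for short buffers,
    // so reject those before computing big_len.
    if (data.size() < 19 + sizeof(u32))
        return Error::from_string_literal("buffer too small");
    size_t big_len = data.size() - 19 - sizeof(u32);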
Whether we mount from a loop device or another source, the user might
want to obfuscate the given source for security reasons, so this option
ensures that happens.
If passed during a mount, the source will be hidden when reading from
the /sys/kernel/df node.
This patch implements and tests window.crypto.subtle.generateKey with
an RSA-OAEP algorithm. In order for the types to be happy, the
KeyAlgorithms objects are moved to their own .h/.cpp pair, and the new
KeyAlgorithms for RSA are added there.
This patch throws away some of the spec suggestions for how to implement
the normalize_algorithm AO and uses a new pattern that we can actually
extend in our C++.
Also update CryptoKey to store the key data.
This causes a behavior change in which the read FD is now non-blocking.
This is intentional, as this change avoids a deadlock between RS and
WebContent, where WC could block while reading from the request FD,
while RS is blocked sending a message to WC.
The underlying issue here isn't quite understood yet, but for some
reason, when we defer this connection, we ultimately end up blocking
indefinitely on macOS when a subsequent StartRequest message tries to
send its request FD over IPC. We should continue investigating that
issue, but for now, this lets us use RequestServer more reliably on
macOS.
This was copying the vector behind our backs; let's remove it and make
the copying explicit by putting it behind COWVector::mutable_at().
This is a further 64% performance improvement on Wasm validation.
These vector copies accounted for more than 50% of the current runtime
of the validator on a large wasm file, this commit makes them
copy-on-write to avoid the copies where possible, gaining nearly a 50%
speedup.
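A minimal sketch of the copy-on-write shape (illustrative, not the
exact LibWasm class):

    template<typename T>
    class COWVector {
    public:
        T const& at(size_t index) const { return m_storage->data.at(index); }

        T& mutable_at(size_t index)
        {
            // Only deep-copy when someone else still shares the storage.
            if (m_storage->ref_count() > 1)
                m_storage = adopt_ref(*new Storage(m_storage->data));
            return m_storage->data.at(index);
        }

    private:
        struct Storage : public RefCounted<Storage> {
            explicit Storage(Vector<T> items)
                : data(move(items))
            {
            }
            Vector<T> data;
        };
        NonnullRefPtr<Storage> m_storage { adopt_ref(*new Storage({})) };
    };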
When inserting a node into a parent, any live DOM ranges that reference
the parent may need to be updated. The spec does this by increasing or
decreasing the start/end offsets of each live range *before* actually
performing the insertion.
This caused us to crash with a verification failure, since it was
possible to set the range offset to an invalid value (that would go on
to immediately become valid after the insertion was finished).
This patch fixes the issue by adding special badged helpers on Range for
Node to reach into it and increase/decrease the offsets during node
insertion. This skips the offset validity check and actually makes our
code read slightly more like the spec.
Found by Domato :^)
According to the WebIDL specification, any leading underscores in
identifier names are ignored. A leading underscore is used when an
identifier would otherwise be a reserved word (for example, an
operation named `_delete` is exposed as `delete`).
Rather than try to lay out masks normally, this updates the TreeBuilder
to create layout nodes for masks as a child of their user (i.e. the
masked element). This allows each use of a mask to be laid out
differently, which makes supporting `maskContentUnits=objectBoundingBox`
fairly easy.
The `SVGFormattingContext` is then updated to lay out masks last (as
their sizing may depend on their parent), and treats them like
viewports.
This is pretty ad-hoc, but the SVG specification does not give any
guidance on how to actually implement this.
When navigating to about:srcdoc, we try to populate the session history
by calling populate_session_history_entry_document. If the resource is
an about:srcdoc, this method relies on the document_resource in the
SessionHistoryEntry holding a String in order to call the right method
to create the navigation params.
However, when navigating to about:srcdoc directly, document_resource
holds an Empty, which leads populate_session_history_entry_document to
call the wrong method to create the navigation params.
This fixes the issue by populating document_resource with an empty
string if it holds an Empty and we're dealing with an about:srcdoc.
Issue: #23216
Instead of crashing with a TODO() on half of the test cases generated by
Domato, let's just return a zeroed-out SVGAnimatedLength or
SVGAnimatedNumber from getters that return them.
We'll eventually have to implement these correctly, but crashing is not
productive since it blocks us from finding other issues.
The loop that was supposed to check the chain of previous or next
siblings had a logic mistake where it would never traverse the chain,
so we would get stuck looking at the immediate sibling forever.
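The shape of the mistake, sketched with illustrative names:

    auto* sibling = element.previous_sibling();
    while (sibling) {
        if (matches(*sibling))
            return true;
        // Bug: this re-reads the immediate sibling instead of
        // advancing; it should be sibling->previous_sibling().
        sibling = element.previous_sibling();
    }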
After removing an iframe from the DOM, its contentWindow will be
detached from its browsing context, per spec.
Because the contentWindow is still accessible, we cannot assume that
Window objects always have an associated browsing context.
This needs to be fixed in the spec, but let's add a sensible null check
in the meantime.
Since SVG gradients can reference each other, we have to keep track of
visited gradients when traversing the link chain, or we will recurse
infinitely when there's a reference cycle.
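A sketch of the cycle guard (linked_gradient() is an assumed accessor
for the xlink:href target):

    HashTable<SVGGradientElement const*> visited;
    auto const* current = this;
    while (auto const* next = current->linked_gradient()) {
        visited.set(current);
        if (visited.contains(next))
            break; // reference cycle; stop instead of recursing forever
        current = next;
    }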
Instead of creating a generic Layout::Box, make a BlockContainer. This
allows them to be laid out by BFC, which is better than nothing(?),
even if it's not going to be correct at all.
Normally, assigning to e.g document.body.onload will forward to
window.onload. However, in a detached DOM tree, there is no associated
window, so we have nowhere to forward to, making this a no-op.
The bulk of this change is making Document::window() return a nullable
pointer, as documents created by DOMParser or DOMImplementation do not
have an associated window object, and so must be able to return null
from here.
That's not actually a DOM invariant, just something the HTML parser
refuses to build. You can still construct table-less th and td elements
using the DOM API.
If the "MMR" bit is set, the generic region decoding procedure just
uses ITU-T T.6 (2D CCITT), which we already have an implementation of.
In practice, this is used almost never in .jbig2 files and in none of
the PDFs I have.
The two files that do use MMR are:
1) JBIG2_ConformanceData-A20180829/F01_200_TT9.jb2
2) 042_3.jb2 from the ghostpdl tests
The first uses an immediate _lossless_ generic region, which we don't
implement yet (but I think it should just forward to the normal
immediate generic region code? Not in this commit, though). The second
uses a regular immediate generic region, and we can decode it now:
Build/lagom/bin/image -o out.png \
path/to/ghostpdl/tests/jbig2/042_3.jb2
While IMO the change makes sense on its own, since flattening this
function would just duplicate the code from `draw_glyph()` with no
benefit, that is not what motivated this patch.
When compiling with debug information and ASAN, GCC 13.2 would issue
this warning:
Userland/Libraries/LibGfx/Painter.cpp: In member function ‘void Gfx::Painter::draw_glyph(Gfx::FloatPoint, u32, Gfx::Color)’:
Userland/Libraries/LibGfx/Painter.cpp:1354:14: note: variable tracking size limit exceeded with ‘-fvar-tracking-assignments’, retrying without
 1354 | FLATTEN void Painter::draw_glyph(FloatPoint point, u32 code_point, Color color)
      |              ^~~~~~~
From what I've read online, this is caused by some limit on the number
of symbols in the compiler's internal data structures. People at Google
have fixed this warning by splitting functions:
https://codereview.chromium.org/1164893003
Besides getting rid of the warning, this also drastically improves
compilation time, going from 1min27s to 59s on my machine.
Performance handles the document origin time correctly, and prevents
these times from being unusually large. Also initialize the
DocumentTimeline time in the constructor, since these can be created
from JS.
With this, `image` can convert any jbig2 file, as long as it's
black (or white), and LibPDF can draw jbig2 files (again, as long
as they only contain a single color stored in just a
PageInformation segment).
Also make scan_for_page_size() not early return, so that it has the
same behavior as the main decoding loop. (Multiple page information
segments for a single page are likely invalid and don't happen in
practice, so this is mostly an academic change.)
Add a BitBuffer class to store the bit image data, and introduce a
Page struct for storing data associated with a page. We currently
only handle a single page, but a) this makes it easier to decode
multiple pages in the future if we want to, and b) it makes the code
easier to understand.
7.4.8.2 Page bitmap height:
"In some cases, this value may not be known at the time that the page
information segment is written. In this case, this field must contain
0xFFFFFFFF, and the actual page height may be communicated later, once
it is known."
Sounds like the spec guarantees that that's the number of the first
page.
(In practice, all but one of all jbig2 files I've found contain just
page 1. PDFs almost always contain just page 1, and very rarely a
page 0 for globally shared parameters.)
If the provided ID is smaller than the corner clipping vector, we would
shrink the vector to match. This causes a crash when we have nested
PaintContext instances (as these IDs are allocated by PaintContext),
each of which performs radius painting.
This is seen on https://www.strava.com/login when it loads a reCAPTCHA.
The outer div has a border radius, which contains the reCAPTCHA in an
iframe. That iframe contains an SVG which also has a border radius.
Now, if an element belongs to a shadow tree, we use only the style
sheets from the corresponding shadow root during style computation,
instead of using all available style sheets as was the case
previously.
The only exception is the user agent style sheets, which are still
taken into account for all elements.
Tests/LibWeb/Layout/input/input-element-with-display-inline.html
is affected because the document's style no longer affects the shadow
tree of the input element, as is supposed to be the case.
Co-authored-by: Simon Wanner <simon+git@skyrising.xyz>
Doing that will allow us to get a list of style sheets for each shadow
root from StyleComputer without having to traverse the entire tree in
upcoming changes.
If a style element belongs to a shadow tree, its CSSStyleSheet is now
added to the corresponding ShadowRoot instead of the document.
Co-authored-by: Simon Wanner <simon+git@skyrising.xyz>
This patch also makes FlexFormattingContext::calculate_static_position
use computed values for margins and borders, since this function may be
called before the box's state has been finalized.
This allows `file` to correctly print the dimensions of a .jbig2 file,
and it allows us to write a test that covers most of the code written
so far.
Several ramifications:
* /JBIG2Globals is an indirect reference, which means we now need
a Document for unfiltering. (Technically, other decode parameters
can also be indirect objects and we should use the Document to
resolve() those too, but in practice it only seems to be needed
for /JBIG2Globals.)
* Since /JBIG2Globals are so rare, we just parse them once for each
image that uses them, and decode_embedded() now receives a
Vector<ReadonlyBytes> with all sections of sequences of
segments.
* Internally, decode_segment_headers() is now called several times
for embedded JBIG2s with multiple such sections (e.g. PDFs with
/JBIG2Globals).
* That means `data` is now no longer part of JBIG2LoadingContext
and things get slightly reshuffled due to this.
This completes the LibPDF part of JBIG2 support. Once LibGfx
implements actual decoding of JBIG2s, things should start to
Just Work in PDFs.
They're in different places for Sequential/Embedded (right after
the header) and RandomAccess (which has all headers first, followed
by all the data bits).
We don't do anything with the data yet, but now everything's in
place to actually process segment data.
Except for /JBIG2Globals, which we bail out on for now. In my 1000
files, 13 use JBIG2, and of those, 2 use JBIG2Globals (0000372.pdf e.g.
page 11 and 0000857.pdf e.g. page 1), and only one (the latter) of the
two uses the same JBIG2Globals stream for more than a single image.
JBIG2ImageDecoderPlugin cannot decode the data yet, so no behavior
change, but with `#define JBIG2_DEBUG 1` at the top of that file,
it now prints segment header info for PDFs containing JBIG2 data :^)
With `#define JBIG2_DEBUG 1` at the top of the file:
% Build/lagom/bin/image --no-output \
.../JBIG2_ConformanceData-A20180829/F01_200_TT10.jb2
JBIG2LoadingContext: Organization: 0 (Sequential)
JBIG2LoadingContext: Number of pages: 1
Segment number: 0
Segment type: 48
Referred to segment count: 0
Segment page association: 1
Segment data length: 19
Segment number: 1
Segment type: 39
Referred to segment count: 0
Segment page association: 1
Segment data length: 12666
Segment number: 2
Segment type: 49
Referred to segment count: 0
Segment page association: 1
Segment data length: 0
Runtime error: JBIG2ImageDecoderPlugin: Draw the rest of the owl
With `#define JBIG2_DEBUG 1` at the top of the file:
% Build/lagom/bin/image --no-output \
.../JBIG2_ConformanceData-A20180829/F01_200_TT10.jb2
JBIG2LoadingContext: Organization: 0 (Sequential)
JBIG2LoadingContext: Number of pages: 1
Segment number: 0
Segment type: 48
Referred to segment count: 0
Segment page association: 1
Segment data length: 19
Runtime error: JBIG2ImageDecoderPlugin: Draw the rest of the owl
This is closer to what the spec instructs us to do, and matches how
associations are maintained in the timelines. Also note that the removed
destructor logic is not necessary, since we visit the associated
animations anyway.
We create labels in the Browser from text on web documents, which may
use carriage returns. LibGfx will split lines on CR already, but LibGUI
would not consider CRs when computing the height of the containing
widget. The result was text that would not fit in the label's box.
Fixes pages 17-19 on
https://www.iro.umontreal.ca/~feeley/papers/ChevalierBoisvertFeeleyECOOP15.pdf
Calling the fill handler after painting the stroke, as we did
previously, doesn't work, since we need to set up the clip before both
stroke and fill, and unset it after both. The duplication is a bit
unfortunate, but also minor.
ObservableArray inherits from JS::Array and overrides `internal_set`
and `internal_delete` to run an interceptor callback when an indexed
item is added or deleted.
This is for the case where a decoder with a weak or nonexistent
sniff() method merely thinks it can decode an image; that should not
be treated as an error.
No behavior change.
We are currently using Core::DateTime, which is meant to represent local
time. However, we are doing no conversion between the parsed time in UTC
and local time, so we end up comparing time stamps from different time
zones.
Instead, store the parsed times as UnixDateTime, which is UTC. Then we
can always compare the parsed times against the current UTC time.
This also lets us store parsed milliseconds.
Displaying a GML preview in HackStudio seems to be broken at the moment,
but this change will be needed once it does work again, so might as well
make it now, while I'm aware of the issue.
When in permissive mode, the ConfigServer will not treat reads and
writes to non-pledged domains as errors, but instead turns them into
no-ops: Reads will act as if the key was not found, and writes will do
nothing. Permissive mode must be enabled before pledging any domains.
This is needed to make GUI Widgets nicer to work with in GML Playground:
a few Widgets include reads and writes to LibConfig in order to load
system settings (e.g. GUI::Calendar) or to save and restore state
(e.g. GUI::DynamicWidgetContainer). Without this change, editing a
layout that includes one of these Widgets will cause GML Playground to
crash when they try to access config domains that are not pledged.
The solution used previously was to make Playground pledge more domains,
but not only does this mean Playground has to know about these cases,
it also means that working on a layout file can alter the user's
settings in other arbitrary apps, which is not something we want.
By simply ignoring these config accesses, we avoid those downsides, and
Widgets will simply use the fallback values they already have to provide
to Config::read_foo_value().
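For example (the domain, group, and key here are illustrative), a read
on an unpledged domain now simply returns the fallback:

    auto first_day = Config::read_string("Calendar"sv, "View"sv,
        "FirstDayOfWeek"sv, "Sunday"sv);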
Turns out the spec didn't mean that the whole range is populated,
but that one of these ranges is populated. So take the argmax.
As fallout, explicitly mark the Liberation fonts as nonsymbolic
when we use them for the 14 standard fonts. Else, we'd regress
"PostScrõpt", since the Liberation fonts would otherwise go down
the "is symbolic or doesn't have explicit encoding" codepath,
since the standard fonts usually don't have an explicit encoding.
As a fallout from _that_, since the 14 standard fonts now go down
the regular truetype rendering path, and since we don't implement
lookup by postscript name yet, glyphs not present in Liberation
now cause text to stop rendering with a diag, instead of rendering
a "glyph not found" symbol. That isn't super common, only an
additional 4 files appear for the "'post' table not yet implemented"
diag. Since we'll implement that soon, this seems fine until then.
...from try_create_for_raw_bytes().
If a plugin returns `true` from sniff but then fails when calling
its `create()` method, we now no longer swallow that error.
Allows `image` (and other places in the system) to print a more
actionable error if early image headers are invalid.
(We now no longer try to find another plugin that can also handle
the image.)
Fixes a regression from #20063 / #19893 -- before then, we didn't
do fallible work this early.
When a placement position is found, we always want to do the following:
- Mark the occupied cells in the occupation grid
- Add the item to the list of placed items
Therefore, having a helper that does both is useful.
With this change we use the same code to resolve (start, end, span)
based on computed values in all cases:
- When only column is definite
- When only row is definite
- When both are definite
Moves the code that identifies (start, end, span) for a grid item into
a separate function. By doing so, we can eliminate the duplicated code
between the placement of grid items with definite columns and those
with definite rows.
This change omits some of the comments that reference the spec, as they
were largely irrelevant and unhelpful for making changes or diagnosing
issues.
Table wrappers don't quite behave the same as most elements, in that
their computed height and width are not meant to be used for layout.
Instead, we now calculate suitable widths and heights based on the
contents of the table wrapper when performing absolute layout.
Fixes the layout of
http://wpt.live/css/css-position/position-absolute-center-007.html
These are invoked by GitHub when submitting a comment. Stub them out for
now, as this is enough to let GitHub proceed with (attempting to)
submit the comment.
Note no test here, because this early return involves HTTP-only cookies,
which we don't have the infrastructure to test (we would need to support
custom HTTP headers in tests).
It's no change in application behavior to have these objects owned by
the function-scope static map in Protocol.cpp, while allowing us to
remove some ugly FIXMEs from time immemorial.
Now that all input events are handled by LibWebView, replace the IPCs
which send the fields of Web::KeyEvent / Web::MouseEvent individually
with one IPC per event type (key or mouse).
We can also replace the ad-hoc queued input structure with a smaller
struct that simply holds the transferred Web::KeyEvent / Web::MouseEvent.
In the future, we can also adapt Web::EventHandler to use these structs.
The Serenity chrome is the only chrome thus far that sends all input key
and mouse events to WebContent, including shortcut activations. This is
necessary for all chromes - we must give web pages a chance to intercept
input events before handling them ourselves.
To make this easier for other chromes, this patch moves Serenity's input
event handling to LibWebView. To do so, we add the Web::InputEvent type,
which models the event data we need within LibWeb. Chromes will then be
responsible for converting between this type and their native events.
This class lives in LibWeb (rather than LibWebView) because the plan is
to use it wholesale throughout the Page's event handler and across IPC.
Right now, we still send the individual fields of the event over IPC,
but it will be an easy refactor to send the event itself. We just can't
do this until all chromes have been ported to this event queueing.
Also note that we now only handle key input events back in the chrome.
WebContent handles all mouse events that it possibly can. If it was not
able to handle a mouse event, there's nothing for the chrome to do (i.e.
there is no clicking, scrolling, etc. the chrome is able to do if the
WebContent couldn't).
Animation::play_state() does not consider the fill state, and thus will
not return "Playing" for a fill-forward animation in the after phase.
It is still valid for paused, as pausing is not affected by the fill
mode.
All of this error propagation came from a single call to
HashMap::try_ensure_capacity! As part of the ongoing effort to ignore
small allocation failures, let's just assert this works. This has the
nice side-effect of propagating out to a few other classes.
Before this change, we only considered `grid-auto-flow` to determine
whether a row or column should be added when there was not enough space
in the implicit grid to fit the next unplaced item.
Now, we also choose the direction in which the "auto placement cursor"
is moved, based on the auto flow property.
This involves plumbing the "perform the fetch" hook argument throughout
all of the module fetch implementation AOs, where it was left as a FIXME
before.
With this change we can load module scripts in DedicatedWorkers.
This will be used to transfer information about the parent context to
DedicatedWorkers and future out-of-process Worker/Worklet
implementations for fetching purposes. In order to properly check
same-origin and other policies, we need to know more about the outside
settings than we were previously passing to the WebWorker process.
We previously used an empty optional to denote that a ReferrerPolicy is
in the default empty string state. However, later additions added an
explicit EmptyString state. This patch moves all users to the explicit
state, and stops using `Optional<ReferrerPolicy>` everywhere except for
when an option not being passed from JavaScript has meaning.
When buffering is enabled for the entire protocol request, it's possible
that the entirety of the file data might be available on the first read
of the pipe passed from RequestServer. If the file is then closed on the
RequestServer end, the client would never realize that the file has hit
EOF, and would thus never call the user-provided on_finish callback.
By reading until there's an error, we expect to get an EAGAIN or similar
non-blocking pipe error message if there is still more data.
Also add a note to the Concepts header that the reason we have all the
strange concepts in place for container types is to work around the
language limitation that we cannot partially specialize function
templates.
The semantics of BGRx8888 aren't super clear and it means different
things for different parts of the codebase. In particular, the PNG
writer still writes the x channel to the alpha channel of its output.
In BMPs, the 4th palette byte is usually 0, which means after #21412 we
started writing all .bmp files with <= 8bpp as completely transparent
to PNGs.
This works around that.
(See also #19464 for previous similar workarounds.)
The added `bitmap.bmp` is a 1bpp file I drew in Photoshop and saved
using its "Save as..." saving path.
As long as the inputs are Int32, we can convert them to UInt32 in a
spec-compliant way with a simple static_cast<u32>.
This allows calculations like `-3 >>> 2` to take the fast path as well,
which is extremely valuable for stuff like crypto code.
While we're doing this, also remove the fast paths from the generic
shift functions in Value.cpp, since we only end up there if we *didn't*
take the same fast path in the interpreter.
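A sketch of the interpreter fast path (Value accessor names are as
commonly used in LibJS, but treat the details as illustrative):

    if (lhs.is_int32() && rhs.is_int32()) {
        // ToUint32 of an Int32 value is exactly static_cast<u32>,
        // and the shift count is masked to 5 bits per spec.
        auto value = static_cast<u32>(lhs.as_i32());
        auto count = static_cast<u32>(rhs.as_i32()) & 31;
        return Value(value >> count);
    }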
Since get() returns an empty Optional if the index is not present, we
can combine these two into a single get() operation and save the cost of
a virtual call.
This patch adds a new "Peephole" pass for performing small, local
optimizations to bytecode.
We also introduce the first such optimization, fusing a sequence of
some comparison instruction FooCompare followed by a JumpIf into a
new set of JumpFooCompare instructions.
This gives a ~50% speed-up on the following microbenchmark:
for (let i = 0; i < 10_000_000; ++i) {
}
But more traditional benchmarks see a pretty sizable speed-up as well,
for example 15% on Kraken/ai-astar.js and 16% on Kraken/audio-dft.js :^)
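Schematically, the fusion looks like this (operand names illustrative):

    LessThan dst, lhs, rhs           =>   JumpLessThan lhs, rhs,
    JumpIf   dst, @then, @else                @then, @else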
The entry block must stay in place, although it's okay to merge stuff
into it.
This fixes 4 test262 tests and brings us to parity with optimization
disabled. :^)
There is a limit of 3 attempts before quitting, but the user can try
again after that.
Error messages are now displayed to help the user understand the issue.
With this the `<circle>` element now correctly parses percentage sizes,
and resolves them relative to the viewport.
The rest of the geometry elements are still left TODO.
This will allow resolving paths that use sizes that are relative to the
viewport. This necessarily removes the on-element caching, which has
been redundant for a while, as computed paths are stored on the
paintable.
This was seen in Browser when hotkey activations were processed twice.
If we open the Inspector with a hotkey (F12) and quickly activate that
hotkey again, we could try sending a JS command (inspector.loadDOMTree)
before the inspector.js file was actually loaded in the WebContent.
The window for this bug is larger on Serenity, where loading WebContent
is a bit slower than on Linux. So even with the Browser bug fixed, it is
pretty easy to hit this window still.
This reverts commit 2e8ff1855c.
We had some awkward timing around the merging of this commit and commit
b073fdd570. With both commits in tree, we
are now processing hotkey activations twice. For example, ctrl+t will
open 2 tabs.
Instead of emitting a NewBigInt instruction to construct a primitive
bigint from a parsed literal, we now instantiate the BigInt on the heap
during codegen.
Instead of emitting a NewString instruction to construct a primitive
string from a parsed literal, we now instantiate the PrimitiveString on
the heap during codegen.
The property values here will always be StyleValueLists and not
TransformationStyleValues. The handling of interpolation in this case
gets quite a bit more complex, so let's just remove the dead code for
now and attempt this optimization again in the future if it's needed.
An array image mask contains a min/max range for each channel,
and if each channel of a given pixel is in that channel's range,
that pixel is masked out (i.e. transparent). (It's similar to
having a single color or palette index be transparent, but it
supports a range of transparent colors if desired.)
What makes this a bit awkward is that the range is relative to the
origin bits per pixel and the inputs to the image's color space.
So an indexed (palettized) image with 4bpp has a 2-element mask
array where both entries are between 0 and 15.
We currently apply masks after converting images to a Gfx::Bitmap,
that is after converting to 8bpp sRGB. And we do this by mapping
everything to 8bpp very early on in load_image().
This leaves us with a bunch of options that are all a bit awkward:
1. Make load_image() store the up- (or for 16bpp inputs, down-)
sampled-to-8bpp pixel data. And also return if we expanded the
pixel range while resampling (for color values) or not (for
palettized images). Then, when applying the image filter,
resample the array bounds in exactly the same way. This requires
passing around more stuff.
2. Like 1, but pass in the mask array to load_image() and apply
the mask right there and then. This means we'd apply mask arrays
at a different time than other masks.
3. Make the function that computes the mask from the mask array
work from the original, unprocessed image data. This is the most
local change, but probably also requires the largest amount of
code (in return, the color mask for 16bpp images is precise, and
it separates concerns the most nicely).
This goes with 3 for now.
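A sketch of the per-pixel test implied by option 3 (types are
illustrative): a pixel is masked out iff every channel lies within its
[min, max] from the mask array, expressed in the image's original bit
depth.

    static bool is_masked_out(ReadonlySpan<int> components, ReadonlySpan<int> mask_ranges)
    {
        for (size_t i = 0; i < components.size(); ++i) {
            // mask_ranges holds [min0, max0, min1, max1, ...].
            if (components[i] < mask_ranges[2 * i] || components[i] > mask_ranges[2 * i + 1])
                return false;
        }
        return true; // in range on every channel => transparent
    }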
From https://drafts.csswg.org/css-backgrounds-4/#background-clip
"The background is painted within (clipped to) the intersection of the
border box and the geometry of the text in the element and its in-flow
and floated descendants"
This change implements it in the following way:
1. Traverse the descendants of the element, collecting the Gfx::Path of
glyphs into a vector.
2. The vector of collected paths is saved in the background painting
command.
3. The painting commands executor uses the list of glyphs to paint a
mask for background clipping.
Co-authored-by: Aliaksandr Kalenik <kalenik.aliaksandr@gmail.com>
In the situation where the amount of content preceding the hunk was
greater than the max context of the hunk, there would be an unsigned
underflow, as the logic was assuming signed arithmetic.
This underflow would result in the patch not applying, as patch would
assume the massive calculated fuzz would result in the patch matching
against any file.
While StringView does have a null state, we have been moving away from
this in our other String classes. To represent a StringView not being
given at all on the commandline, use an Optional.
This is still a very naive implementation and there are plenty of other
cases that we should handle (like a quoted path) - but just looking for
a tab handles the common case.
This allows us to avoid the need for costly traversals to gather
boxes that have been saved during the construction of the stacking
context tree.
No behavior change intended.
To avoid differing logic for serializing and deserializing similar
types, move the logic into separate helpers.
Also add security checks (VERIFYs) to avoid reading past the end of
the serialized data. If we try to read past the end of the serialized
data, either our program logic is wrong or our serialized data has
somehow been corrupted. Therefore, at least currently, it is better to
crash by VERIFYing.
To avoid differing logic for deserializing similar types, move the logic
into separate helpers.
Also add security checks (VERIFYs) to avoid reading past the end of
the serialized data. If we try to read past the end of the serialized
data, either our program logic is wrong or our serialized data has
somehow been corrupted. Therefore, at least currently, it is better to
crash by VERIFYing.
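A minimal sketch of such a helper with the bounds VERIFY (names are
illustrative):

    template<typename T>
    static T deserialize_primitive(ReadonlyBytes bytes, size_t& offset)
    {
        // Crash loudly rather than read out of bounds from data that
        // has somehow been corrupted.
        VERIFY(offset + sizeof(T) <= bytes.size());
        T value {};
        memcpy(&value, bytes.offset_pointer(offset), sizeof(T));
        offset += sizeof(T);
        return value;
    }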