This initializes 'arrayBuffer' to a Uint8Array so that we can
manipulate its contents. The test verifies that the internal buffer is
an ArrayBuffer.
Also:
* Correct comparison in test so that we compare arrayBuffer to
arrayClone, not to itself.
* Remove FIXME; this outputs [object ArrayBuffer] in Firefox and Chrome
  too.
...instead of allocating a separate BorderRadiusCornerClipper for each
executed sample/blit command pair.
With this change, the vector of BorderRadiusCornerClippers has far
fewer items. For example, on the Twitter profile page its size goes
down from ~3000 to ~3 items.
Before, this check was needed to prevent crashing when attempting to
allocate a zero-size bitmap for sampled corners, which could happen if
a corner had a 0 radius in one axis.
Now that the SampleUnderCorners command is not emitted when the radius
is 0 in one axis, this check is no longer needed.
This reverts commit 6b7b9ca1c4b32e76e0afef6bca0cb300e615b576.
A corner is entirely invisible if its radius is 0 in either axis, so
the reverted commit was a mistake: it led to the error checking added
in b61aab66d9, which runs during painting command execution to avoid
crashing on an attempt to allocate a zero-size bitmap.
Deflate and WebP can store at most 15 bits per symbol, meaning their
huffman trees can be at most 15 levels deep.
During construction, when we exceeded this depth, we used to try again
with an ever lower frequency cap per symbol. This had the effect of
lowering the frequencies of the highest-frequency symbols first,
causing the most-frequent symbols to be merged. For example, maybe the
most-frequent symbol had 1 bit, and the 2nd-most-frequent two bits
(and everything else at least 3). With the cap, the two most-frequent
symbols might both get 2-bit codes, freeing up bits for the lower
levels of the tree.
This has the effect of lengthening the codes of the most-frequent
symbols first, which isn't great for file size.
Instead of using a frequency cap, ignore ever more of the low bits of
each frequency. This sacrifices resolution where it hurts least: it
affects the lower levels of the tree first, and the symbols there are
stored less frequently.
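Roughly, the idea looks like this (a self-contained sketch with
hypothetical names, not the actual LibCompress/LibGfx code):

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>

    constexpr int max_code_length = 15; // deflate and WebP cap codes at 15 bits

    // Plain Huffman construction via repeated merging; returns each
    // symbol's code length (its depth in the tree). Zero-frequency
    // symbols get length 0.
    static std::vector<uint8_t> huffman_code_lengths(std::vector<uint64_t> const& frequencies)
    {
        std::vector<int> parent; // parent[i] == -1 until node i is merged
        using Entry = std::pair<uint64_t, int>; // (weight, node index)
        std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> queue;
        for (size_t i = 0; i < frequencies.size(); ++i) {
            parent.push_back(-1);
            if (frequencies[i] != 0)
                queue.push({ frequencies[i], (int)i });
        }
        while (queue.size() > 1) {
            auto [weight_a, a] = queue.top(); queue.pop();
            auto [weight_b, b] = queue.top(); queue.pop();
            int merged = (int)parent.size();
            parent.push_back(-1);
            parent[a] = parent[b] = merged;
            queue.push({ weight_a + weight_b, merged });
        }
        std::vector<uint8_t> lengths(frequencies.size(), 0);
        for (size_t i = 0; i < frequencies.size(); ++i) {
            if (frequencies[i] == 0)
                continue;
            uint8_t depth = 0;
            for (int node = (int)i; parent[node] != -1; node = parent[node])
                ++depth;
            lengths[i] = std::max<uint8_t>(depth, 1); // a lone symbol still costs 1 bit
        }
        return lengths;
    }

    // The depth-limiting loop: if any code comes out longer than 15 bits,
    // ignore one more low bit of every frequency and rebuild.
    std::vector<uint8_t> build_limited_code_lengths(std::vector<uint64_t> const& frequencies)
    {
        for (uint64_t shift = 0;; ++shift) {
            auto reduced = frequencies;
            for (auto& f : reduced)
                if (f != 0)
                    f = std::max<uint64_t>(f >> shift, 1); // keep present symbols present
            auto lengths = huffman_code_lengths(reduced);
            if (std::all_of(lengths.begin(), lengths.end(),
                    [](uint8_t len) { return len <= max_code_length; }))
                return lengths;
        }
    }

With enough shifting, every surviving frequency becomes 1 and the tree
becomes balanced, so the loop always terminates for these alphabet
sizes.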
For deflate, the 64 kiB block size means this doesn't matter much, but
for WebP it can have a big effect:
sunset-retro.png (876K): 2.02M -> 1.73M -- now (very slightly) smaller
than twice the input size! Maybe we'll be competitive one day.
(For wow.webp and 7z7c.webp, it has no effect: those have relatively
few colors, so we don't hit the "tree too deep" case there.)
No behavior change other than smaller file size. (No performance
cost either, and it's less code too.)
For our deflate, block size is limited to less than 64 kiB, so the sum
of all byte frequencies always fits in a u16 by construction.
While I haven't hit this overflow in practice, it can conceivably
happen when writing WebP files, which currently use a single huffman
tree (per channel) for a whole image -- often much larger than 64 kiB.
No dramatic behavior change in practice; it just feels more correct.
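A minimal sketch of the counting argument, with illustrative types
(not the in-tree declarations):

    #include <cstdint>
    #include <numeric>
    #include <vector>

    // Every byte increments exactly one counter, so for a block smaller
    // than 64 kiB the sum of all frequencies is at most 65535 and fits in
    // a u16. A whole-image WebP tree can see far more bytes than that, so
    // both the counters and their sum need a wider type.
    uint64_t total_frequency(std::vector<uint32_t> const& frequencies)
    {
        return std::accumulate(frequencies.begin(), frequencies.end(), uint64_t { 0 });
    }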
This implements one of the basics of WebP compression: huffman coding.
(The other basic parts are backreferences and color cache entries;
after that there are the four transforms -- predictor, subtract green,
color indexing, and color.)
How much huffman coding helps depends on the input's entropy.
Constant-color channels are now encoded in constant space, but
otherwise a huffman code always needs at least one bit per symbol.
This means huffman coding alone can at the very best reduce output
size to 1/8th of the input size.
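To make the entropy dependence concrete, here's an illustrative (not
in-tree) estimate: the Shannon entropy of the byte histogram bounds the
achievable bits per symbol from below, while huffman coding never goes
below 1 bit per symbol:

    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Entropy of the byte histogram in bits per byte. Constant-color data
    // has entropy 0, but a huffman code still spends >= 1 bit per symbol
    // (unless the constant-space special case applies).
    double entropy_bits_per_byte(std::vector<uint8_t> const& data)
    {
        std::vector<uint64_t> histogram(256, 0);
        for (auto byte : data)
            ++histogram[byte];
        double entropy = 0.0;
        for (auto count : histogram) {
            if (count == 0)
                continue;
            double p = (double)count / (double)data.size();
            entropy -= p * std::log2(p);
        }
        return entropy;
    }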
For three test input files:
sunset-retro.png (876K): 2.25M -> 2.02M
(helps fairly little; from 2.6x as big as the png input to 2.36x)
giphy.gif (184k): 11M -> 4.9M
(pretty decent, from 61x as big as the gif input to 27x as big)
7z7c.gif (11K): 775K -> 118K
(almost as good an improvement as is possible with just huffman coding,
from 70x as big as the gif input to 10.7x as big)
No measurable performance impact for encoding.
The code is pretty similar to Deflate.cpp in LibCompress, with just
enough differences that sharing code doesn't look like it's worth
it to me. I left comments outlining similarities.
To be used in WebPWriter.
JPEGWriter currently hardcodes huffman tables; maybe it can use this
to build data-dependent huffman tables in the future as well.
Pure code move (except for removing the `DeflateCompressor::` prefix
on the function's name, and putting the default argument for the 4th
argument in the function's definition), no behavior change.
EventSource allows opening a persistent HTTP connection to a server over
which events are continuously streamed.
Unfortunately, our test infrastructure does not allow for automating
any tests of this feature yet, as EventSource only works over HTTP
connections.
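For reference, the text/event-stream format such a server sends looks
like this; the parser below is a simplified illustration, not the
LibWeb implementation:

    #include <iostream>
    #include <sstream>
    #include <string>

    // Each event is a block of "field: value" lines terminated by a blank
    // line. Only the "data" field is handled here; "event:", "id:",
    // "retry:", and CRLF handling are omitted for brevity.
    void parse_event_stream(std::istream& stream)
    {
        std::string line, data;
        while (std::getline(stream, line)) {
            if (line.empty()) {
                if (!data.empty())
                    std::cout << "dispatch event with data: " << data << '\n';
                data.clear();
            } else if (line.rfind("data: ", 0) == 0) {
                data += line.substr(6);
            }
        }
    }

    int main()
    {
        std::istringstream events("data: hello\n\ndata: world\n\n");
        parse_event_stream(events);
    }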
Ran into a crash here while testing LibProtocol changes. The method we
invoke here (did_progress) already accepts an Optional, and handles when
that Optional is empty. So there's no need to assume `total_size` is
non-empty.
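The shape of the fix, as a standalone illustration (std::optional
standing in for AK::Optional; everything except the did_progress name
is hypothetical):

    #include <cstdint>
    #include <iostream>
    #include <optional>

    // The callee already handles an empty Optional, so the caller has no
    // reason to unwrap total_size or assert that it has a value.
    void did_progress(std::optional<uint64_t> total_size, uint64_t downloaded_size)
    {
        if (total_size.has_value())
            std::cout << downloaded_size << " / " << *total_size << '\n';
        else
            std::cout << downloaded_size << " / ?\n";
    }

    int main()
    {
        did_progress(std::nullopt, 1024); // fine: no crash on a missing total size
        did_progress(4096, 1024);
    }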
Supporting unbuffered fetches is actually part of the fetch spec in its
HTTP-network-fetch algorithm. We had previously implemented this method
in a very ad-hoc manner as a simple wrapper around ResourceLoader. This
is still the case, but we now implement a good amount of these steps
according to spec, using ResourceLoader's unbuffered API. The response
data is forwarded through to the fetch response using streams.
This will eventually let us remove the use of ResourceLoader's buffered
API, as all responses should just be streamed this way. The streams spec
then supplies ways to wait for completion, thus allowing fully buffered
responses. However, we have more work to do to make the other parts of
our fetch implementation (namely, Body::fully_read) use streams before
we can do this.
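The rough shape of this wiring, with hypothetical stand-ins for the
ResourceLoader callbacks and the streams machinery:

    #include <cstdint>
    #include <functional>
    #include <vector>

    // Stand-in for the fetch response's body stream; a real implementation
    // would drive a streams-spec ReadableStream controller.
    struct BodyStream {
        std::vector<uint8_t> received;
        void enqueue(std::vector<uint8_t> const& chunk)
        {
            received.insert(received.end(), chunk.begin(), chunk.end());
        }
        void close() { /* signal end-of-body to stream consumers */ }
        void error() { /* signal a network error to stream consumers */ }
    };

    // Stand-in for an unbuffered request handle exposing per-chunk callbacks.
    struct UnbufferedRequest {
        std::function<void(std::vector<uint8_t> const&)> on_data_received;
        std::function<void(bool)> on_complete;
    };

    // Forward each chunk into the response stream as it arrives, instead of
    // collecting the whole body first as the buffered API does.
    void wire_up_unbuffered_fetch(UnbufferedRequest& request, BodyStream& body)
    {
        request.on_data_received = [&body](std::vector<uint8_t> const& chunk) {
            body.enqueue(chunk);
        };
        request.on_complete = [&body](bool success) {
            success ? body.close() : body.error();
        };
    }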
This adds an alternate API to ResourceLoader to load HTTP/HTTPS/Gemini
requests unbuffered. Most of the changes here are moving parts of the
existing ResourceLoader::load method to helper methods so they can be
re-used by the new ResourceLoader::load_unbuffered.