The benefit of the color indexing transform is to have only one
varying channel after it (the green channel, which after the
transform serves as an index into the color table).
If there is only one varying channel before the transform, it's
not beneficial. (...except if there are <= 16 colors, in which case
the pixel bundling presumably still helps.)
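A hedged sketch of the check this implies (the helper name and the
0xAARRGGBB pixel layout are assumptions, not the encoder's actual code):

    #include <cstdint>
    #include <vector>

    // Count how many of the four 8-bit channels vary across the image.
    // If only one channel varies, the color indexing transform likely
    // isn't worth it (unless <= 16 colors allow pixel bundling).
    int count_varying_channels(std::vector<uint32_t> const& pixels)
    {
        if (pixels.empty())
            return 0;
        uint32_t varying_mask = 0;
        for (uint32_t pixel : pixels)
            varying_mask |= pixel ^ pixels[0]; // bits differing from first pixel
        int varying_channels = 0;
        for (int channel = 0; channel < 4; ++channel)
            if ((varying_mask >> (8 * channel)) & 0xff)
                ++varying_channels;
        return varying_channels;
    }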
On my Mac system with Homebrew SDL + self-built Clang, SDL2's include
directory is not in the compiler's header search path by default. Add
it to unbreak the build.
Functions that don't have a FunctionEnvironment will get their `this`
value from the ExecutionContext. This patch stops generating
ResolveThisBinding instructions at all for functions like that, and
instead pre-populates the `this` register when entering a new bytecode
executable.
We already have a dedicated register slot for `this`, so instead of
having ResolveThisBinding take a `dst` operand, just write the value
directly into the `this` register every time.
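A minimal sketch of the combined effect of these two changes, with
invented names and types (the real LibJS interpreter looks different):

    struct Value { /* a JS value */ };
    struct FunctionEnvironment { /* ... */ };

    struct ExecutionContext {
        Value this_value; // `this` as known to the context
        FunctionEnvironment* function_environment { nullptr };
    };

    struct Registers {
        Value this_register; // the dedicated `this` register slot
    };

    Value resolve_this_binding(ExecutionContext&); // defined elsewhere

    // On entry to a new bytecode executable: functions without a
    // FunctionEnvironment take `this` from the ExecutionContext, so we
    // can pre-populate the register and emit no ResolveThisBinding at
    // all for them.
    void enter_executable(ExecutionContext& context, Registers& registers)
    {
        if (!context.function_environment)
            registers.this_register = context.this_value;
    }

    // When ResolveThisBinding does run, it no longer has a `dst`
    // operand; it always writes directly into the `this` register.
    void execute_resolve_this_binding(ExecutionContext& context, Registers& registers)
    {
        registers.this_register = resolve_this_binding(context);
    }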
This allows searching for text case-insensitively. As this is
probably what most users expect, the default behavior is changed to
perform case-insensitive lookups. Chromes may add UI to change the
behavior as they see fit.
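For illustration, a minimal ASCII-only sketch of a case-insensitive
substring check (the function name is made up; real find-in-page
matching would want proper Unicode case folding):

    #include <algorithm>
    #include <cctype>
    #include <string_view>

    bool contains_case_insensitive(std::string_view haystack, std::string_view needle)
    {
        auto lower = [](char c) { return std::tolower((unsigned char)c); };
        auto it = std::search(haystack.begin(), haystack.end(),
                              needle.begin(), needle.end(),
                              [&](char a, char b) { return lower(a) == lower(b); });
        return it != haystack.end();
    }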
Storing a number n needs floor(log2(n) + 1) bits, not ceil(log2(n)).
(The two expressions are identical except for when n is a power of 2.)
Serendipitously covered by the indexed color transform tests in this PR.
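A quick way to convince yourself of the claim, using C++20's
std::bit_width() (which computes exactly floor(log2(n)) + 1 for n > 0):

    #include <bit>
    #include <cstdio>

    int main()
    {
        // For n = 8 the two formulas disagree: 8 is 0b1000 and needs
        // 4 bits, but ceil(log2(8)) is only 3. For non-powers-of-two
        // like 7 and 9 they agree.
        for (unsigned n : { 7u, 8u, 9u }) {
            int needed = (int)std::bit_width(n);      // floor(log2(n)) + 1
            int ceiling = (int)std::bit_width(n - 1); // ceil(log2(n)) for n > 1
            std::printf("n=%u: %d bits needed, ceil(log2(n)) = %d\n", n, needed, ceiling);
        }
    }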
If an image has <= 16 colors, WebP lossless files pack multiple
color table indexes into a single pixel's green channel, further
reducing file size. This adds support for that.
My current test files all have more than 16 colors. For a 16x16
black-and-white bitmap that contains a little smiley face in the
middle, this reduces the output size from 128B to 54B.
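A sketch of the 4-bits-per-index case, operating on one row of color
table indexes (the function name and types are mine; per my reading of
the spec, the first pixel goes into the low bits):

    #include <cstdint>
    #include <vector>

    // Pack a row of 4-bit color table indexes into green-channel
    // samples, two indexes per sample. With <= 4 or <= 2 colors,
    // 2- and 1-bit packing shrink the row by 4x/8x analogously.
    std::vector<uint8_t> bundle_row_4bpp(std::vector<uint8_t> const& indexes)
    {
        std::vector<uint8_t> greens;
        greens.reserve((indexes.size() + 1) / 2);
        for (size_t i = 0; i < indexes.size(); i += 2) {
            uint8_t green = indexes[i] & 0xf;         // first index: low bits
            if (i + 1 < indexes.size())
                green |= (indexes[i + 1] & 0xf) << 4; // second index: high bits
            greens.push_back(green);
        }
        return greens;
    }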
If an image has 256 or fewer colors, WebP/Lossless allows storing
the colors in a helper image, and then storing just indexes into that
helper image in the main image's green channel, while setting
r, b, and a of the main image to 0.
Since constant-color channels take almost no space to store in WebP,
this reduces the storage needed to 1/4th (if alpha is used) or 1/3rd
(if alpha is constant across the image).
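A rough sketch of building the transform's data, with invented names
(the real encoder works on its own bitmap types):

    #include <cstdint>
    #include <optional>
    #include <unordered_map>
    #include <vector>

    struct ColorIndexingData {
        std::vector<uint32_t> palette; // helper image: one color per index
        std::vector<uint8_t> indexes;  // main image: goes in the green
                                       // channel; r, b, and a are set to 0
    };

    // Returns nullopt if the image has more than 256 distinct colors.
    std::optional<ColorIndexingData> build_color_index(std::vector<uint32_t> const& argb_pixels)
    {
        ColorIndexingData data;
        std::unordered_map<uint32_t, uint8_t> color_to_index;
        for (uint32_t pixel : argb_pixels) {
            auto it = color_to_index.find(pixel);
            if (it == color_to_index.end()) {
                if (data.palette.size() == 256)
                    return std::nullopt;
                it = color_to_index.emplace(pixel, (uint8_t)data.palette.size()).first;
                data.palette.push_back(pixel);
            }
            data.indexes.push_back(it->second);
        }
        return data;
    }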
If an image has <= 16 colors, WebP lossless files pack multiple
color table indexes into a single pixel's green channel, further
reducing file size. This pixel packing is not yet implemented in
this commit.
GIFs can store at most 256 colors per frame, so animated GIFs
often have 256 or fewer colors, making this transform effective
when transcoding GIFs.
(WebP also has a "subtract green" transform, which can be used
to need to store just a single channel for grayscale images, without
having to store a color table. That's not yet implemented -- for now,
we'll now store grayscale images using this color indexing transform
instead, which wastes to storage for the color table.)
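For reference, the subtract-green transform itself is tiny: per the
lossless spec it subtracts green from red and blue, mod 256 (green
and alpha stay unchanged):

    #include <cstdint>

    void subtract_green(uint8_t& red, uint8_t green, uint8_t& blue)
    {
        red = (uint8_t)(red - green);   // wraps mod 256
        blue = (uint8_t)(blue - green);
    }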
(If an image has <= 256 colors but all these colors use only a single
channel, then storing a color table for these colors is also wasteful,
at least if the image has > 16 colors too. That's rare in practice,
but maybe we can add code for it later on.)
(WebP also has a "color cache" feature where the last few used colors
can be referenced using very few bits. This is what the webp spec says
is similar to palettes as well. We don't implement color cache writing
support yet either; maybe it's better than using a color indexing
transform for some inputs.)
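Sketch of the cache itself, as I read the spec (the hash constant
0x1e35a7bd is the spec's; the struct around it is illustrative):

    #include <cstdint>
    #include <vector>

    struct ColorCache {
        explicit ColorCache(int cache_bits) // spec allows 1..11 bits
            : hash_shift(32 - cache_bits)
            , colors(1u << cache_bits, 0)
        {
        }

        uint32_t slot_for(uint32_t argb) const { return (0x1e35a7bdu * argb) >> hash_shift; }
        void insert(uint32_t argb) { colors[slot_for(argb)] = argb; }

        int hash_shift;
        std::vector<uint32_t> colors; // recent colors, addressed by a few bits
    };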
Some numbers on my test files:
sunset-retro.png: No performance or binary size impact. The input
quickly uses more than 256 colors.
giphy.gif (184k): 4.1M -> 3.9M, 95.5 ms ± 4.9 ms -> 106.4 ms ± 5.3 ms
Most frames use more than 256 colors, but only barely. So this is
fairly expensive runtime-wise, with just a small win.
(See comment on #24454 for the previous 4.9 MiB -> 4.1 MiB drop.)
7z7c.gif (11K): 118K -> 40K
Every frame has fewer than 256 colors (but more than 16, so no packing),
and so we can cut the file size roughly to 1/3rd: we only need to store
one channel (the index) per pixel instead of three. From 10.7x as large
as the input to 3.6x as large.
No behavior change, but this makes it easy to correctly set this
flag when adding an indexing transform: opacity then needs to be
determined based on whether the colors in the color table have
opacity, not whether the indexes into the color table do.
With this struct, only the first write to the opacity flag is
honored, giving us exactly those semantics.
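Something like this write-once wrapper gives those semantics (names
are made up for illustration):

    #include <optional>

    struct OpacityFlag {
        // Only the first write is honored; later writes are ignored.
        // That way, code looking at the color table's colors decides
        // opacity before code that only sees the (opaque) indexes.
        void set(bool value)
        {
            if (!m_value.has_value())
                m_value = value;
        }
        bool has_opacity() const { return m_value.value_or(false); }

    private:
        std::optional<bool> m_value;
    };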
In an early version of the Huffman writing code, we always used 8 bits
here, and the comments still reflected that. Since we now write only
as many bits as we need (in practice, still almost always 8), the
comments are misleading.