I opened smolkling.webp in Photoshop, added a layer mask with a
horizontal gradient, and saved it as webp. That wasn't enough to
get a horizontal filter for the ALPH chunk though, so I also ran
cwebp \
-alpha_filter best \
smolkling-ps.webp \
-o Tests/LibGfx/test-inputs/smolkling-horizontal-alpha.webp
That did the trick.
(Looks like doing the same with a vertical or diagonal gradient
_also_ produces a webp file with filtering_method 1, i.e. horizontal.)
Each secondary partition has an independent BooleanDecoder.
Their bitstreams interleave per macroblock row: the first
macroblock row is read from the first decoder, the second from the
second, and so on, wrapping around once the decoders run out.
All partitions share a single prediction state though: the second
macroblock row (which reads coefficients off the second decoder) is
predicted using the result of decoding the first macroblock row (which
reads coefficients off the first decoder).
So if I understand things right, in theory the coefficient reading could
be parallelized, but prediction can't be. (IDCT can also be
parallelized, but that's true with just a single partition too.)
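A minimal sketch of the row-to-partition mapping, assuming one
BooleanDecoder per secondary partition (decode_macroblock and the
other names here are made up for illustration, not the actual API):
    // Macroblock rows pick their coefficient decoder round-robin,
    // while prediction state is shared across all partitions.
    for (size_t mb_y = 0; mb_y < macroblock_rows; ++mb_y) {
        auto& coefficient_decoder = partition_decoders[mb_y % partition_decoders.size()];
        for (size_t mb_x = 0; mb_x < macroblock_columns; ++mb_x) {
            // Coefficients come from this row's partition, but
            // prediction uses the already-decoded row above, which
            // may have been read from a different partition.
            decode_macroblock(coefficient_decoder, mb_x, mb_y);
        }
    }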
I created the test image by running
examples/cwebp -low_memory -partitions 3 -o foo.webp \
~/src/serenity/Tests/LibGfx/test-inputs/4.webp
using a cwebp hacked up as described in #19149. Since creating
multi-partition lossy webps requires hacking up `cwebp`, they're likely
very rare in practice. (But maybe other programs using the libwebp API
create them.)
Fixes #19149.
With this, webp lossy support is complete (*) :^)
And with that, webp support is complete: Lossless, lossy, lossy with
alpha, animated lossless, animated lossy, animated lossy with alpha all
work.
(*: Loop filtering isn't implemented yet, which has a minor visual
effect on the output. It's only visible when carefully comparing
a webp decoded without loop filtering to the same image decoded with
it, but it's technically a part of the spec that's still missing.
The upsampling of UV in the YUV->RGB code is also low-quality. This
produces somewhat visible banding in practice in some images (e.g.
in the fire breather's face in 5.webp), so we should probably improve
that at some point. Our JPG decoder has the same issue.)
I somehow added the wrong image here. 4.webp is the one described
by the comment in the test. Now the test actually uses the image it
claims to use.
No behavior change.
The alpha channel of a lossy webp is always stored separately from
the (lossy) RGB data. Alpha is either compressed in a lossless webp
that stores just the alpha data, or it's stored completely
uncompressed. (But again, even if it's compressed, it's losslessly
compressed.)
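Sketched as a dispatch (the helper names are made up; the ALPH chunk
header's compression field is 0 for raw alpha and 1 for
lossless-coded alpha):
    if (alpha_compression_method == 0) {
        // Uncompressed: width * height raw alpha bytes follow the header.
        copy_raw_alpha_plane(alpha_bytes, bitmap);
    } else {
        // The alpha plane is a lossless webp bitstream that stores only
        // the alpha values -- so even here, the alpha data itself is
        // losslessly compressed.
        decode_lossless_alpha_bitstream(alpha_bytes, bitmap);
    }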
This adds a test for uncompressed alpha, which I hadn't tested before.
It seems to work correctly, though :^)
I generated the test image by running:
~/Downloads/libwebp-1.3.0-mac-arm64/bin/cwebp \
-alpha_method 0 \
Tests/LibGfx/test-inputs/extended-lossless.webp \
-o Tests/LibGfx/test-inputs/extended-lossy-uncompressed-alpha.webp
This image covers two things that aren't covered by the existing
tests, and I found it useful for testing locally. The image's license
allows redistributing it, so add it as a test case.
Since the color interpolation requires two pixels in each of the
horizontal and vertical directions to work, bitmaps that are 1 pixel
wide or high would cause a crash when scaling. Fix this by clamping
the index into the valid range.
Fixes #16047.
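A sketch of the clamping, with illustrative names:
    // For a bitmap that is only 1 pixel wide or high, the neighboring
    // sample used for interpolation would be out of bounds; clamp it
    // to the last valid row/column instead.
    int x1 = min(x0 + 1, source.width() - 1);
    int y1 = min(y0 + 1, source.height() - 1);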
Previously, calling `.right()` on a `Gfx::Rect` would return the last
column's coordinate still inside the rectangle, or `left + width - 1`.
This is called 'endpoint inclusive' and does not make a lot of sense for
`Gfx::Rect<float>` where a rectangle of width 5 at position (0, 0) would
return 4 as its right side. This same problem exists for `.bottom()`.
This changes `Gfx::Rect` to be endpoint exclusive, which gives us the
nice property that `width = right - left` and `height = bottom - top`.
It enables us to treat `Gfx::Rect<int>` and `Gfx::Rect<float>` exactly
the same.
All users of `Gfx::Rect` have been updated accordingly.
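For example, with the endpoint-exclusive behavior:
    Gfx::IntRect rect { 0, 0, 5, 3 };
    // rect.right()  == 5 == rect.left() + rect.width()
    // rect.bottom() == 3 == rect.top() + rect.height()
    // With the old endpoint-inclusive behavior, right() and bottom()
    // would have returned 4 and 2.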
Note that in some cases (in particular SQL::Result and PDFErrorOr),
there is no Formatter defined for the error type, hence TRY_OR_FAIL
cannot work as-is. Furthermore, this commit leaves untouched the places
where MUST could be replaced by TRY_OR_FAIL.
Inspired by:
https://github.com/SerenityOS/serenity/pull/18710#discussion_r1186892445
This implements conversion from profile connection space to the
device-dependent color for matrix-based profiles.
It only does the inverse color transform but does not yet do the
inverse tone reproduction curve transform -- i.e. it doesn't
implement many cases (LUT transforms), and it does the one thing
it does implement incorrectly. But to vindicate the commit a bit,
it also does the incorrect thing very inefficiently.
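As a rough sketch of the matrix half of the inverse transform (type
and variable names here are illustrative, not the actual code):
    // PCS XYZ -> linear device RGB via the inverted rXYZ/gXYZ/bXYZ
    // matrix; the inverse tone reproduction curves (the part that's
    // still wrong) would then map these linear values to the final
    // device color.
    auto to_device = forward_matrix.inverse(); // 3x3 matrix inverse
    auto linear_rgb = to_device * pcs_xyz;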
This can be used to convert a profile-dependent color to the L*a*b*
color space.
(I'd like to use this to implement the DeltaE (CIE 2000) algorithm,
which is a metric for how similarly two colors are perceived.
And I'd like to use that to evaluate color conversion roundtrip
quality, once I've implemented full conversions.)
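For reference, the standard CIE XYZ -> L*a*b* mapping relative to a
white point (Xn, Yn, Zn), sketched in code with illustrative names:
    // L* ends up in 0..100.
    auto f = [](float t) {
        float delta = 6.0f / 29.0f;
        if (t > delta * delta * delta)
            return cbrtf(t);
        return t / (3 * delta * delta) + 4.0f / 29.0f;
    };
    float L = 116 * f(Y / Yn) - 16;
    float a = 500 * (f(X / Xn) - f(Y / Yn));
    float b = 200 * (f(Y / Yn) - f(Z / Zn));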
WebP lossless files that use a color indexing transform with <= 16
colors use pixel bundling to pack 2, 4, or 8 pixels into a single pixel.
If the image's width doesn't happen to be an exact multiple of the
bundling factor, we need to:
1. Use ceil_div() instead of just dividing the width by the bundling
factor
2. Remember the original width and use it instead of computing
reduced width times bundling factor
This patch makes these changes and adds a simple test for them -- it
at least checks that the decoded images have the right size.
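Sketched out with illustrative names, the two points above amount to:
    // With e.g. 4 pixels per packed pixel and a width of 13, the
    // packed image is ceil_div(13, 4) == 4 pixels wide, but the
    // decoded bitmap must be 13 pixels wide, not 4 * 4 == 16.
    int packed_width = ceil_div(original_width, pixels_per_packed_pixel);
    // ...decode packed_width packed pixels per row, then expand back
    // to original_width pixels, ignoring the padding in the last
    // packed pixel.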
(I created these images myself in Photoshop, and used the same
technique as for Tests/LibGfx/test-inputs/catdog-alert-*.webp
to create images with a certain number of colors.)
For the test files, I opened Base/res/icons/catdog/alert.png in Adobe
Photoshop 2023, used Image->Mode->Index Color...->
Palette: Local (Perceptive) to reduce the number of colors to 13, 8, and
3 with transparency, and 2 without transparency, then converted it back
to Image->Mode->RGB Color (else it can't be saved as webp), then
File->Save a Copy... to save a WebP (mode lossless) for every palette
size.
The image is https://quakewiki.org/wiki/File:Qpalette.png in lossless
webp format with a color indexing transform.
I've created Qpalette.webp by running
examples/cwebp -z 0 ~/src/serenity/tmp.ppm -o Qpalette.webp
built at libwebp webmproject/libwebp@0825faa4c1 (without
png support, so I first ran
Build/lagom/image ~/Downloads/Qpalette.png -o tmp.ppm
to convert it from png to a format my cwebp binary could read).
This file also happens to explicitly set max_symbol, so it serves
as a test for that code path as well.
Support for non-interleaved scans, introduced in 2c98eff, was not
working for frames with an odd number of MCUs per line or column:
the decoder assumed that their scans include a fabricated MCU, like
scans with multiple components do.
This patch makes the decoder handle images with an odd number of MCUs
per line or column. To do so, since in the current state of the
decoder we do not know at allocation time whether components are
interleaved, we skip over the falsely-created macroblocks when filling
them. As stated in 2c98eff, this is probably not a good solution and a
whole refactor will be welcome.
It also comes with a test that opens a square image with a side of
600px, meaning 75 MCUs.
You can generate one by using `cjpeg` with the -scan argument.
This image has been generated with the following scan file:
0 1 2: 0 0 0 0;
0: 1 9 0 0;
2: 1 63 0 0 ;
1: 1 63 0 0 ;
0: 10 63 0 0;
This type of image isn't common, and you can probably only find one by
generating it yourself. It can be done using `cjpeg` with the -scan
argument.
This image has been generated with the following scan file:
0: 0 63 0 0;
1: 0 63 0 0;
2: 0 63 0 0;
Nobody made use of the ErrorOr return value and it just added more
chance of confusion, since it was not clear if failing to sniff an
image should return an error or false. The answer was false; if you
returned an Error, you'd crash the ImageDecoder.
Turns out extended-lossless-animated.webp did have a loop count of 0.
So I opened it in Hex Fiend and changed the byte at position 42
(which is the first byte of the little-endian u16 storing the loop
count) to 0x2A, so that the test can compare the loop count to something
not 0.
I drew the two webp files in Photoshop and saved them using the
"Save a Copy..." dialog, with ICC profile and all other boxes checked.
(I also tried saving with all the boxes unchecked, but it still wrote an
extended webp instead of a basic file.)
The lossless file exposed a bug: I didn't handle chunk padding
correctly before this patch.
The test verifies that loading an icc file and serializing it
again produces exactly the same output as the input. That's not
always the case, but it often is. It requires that the input file
either has no padding or uses null bytes as padding, that it puts
tag data in the order the tag data is referenced in the tag table,
and that it only uses known tag types (which at the moment means
it only works for v4 profiles, but that part will change in the
future).
The new file p3-v4.icc was extracted from a jpeg taken by an
iPhone Mini.
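As a sketch, the roundtrip check looks something like this (the exact
LibGfx ICC function names here are assumptions, not verbatim):
    // Load the profile, serialize it again, compare byte-for-byte.
    auto file = TRY(Core::MappedFile::map("p3-v4.icc"sv));
    auto profile = TRY(Gfx::ICC::Profile::try_load_from_externally_owned_memory(file->bytes()));
    auto serialized = TRY(Gfx::ICC::encode(*profile));
    EXPECT_EQ(serialized.bytes(), file->bytes());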
The patch also contains modifications to several classes, functions,
and files that are related to the `JPGLoader`.
The renaming includes:
- JPGLoader{.h, .cpp}
- JPGImageDecoderPlugin
- JPGLoadingContext
- JPG_DEBUG
- decode_jpg
- FuzzJPGLoader.cpp
- A few string literals and other text
icc-v4.jpg is Meta/Websites/serenityos.org/happy/3rd/bgianf.jpg.
There are a whole bunch of jpgs with v4 color profiles and I just picked
one fairly arbitrarily. It looks like a fairly standard v4 matrix
profile that in this form is also present in many jpgs taken by mobile
phone cameras. It uses parametric curves.
icc-v2.png is based on ./Documentation/WebServer_localhost.jpg since
that is the only image in the repo with a v2 color profile. It also has
all kinds of interesting and somewhat exotic tags, such as a 'dscm' (an
Apple extension to have a description of type 'mluc', since a normal
'desc' is required to have type 'desc' in v2 files -- in v4, 'desc' has
type 'mluc') tag of type 'mluc' that actually contains data in several
languages and that exercises the non-BMP UTF-16BE decoder. It's however
still also a fairly standard v2 matrix profile, which uses 'curv'
instead of 'para' for its curves ('para' is v4-only).
I converted that jpeg file to png, and cropped most of the image
data to save on file size by running:
sips -s format png --cropToHeightWidth 21 42 in.jpg --out out.png
Rather than reading files out of /res, put them in a subfolder of
Tests/LibGfx/ and pick the path based on AK_OS_SERENITY.
That way, the tests can also pass when run under lagom.
(I just `cp`d all the files that the test previously read from
random places into Tests/LibGfx/test-inputs.)
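A sketch of the path selection, with an illustrative macro name and
paths:
    #ifdef AK_OS_SERENITY
    #    define TEST_INPUT(x) ("/usr/Tests/LibGfx/test-inputs/" x)
    #else
    #    define TEST_INPUT(x) ("test-inputs/" x)
    #endif
    // e.g. TRY(Core::MappedFile::map(TEST_INPUT("4.webp"sv)))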
Rather than reading files out of /res, put them in a subfolder of
Tests/LibGfx/ and pick the path based on AK_OS_SERENITY.
That way, the tests can also pass when run under lagom.
- Use MUST() instead of checking plugin_decoder_or_error.is_error()
- Use MappedFile::bytes()
- Don't use EXPECT_EQ when comparing to fixed bools
No intended behavior change.
We should expect the sniffing method and the initialize method to
fail, because this test case checks that the ICO image decoder does
not decode random data.
When trying to figure out the correct implementation, we now have a
very clear distinction between plugins that are well suited for
sniffing and plugins that need a MIME type to be chosen.
Instead of having multiple calls to non-static virtual sniff methods
for each image decoding plugin, we have 2 static methods for each
implementation:
1. The sniff method, which in contrast to the old method, gets a
ReadonlyBytes parameter and ensures we can figure out the result
with zero heap allocations for most implementations.
2. The create method, which just creates a new instance so we don't
expose the constructor to everyone anymore.
In addition to that, we have a new virtual method called initialize,
which follows a per-implementation initialization pattern to ensure
that each implementation can construct a decoder object and then have
a correct context applied to it for the actual decoding.
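Roughly, each plugin now exposes an interface along these lines (a
sketch; the exact signatures are from memory and may differ):
    class ExampleImageDecoderPlugin final : public ImageDecoderPlugin {
    public:
        // Static and allocation-free: can this plugin handle these bytes?
        static ErrorOr<bool> sniff(ReadonlyBytes);
        // Static factory, so the constructor is no longer exposed.
        static ErrorOr<NonnullOwnPtr<ImageDecoderPlugin>> create(ReadonlyBytes);
        // Per-implementation setup of the decoding context.
        virtual ErrorOr<void> initialize() override;
        // ...
    };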
These instances were detected by searching for files that include
stdlib.h, but don't match the regex:
\\b(_abort|abort|abs|aligned_alloc|arc4random|arc4random_buf|arc4random_
uniform|atexit|atof|atoi|atol|atoll|bsearch|calloc|clearenv|div|div_t|ex
it|_Exit|EXIT_FAILURE|EXIT_SUCCESS|free|getenv|getprogname|grantpt|labs|
ldiv|ldiv_t|llabs|lldiv|lldiv_t|malloc|malloc_good_size|malloc_size|mble
n|mbstowcs|mbtowc|mkdtemp|mkstemp|mkstemps|mktemp|posix_memalign|posix_o
penpt|ptsname|ptsname_r|putenv|qsort|qsort_r|rand|RAND_MAX|random|reallo
c|realpath|secure_getenv|serenity_dump_malloc_stats|serenity_setenv|sete
nv|setprogname|srand|srandom|strtod|strtof|strtol|strtold|strtoll|strtou
l|strtoull|system|unlockpt|unsetenv|wcstombs|wctomb)\\b
(Without the linebreaks.)
This regex is pessimistic, so there might be more files that don't
actually use anything from the stdlib.
In theory, one might use LibCPP to detect things like this
automatically, but let's do this one step after another.
Adapt BMPImageDecoderPlugin to support BMP images included in ICOns.
ICOImageDecoderPlugin now uses BMPImageDecoderPlugin to decode all
BMP images instead of its own ad-hoc decoder which only supported
32 bpp BMPs.
We have a new, improved string type coming up in AK (OOM aware, no null
state), and while it's going to use UTF-8, the name UTF8String is a
mouthful - so let's free up the String name by renaming the existing
class.
Making the old one have an annoying name will hopefully also help with
quick adoption :^)
Otherwise, we end up propagating those dependencies into targets that
link against that library, which creates unnecessary link-time
dependencies.
Also included are changes to re-add now-missing dependencies to tools
that actually need them.
Each of these strings would previously rely on StringView's char const*
constructor overload, which would call __builtin_strlen on the string.
Since we now have operator ""sv, we can replace these with much simpler
versions. This opens the door to being able to remove
StringView(char const*).
No functional changes.
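For example (do_something is a placeholder, not a real call site):
    // Before: relied on the implicit StringView(char const*) overload,
    // which calls __builtin_strlen on the literal.
    do_something("some string literal");
    // After: operator""sv, so the length is known at compile time.
    do_something("some string literal"sv);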
We were passing raw Gfx::Bitmap objects into the various image decoders
instead of encoded image data. This made all of them fail, but the test
expectations were set up in a way that aligned with this outcome.
With this patch, we now test the codecs for real. Except ICO, since we
don't have an ICO file handy. That's a FIXME.
Using a file(GLOB) to find all the test files in a directory is an easy
hack to get things started, but has some drawbacks. Namely, if you add
a test, it won't be found again without re-running CMake. `ninja` seems
to do this automatically, but it would be nice to one day stop seeing it
rechecking our globbed directories.
After the changes to LibGfx to make default font management handled in
WindowServer instead of each GUI application to allow for global font
broadcasts, the two LibGfx tests broke. The non-benchmark was fixed in
8f96d2, but the benchmark was left in the dust because nobody really
runs it manually :^(
With the goal of centralizing all tests in the system, this is a
first step to establish a Tests sub-tree. It will contain all of
the unit tests and test harnesses for the various components in the
system.