Glyph was a simple structure, but by now it's become more complex than
it was initially. Turning it into a class hides some of that complexity,
and makes it easier to understand for external eyes.
While doing this I also decided to remove the float + bool combo for
keeping track of the glyph's width, and replaced it with an Optional
instead.
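A rough sketch of the shape this takes (member names here are illustrative, not the exact class definition):

    #include <AK/Optional.h>

    class Glyph {
    public:
        bool has_width() const { return m_width.has_value(); }
        float width() const { return m_width.value(); }
        void set_width(float width) { m_width = width; }

    private:
        // Previously a float plus a separate bool tracking whether it was set.
        Optional<float> m_width;
    };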
Storing glyphs indexed by char code in a Type1 Font Program binds a Font
Program instance to the particular Encoding that was used at Font
Program construction time. This makes it difficult to reuse Font Program
instances against different Encodings, which would otherwise be
possible.
This commit changes how we store the glyphs on Type1 Font Programs.
Instead of storing them in a map indexed by char code, the map is now
indexed by glyph name. When rendering a glyph, we use the Encoding
object to turn the char code into a glyph name, which is then used to
index into the map of glyphs.
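Roughly, the lookup now works like this (helper names are illustrative, not the exact LibPDF API):

    // Glyphs are keyed by glyph name instead of char code.
    HashMap<String, Glyph> m_glyphs;

    // Rendering a char code goes through the Encoding first.
    auto glyph_name = encoding.get_name(char_code);
    auto maybe_glyph = m_glyphs.get(glyph_name);
    if (maybe_glyph.has_value())
        draw_glyph(*maybe_glyph);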
This is the first step towards reusability of Type1 Font Programs. It
also unlocks the ability to render glyphs that are described via the
"seac" command (standard encoding accented character), which requires
accessing the base and accent glyphs by name.
This is useful in general (I'd imagine), but in particular having this
will allow us to implement accented PDF Type1 Font glyphs, which consist
of two separate glyphs that are composed into a single one.
When parsing streams we rely on a /Length item being defined in the
stream's dictionary to know how much data comprises the stream. Its
value is usually a direct value, but it can be indirect. There was
however a contradiction in the code: the condition that allowed it to
read and use the /Length value required it to be a direct value, but the
actual code using the value would have worked with indirect ones. This
meant that indirect /Length values triggered the fallback, "manual"
stream parsing code.
On the other hand, this latter code was also buggy, because it relied on
the "endstream" keyword to appear on a separate line, which isn't always
the case.
This commit fixes the bug in the manual stream parsing scenario, and
also allows indirect /Length values to be used to parse streams more
directly, avoiding the manual approach. The main caveat to this second
change is that for a brief period of time the Document is not able to
resolve references (i.e., before the xref table itself has been
parsed). Any parsing happening before that (e.g., the linearization
dictionary) must therefore use the manual stream parsing approach.
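Sketched out, the idea is roughly this (helpers like can_resolve_references() are made up for illustration; the real code differs):

    auto maybe_length = stream_dictionary->get("Length");
    if (maybe_length.has_value() && document->can_resolve_references()) {
        // Works for both direct and indirect /Length values.
        auto length = TRY(document->resolve_to<int>(*maybe_length));
        // ... read exactly `length` bytes of stream data ...
    } else {
        // Manual fallback: scan ahead for the "endstream" keyword, which
        // may or may not start on its own line.
    }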
Functionality for checking image size in advance has been removed, as
this would require a SeekableStream and we now check for read errors
everywhere anyway.
This is a very new tag used for HDR content. The only files I know that
use it are the JPEGs on https://ccameron-chromium.github.io/hdr-jpeg/
But they have an invalid ICC creation date, so `icc` can't process them.
(Commenting out the check for that does allow printing them.)
If the CICP tag is present, it takes precedence over the actual data
in the profile, and from what I understand, the ICC profile is
basically ignored. See https://www.color.org/events/HDR_experts.xalter
for background, in particular
https://www.color.org/hdr/02-Luke_Wallis.pdf (but the other talks
are very interesting too).
(PNG also has a cICP chunk that's supposed to take precedence over
iCCP.)
This isn't used by any mandatory tags, and it's not terribly useful.
But JPEGs exported by Lightroom Classic write the 'tech' tag, and
it seems nice to be able to dump its contents.
signatureType stores a single u32 which for different tags with this
type means different things.
In each case, the value is one from a short table of valid values,
suggesting this should be a per-tag enum class instead of a
per-tag DistinctFourCC, per the comment at the top of DistinctFourCC.h.
On the other hand, 3 of the 4 tables have an explicit "It is possible
that the ICC will define other signature values in the future" note,
which suggests the FourCC might actually be the way to go.
For now, just punt on that and manually dump the u32 in fourcc style
in icc.cpp and don't add any to_string() methods that return a readable
string based on the contents of these tables.
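What the fourcc-style dump amounts to, roughly (illustrative, not the exact icc.cpp code):

    #include <AK/Format.h>

    static void dump_signature_value(u32 value)
    {
        char c0 = static_cast<char>(value >> 24);
        char c1 = static_cast<char>((value >> 16) & 0xff);
        char c2 = static_cast<char>((value >> 8) & 0xff);
        char c3 = static_cast<char>(value & 0xff);
        outln("    signature: '{}{}{}{}' (0x{:08x})", c0, c1, c2, c3, value);
    }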
This is the type of namedColor2Tag, which is a required tag in
NamedColor profiles.
The implementation is pretty basic for now and only exposes the
numbers stored in the file directly (after endian conversion).
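Roughly, the per-color data this exposes (field names are illustrative; values are byte-swapped from the big-endian on-disk representation):

    #include <AK/Array.h>
    #include <AK/String.h>
    #include <AK/Vector.h>

    struct NamedColor {
        String name;                    // root prefix + per-color name
        Array<u16, 3> pcs_coordinates;  // PCS values, as stored in the file
        Vector<u16> device_coordinates; // one per device channel, if present
    };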
Though table wrappers are anonymous block containers (because
TableWrapper inherits from BlockContainer) with no lines, they should
not be skipped in block auto height calculation.
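Conceptually the exception amounts to something like this (not the actual LibWeb code):

    // Sketch: decide whether a child box is skipped when computing the
    // containing block's automatic height.
    static bool should_skip_for_auto_height(Web::Layout::Box const& child, bool child_has_lines)
    {
        // Anonymous block containers with no lines are normally skipped,
        // but table wrappers must still contribute to the height.
        return child.is_anonymous() && !child_has_lines && !is<Web::Layout::TableWrapper>(child);
    }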
This reverts the SystemServer exec() logic to how it was before
81bd91c, but now with some extra TRY()s. This allows the HOME var
to always be propagated from LoginServer which prevents needing
to unveil() /etc/passwd everywhere.
This was called from LibCore and passed raw StringView data that may
not be null terminated, then incorrectly passed those strings to
getenv() and also tried printing them with just the %s format
specifier.
Instead of iterating through the needle being searched one byte at a
time (like an ASCII string), we calculate its Unicode code points first
and then iterate through those.
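A rough sketch of the approach, using AK's Utf8View (the actual call sites differ):

    #include <AK/Utf8View.h>
    #include <AK/Vector.h>

    // `needle` is assumed to hold UTF-8 text.
    static Vector<u32> to_code_points(StringView needle)
    {
        Vector<u32> code_points;
        for (u32 code_point : Utf8View(needle))
            code_points.append(code_point);
        return code_points;
    }

The search then walks these code points, comparing them against the haystack's code points, instead of walking raw bytes.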
Template arguments are checked to ensure that the `Out` type is equal
to, or convertible to, the type returned by the invokee.
Compilation now fails on:
`Function<void()> f = []() -> int { return 0; };`
But this is allowed:
`Function<ErrorOr<int>()> f = []() -> int { return 0; };`
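A minimal sketch of the kind of compile-time check involved (the actual template machinery in AK/Function.h is more involved; names like CallableType and InArgs are placeholders):

    using Result = decltype(declval<CallableType>()(declval<InArgs>()...));
    static_assert(IsSame<Out, Result> || IsConvertible<Result, Out>,
        "Function was declared with a return type the callable cannot provide");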
It seems like a lot of (most?) places where InputBoxes are used check
that the retrieved string isn't empty anyway - make this reflected in
the user interface by disabling (graying out) the "OK" button when
nothing is entered, so that empty input isn't a viable option at all.
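Something along these lines (member names are illustrative):

    m_text_editor->on_change = [this] {
        m_ok_button->set_enabled(!m_text_editor->text().is_empty());
    };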
Similar to the return values earlier, a signed value doesn't really make
sense here. Relying on the much more standard `size_t` makes it easier
to use Stream in all contexts.
This function is extracted from what used to make up `set_file()`
(which now calls the new function). It allows creating a
`HexDocumentFile` without calling the hackish `set_file(move(m_file))`.
Before 6490529ef7, all programs in the
SPECIAL_TARGETS list were built because they didn't have an
EXCLUDE_FROM_ALL property set.
That commit set the property for all targets, but because of this, the
minimal "Required" build configuration no longer built, as CMake failed
to rename every special target, even the ones that were not built.
This commit makes the rename action run only if the executable exists,
which lets us build Serenity without the `install` utility, and also
with the minimal configuration set. :^)
This filesystem is based on the code of the long-lived TmpFS. It differs
from that filesystem in one key point - its root inode doesn't have a
sticky bit on it.
Therefore, we mount it on /dev, to ensure only root can modify files in
that directory. In addition to that, /tmp is mounted directly in the
SystemServer main (start) code, so it's no longer specified in the fstab
file. We ensure that /tmp has a sticky bit and has the value 0777 for
root directory permissions, which is certainly a special case when using
RAM-backed (and in general other) filesystems.
Because of these two changes, it's no longer needed to maintain the
TmpFS filesystem, hence it's removed (renamed to RAMFS), as that name
represents the purpose of this filesystem in a much better way - it
relies on being backed by RAM "storage", and therefore it's easy to
conclude it's temporary and volatile, so its contents are gone on either
system shutdown or unmounting of the filesystem.
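For illustration, mounting /tmp from SystemServer's startup code could look roughly like this (the exact calls and flags are assumptions, not the real code):

    TRY(Core::System::mount(-1, "/tmp"sv, "ram"sv, 0));
    TRY(Core::System::chmod("/tmp"sv, 01777)); // 0777 plus the sticky bit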
While the clipping logic was correct (current vs. new clipping path),
the clipping path contents weren't. This commit fixes that.
We calculate the clipping path in two places: when we set it to be the
whole page at graphics state creation time, and when we perform clipping
path intersection to calculate a new clipping path. The clipping path is
then used to limit painting by passing it to the painter (more
precisely, by passing its bounding box to the painter, as the latter
doesn't support arbitrary path clipping). For this last point the
clipping path must be in device coordinates.
There was however a mix of coordinate systems involved in the creation,
update and usage of the clipping path:
* The initial values of the path (i.e., the whole page) were in user
coordinates.
* Clipping path intersection was performed against m_current_path,
which is in device coordinates.
* To perform the clipping operation, the current clipping path was
assumed to be in user coordinates.
This mix resulted in the clipping not working correctly depending on the
zoom level at which one visualised a page.
This commit fixes the issue by always keeping track of the clipping path
in device coordinates. This means that the initial full-page contents
are now converted to device coordinates before putting them in the
graphics state, and that no mapping is performed when applying the
clipping to the painter.
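A minimal sketch of the new flow (names are illustrative):

    // At graphics state creation, the full-page path is converted to
    // device coordinates right away.
    graphics_state.clipping_paths.current = rect_path(page_rect).copy_transformed(ctm);

    // When limiting painting, no further mapping is needed; the bounding
    // box handed to the painter is already in device space.
    painter.add_clip_rect(enclosing_int_rect(
        graphics_state.clipping_paths.current.bounding_box()));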
Draw pieces around 80% of the size of a square, instead of 100%, so that
there is a nice gap around them. This feels more comfy, and makes it
actually possible to read the coordinates while a piece is on its
square.
Also, the Tree object was only ever used by the TreeMapWidget, so its
creation can happen inside `analyze()`, fixing the memory leak issue.
Plus, it doesn't have to be RefCounted.
This is a first pass at implementing CRC2D.createPattern() and the
associated CanvasPattern object. This implementation only works for a
few of the required image sources [like CRC2D.drawImage()], and does
not yet support transforms. Other than that it supports everything
else (which is mainly the various repeat modes).