Unlike most data in the CLDR, hour cycles are not stored on a per-locale
basis. Instead, they are keyed by a string that is usually a region, but
sometimes a locale. Therefore, to determine the hour cycles for a given
locale, we:
1. Check if the locale itself is assigned hour cycles.
2. If the locale has a region, check if that region is assigned hour
cycles.
3. Otherwise, maximize that locale, and if the maximized locale has
a region, check if that region is assigned hour cycles.
4. If all of the above fail, fall back to the "001" (world) region.
Further, each locale's default hour cycle is the first assigned hour
cycle.
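For illustration, the lookup order can be sketched as follows;
hour_cycles_for_identifier(), region_of() and maximize_locale() are
hypothetical stand-ins for the generated lookup functions:

    Vector<HourCycle> hour_cycles_for_locale(StringView locale)
    {
        // 1. The locale itself may be assigned hour cycles directly.
        if (auto cycles = hour_cycles_for_identifier(locale); cycles.has_value())
            return cycles.release_value();
        // 2. Fall back to the locale's region, if it has one.
        if (auto region = region_of(locale); region.has_value())
            if (auto cycles = hour_cycles_for_identifier(*region); cycles.has_value())
                return cycles.release_value();
        // 3. Maximize the locale (e.g. "en" -> "en-Latn-US"), try its region.
        if (auto region = region_of(maximize_locale(locale)); region.has_value())
            if (auto cycles = hour_cycles_for_identifier(*region); cycles.has_value())
                return cycles.release_value();
        // 4. Otherwise, use the world region "001".
        return hour_cycles_for_identifier("001"sv).release_value();
    }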
By chance, this hasn't mattered so far, because the source data for all
generated enums contains names of uniform case. But the enum generated
for hour cycle regions will have mixed-case names. Sort them
case-insensitively so that generate_enum and generate_mapping traverse
the names in the same order.
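Concretely, the sort amounts to something like this (using AK's
quick_sort; the actual generator code may differ):

    quick_sort(names, [](auto const& name1, auto const& name2) {
        return name1.to_lowercase() < name2.to_lowercase();
    });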
Similar to number formatting, the data for date-time formatting will be
located in its own generated file. This extracts the cldr-dates package
from the CLDR and sets up the generator plumbing to create the date-time
data files.
Currently, we generate separate data files for locale and number format
related tables/methods, but provide public accessors for all of the data
in one Locale.h file. Rather than continuing this trend for date-time,
relative time, etc. formatting, it's a bit easier to reason about if the
public accessors are also in separate files.
Allocating a Vector for each of these invocations is a bit silly when
the values are basically all compile-time arrays. This AO is used even
more heavily by Intl.DateTimeFormat, so change it to accept a Span to
reduce its cost.
This also adds an overload that accepts a fixed-size C-array, so call
sites do not have to spell out AK::Array, i.e. this:
get_option(..., AK::Array { "a"sv, "b"sv }, ...);
reduces to:
get_option(..., { "a"sv, "b"sv }, ...);
(which is how all call sites were already written when constructing a
Vector in place).
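A sketch of how the forwarding overload can work (signatures heavily
simplified, not the exact LibJS declarations):

    // The Span-taking overload does the real work.
    void get_option(Span<StringView const> values);

    // A braced list binds to the fixed-size C-array reference, which is
    // then wrapped in a Span; no Vector or Array is materialized.
    template<size_t N>
    void get_option(StringView const (&values)[N])
    {
        get_option(Span<StringView const> { values, N });
    }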
At the moment we just check whether we *can* render a simple triangle;
we do not yet verify that the resulting image is indeed the triangle we
wanted. When GL_DEBUG is enabled, this test also writes the rendered
image to a file called "picture.bmp" for manual verification.
Co-authored-by: sunverwerth <s.unverwerth@serenityos.org>
Gfx::Color implements IPC::[en|de]code functions, but we did not
actually link against LibIPC to resolve the needed symbols, and were
relying on LibGUI or others to link against it for us.
Having this linkage is unfortunate, but making the functions in question
static inline is sadly not possible: the includes this would require
cause the IPC pipeline to be initialized multiple times, which leads to
a compilation error.
Since AsyncIteratorClose and IteratorClose differ only in that the
async version awaits the inner value, we implement both with a shared
function and an enum flag that switches just that behavior.
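Sketched, the shared helper looks something like this (the enum and
function names are illustrative):

    enum class IteratorHint {
        Sync,
        Async,
    };

    // The async variant additionally awaits the value returned from the
    // iterator's return() method; everything else is identical.
    static ThrowCompletionOr<Value> close_inner_value(
        GlobalObject& global_object, Value inner_value, IteratorHint hint)
    {
        if (hint == IteratorHint::Async)
            return TRY(await(global_object, inner_value));
        return inner_value;
    }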
We were calling value() on an ErrorOr containing an error when trying
to extract the frame duration after a failed decode.
This fixes ImageDecoder crashing on various websites.
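The pattern of the bug, illustratively (decode_frame_duration() and
default_duration are hypothetical stand-ins):

    auto duration_or_error = decode_frame_duration(frame_index);
    // Before: calling value() on an ErrorOr that holds an error is a
    // VERIFY crash. After: check for the error and use a default instead.
    int duration = duration_or_error.is_error()
        ? default_duration
        : duration_or_error.value();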
We only showed frame times down to the millisecond, and our FPS counter
was based on that, allowing only a limited set of possible FPS values.
Convert these calculations to floating point so we get more useful FPS
and frame time values.
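In essence (variable names assumed):

    // Before: integer milliseconds quantized both values:
    //     u32 frame_time_ms = elapsed_ms;
    //     u32 fps = 1000 / frame_time_ms;
    // After: compute in floating point for finer-grained readings.
    double frame_time_ms = static_cast<double>(elapsed_ns) / 1'000'000.0;
    double fps = 1000.0 / frame_time_ms;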
For as long as possible, entire decoded-frame sample vectors are moved
into the output vector, leading to speedups of up to 20% by avoiding
memmoves in take_first.
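Roughly, the fast path is (simplified, names assumed):

    while (!decoded_frames.is_empty()
        && output.size() + decoded_frames.first().size() <= samples_requested) {
        // Move the whole frame's sample vector in one go instead of
        // take_first()-ing individual samples, which memmoves the rest.
        output.extend(decoded_frames.take_first());
    }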
Previously, Vector::extend for a moved vector would move the other
vector into this vector whenever this vector was empty, thereby throwing
away any capacity this vector had already allocated. This commit
therefore only performs the move if this vector's capacity is too small
to fit the other vector. This also alleviates bugs where callers relied
on the capacity never shrinking across calls to unchecked_append, extend
and the like.
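In simplified form, the new behavior is (the real AK::Vector code
differs in details):

    void extend(Vector&& other)
    {
        // Only steal the other vector's buffer if ours is too small to
        // fit its elements anyway; otherwise keep our capacity.
        if (is_empty() && capacity() < other.size()) {
            *this = move(other);
            return;
        }
        grow_capacity(size() + other.size());
        for (auto& value : other)
            unchecked_append(move(value));
        other.clear();
    }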
This mirrors the existence of append() for data pointers and is very
useful when the program needs to have a guarantee of no allocations,
as is necessary for real-time audio.
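Illustrative use on a real-time path (the overload's exact signature is
an assumption here):

    Vector<Sample> mix_buffer;
    mix_buffer.ensure_capacity(samples.size()); // allocate up front
    // Later, on the real-time path: guaranteed not to allocate.
    mix_buffer.unchecked_append(samples.data(), samples.size());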
Previously, the loader and its plugins used libc-style out-of-line
error information. Now, all functions that may fail to do their job
return some sort of Result. The universally used error type is the new
LoaderError, which can contain information about the general error
category (such as file format, I/O, unimplemented features), an error
description, and location information, such as file index or sample
index.
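The rough shape of LoaderError as described above (field and category
names approximate):

    struct LoaderError {
        enum class Category {
            Unknown,
            IO,            // I/O error from the underlying stream
            Format,        // the data violates the file format spec
            Unimplemented, // the feature is not (yet) supported
        };
        Category category { Category::Unknown };
        String description;
        size_t index { 0 }; // file or sample index where the error occurred
    };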
Additionally, the loader plugins try to do as little work as possible in
their constructors. Right after being constructed, a user should call
initialize() and check the errors returned from there. (This is done
transparently by Loader itself.) If a constructor caused an error, the
call to initialize should check and return it immediately.
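The two-phase construction pattern, sketched (names approximate):

    auto plugin = make<FlacLoaderPlugin>(path);
    // The constructor does as little as possible; all fallible work
    // happens in initialize(), whose error is checked and propagated.
    if (auto result = plugin->initialize(); result.is_error())
        return result.release_error();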
This opportunity was used to rework a lot of the internal error
propagation in both loader classes, especially FlacLoader. Therefore, a
couple of other refactorings may have sneaked in as well.
The adoption by LibAudio's users is minimal: Piano's adoption is not
important, as that code will receive major refactoring in the near
future anyway. SoundPlayer's adoption is also less important, as changes
to refactor it are in the works as well. aplay's adoption is the most
thorough and may serve as an example for other users; it also includes
new buffering behavior.
Buffer also gets some attention, making it OOM-safe and thereby also
propagating its errors to the user.
This consists of two changes: First, a utility function create_empty
allows the user to quickly create an empty buffer. Second, most creation
functions now return a NonnullRefPtr, as their failure causes a VERIFY
crash anyway.
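For illustration, the helper amounts to something like this (the
constructor arguments are elided and assumed):

    // Inside Buffer:
    static NonnullRefPtr<Buffer> create_empty()
    {
        // An empty buffer has no samples to allocate, so creation cannot
        // fail and a NonnullRefPtr can be returned directly.
        return adopt_ref(*new Buffer({}));
    }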
Decoding the residual in FLAC subframes is by far the most I/O-heavy
operation in FLAC decoding, as the residual data makes up the majority
of subframe data in LPC subframes. As the residual consists of many
Rice-encoded numbers with different bit sizes for differently large
numbers, the residual decoder frequently reads only one or two bytes at
a time. As we use a normal FileInputStream, that directly translates to
many calls to the read() syscall. We can see that the I/O overhead
while decoding FLAC is quite large, and much time is spent in the
read() syscall's kernel code.
This is optimized by using a Buffered<FileInputStream> instead, leading
to 4K blocks being read at once and a large reduction in I/O overhead.
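The change itself is small (construction details illustrative):

    // Before: every tiny Rice-code read went straight to read():
    //     FileInputStream stream { file };
    // After: reads are served from a 4K buffer, batching the syscalls.
    Buffered<FileInputStream> stream { file };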
Benchmarking with the new abench utility gives a 15-20% speedup on
identical files, usually pushing FLAC decoding to 10-15x realtime speed
on common sample rates.