Unfortunately adopt_ref requires a reference, which obviously does not
work well when attempting to harden against allocation failure.
The adopt_ref_if_nonnull() variant will allow you to avoid using bare
pointers, while still allowing you to handle allocation failure.
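A minimal usage sketch, assuming adopt_ref_if_nonnull() returns a
RefPtr that is null when the allocation failed (the 'Thing' class is
made up for illustration):

    #include <AK/RefCounted.h>
    #include <AK/RefPtr.h>
    #include <new>

    // 'Thing' is a hypothetical ref-counted class, used only for this sketch.
    class Thing : public RefCounted<Thing> {
    };

    RefPtr<Thing> try_create_thing()
    {
        // new (nothrow) yields nullptr on allocation failure instead of
        // aborting; adopt_ref_if_nonnull() then hands back a null RefPtr,
        // so the caller never has to juggle a bare pointer.
        return adopt_ref_if_nonnull(new (std::nothrow) Thing);
    }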
Unfortunately adopt_own requires a reference, which obviously does not
work well when attempting to harden against allocation failure.
The adopt_own_if_nonnull() variant will allow you to avoid using bare
pointers, while still allowing you to handle allocation failure.
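The owning-pointer variant follows the same pattern; again a sketch,
assuming adopt_own_if_nonnull() returns an OwnPtr that is null on
failure (the 'Buffer' struct is illustrative):

    #include <AK/OwnPtr.h>
    #include <AK/Types.h>
    #include <new>

    // Hypothetical value type for this sketch.
    struct Buffer {
        u8 data[4096];
    };

    OwnPtr<Buffer> try_create_buffer()
    {
        // A failed allocation simply comes back as a null OwnPtr.
        return adopt_own_if_nonnull(new (std::nothrow) Buffer);
    }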
This patch adds two new methods to LexicalPath. LexicalPath::append
appends a new path component to a LexicalPath, and LexicalPath::join
constructs a new LexicalPath from one or more components.
Co-authored-by: Gunnar Beutner <gunnar@beutner.name>
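A rough usage sketch with made-up paths (exact signatures may differ
slightly):

    #include <AK/LexicalPath.h>

    void lexical_path_example()
    {
        // append() adds one more component to an existing path.
        LexicalPath config_dir("/home/anon/.config");
        config_dir.append("MyApp"); // now "/home/anon/.config/MyApp"

        // join() builds a new LexicalPath from several components at once.
        auto settings = LexicalPath::join("/home/anon", ".config",
            "MyApp", "settings.ini");
        // settings.string() == "/home/anon/.config/MyApp/settings.ini"
    }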
The get_dir_entries syscall failed if the serialized form of all the
directory entries together was too large to fit in its temporary buffer.
Now the kernel uses a fixed-size buffer that is flushed to an output
buffer when it is full. If this flushing operation fails because there
is not enough space available, the syscall will return -EINVAL. That
error code is then used in userspace as a signal to allocate a larger
buffer and retry the syscall.
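In userspace this boils down to a grow-and-retry loop along these lines
(just a sketch; the get_dir_entries() wrapper signature and the 4 KiB
starting size are assumptions):

    #include <cerrno>
    #include <cstddef>
    #include <vector>

    // Assumed LibC-style wrapper for the syscall (hypothetical signature):
    // returns the number of bytes written, or -1 with errno set on failure.
    extern "C" int get_dir_entries(int fd, char* buffer, size_t buffer_size);

    std::vector<char> read_all_dir_entries(int fd)
    {
        std::vector<char> buffer(4096);
        for (;;) {
            int nread = get_dir_entries(fd, buffer.data(), buffer.size());
            if (nread >= 0) {
                buffer.resize(static_cast<size_t>(nread));
                return buffer; // success: buffer holds the serialized entries
            }
            if (errno != EINVAL)
                return {}; // unrelated error; give up
            buffer.resize(buffer.size() * 2); // EINVAL == "too small": retry bigger
        }
    }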
This allows the construction of `Variant<int, int, int>`.
While this might not seem useful at first, it comes in handy for making
variants that contain a series of member function pointers, which I plan
to use in LibGL for glGenLists() and co.
typeid() and RTTI were a nice crutch for implementing this, but let's
move away from the horrible slowness and implement variants using type
indices instead.
This commit introduces the ability to parse the document catalog dict,
as well as the page tree and individual pages. Pages obviously aren't
fully parsed, as we won't care about most of the fields until we
start actually rendering PDFs.
One of the primary benefits of the PDF format is laziness. PDFs are
not meant to be parsed all at once, and the same is true for pages.
When a Document is constructed, it builds a map of page number to
object index, but it does not fetch and parse any of the pages. A page
is only parsed when a caller requests that particular page (and is then
cached for subsequent requests).
This commit also adds an object_cast function, which logs bad casts if
DEBUG_PDF is set, as well as utility functions on ArrayObject and
DictObject for fetching objects of each type from those collections
without having to manually cast.
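The lazy page handling is essentially a small cache keyed by page
number; roughly like this sketch (the type and member names here are
illustrative, not LibPDF's actual API):

    #include <cstddef>
    #include <memory>
    #include <unordered_map>

    // Illustrative stand-in for the real LibPDF page type.
    struct Page { /* parsed page state */ };

    class LazyDocument {
    public:
        // A page is parsed the first time it is requested and cached
        // for later calls.
        std::shared_ptr<Page> get_page(size_t page_number)
        {
            if (auto it = m_parsed_pages.find(page_number); it != m_parsed_pages.end())
                return it->second;
            auto page = parse_page(m_page_object_indices.at(page_number));
            m_parsed_pages.emplace(page_number, page);
            return page;
        }

    private:
        std::shared_ptr<Page> parse_page(size_t /*object_index*/)
        {
            // The real implementation would fetch and parse the page object here.
            return std::make_shared<Page>();
        }

        std::unordered_map<size_t, size_t> m_page_object_indices; // built at construction
        std::unordered_map<size_t, std::shared_ptr<Page>> m_parsed_pages; // filled on demand
    };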
This can currently parse a really simple module.
Note that it cannot parse the DataCount section, and it's still missing
almost all of the instructions.
This commit also adds a 'wasm' test utility that tries to parse a given
WebAssembly binary file.
It currently does nothing but exit when the parse fails, but it's a
start :^)
This enables us to use keys of type NonnullRefPtr in HashMaps and
HashTables.
This commit also includes fixes in various places that used
HashMap<T, NonnullRefPtr<U>>::get() and expected to get an
Optional<NonnullRefPtr<U>> and now get an Optional<U*>.
Additionally, the const version of get() returns Optional<ConstPeekType>
for smart pointers.
For example, if the value in the HashMap is OwnPtr<u32>,
HashMap::get() const returns Optional<const u32*>.
Also, the PeekType of smart pointers is now T* instead of const T*.
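For instance, a lookup in a map of NonnullRefPtrs now looks roughly
like this (Widget and do_something() are made up for this sketch):

    #include <AK/HashMap.h>
    #include <AK/NonnullRefPtr.h>
    #include <AK/RefCounted.h>

    // Hypothetical ref-counted value type for this sketch.
    class Widget : public RefCounted<Widget> {
    public:
        void do_something() { }
    };

    void lookup_example(HashMap<int, NonnullRefPtr<Widget>>& widgets)
    {
        // get() now peeks at the stored smart pointer and returns a raw
        // pointer, i.e. Optional<Widget*> rather than
        // Optional<NonnullRefPtr<Widget>>.
        auto widget = widgets.get(1);
        if (widget.has_value())
            widget.value()->do_something();
        // On a const HashMap the same call would yield Optional<Widget const*>.
    }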
Note: This commit doesn't compile, it breaks HashMap::get() for some
types. Fixed in the next commit.
This currently (obviously) doesn't support any actual 3D hardware,
hence all calls are done via software rendering.
Note that any modern constructs such as shaders are unsupported,
as this driver only implements Fixed Function Pipeline functionality.
The library is split into a base GLContext interface and a
software-based renderer implementation of said interface. The global
glXXX functions serve as an OpenGL-compatible C-style interface to the
currently bound context instance.
Co-authored-by: Stephan Unverwerth <s.unverwerth@gmx.de>
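The dispatch pattern looks roughly like this simplified sketch (the
real LibGL types and method names may differ):

    #include <cstdint>

    // Interface that a renderer implements; only one call is shown here.
    class GLContext {
    public:
        virtual ~GLContext() = default;
        virtual void gl_clear(uint32_t mask) = 0;
    };

    // Software rasterizer implementation; no 3D hardware is involved.
    class SoftwareGLContext final : public GLContext {
    public:
        void gl_clear(uint32_t mask) override
        {
            (void)mask; // clear the software framebuffer here
        }
    };

    // The currently bound context instance.
    static GLContext* g_gl_context { nullptr };

    // C-style entry point: a thin shim that forwards to the bound context.
    extern "C" void glClear(uint32_t mask)
    {
        if (g_gl_context)
            g_gl_context->gl_clear(mask);
    }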
Also adds an AK::Empty struct, because 'empty' variants are useful, but
this implementation leaves that to the user (i.e. a variant cannot
actually be empty, but it can contain an instance of Empty, which is
just a byte).
Note that this is more of a constrained Any type, but they basically do
the same things anyway :^)
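In practice that looks something like this sketch (assuming the value
constructor and the has<T>() helper):

    #include <AK/Variant.h>

    // The variant always holds *something*, but that something can be an
    // Empty, which is just a one-byte marker type.
    using MaybeInt = AK::Variant<AK::Empty, int>;

    MaybeInt make_value(bool have_value)
    {
        if (have_value)
            return MaybeInt { 42 };
        return MaybeInt { AK::Empty {} };
    }

    bool holds_int(MaybeInt const& value)
    {
        return value.has<int>();
    }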
Adding -fno-semantic-interposition to the GCC command
line caused this new warning.
I don't see how output.data() could be uninitialized here.
Commenting out the ensure_capacity() call for the Vector also gets rid
of this warning.
This allows everybody to create a String version of their number in an
arbitrary bijective base. A bijective base is one whose digit mapping
has no 0; with the usual mapping to the alphabet, the successor of 'Z'
is 'AA'.
The (uppercase) alphabet is used as the default mapping, but it can be
overridden by specifying 'base' and 'map'.
The code was directly yanked from the Spreadsheet.
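For reference, bijective base-26 conversion boils down to this little
loop (a standalone sketch, not the actual String API):

    #include <algorithm>
    #include <cstdint>
    #include <string>

    // Bijective base-26 (no zero digit): 1 -> "A", 26 -> "Z", 27 -> "AA".
    std::string to_bijective_base26(uint64_t value)
    {
        std::string digits;
        while (value > 0) {
            value -= 1;                           // digits run 1..26, so shift down
            digits.push_back('A' + (value % 26));
            value /= 26;
        }
        // Digits were emitted least-significant first.
        std::reverse(digits.begin(), digits.end());
        return digits;
    }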
We had some inconsistencies before:
- Sometimes "The", sometimes "the"
- Sometimes trailing ".", sometimes no trailing "."
I picked the most common one (lowercase "the", trailing ".") and applied
it to all copyright headers.
By using the exact same string everywhere we can ensure nothing gets
missed during a global search (and replace), and that these
inconsistencies are not spread any further (as copyright headers are
commonly copied to new files).
When the two chosen pivots happen to be the smallest and largest
elements of the array, three partitions will be created, two of
size 0 and one of size n-2. If this happens on each recursive call
to dual_pivot_quick_sort, the stack depth will reach approximately n/2.
To keep the stack from deepening, iteration can be used for the
largest of the three partitions. This ensures the stack depth
will only increase for partitions of size n/2 or smaller, which
results in a maximum stack depth of log(n).
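The idea in isolation, shown here with a single pivot for brevity
rather than the actual dual-pivot code (recurse into the smaller
partition, iterate on the larger one):

    #include <utility>
    #include <vector>

    // Lomuto partition around the last element; returns the pivot's final index.
    static int partition(std::vector<int>& a, int low, int high)
    {
        int pivot = a[high];
        int i = low;
        for (int j = low; j < high; ++j) {
            if (a[j] < pivot)
                std::swap(a[i++], a[j]);
        }
        std::swap(a[i], a[high]);
        return i;
    }

    // Recurse only into the smaller partition and loop on the larger one,
    // keeping the stack depth at O(log n) even for adversarial inputs.
    void quick_sort_bounded_stack(std::vector<int>& a, int low, int high)
    {
        while (low < high) {
            int p = partition(a, low, high);
            if (p - low < high - p) {
                quick_sort_bounded_stack(a, low, p - 1); // smaller left side: recurse
                low = p + 1;                             // larger right side: iterate
            } else {
                quick_sort_bounded_stack(a, p + 1, high); // smaller right side: recurse
                high = p - 1;                             // larger left side: iterate
            }
        }
    }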
Picking the first and last elements as pivots makes it so that
a sorted array is the worst-case input for the algorithm.
This change instead picks pivots at approximately 1/3 and 2/3 in
the array. This results in the desired performance for sorted arrays.
Of course this only changes which inputs result in worst-case
performance, but hopefully those inputs occur less frequently than
already sorted arrays.
As many macros as possible are moved to Macros.h, while the
macros to create a test case are moved to TestCase.h. TestCase is now
the only user-facing header for creating a test case. TestSuite and its
helpers have moved into a .cpp file. Instead of requiring a TEST_MAIN
macro to be instantiated into the test file, a TestMain.cpp file is
provided instead that will be linked against each test. This has the
side effect that, if we wanted to have test cases split across multiple
files, it's as simple as adding them all to the same executable.
The test main should be portable to kernel mode as well, so if
there's a set of tests that should be run in self-test mode in kernel
space, we can accommodate that.
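A test file then reduces to something like this (a sketch, assuming the
existing TEST_CASE and EXPECT_EQ macros keep their names and the header
lives at LibTest/TestCase.h):

    #include <AK/Vector.h>
    #include <LibTest/TestCase.h>

    // No TEST_MAIN here: the boilerplate main() is linked in from TestMain.cpp.
    TEST_CASE(vector_append)
    {
        Vector<int> ints;
        ints.append(1);
        ints.append(2);
        EXPECT_EQ(ints.size(), 2u);
    }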
A new serenity_test CMake function streamlines adding a new test, with
arguments for the test source file, the subdirectory under /usr/Tests
to install the test application into, and an optional list of libraries
to link the test application against. To accommodate future tests where
the provided TestMain.cpp is not suitable (e.g. test-js), a CUSTOM_MAIN
parameter can be passed to the function to not link against the
boilerplate main function.
This is useful for CI where we don't want to spend a minute and a half
benchmarking Vector::append, and we don't have a good way to pass
test-specific arguments yet. :)
C++20 added std::source_location, which lets you capture the caller's
__FILE__ / __LINE__ / __FUNCTION__ etc. as a default argument to
functions.
See: https://en.cppreference.com/w/cpp/utility/source_location
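For example, a function with a defaulted source_location parameter sees
its caller's location rather than its own (a standalone sketch; the
lock() helper here is hypothetical):

    #include <cstdio>
    #include <source_location>

    // Hypothetical lock() helper: the defaulted source_location parameter
    // is evaluated at the *call site*, so it captures the caller's file,
    // line and function without any macros.
    void lock(char const* lock_name,
        std::source_location location = std::source_location::current())
    {
        std::printf("%s acquired at %s:%u in %s\n",
            lock_name,
            location.file_name(),
            static_cast<unsigned>(location.line()),
            location.function_name());
    }

    int main()
    {
        lock("m_big_lock"); // prints this file, this line, and the caller's name
    }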
During a bug investigation @ADKaster suggested we could use this
to make the LOCK_DEBUG feature of the kernel more user-friendly
and allow it to automatically instrument all call sites.
We then implemented / tested it over Discord. :^)
Co-authored-by: Andrew Kaster <andrewdkaster@gmail.com>