This commit initializes the LibVideo library and implements parsing of
basic Matroska container files. Currently, it will only parse audio
and video tracks.
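For illustration, here is a minimal sketch of the EBML variable-length
integer ("VINT") decoding that Matroska element IDs and sizes are built
on; the names and types are hypothetical, not LibVideo's actual API:

#include <cstddef>
#include <cstdint>
#include <optional>
#include <span>

// Decode one EBML variable-length integer. The number of leading zero bits
// in the first byte gives the total length; for sizes, the marker bit is
// stripped from the value.
std::optional<uint64_t> decode_vint(std::span<const uint8_t> bytes, size_t& offset)
{
    if (offset >= bytes.size())
        return std::nullopt;
    uint8_t first = bytes[offset];
    if (first == 0)
        return std::nullopt; // lengths above 8 bytes are not valid
    size_t length = 1;
    uint8_t mask = 0x80;
    while (!(first & mask)) {
        ++length;
        mask >>= 1;
    }
    if (offset + length > bytes.size())
        return std::nullopt;
    uint64_t value = first & (mask - 1); // strip the marker bit
    for (size_t i = 1; i < length; ++i)
        value = (value << 8) | bytes[offset + i];
    offset += length;
    return value;
}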
This adds a new URL parser, which aims to be compliant with the URL
specification (https://url.spec.whatwg.org/). It also contains a
rudimentary data URL parser.
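As a rough illustration of what the data URL parser has to handle
(hypothetical names, not the library's actual interface), a data: URL of
the form "data:[<mediatype>][;base64],<data>" can be split like this:

#include <optional>
#include <string>
#include <string_view>

struct DataURL {
    std::string mime_type;
    bool is_base64 { false };
    std::string payload; // still base64-/percent-encoded at this point
};

std::optional<DataURL> parse_data_url(std::string_view input)
{
    constexpr std::string_view prefix = "data:";
    if (!input.starts_with(prefix))
        return std::nullopt;
    input.remove_prefix(prefix.size());

    auto comma = input.find(',');
    if (comma == std::string_view::npos)
        return std::nullopt;

    DataURL result;
    auto header = input.substr(0, comma);
    result.payload = std::string(input.substr(comma + 1));

    constexpr std::string_view base64_suffix = ";base64";
    if (header.ends_with(base64_suffix)) {
        result.is_base64 = true;
        header.remove_suffix(base64_suffix.size());
    }
    // The URL standard defaults the media type when it is omitted.
    result.mime_type = header.empty() ? "text/plain;charset=US-ASCII" : std::string(header);
    return result;
}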
These dbgln() calls caused excessive load in the WebServer process,
accounting for ~67% of the processing time when serving a webpage
with many resources, such as serenityos.org/happy/2nd/.
As the parser now flattens out the instructions and inserts synthetic
nesting/structured instructions where needed, we can treat the whole
thing as a simple parsed bytecode stream.
This currently knows how to execute the following instructions:
- unreachable
- nop
- local.get
- local.set
- {i,f}{32,64}.const
- block
- loop
- if/else
- branch / branch_if
- i32_add
- i32_and/or/xor
- i32_ne
This also extends the 'wasm' utility so it can optionally execute the
first function in the module, with user-supplied arguments if any are
given.
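To illustrate why a flattened stream is convenient (toy types only, not
LibWasm's actual representation): interpretation becomes a plain loop over
instructions operating on a value stack and a set of locals.

#include <cstdint>
#include <vector>

enum class Op { I32Const, I32Add, LocalGet, LocalSet, Nop };

struct Insn {
    Op op;
    int32_t arg { 0 }; // immediate: constant value or local index
};

int32_t run(std::vector<Insn> const& code, std::vector<int32_t> locals)
{
    std::vector<int32_t> stack;
    for (auto const& insn : code) {
        switch (insn.op) {
        case Op::I32Const:
            stack.push_back(insn.arg);
            break;
        case Op::I32Add: {
            auto rhs = stack.back();
            stack.pop_back();
            stack.back() += rhs;
            break;
        }
        case Op::LocalGet:
            stack.push_back(locals[insn.arg]);
            break;
        case Op::LocalSet:
            locals[insn.arg] = stack.back();
            stack.pop_back();
            break;
        case Op::Nop:
            break;
        }
    }
    return stack.empty() ? 0 : stack.back();
}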
This commit replaces the former hand-written parser with a new one that
can be generated automatically from a state change diagram.
The new `EscapeSequenceParser` class provides a more ergonomic interface
for dealing with escape sequences. This interface was inspired by
Alacritty's [vte library](https://github.com/alacritty/vte/).
I tried to avoid changing the application logic inside the `Terminal`
class. While this code has not been thoroughly tested, I can't find
regressions in the basic command line utilities or `vttest`.
`Terminal` now displays nicer debug messages when it encounters an
unknown escape sequence. Defensive programming and bounds checks have
been added where we access parameters, and as a result, we can now
endure 4-5 seconds of `cat /dev/urandom`. :D
We generate EscapeSequenceStateMachine.h when building the in-kernel
LibVT, and we assume that the file is already in place when the userland
library is being built. This will probably cause problems later on, but
I can't find a way to do it nicely.
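For a rough idea of the shape of the interface (names here are
illustrative and borrowed from the vte model rather than copied from
LibVT): the generated state machine consumes one byte at a time and calls
back into an executor once it has recognized a complete sequence.

#include <cstdint>
#include <vector>

class EscapeSequenceExecutor {
public:
    virtual ~EscapeSequenceExecutor() = default;
    virtual void emit_code_point(uint32_t code_point) = 0;       // printable text
    virtual void execute_control_code(uint8_t code) = 0;         // e.g. '\n', '\r'
    virtual void dispatch_csi(std::vector<unsigned> const& params, uint8_t final_byte) = 0;
    virtual void dispatch_esc(uint8_t final_byte) = 0;
};

// Feeding input then looks roughly like:
//     for (uint8_t byte : input)
//         parser.on_input(byte); // may invoke the executor callbacks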
This commit introduces the ability to parse the document catalog dict,
as well as the page tree and individual pages. Pages obviously aren't
fully parsed, as we won't care about most of the fields until we
start actually rendering PDFs.
One of the primary benefits of the PDF format is that it lends itself to
laziness: PDFs are not meant to be parsed all at once, and the same is
true for pages. When a Document is constructed, it builds a map of page
number to object index, but it does not fetch or parse any of the pages.
A page is only parsed when a caller requests that particular page, and it
is then cached going forward.
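A minimal sketch of that lazy lookup (hypothetical types, not LibPDF's
actual API) looks roughly like this:

#include <cstdint>
#include <optional>
#include <unordered_map>

struct Page { /* parsed page fields */ };

class Document {
public:
    std::optional<Page> get_page(uint32_t page_number)
    {
        if (auto cached = m_page_cache.find(page_number); cached != m_page_cache.end())
            return cached->second;                        // already parsed
        auto it = m_page_object_indices.find(page_number);
        if (it == m_page_object_indices.end())
            return std::nullopt;                          // no such page
        Page page = parse_page_object(it->second);
        m_page_cache.emplace(page_number, page);          // cache going forward
        return page;
    }

private:
    Page parse_page_object(uint32_t /* object_index */) { return {}; } // stand-in

    std::unordered_map<uint32_t, uint32_t> m_page_object_indices; // built up front
    std::unordered_map<uint32_t, Page> m_page_cache;              // filled lazily
};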
This commit also adds an object_cast function which logs bad casts if
DEBUG_PDF is set. In addition, utility functions were added to ArrayObject
and DictObject to get objects of the various types out of those
collections without having to cast manually.
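The object_cast idea, sketched with standard types (illustrative only; the
real code uses the library's own smart pointers and debug logging):

#include <cstdio>
#include <memory>

struct Object {
    virtual ~Object() = default;
};

template<typename To>
std::shared_ptr<To> object_cast(std::shared_ptr<Object> const& object)
{
    auto casted = std::dynamic_pointer_cast<To>(object);
#ifdef DEBUG_PDF
    if (object && !casted)
        std::fprintf(stderr, "object_cast: unexpected object type\n");
#endif
    return casted;
}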
This can currently parse a really simple module.
Note that it cannot parse the DataCount section, and it's still missing
almost all of the instructions.
This commit also adds a 'wasm' test utility that tries to parse a given
WebAssembly binary file.
It currently does nothing but exit when the parse fails, but it's a
start :^)
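For context (illustrative helpers, not the parser's actual code): a wasm
binary starts with the magic "\0asm" and a version, followed by sections
that each carry a one-byte id and a LEB128-encoded size, which is what
makes skipping unhandled sections straightforward.

#include <cstddef>
#include <cstdint>
#include <optional>
#include <span>

std::optional<uint64_t> read_unsigned_leb128(std::span<const uint8_t> bytes, size_t& offset)
{
    uint64_t result = 0;
    unsigned shift = 0;
    while (offset < bytes.size()) {
        uint8_t byte = bytes[offset++];
        result |= static_cast<uint64_t>(byte & 0x7f) << shift;
        if (!(byte & 0x80))
            return result;
        shift += 7;
        if (shift >= 64)
            break; // malformed: too many continuation bytes
    }
    return std::nullopt;
}

struct SectionHeader {
    uint8_t id;    // 0 = custom, 1 = type, ..., 12 = data count
    uint64_t size; // payload size in bytes
};

std::optional<SectionHeader> read_section_header(std::span<const uint8_t> bytes, size_t& offset)
{
    if (offset >= bytes.size())
        return std::nullopt;
    uint8_t id = bytes[offset++];
    auto size = read_unsigned_leb128(bytes, offset);
    if (!size.has_value())
        return std::nullopt;
    return SectionHeader { id, *size };
}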
This currently (obviously) doesn't support any actual 3D hardware,
hence all calls are done via software rendering.
Note that any modern constructs such as shaders are unsupported,
as this driver only implements Fixed Function Pipeline functionality.
The library is split into a base GLContext interface and a software-based
renderer implementation of said interface. The global glXXX functions
serve as an OpenGL-compatible C-style interface to the currently bound
context instance.
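A minimal sketch of that dispatch pattern (names are illustrative, not the
library's actual declarations):

#include <cassert>

class GLContext {
public:
    virtual ~GLContext() = default;
    virtual void gl_clear(unsigned mask) = 0;
    // ... one virtual per supported GL call ...
};

static GLContext* g_gl_context = nullptr;

void make_context_current(GLContext& context) { g_gl_context = &context; }

// The C-style entry point simply forwards to the currently bound context.
extern "C" void glClear(unsigned mask)
{
    assert(g_gl_context);
    g_gl_context->gl_clear(mask);
}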
Co-authored-by: Stephan Unverwerth <s.unverwerth@gmx.de>
SPDX License Identifiers are a more compact / standardized
way of representing file license information.
See: https://spdx.dev/resources/use/#identifiers
This was done with the `ambr` search and replace tool.
ambr --no-parent-ignore --key-from-file --rep-from-file key.txt rep.txt *
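For example (hypothetical header, placeholder copyright holder), a
converted file header now carries just the identifier:

/*
 * Copyright (c) 2021, Example Contributor <contributor@example.com>
 *
 * SPDX-License-Identifier: BSD-2-Clause
 */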
Almost a year after first working on this, it's finally done: an
implementation of Promises for LibJS! :^)
The core functionality is working and closely following the spec [1].
I mostly took the pseudocode and transformed it into C++ - if you read
and understand the code, you will know how the spec defines Promises; and
if you read the spec first, the code will look very familiar.
Implemented functions are:
- Promise() constructor
- Promise.prototype.then()
- Promise.prototype.catch()
- Promise.prototype.finally()
- Promise.resolve()
- Promise.reject()
For the tests I added a new function to test-js's global object,
runQueuedPromiseJobs(), which calls vm.run_queued_promise_jobs().
By design, queued jobs normally only run after the script has been fully
executed, making it impossible to test handlers in individual test()
calls by default [2].
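Conceptually (sketch only, not LibJS's actual implementation), draining
the queue looks like this; reactions may queue further jobs, so the loop
runs until the queue is empty:

#include <deque>
#include <functional>

class VM {
public:
    void enqueue_promise_job(std::function<void()> job)
    {
        m_promise_jobs.push_back(std::move(job));
    }

    void run_queued_promise_jobs()
    {
        while (!m_promise_jobs.empty()) {
            auto job = std::move(m_promise_jobs.front());
            m_promise_jobs.pop_front();
            job(); // may enqueue more jobs
        }
    }

private:
    std::deque<std::function<void()>> m_promise_jobs;
};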
Subsequent commits include integrations into LibWeb and js(1) -
pretty-printing, running queued promise jobs when necessary.
This has an unusual number of dbgln() statements, all hidden behind the
PROMISE_DEBUG flag - I'm leaving them in for now as they've been very
useful while debugging this; things can get quite complex with so many
asynchronously executed functions.
I've not extensively explored use of these APIs for promise-based
functionality in LibWeb (fetch(), Notification.requestPermission()
etc.), but we'll get there in due time.
[1]: https://tc39.es/ecma262/#sec-promise-objects
[2]: https://tc39.es/ecma262/#sec-jobs-and-job-queues
This makes them available for use by other language servers.
Also as a bonus, update the Shell language server to discover some
symbols and add go-to-definition functionality :^)
This patchset allows the editor to avoid redrawing the entire line when
the changes cause no unrecoverable style updates, and are at the end of
the line (this applies to most normal typing situations).
Cases that this does not resolve:
- When the cursor is not at the end of the buffer
- When a display refresh changes the styles on the already-drawn parts
of the line
- When the prompt has not yet been drawn, or has somehow changed
Fixes #5296.
This wrapper abstracts the watch_file setup and file handling, and
allows using the watch_file events as part of the event loop via the
Core::Notifier class.
Also renames the existing DirectoryWatcher class to BlockingFileWatcher,
and adds support for the Modified mode in this class.
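The same pattern, demonstrated with Linux's inotify purely for the sake of
a self-contained example (SerenityOS uses its own watch_file mechanism,
and the path below is a placeholder): the watch fd is polled for
readability, so change notifications arrive through a readiness callback
rather than a blocking read.

#include <poll.h>
#include <sys/inotify.h>
#include <unistd.h>
#include <cstdio>
#include <functional>

int main()
{
    int inotify_fd = inotify_init1(IN_NONBLOCK);
    if (inotify_fd < 0)
        return 1;
    if (inotify_add_watch(inotify_fd, "/tmp/watched-file", IN_MODIFY) < 0)
        return 1;

    std::function<void()> on_modified = [] { std::puts("file was modified"); };

    pollfd fds[] = { { inotify_fd, POLLIN, 0 } };
    while (poll(fds, 1, -1) > 0) {
        char buffer[4096];
        if (read(inotify_fd, buffer, sizeof(buffer)) > 0)
            on_modified(); // in an event loop, this would be the notifier callback
    }
    close(inotify_fd);
    return 0;
}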
Leaking macros across headers is a terrible thing, but I can't think of
a better way of achieving this.
- We need some way of modifying debug macros from CMake to implement
ENABLE_ALL_THE_DEBUG_MACROS.
- We need some way of modifying debug macros in specific source files
because otherwise we need to rebuild too many files.
This was done using the following script:
sed -i -E 's/#cmakedefine01 ([A-Z0-9_]+)/#ifndef \1\n\0\n#endif\n/' AK/Debug.h.in
sed -i -E 's/#cmakedefine01 ([A-Z0-9_]+)/#ifndef \1\n\0\n#endif\n/' Kernel/Debug.h.in
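For a placeholder FOO_DEBUG entry, the rewritten Debug.h.in fragment looks
like the following; the guard means a -DFOO_DEBUG=... on the compiler
command line, or a #define in a specific source file before the include,
takes precedence over the configured default:

#ifndef FOO_DEBUG
#cmakedefine01 FOO_DEBUG
#endif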
This fills in a bunch of the FIXMEs that were in prepare_script.
execute_script is almost finished, it's just missing the module side.
As an aside, let's not assert when inserting a script element with
innerHTML.
Personally, I prefer the naming convention DEBUG_FOO over FOO_DEBUG, but
the majority of the debug macros already follow the latter convention, so
I just enforce consistency here.
This was done with the following script:
find . \( -name '*.cpp' -o -name '*.h' -o -name '*.in' \) -not -path './Toolchain/*' -not -path './Build/*' -exec sed -i -E 's/DEBUG_PATH/PATH_DEBUG/' {} \;
This was done with the help of several scripts; I'm dumping them here so
they're easy to find later:
awk '/#ifdef/ { print "#cmakedefine01 "$2 }' AK/Debug.h.in
for debug_macro in $(awk '/#ifdef/ { print $2 }' AK/Debug.h.in)
do
find . \( -name '*.cpp' -o -name '*.h' -o -name '*.in' \) -not -path './Toolchain/*' -not -path './Build/*' -exec sed -i -E 's/#ifdef '$debug_macro'/#if '$debug_macro'/' {} \;
done
# Remember to remove WRAPPER_GENERATOR_DEBUG from the list.
awk '/#cmake/ { print "set("$2" ON)" }' AK/Debug.h.in
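For reference, the #ifdef -> #if rewrite in the loop above turns a guard
like the first form below into the second (FOO_DEBUG and the message are
placeholders):

// before:
#ifdef FOO_DEBUG
dbgln("some debug output");
#endif

// after:
#if FOO_DEBUG
dbgln("some debug output");
#endif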