This makes WebView::Database wrap around sqlite3 instead of LibSQL. The
effect on outside callers is pretty minimal. The main consequences are:
1. We must ensure the Cookie table exists before preparing any SQL
statements involving that table.
2. We can use an INSERT OR REPLACE statement instead of separate INSERT
and UPDATE statements.
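As a sketch of both points against the raw sqlite3 C API (the schema
here is simplified and illustrative; the real table has more columns):

```cpp
#include <sqlite3.h>

int main()
{
    sqlite3* db = nullptr;
    if (sqlite3_open("cookies.db", &db) != SQLITE_OK)
        return 1;

    // 1. Ensure the table exists up front: sqlite3_prepare_v2() fails
    //    outright if a statement references a missing table.
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS Cookie (name TEXT, domain TEXT,"
        " value TEXT, PRIMARY KEY (name, domain));",
        nullptr, nullptr, nullptr);

    // 2. One upsert replaces the separate INSERT and UPDATE statements.
    sqlite3_stmt* upsert = nullptr;
    sqlite3_prepare_v2(db,
        "INSERT OR REPLACE INTO Cookie (name, domain, value)"
        " VALUES (?, ?, ?);",
        -1, &upsert, nullptr);

    sqlite3_bind_text(upsert, 1, "session", -1, SQLITE_STATIC);
    sqlite3_bind_text(upsert, 2, "example.com", -1, SQLITE_STATIC);
    sqlite3_bind_text(upsert, 3, "abc123", -1, SQLITE_STATIC);
    sqlite3_step(upsert);

    sqlite3_finalize(upsert);
    sqlite3_close(db);
}
```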
The main intention of this change is to have a consistent look and
behavior across all scrollbars, including elements with
`overflow: scroll` and `overflow: auto`, iframes, and the page itself.
Before:
- Page's scrollbar is painted by Browser (Qt/AppKit) using the
corresponding UI framework style.
- Both WebContent and Browser know the scroll position offset.
- WebContent uses did_request_scroll_to() IPC call to send updates.
- Browser uses set_viewport_rect() to send updates.
After:
- Page's scrollbar is painted on WebContent side using the same style as
currently used for elements with `overflow: scroll` and
`overflow: auto`. A nice side effect: scrollbars are now painted for
iframes, and the page's scrollbar respects the scrollbar-width CSS
property.
- Only WebContent knows scroll position offset.
- did_request_scroll_to() is no longer used.
- set_viewport_rect() is changed to set_viewport_size().
This was no longer doing anything. We'll eventually want a way to pass
system default fonts to each WebContent process, but we don't need to
squeeze everything through this API that was really meant for Serenity's
very idiosyncratic font system.
There is an issue with Clang that causes `consteval` function calls
from default initializers of fields to be made at run-time. This
manifested itself in the case of `ByteString::formatted` as an undefined
reference to `check_format_parameter_consistency` once format string
checking was enabled for Clang builds. The workaround is simple (just
move it to the member initializer list), and unblocks a useful change.
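A minimal sketch of the pattern (illustrative names; the real case is
`check_format_parameter_consistency` behind `ByteString::formatted`):

```cpp
consteval int checked(int x) { return x; }

struct Before {
    // Clang emits a run-time call to the consteval function here,
    // leaving an undefined reference at link time.
    int value = checked(42);
};

struct After {
    // Workaround: call it from the member initializer list instead.
    After() : value(checked(42)) { }
    int value;
};

int main()
{
    return Before {}.value - After {}.value; // both are 42
}
```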
LibWeb will need to use unbuffered requests to support server-sent
events. Connections for such events remain open and the remote end sends
data as HTTP bodies at its leisure. The browser needs to be able to
handle this data as it arrives, as the request essentially never
finishes.
To support this, this makes Protocol::Request operate in one of two
modes: buffered or unbuffered. The existing mechanism for setting up a
buffered request was a bit awkward; you had to set specific callbacks,
but be sure not to set some others, and then set a flag. The new
mechanism is to set the mode and the callbacks that the mode needs in
one API.
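A hypothetical sketch of that shape (names and signatures are
illustrative, not the actual Protocol::Request API):

```cpp
#include <functional>
#include <string>
#include <utility>

class Request {
public:
    // Buffered mode: the body accumulates internally and one callback
    // fires at the end with the whole thing.
    void set_buffered_mode(
        std::function<void(bool success, std::string const& body)> on_finished)
    {
        m_buffered_finished = std::move(on_finished);
    }

    // Unbuffered mode: each chunk is delivered as it arrives, which is
    // what server-sent events need, since the request never finishes.
    void set_unbuffered_mode(
        std::function<void(std::string const& chunk)> on_data,
        std::function<void(bool success)> on_finished)
    {
        m_on_data = std::move(on_data);
        m_on_finished = std::move(on_finished);
    }

private:
    std::function<void(bool, std::string const&)> m_buffered_finished;
    std::function<void(std::string const&)> m_on_data;
    std::function<void(bool)> m_on_finished;
};
```

Either setter puts the request into its mode and installs every
callback that mode needs, so the two can't be half-configured.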
This actually allows us to re-introduce the ldd utility as a symlink to
our dynamic loader, so now ldd behaves exactly like on Linux - it will
load all dynamic dependencies for an ELF executable.
This has the advantage that running ldd on an ELF executable will
provide an exact preview of the order in which the dynamic loader
loads the executable and its dependencies.
As preparation for introducing ldd as a symlink to /usr/lib/Loader.so,
we rename the ldd utility to elfdeps, as its sole purpose is to list
ELF object dependencies, and not how the dynamic loader loads them.
Now both /bin/zcat and /bin/gunzip are symlinks to /bin/gzip, and we
essentially run it in decompression mode through these symlinks.
This ensures we don't maintain two versions of code to decompress
gzipped data anymore, and handle the use case of gzipped-streaming
input only once in the codebase.
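The dispatch is the classic argv[0] check; a minimal sketch (the real
utility's option handling is more involved):

```cpp
#include <string>

int main(int argc, char** argv)
{
    // Find the basename we were invoked as.
    std::string name = argc > 0 ? argv[0] : "gzip";
    if (auto slash = name.find_last_of('/'); slash != std::string::npos)
        name = name.substr(slash + 1);

    // Invoked via the zcat or gunzip symlink: force decompression.
    bool decompress = (name == "zcat" || name == "gunzip");
    // zcat additionally writes to stdout instead of replacing the file.
    bool to_stdout = (name == "zcat");

    // ... parse the remaining flags, then (de)compress accordingly.
    (void)decompress;
    (void)to_stdout;
    return 0;
}
```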
For example, for 7z7c.gif, we now store one 500x500 frame and then
a 94x78 frame at (196, 208) and a 91x78 frame at (198, 208).
This reduces how much data we have to store.
We currently store all pixels in the rect with changed pixels.
We could in the future store pixels that are equal in that rect
as transparent pixels. When inputs are gif files, this would
guarantee that new frames only have at most 256 distinct colors
(since GIFs require that), which would help a future color indexing
transform. For now, we don't do that though.
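The delta rect itself is just the bounding box of differing pixels
between consecutive frames; a minimal sketch (plain packed RGBA rows
here; the real code works on LibGfx bitmaps):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Rect { int x = 0, y = 0, width = 0, height = 0; };

// Bounding box of all pixels that differ between two same-sized frames.
Rect changed_rect(std::vector<uint32_t> const& previous,
                  std::vector<uint32_t> const& current,
                  int width, int height)
{
    int min_x = width, min_y = height, max_x = -1, max_y = -1;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (previous[y * width + x] != current[y * width + x]) {
                min_x = std::min(min_x, x);
                min_y = std::min(min_y, y);
                max_x = std::max(max_x, x);
                max_y = std::max(max_y, y);
            }
        }
    }
    if (max_x < min_x)
        return {}; // Frames are identical; nothing to store.
    return { min_x, min_y, max_x - min_x + 1, max_y - min_y + 1 };
}
```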
The API I'm adding here is a bit ugly:
* WebPs can only store x/y offsets that are a multiple of 2. This
currently leaks into the AnimationWriter base class.
(Since we potentially have to make a webp frame 1 pixel wider
and higher due to this, it's possible to have a frame that has
<= 256 colors in a gif input but > 256 colors in the webp,
if we do the technique above.)
* Every client writing animations has to have logic to track
previous frames, decide which of the two functions to call, etc.
This also adds an opt-out flag to `animation`, because:
1. Some clients apparently assume the size of the last VP8L
chunk is the size of the image
(see https://github.com/discord/lilliput/issues/159).
2. Having incremental frames is good for filesize and for
playing the animation start-to-end, but it makes it hard
to extract arbitrary frames (have to extract all frames
from start to target frame) -- but this is meant to be a
delivery codec, not an editing codec. It's also more vulnerable to
corrupted bytes in the middle of the file -- but transport
protocols are good these days.
(It'd also be an idea to write a full frame every N frames.)
For https://giphy.com/gifs/XT9HMdwmpHqqOu1f1a (a 184K gif),
output webp size goes from 21M to 11M.
For 7z7c.gif (an 11K gif), output webp size goes from 2.1M to 775K.
(The webp image data still isn't compressed at all.)
The high-level design is that we have a static method on WebPWriter that
returns an AnimationWriter object. AnimationWriter has a virtual method
for writing individual frames. This allows streaming animations to disk,
without having to buffer up the entire animation in memory first.
The semantics of this function, add_frame(), are that data is flushed
to disk every time the function is called, so that no explicit `close()`
method is needed.
For some formats that store animation length at the start of the file,
including WebP, this means the writer needs a SeekableStream, so that
add_frame() can seek back to the start and update the size when a
frame is written.
This design should work for GIF and APNG writing as well. We can move
AnimationWriter to a new header if we add writers for these.
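Roughly this shape (signatures are illustrative; the real declarations
live in LibGfx):

```cpp
// Sketch only; Bitmap, IntSize, ErrorOr etc. are the usual LibGfx/AK
// types, and the actual parameter lists differ.
class AnimationWriter {
public:
    virtual ~AnimationWriter() = default;

    // Writes one frame and flushes it to disk, so no close() is needed.
    virtual ErrorOr<void> add_frame(Bitmap&, int duration_ms) = 0;
};

class WebPWriter {
public:
    // A SeekableStream is required so add_frame() can seek back and
    // patch the animation size stored near the start of the file.
    static ErrorOr<NonnullOwnPtr<AnimationWriter>>
    start_encoding_animation(SeekableStream&, IntSize dimensions);
};
```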
Currently, `animation` can read any animated image format we can read
(apng, gif, webp) and convert it to an animated webp file.
The written animated webp file is not compressed whatsoever, so this
creates large output files at the moment.
Specify the time in seconds to wait for a reply for each packet sent.
Replies received out of order will not be printed as replies, but they
will still be counted as replies when calculating statistics. Setting
it to 0 means an infinite timeout.
In flood ping mode, the time interval between each request is set
to zero to provide a rapid display of how many packets are being
dropped. For each request a period '.' is printed, while for every
reply a backspace is printed.
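In code, the display logic amounts to something like this sketch (the
socket calls are stubbed out; only the '.'/backspace dance is the
point):

```cpp
#include <cstdio>
#include <cstdlib>

// Stand-ins for the real ICMP socket code.
static void send_echo_request() { }
static bool reply_arrived() { return std::rand() % 4 != 0; }

int main()
{
    // One '.' per request, one backspace per reply: the dots left on
    // screen approximate the number of dropped packets.
    for (int i = 0; i < 100; ++i) {
        send_echo_request();
        std::putchar('.');
        if (reply_arrived())
            std::putchar('\b');
        std::fflush(stdout);
    }
    std::putchar('\n');
    return 0;
}
```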
Now that the chrome process is a singleton on all platforms, we can
safely add a cache to the CookieJar to greatly speed up access. The way
this works is we read all cookies upfront from the database. As cookies
are updated by the web, we store a list of "dirty" cookies that need to
be flushed to the database. We do that synchronization every 30 seconds
and at shutdown.
There's plenty of room for improvement here, some of which is marked
with FIXMEs in the CookieJar.
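The pattern, as a minimal sketch (the real CookieJar keys cookies on
domain/name/path and runs the flush off a timer; plain string keys and
a stubbed database write are used here):

```cpp
#include <string>
#include <unordered_map>
#include <unordered_set>

class CookieCache {
public:
    void set_cookie(std::string const& key, std::string value)
    {
        m_cookies[key] = std::move(value);
        m_dirty_keys.insert(key); // flushed later, not on every write
    }

    // Called every 30 seconds by a timer, and once at shutdown.
    void flush_dirty_cookies()
    {
        for (auto const& key : m_dirty_keys)
            write_to_database(key, m_cookies[key]);
        m_dirty_keys.clear();
    }

private:
    void write_to_database(std::string const&, std::string const&) { }

    std::unordered_map<std::string, std::string> m_cookies; // read upfront
    std::unordered_set<std::string> m_dirty_keys;
};
```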
Before these changes, in a SQL database populated with 300 cookies,
browsing to https://twinings.co.uk/ WebContent spent:
19,806ms waiting for a get-cookie response
505ms waiting for a set-cookie response
With these changes, it spends:
24ms waiting for a get-cookie response
15ms waiting for a set-cookie response
Without this patch, manually mounting a RAMFS (which is what I tested)
fails, as does any other filesystem that is not backed by actual
storage.
This bug was introduced in 0739b5df11 and
is now resolved by checking whether the source fd is negative, to
avoid a failing fstat call on it.
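The shape of the fix, sketched (function name is illustrative):

```cpp
#include <cerrno>
#include <sys/stat.h>

int prepare_mount_source(int source_fd)
{
    struct stat st { };
    // Filesystems with no backing storage (e.g. RAMFS) pass a negative
    // source fd; fstat()ing it would fail the whole mount.
    if (source_fd >= 0) {
        if (fstat(source_fd, &st) < 0)
            return -errno;
    }
    // ... proceed with the mount.
    return 0;
}
```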
The main difference was that our implementation was writing
the final line of a series of repeated lines, whereas the
spec says "The second and succeeding copies of repeated adjacent
input lines shall not be written."
Additionally, there was a mistake in the -f flag implementation
causing the number of fields skipped to be one greater than
required.
Flags that rely on counting lines (-c and -d) were
producing results that were off by one. This is fixed
by initializing the `count` variable to 1, which is
consistent with behavior in the main loop, where it
is reset to 1 when lines don't match.
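A self-contained sketch of that counting scheme (a bare-bones
`uniq -c`, not the actual implementation):

```cpp
#include <iostream>
#include <string>

int main()
{
    std::string previous, line;
    bool have_previous = false;
    std::size_t count = 1; // the first copy of a line is one occurrence

    while (std::getline(std::cin, line)) {
        if (have_previous && line == previous) {
            ++count;
            continue;
        }
        if (have_previous)
            std::cout << count << ' ' << previous << '\n';
        previous = line;
        have_previous = true;
        count = 1; // reset to 1 on a non-matching line
    }
    if (have_previous)
        std::cout << count << ' ' << previous << '\n';
    return 0;
}
```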
Calls to `read_line` are replaced with `read_line_with_resize`, and
`swap`s of StringViews, which assume a consistent location of the
underlying ByteBuffers, are replaced. A test file has been added for
uniq, which includes a test case for long lines.
For `AK_OS_SERENITY`, the root path of the resources folder is "/res",
but otherwise it should be the `s_serenity_resource_root` variable set
in `platform_init()`.
However, a path provided on the command line will override the default
path in both of those cases.
This change also makes sure that `RequestServer` can find the
certificates file `serenity/Build/lagom/share/Lagom/ladybird/cacert.pem`.
Most of these now just await the image decoding, equivalent (ish) to
the old behavior. A more async-aware refactor should happen some time
in the future.
It previously resided in LibWebView to hide the details of launching a
singleton process. That functionality now lives in LibCore. By moving
this to Ladybird, we will be able to register the process with the task
manager.
This just moves the code to launch a single process such as SQLServer to
LibCore. This will allow re-using this feature for other processes, and
will allow moving the launching of SQLServer to Ladybird.
These changes are compatible with clang-format 16 and will be mandatory
when we eventually bump clang-format version. So, since there are no
real downsides, let's commit them now.
This reverts commit 9dbec601b0.
For KCOV to be performant (or at least not even slower) we need to
mmap the PC buffer from both user and kernel space at the same time.
You can't mmap a character device, so this change didn't make sense.
Plus even if we did invent a new method to exfiltrate the coverage
information out of the kernel, it would be incompatible with existing
kernel fuzzers. That would be kind of annoying. 🙃
For the normal non-test use case of headless-browser, the function
`load_page_for_screenshot_and_exit()` didn't actually load the requested
webpage in the `WebView`, resulting in an all-white screenshot.
We do the same thing with the gzip utility for performance.
This reduces the runtime of `./bin/base64 enwik8 >/dev/null` from
0.428s to 0.303s.
This reduces the runtime of `./bin/base64 -d enwik8.base64 >/dev/null`
from 0.632s to 0.469s.
(enwik8 is a 100MB test file from http://mattmahoney.net/dc/enwik8.zip)
You can now run
image -o out.png Tests/LibGfx/test-inputs/bmp/bitmap.bmp \
--crop 130,86,108,114
and end up with the nose part of that image in out.png.
This URL library ends up being a relatively fundamental base library of
the system, as LibCore depends on LibURL.
This change has two main benefits:
* Moving AK back more towards being an agnostic library that can
be used between the kernel and userspace. URL has never really fit
that description - and is not used in the kernel.
* URL _should_ depend on LibUnicode, as it needs punycode support.
However, it's not really possible to do this inside of AK as it can't
depend on any external library. This change brings us a little closer
to being able to do that, but unfortunately we aren't there quite
yet, as the code generators depend on LibCore.
Rather than adding a bunch of `get_*_from_mime_type` functions, add just
one to get the Core::MimeType instance. We will need multiple fields at
once in Browser.
Whether we mount from a loop device or another source, the user might
want to obfuscate the given source for security reasons, so this option
ensures that will happen.
If passed during a mount, the source will be hidden when reading from
the /sys/kernel/df node.
...from try_create_for_raw_bytes().
If a plugin returns `true` from sniff but then fails when calling
its `create()` method, we now no longer swallow that error.
Allows `image` (and other places in the system) to print a more
actionable error if early image headers are invalid.
(We now no longer try to find another plugin that can also handle
the image.)
Fixes a regression from #20063 / #19893 -- before then, we didn't
do fallible work this early.
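The changed control flow, sketched in AK style (`plugins()` is a
hypothetical accessor; the real loop lives in
try_create_for_raw_bytes()):

```cpp
ErrorOr<NonnullOwnPtr<ImageDecoderPlugin>> try_create_for_raw_bytes(ReadonlyBytes bytes)
{
    for (auto& plugin : plugins()) {
        if (!plugin.sniff(bytes))
            continue;
        // Previously a create() failure fell through and we kept
        // probing other plugins; now the error propagates immediately.
        return TRY(plugin.create(bytes));
    }
    return Error::from_string_literal("No plugin recognized the image");
}
```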
Since commit e6df1c9988, which switched us
over to using the syscall/sysret instructions, the second syscall
argument changed from rcx to rdi. Update strace as well to print the
correct values for the second argument.
Previously, `true` was passed into the ElapsedTimer constructor if a
precise timer was required. We now use an enum to more explicitly
specify whether we would like a precise or a coarse timer.
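The shape of the change (illustrative names):

```cpp
enum class TimerType {
    Coarse,
    Precise,
};

class ElapsedTimer {
public:
    // Before: ElapsedTimer(true) -- opaque at the call site.
    // After:  ElapsedTimer(TimerType::Precise) -- self-documenting.
    explicit ElapsedTimer(TimerType type = TimerType::Coarse)
        : m_type(type)
    {
    }

private:
    TimerType m_type;
};
```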
We had previously implemented some plumbing for file input elements in
commit 636602a54e.
This implements the return path for chromes to inform WebContent of the
file(s) the user selected. This patch includes a dummy implementation
for headless-browser to enable testing.
This makes it easier to work with device tree nodes and properties than
writing simple state machines to parse the device tree.
This also makes the old slow traversal methods use the
DeviceTreeProperty helper class, and adds a simple test.