Using mmap-allocated memory for backing stores prevents us from
benefiting from GPU-accelerated painting, because the performance gain
is mostly negated by reading the GPU-allocated texture back into RAM so
it can be shared with the browser process.
With IOSurface, we get a framebuffer that is both shareable between
processes and can be used as underlying memory for an OpenGL/Metal
texture.
This change does not yet take advantage of IOSurface and merely wraps
it in a Gfx::Bitmap to be used by the CPU painter.
Allows WebContentClient to get the pid of the WebContent process right
after creation, so there is no window between forking and the
notify_process_information() IPC response during which the client
doesn't know the pid.
In the upcoming changes, we are going to switch macOS to using an
IOSurface for the backing store. This change will simplify the process
of sharing an IOSurface between processes because we already have the
MachPortServer running in the browser, and WebContent knows how to
locate the corresponding server.
This change ensures that if a screenshot is requested between the
teardown of the paintable tree and relayout, we will wait for layout to
complete before taking the screenshot.
LibLocale was split off from LibUnicode a couple years ago to reduce the
number of applications on SerenityOS that depend on CLDR data. Now that
we use ICU, both LibUnicode and LibLocale are actually linking in this
data. And since vcpkg gives us static libraries, both libraries are over
30MB in size.
This patch reverts the separation and merges LibLocale into LibUnicode
again. We now have just one library that includes the ICU data.
Further, this will let LibUnicode share the locale cache that previously
would only exist in LibLocale.
This adds a motion preference to the browser UI similar to the existing
ones for color scheme and contrast.
Both the AppKit UI and the Qt UI have this new preference.
The auto value is currently the same as NoPreference; follow-ups can
address wiring it up to the actual OS preference.
Previously, the "Find Next Match" and "Find Previous Match" actions
simply updated the match index of the last query to be performed. This
led to incorrect results if the page had been modified after the last
query had been run.
`Page::find_in_page_next_match()` and
`Page::find_in_page_previous_match()` both now rerun the last query to
ensure the results are up to date before updating the match index.
The match index is also reset if the URL of the active document has
changed since the last query. The current match index is maintained if
only the URL fragment changes.
This change allows the results of a find in page query to be reported
back to the user interface. Currently, the number of results found and
the current match index are reported.
Instead of using a HashMap<ByteString, ByteString, CaseInsensitive...>
everywhere, we now encapsulate this in a class.
Even better, the new class also allows keeping track of multiple headers
with the same name! This will make it possible for HTTP responses to
actually retain all their headers on the perilous journey from
RequestServer to LibWeb.
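A minimal sketch of the idea (illustrative only; the real LibHTTP class
and its method names may differ):
```
// Illustrative sketch, not the actual LibHTTP implementation.
#include <AK/ByteString.h>
#include <AK/Optional.h>
#include <AK/StringView.h>
#include <AK/Vector.h>

class HeaderMap {
public:
    void set(ByteString name, ByteString value)
    {
        // Append instead of overwrite, so repeated headers
        // (e.g. multiple Set-Cookie lines) are all retained.
        m_headers.append({ move(name), move(value) });
    }

    Optional<ByteString> get(StringView name) const
    {
        for (auto const& header : m_headers) {
            if (header.name.equals_ignoring_ascii_case(name))
                return header.value;
        }
        return {};
    }

private:
    struct Header {
        ByteString name;
        ByteString value;
    };
    Vector<Header> m_headers;
};
```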
The main intention of this change is to have a consistent look and
behavior across all scrollbars, including elements with
`overflow: scroll` and `overflow: auto`, iframes, and a page.
Before:
- Page's scrollbar is painted by Browser (Qt/AppKit) using the
corresponding UI framework style.
- Both WebContent and Browser know the scroll position offset.
- WebContent uses did_request_scroll_to() IPC call to send updates.
- Browser uses set_viewport_rect() to send updates.
After:
- Page's scrollbar is painted on WebContent side using the same style as
currently used for elements with `overflow: scroll` and
`overflow: auto`. A nice side effect: scrollbars are now painted for
iframes, and the page's scrollbar respects the scrollbar-width CSS
property.
- Only WebContent knows scroll position offset.
- did_request_scroll_to() is no longer used.
- set_viewport_rect() is changed to set_viewport_size().
This was no longer doing anything. We'll eventually want a way to pass
system default fonts to each WebContent process, but we don't need to
squeeze everything through this API that was really meant for Serenity's
very idiosyncratic font system.
This allows searching for text with case-insensitivity. As this is
probably what most users expect, the default behavior is changed to
perform case-insensitive lookups. Chromes may add UI to change the
behavior as they see fit.
This allows the browser to send a query to the WebContent process,
which will search the page for the given string and highlight any
occurrences of that string.
This adds a `--experimental-cpu-transforms` option to Ladybird and
WebContent (which defaults to false/off).
When enabled the AffineCommandExecutorCPU will be used to handle
painting transformed stacking contexts (i.e. stacking contexts where
the transform is something other than a simple translation). The regular
command executor will still handle the non-transformed cases.
This is hidden under a flag as the `AffineCommandExecutorCPU` is still
very incomplete. It is missing support for clipping, text, and other
basic commands. Once most common commands have been implemented, this
flag will be removed.
...instead of scheduling the repaint timer in PageClient.
This change fixes flickering on Discord that happened because:
- Event loop schedules repainting by activating repaint timer
- `Document::tear_down_layout_tree()` destroys paintable tree
- Repaint timer invokes callback and renders an empty frame because
paintable tree was destroyed
A connection's socket is allowed to be null if it's being recreated
after a failed connect(). This was handled correctly in
request_did_finish but not in recreate_socket_if_needed; this commit
fixes that oversight.
Most of IPC::Connection is thread-safe, but the responsiveness timer is
very much not. This commit makes sure only one thread at a time can
send messages through IPC, to avoid having threads racing in
IPC::Connection.
This is simply less work than making sure everything IPC::Connection
uses (now and later) is thread-safe.
Previously, RS handled all requests in an event loop, which led to
connections being started in the middle of other connections being
started (and potentially blowing up the stack), ultimately causing
requests to be delayed by other requests.
This commit reworks the way we handle these (specifically starting
connections) by first serialising the requests, and then performing them
in multiple threads concurrently, which yields a significant increase in
loading performance and reliability.
ImageDecoder now queues up image decoding requests and returns the
images back to the caller later. ImageDecoderClient has a new
promise-based API that allows callers to attach their own resolve/reject
handlers to the responses from ImageDecoder.
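Roughly, callers can now hang their handlers off the returned promise
instead of wiring up a single global callback (method names here are
illustrative, not necessarily the exact API):
```
// Illustrative sketch; the exact ImageDecoderClient API may differ.
auto promise = image_decoder_client->decode_image(encoded_image_data);
promise->when_resolved([](auto& decoded_image) {
    // Hand the decoded frames to whoever requested them.
});
promise->when_rejected([](auto& error) {
    dbgln("Image decoding failed: {}", error);
});
```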
We were inadvertently keeping all documents alive by installing a
console client for them. This patch fixes the issue by adding a
finalizer to Document, and having that be the way we detach console
clients. This breaks the cycle.
With this change, we can spam .innerHTML assignments in a loop and
memory usage remains stable.
Fixes #14612
Add factory functions to distinguish between when the owner of the File
wants to transfer ownership to the new IPC object (adopt) or to send a
copy of the same fd to the IPC peer (clone).
This behavior is more intuitive than the previous behavior. Previously,
an IPC::File would default to a shallow clone of the file descriptor,
only *actually* calling dup(2) for the fd when encoding it into an
IPC MessageBuffer. Now the dup(2) for the fd is explicit in the clone_fd
factory function.
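Sketched usage of the two factory functions (whether clone_fd is
fallible here is an assumption on my part):
```
// Transfer ownership: the IPC::File takes over local_fd; no dup(2).
auto transferred = IPC::File::adopt_fd(local_fd);

// Send a copy: dup(2) happens here, explicitly, and we keep our fd.
auto copy = TRY(IPC::File::clone_fd(local_fd));
```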
These changes are compatible with clang-format 16 and will be mandatory
when we eventually bump clang-format version. So, since there are no
real downsides, let's commit them now.
This reverts commit 9dbec601b0.
For KCOV to be performant (or at least not even slower) we need to
mmap the PC buffer from both user and kernel space at the same time.
You can't mmap a character device, so this change didn't make sense.
Plus even if we did invent a new method to exfiltrate the coverage
information out of the kernel, it would be incompatible with existing
kernel fuzzers. That would be kind of annoying. 🙃
The previous name was extremely misleading, because the call is used for
pushing or replacing a session history entry on the chrome side, not
just for changing the URL.
It is going to be used to communicate whether it is possible to navigate
back or forward once the session history stored on the browser side is
no longer used to drive navigation.
Before this change, the JS console was initialised from
activate_history_entry(), which is too late for about:blank documents
that are ready to run scripts immediately after creation.
Currently the `<select>` dropdown IPC uses the option value attr to
find which option is selected. This won't work when options don't
have values or when multiple options have the same value. Also the
`SelectItem` contained some weird recursive structures that are
impossible to create with HTML. So I refactored `SelectItem` as a
variant, and gave the options a unique id. The id is sent back to
`HTMLSelectElement` so it can find out exactly which option element
is selected.
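The reworked shape is roughly along these lines (field and type names
are my own, not necessarily the real ones):
```
// Sketch only; the actual LibWeb types may differ.
struct SelectItemOption {
    u32 id { 0 };       // unique id, sent back to HTMLSelectElement
    String label;
    String value;
    bool selected { false };
};

struct SelectItemOptionGroup {
    String label;
    Vector<SelectItemOption> items;
};

struct SelectItemSeparator { };

using SelectItem = Variant<SelectItemOption, SelectItemOptionGroup, SelectItemSeparator>;
```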
Part of this issue was fixed in 89877b3f40
but that only addressed the first layer of deferred_invoke, ignoring the
second one (which would cause a race if a request was sent to a host
immediately following a timeout event from the same host).
Fixes #23840.
On Serenity, it's not trivial to extract the peer pid from a socket that
is created by SystemServer and then passed to a forked service process.
This patch adds an API to let the WebContent process notify the UI
directly, which makes the WebContent process show up in the Serenity
port's TaskManagerWidget. It seems that we will need to do something of
this sort in order to properly gather metrics on macOS as well, due to
the way that self mach ports work.
This adds an IPC for chromes to mute a tab. When muted, we trigger an
internal volume change notification and indicate that the user agent has
overridden the media volume.
Let's not re-invoke the "page did start loading" IPC when the history
state is pushed/replaced. It's a bit misleading (the change does not
actually load the new URL), but also the chromes may do more work than
we want when we change the URL.
Instead, add a new IPC for the history object to invoke.
Most browsers have some indicator when audio is playing in a tab, which
makes it easier to find that tab and mute unwanted audio. This adds an
IPC to allow the Ladybird chromes to do something similar.
Fixes delayed repainting in the following case:
1. Style or layout invalidation triggers html event loop processing.
2. Event loop processing does nothing because there is no rendering
opportunity.
3. The style or layout change won't be reflected until something else
triggers event loop processing.
This URL library ends up being a relatively fundamental base library of
the system, as LibCore depends on LibURL.
This change has two main benefits:
* Moving AK back more towards being an agnostic library that can
be used between the kernel and userspace. URL has never really fit
that description - and is not used in the kernel.
* URL _should_ depend on LibUnicode, as it needs punycode support.
However, it's not really possible to do this inside of AK as it can't
depend on any external library. This change brings us a little closer
to being able to do that, but unfortunately we aren't there quite
yet, as the code generators depend on LibCore.
Instead of wrapping every entry in Optional, use the null state of the
style pointer for the same purpose.
This shrinks StyleProperties by 1752 bytes per instance.
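A standalone illustration of where the saving comes from (using std
types rather than the real LibWeb ones): wrapping a nullable pointer in
an optional adds a discriminant plus padding per entry, while the bare
pointer already has a usable null state.
```
#include <cstdio>
#include <memory>
#include <optional>

struct StyleValue { int dummy; };

int main()
{
    using Entry = std::shared_ptr<StyleValue const>;
    // The optional wrapper stores an "engaged" flag alongside the pointer,
    // which padding typically rounds up to a whole extra word per entry.
    std::printf("optional-wrapped entry: %zu bytes\n", sizeof(std::optional<Entry>));
    std::printf("null-state entry:       %zu bytes\n", sizeof(Entry));
    return 0;
}
```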
This causes a behavior change in which the read FD is now non-blocking.
This is intentional, as this change avoids a deadlock between RS and
WebContent, where WC could block while reading from the request FD,
while RS is blocked sending a message to WC.
The underlying issue here isn't quite understood yet, but for some
reason, when we defer this connection, we ultimately end up blocking
indefinitely on macOS when a subsequent StartRequest message tries to
send its request FD over IPC. We should continue investigating that
issue, but for now, this lets us use RequestServer more reliably on
macOS.
Normally, assigning to e.g. document.body.onload will forward to
window.onload. However, in a detached DOM tree, there is no associated
window, so we have nowhere to forward to, making this a no-op.
The bulk of this change is making Document::window() return a nullable
pointer, as documents created by DOMParser or DOMImplementation do not
have an associated window object, and so must be able to return null
from here.
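Simplified sketch of the forwarding site after this change (not the
exact LibWeb code; the real path goes through the shared event handler
attribute machinery):
```
// Sketch: forwarding body.onload to the window, tolerating a null window.
void HTMLBodyElement::set_onload(WebIDL::CallbackType* callback)
{
    // Documents from DOMParser/DOMImplementation have no associated window,
    // so Document::window() may now return null; the assignment becomes a
    // no-op instead of crashing.
    if (auto* window = document().window())
        window->set_onload(callback);
}
```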
When in permissive mode, the ConfigServer will not treat reads and
writes to non-pledged domains as errors, but instead turns them into
no-ops: Reads will act as if the key was not found, and writes will do
nothing. Permissive mode must be enabled before pledging any domains.
This is needed to make GUI Widgets nicer to work with in GML Playground:
a few Widgets include reads and writes to LibConfig in order to load
system settings (e.g., GUI::Calendar) or to save and restore state
(e.g., GUI::DynamicWidgetContainer). Without this change, editing a
layout that includes one of these Widgets will cause GML Playground to
crash when they try to access config domains that are not pledged.
The solution used previously was to make Playground pledge more domains,
but not only does this mean Playground has to know about these cases,
but also that working on a layout file can alter the user's settings in
other arbitrary apps, which is not something we want.
By simply ignoring these config accesses, we avoid those downsides, and
Widgets will simply use the fallback values they already have to provide
to Config::read_foo_value().
...from try_create_for_raw_bytes().
If a plugin returns `true` from sniff but then fails when calling
its `create()` method, we now no longer swallow that error.
Allows `image` (and other places in the system) to print a more
actionable error if early image headers are invalid.
(We now no longer try to find another plugin that can also handle
the image.)
Fixes a regression from #20063 / #19893 -- before then, we didn't
do fallible work this early.
It's no change in application behavior to have these objects owned by
the function-scope static map in Protocol.cpp, while allowing us to
remove some ugly FIXMEs from time immemorial.
Now that all input events are handled by LibWebView, replace the IPCs
which send the fields of Web::KeyEvent / Web::MouseEvent individually
with one IPC per event type (key or mouse).
We can also replace the ad-hoc queued input structure with a smaller
struct that simply holds the transferred Web::KeyEvent / Web::MouseEvent.
In the future, we can also adapt Web::EventHandler to use these structs.
All of this error propagation came from a single call to
HashMap::try_ensure_capacity! As part of the ongoing effort to ignore
small allocation failures, let's just assert this works. This has the
nice side-effect of propagating out to a few other classes.
This involves plumbing the "perform the fetch" hook argument throughout
all of the module fetch implementation AOs, where it was left as a FIXME
before.
With this change we can load module scripts in DedicatedWorkers.
We had previously implemented some plumbing for file input elements in
commit 636602a54e.
This implements the return path for chromes to inform WebContent of the
file(s) the user selected. This patch includes a dummy implementation
for headless-browser to enable testing.
This makes it so the clients don't have to wait for RS to become
responsive, potentially allowing them to do other things while RS
handles the connections.
Fixes #23306.
This reflects what the function does more accurately, and allows for
adding functions to get sizes through other methods.
This also corrects the return type of said function, as size_t may only
hold sizes up to 4GB on 32-bit platforms.
Resolves a performance regression from
8ba18dfd40, where moving paint scheduling
to `EventLoop::process()` led to unnecessary repaints.
This update introduces a flag to trigger repaints only when necessary,
addressing the issue where repaints previously occurred with each event
loop process, irrespective of actual changes.
Exif metadata has two tags to store the pixel density along each axis.
If the two values differ and no action is taken, the resulting image
will appear deformed. This commit scales the displayed bitmap
according to these tags in order to show the image in its intended
shape. This unfortunately includes a lot of plumbing to get this
information through IPC.
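As a concrete example of the deformation being corrected (the numbers
are made up for illustration):
```
#include <cstdio>

int main()
{
    // Example Exif values: pixels are twice as dense along x as along y,
    // so each pixel is half as wide as it is tall.
    double x_resolution = 300.0; // pixels per inch along x
    double y_resolution = 150.0; // pixels per inch along y
    int width_px = 800;
    int height_px = 600;

    // Intended physical size is 800/300 by 600/150 inches. To keep that
    // shape on square display pixels, keep the width and stretch the
    // height by x_resolution / y_resolution (here: 600 * 2 = 1200).
    double display_width = width_px;
    double display_height = height_px * (x_resolution / y_resolution);
    std::printf("display at %.0f x %.0f\n", display_width, display_height);
    return 0;
}
```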
Attribute values may contain HTML, and may contain invalid HTML at that.
If the latter occurs, let's not generate invalid Inspector HTML when we
embed the attribute values as data attributes. Instead, cache the values
in the InspectorClient, and embed just a lookup index into the HTML.
This also nicely reduces the size of the generated HTML. The Inspector
on https://github.com/SerenityOS/serenity reduces from 2.3MB to 1.9MB
(about 318KB, or 13.8%).
In this change, updating layout and painting are moved to the EventLoop
processing steps. This modification allows the addition of resize
observation dispatching that needs to happen in event loop processing
steps and must occur in the following order relative to layout and
painting:
1. Update layout.
2. Gather and broadcast resize observations.
3. Paint.
Painting command executors are defined within the "Painting" namespace,
allowing us to remove this prefix from their names.
This commit performs the following renamings:
- Painting::PaintingCommandExecutor to Painting::CommandExecutor
- Painting::PaintingCommandExecutorCPU to Painting::CommandExecutorCPU
- Painting::PaintingCommandExecutorGPU to Painting::CommandExecutorGPU
Separating the recorder list from the painter will allow us to save it
for later execution without carrying along the painter's state. This
will be useful once we have a separate thread for executing painting
commands, to which we will have to transfer commands from the main
thread.
Preparation for https://github.com/SerenityOS/serenity/pull/23108
When a tab or nested traversable navigable is closed, there might be
messages still in the pipe from the UI process that we need to
gracefully drop, rather than crash trying to access an invalid pointer.
Double-clicking the edges of a window results in the edge being extended
until it latches to the screen edge. This used to violate the fixed
aspect ratio property of certain windows because of only extending the
window in one dimension.
This commit adds a special case to the latching logic that makes sure to
also extend the other dimension of the window such that the fixed aspect
ratio is maintained.
The IPC layer between chromes and LibWeb now understands that multiple
top level traversables can live in each WebContent process.
This largely mechanical change adds a billion page_id/page_index
arguments to make sure that pages that end up opening new WebViews
through mechanisms like window.open() still work properly with those
extra windows.
WIFEXITED() returns a bool, so previously we were setting
exited_successfully to true when the service was terminated by a signal,
and false if it exited, regardless of the exit status. To test the exit
status, we have to use WEXITSTATUS() instead.
This causes us to correctly use the "3 tries then give up" logic for
services that crash, instead of infinitely attempting to respawn them.
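The corrected check looks like this (a minimal sketch of the logic
described above, not the exact SystemServer code):
```
#include <sys/wait.h>

// WIFEXITED() only tells us *whether* the child exited normally (as
// opposed to being killed by a signal); the actual exit status has to
// be read with WEXITSTATUS().
static bool exited_successfully(int wait_status)
{
    if (!WIFEXITED(wait_status))
        return false;
    return WEXITSTATUS(wait_status) == 0;
}
```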
The layout system can't currently answer the question "what height does
this Label want to be, if it has a certain width available?" Instead it
relies on counting newlines, which doesn't work in a lot of cases. This
made the notification windows look and behave in a funky way when their
text wraps onto multiple lines.
This patch uses TextLayout to measure how many lines we need, and then
manually sets the Label and Window heights to match. It's a bit hacky,
hence the FIXME, but it does make things behave the way they are
supposed to.
This refactoring makes WebContent less aware of LibWeb internals.
The code that initializes paint recording commands now resides in
`Navigable::paint()`. Additionally, we no longer need to reuse
PaintContext across iframes, allowing us to avoid saving and restoring
its state before recursing into an iframe.
This really just takes the [App].Name config value and removes the
ampersands (`&`) from it. These ampersands mark hotkeys for the
system menu. Instead of typing the name twice (which is error-prone),
use the fact that the name is just menu_name without the ampersand:
strip the ampersand for the name, and use the value as-is for the
menu_name.
`JsonValue::to_byte_string` has peculiar type-erasure semantics which
are usually not what's intended. Unfortunately, it also has a very
stereotypical name that does not warn about the unexpected behavior. So
let's prefix it with `deprecated_` to make new code use `as_string` if
it just wants to get the string value, or `serialized<StringBuilder>` if
it needs to do proper serialization.
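Usage then splits roughly like this (assuming the AK JsonValue interface
described above):
```
// Just want the payload of a string-typed JSON value:
auto name = json_value.as_string();

// Want a real JSON serialization of whatever the value is:
auto json_text = json_value.serialized<StringBuilder>();
```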
Instead of trying to acquire from an individual mouse device, let's read
from /dev/input/mice, where all mouse packets are blended together from
all mouse devices that are attached to the machine.
Instead of spawning these processes from the WebContent process, we now
create them in the Browser chrome.
Part 1/N of "all processes are owned by the chrome".
Considering an operation like the following:
document.cookie = "cookie=value";
const value = document.cookie;
If the IPC for the cookie setter is asynchronous, the getter can execute
while the browser/SQLServer processes are still handling the setter.
This is often seen when hammering the document with cookie requests.
The architecture of SQLServer is currently such that it sends results
over IPC one row at a time. After the rows are exhausted, it sends a
completion IPC. However, it does not wait for the client to finish
processing a row before sending another row or the completion signal.
This can result in clients hanging if the completion comes in while a
row is being processed. At least in the case of WebView::Database, the
result is that the completion signal is dropped, and the browser then
hangs forever waiting for that signal (after it finishes processing the
row).
This patch makes SQLServer asynchronously wait for the client to tell it
that the row has been processed and the next row (or completion) may be
sent. We repurpose the `m_ongoing_executions` in SQLStatement for this
purpose (this member was oddly being written to, but otherwise unused).
This really only implements a heuristic, assuming that HTTP/1.0 servers
cannot handle having multiple active connections; this assumption has
lots of false positives, but ultimately HTTP/1.0 is an out-of-date HTTP
version and people using it should just switch to a newer standard
anyway.
Specifically, Python's "SimpleHTTPRequestHandler" utilises a
single-threaded HTTP/1.0 server, which means no keepalive and more
importantly, hangs and races with more than a single connection present.
This commit makes it so we serialise all requests to servers that are
known to serve only a single request per connection (aka HTTP/1.0 with
our setup, as we unconditionally request keepalive).
When the WebContent process has painted to its shared bitmaps, it sends
a synchronous IPC to the browser process to let the chrome paint. It is
synchronous to ensure the WC process doesn't paint onto the backing
bitmap again while it is being displayed.
However, this can cause a crash at exit if the browser process quits
while the WC process is waiting for a response to this IPC.
This patch makes the painting logic asynchronous by letting the browser
process broadcast when it has finished handling the paint IPC. The WC
process will not paint anything again until it receives that message. If
it had tried to repaint while waiting for that message, that paint will
be deferred until it arrives.
Fixes #22582
Previously, the job (and the cache of jobs) could lead to a UAF: after
`.start()` was called on a job, it was immediately destroyed.
Example of previous bug:
```
// Note due to the cache &jobA == &jobB
auto& jobA = Job::ensure("https://r.bing.com/");
auto& jobB = Job::ensure("https://r.bing.com/");
// Previously, the first .start() free'd the job
jobA.start();
// So the second .start() was a UAF
jobB.start();
```
These IPCs are different from other IPCs in that we can't just set up a
callback function to be invoked when WebContent sends us the screenshot
data. There are multiple places that would set that callback, and they
would step on each other's toes.
Instead, the screenshot APIs on ViewImplementation now return a Promise
which callers can interact with to receive the screenshot (or an error).
Instead of returning HeapBlock memory to the kernel (or a non-type
specific shared cache), we now keep a BlockAllocator per CellAllocator
and implement "deallocation" by basically informing the kernel that we
don't need the physical memory right now.
This is done with MADV_FREE or MADV_DONTNEED if available, but for other
platforms (including SerenityOS) we munmap and then re-mmap the memory
to achieve the same effect. It's definitely clunky, so I've added a
FIXME about implementing the madvise options on SerenityOS too.
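A minimal sketch of the madvise-or-remap idea (an assumed helper, not
the real BlockAllocator code):
```
#include <cstddef>
#include <sys/mman.h>

// "Deallocate" a block by giving its physical pages back to the kernel
// while keeping the virtual address range reserved for this allocator.
static void recycle_block(void* block, size_t size)
{
#if defined(MADV_FREE)
    (void)madvise(block, size, MADV_FREE);
#elif defined(MADV_DONTNEED)
    (void)madvise(block, size, MADV_DONTNEED);
#else
    // Clunky fallback: unmap and immediately re-map the same range, so
    // the addresses stay ours but the pages are discarded.
    (void)munmap(block, size);
    (void)mmap(block, size, PROT_READ | PROT_WRITE,
        MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
#endif
}
```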
The important outcome of this change is that GC types that use a
type-specific allocator become immune to use-after-free type confusion
attacks, since their virtual addresses will only ever be re-used for
the same exact type again and again.
Fixes #22274
All DOM node mutation IPCs now invoke an async completion IPC after the
DOM is mutated. This allows consolidating where the Inspector updates
its view and the selected DOM node.
This also allows improving the response to removing a DOM node. We would
previously just select the <body> tag after removing a DOM node because
the Inspector client had no idea what node preceded the removed node.
Now the WebContent process can just indicate what that node is. So now
after removing a DOM node, we inspect either its previous sibling (if it
had one) or its parent.
Rename them from "did_get_*" to "did_inspect_*", to correspond to the
request methods "inspect_dom_tree" and "inspect_accessibility_tree". No
functional change, but this makes it a bit easier to stare at IPC files
side-by-side and know which response method corresponds to a request
method at a quick glance.
With this change, instead of applying scroll offsets during the
recording of the painting command list, we do the following:
1. Collect all boxes with scrollable overflow into a PaintContext,
each with an id and the total amount of scrolling offset accumulated
from ancestor scrollable boxes.
2. During the recording phase assign a corresponding scroll_frame_id to
each command that paints content within a scrollable box.
3. Before executing the recorded commands, translate each command that
has a scroll_frame_id by the accumulated scroll offset.
This approach has the following advantages:
- Implementing nested scrollables becomes much simpler, as the
recording phase only requires the correct assignment of the nearest
scrollable's scroll_frame_id, while the accumulated offset from
ancestors is applied subsequently.
- The recording of painting commands is not tied to a specific offset
within scrollable boxes, which means in the future, it will be
possible to update the scrolling offset and repaint without the need
to re-record painting commands.
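Roughly, step 3 above amounts to something like this (illustrative
names, not the real LibWeb types):
```
// Sketch of the pre-execution translation pass described in step 3.
struct ScrollFrame {
    i32 id { 0 };
    Gfx::IntPoint cumulative_offset; // accumulated from ancestor scrollables
};

void translate_recorded_commands(Vector<PaintingCommand>& commands,
    HashMap<i32, ScrollFrame> const& scroll_frames)
{
    for (auto& command : commands) {
        if (!command.scroll_frame_id.has_value())
            continue;
        auto const& frame = scroll_frames.get(*command.scroll_frame_id).value();
        command.translate_by(frame.cumulative_offset);
    }
}
```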
Instead of using a scan code, which for scan code set 2 will not
represent the expected character mapping index, we could just use
another variable in the KeyEvent structure that correctly points to the
character index.
This change is mostly relevant to the KeyboardMapper application, and
also to the WindowServer code, as both handle KeyEvents and need to
use the character mapping index in various situations.
This aligns Workers and Window and MessagePorts to all use the same
mechanism for transferring serialized messages across realms.
It also allows transferring more message ports into a worker.
Re-enable the Worker-echo test, as none of the MessagePort tests have
themselves been flaky, and those are now using the same underlying
implementation.
POSIX requires that broadcast sends will only be allowed if the
SO_BROADCAST socket option was set on the socket.
Also, broadcast sends to protocols that do not support broadcast (like
TCP), should always fail.
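From userspace, opting in looks like the usual setsockopt() dance:
```
#include <sys/socket.h>

// A UDP socket must opt in before sendto() to a broadcast address
// is allowed; without this, the send should now fail.
int enable_broadcast(int udp_fd)
{
    int on = 1;
    return setsockopt(udp_fd, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));
}
```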
In a bunch of cases, this actually ends up simplifying the code as
to_number will handle something such as:
```
Optional<I> opt;
if constexpr (IsSigned<I>)
opt = view.to_int<I>();
else
opt = view.to_uint<I>();
```
For us.
The main goal here however is to have a single generic number conversion
API between all of the String classes.
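With the generic API, the snippet above collapses to a single call
(assuming the unified method is spelled to_number<T>()):
```
// Handles signed and unsigned I alike; no if constexpr needed.
Optional<I> opt = view.to_number<I>();
```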
With this change, chrome no longer has to ask the WebContent process
to paint the next frame into a specified bitmap. Instead, it allocates
bitmaps and sends them to WebContent, which then lets chrome know when
the painting is done.
This work is a preparation to move the execution of painting commands
into a separate thread. Now, it is much easier to start working on the
next frame while the current one is still rendering. This is because
WebContent does not have to inform chrome that the current frame is
ready before it can request the next frame.
Additionally, as a side bonus, we can now eliminate the
did_invalidate_content_rect and did_change_selection IPC calls. These
were used solely for the purpose of informing chrome that it needed to
request a repaint.
This commit un-deprecates DeprecatedString, and repurposes it as a byte
string.
As the null state has already been removed, there are no other
particularly hairy blockers in repurposing this type as a byte string
(what it _really_ is).
This commit is auto-generated:
$ xs=$(ack -l \bDeprecatedString\b\|deprecated_string AK Userland \
Meta Ports Ladybird Tests Kernel)
$ perl -pie 's/\bDeprecatedString\b/ByteString/g;
s/deprecated_string/byte_string/g' $xs
$ clang-format --style=file -i \
$(git diff --name-only | grep \.cpp\|\.h)
$ gn format $(git ls-files '*.gn' '*.gni')
Let's not assume there is one global OpenGL context, because that might
change once we start creating more than one page inside a single
WebContent process, or contexts for WebGL.
With this change, Document now always has a Web::Page. This means we no
longer rely on the breakable link between Document and BrowsingContext
to find a relevant Web::Page.
Fixes #22290
Prior to this commit, it was possible to get a socket stuck in a state
with enqueued entries, waiting for a nonexistent request to finish.
This state could be entered by issuing a request immediately after the
completion of another one, and before deferred_invoke execution in the
event loop.
This commit closes this hole by making sure the socket is never in a
state where it can queue requests without an active job.
No functional impact intended. This is just a more complicated way of
writing what we have now.
The goal of this commit is to be able to store the 'name' of a
pseudo-element for use in serializing 'unknown -webkit-
pseudo-elements', see:
https://www.w3.org/TR/selectors-4/#compat
This is quite awkward, as in pretty much all cases just the selector
type enum is enough, but we will need to cache the name for serializing
these unknown selectors. I can't figure out any reason why we would need
this name anywhere else in the engine, so pretty much everywhere is
still just passing around this raw enum. But this change will allow us
to easily store the name inside of this new struct for when it is needed
for serialization, once those webkit unknown elements are supported by
our engine.
The empty value we are currently returning hits the `decltype(nullptr)`
constructor of TakeDocumentScreenshotResponse. This is interpreted as an
invalid response by LibIPC. Instead, return a Gfx::ShareableBitmap that
the client-side can handle, and e.g. display an error, rather than the
message just being dropped.