When a platform key press or release event is repeated, we now pass
along a `repeat` flag to indicate that auto-repeating is happening. This
flag eventually ends up in `KeyboardEvent.repeat`.
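As a rough sketch of that flow (the structures and names below are made up for illustration, not the actual event types):

    // Hypothetical platform-side key event carrying the auto-repeat flag.
    struct PlatformKeyEvent {
        int key_code { 0 };
        bool repeat { false }; // set when the platform reports auto-repeat
    };

    // Hypothetical DOM-side init: the flag ends up as KeyboardEvent.repeat.
    struct KeyboardEventInit {
        bool repeat { false };
    };

    KeyboardEventInit make_keyboard_event_init(PlatformKeyEvent const& event)
    {
        return { .repeat = event.repeat };
    }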
Previously, tests would intermittently fail because the current session
wasn't yet aware of a newly created window handle.
Co-authored-by: Timothy Flynn <trflynn89@pm.me>
This is strictly nicer than passing them around as i32 everywhere,
and by switching to i64 as the underlying type, ID allocation becomes as
simple as incrementing an integer.
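For illustration, a distinct ID type along these lines might look like the following; the names here are hypothetical, not the ones used in the codebase:

    #include <cstdint>

    // Hypothetical distinct ID type: harder to mix up with plain integers
    // than passing i32/i64 values around directly.
    struct ConnectionID {
        std::int64_t value { 0 };

        bool operator==(ConnectionID const&) const = default;
    };

    // With a 64-bit underlying type, allocation is just an increment;
    // overflow is not a practical concern.
    ConnectionID allocate_connection_id()
    {
        static std::int64_t next_id = 0;
        return ConnectionID { ++next_id };
    }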
On the view-source page, generate anchor tags for any 'href' or 'src'
attribute value we come across. This handles both the case where the
attribute contains an absolute URL and the case where it contains a URL
relative to the page.
This requires sending the document's base URL over IPC to resolve
relative URLs.
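A minimal sketch of the idea, using made-up helper names and a deliberately naive absolute-URL check rather than the real URL parser:

    #include <string>

    // Sketch only: wrap an attribute value in an anchor for the view-source
    // page, resolving relative values against the document's base URL
    // (which is why the base URL is now sent over IPC).
    std::string view_source_anchor(std::string const& attribute_value,
                                   std::string const& document_base_url)
    {
        bool is_absolute = attribute_value.find("://") != std::string::npos;
        auto resolved = is_absolute ? attribute_value
                                    : document_base_url + attribute_value;
        return "<a href=\"" + resolved + "\">" + attribute_value + "</a>";
    }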
We've added a few JS::Handle members to this class over time. Let's
avoid creating a new GC root for each of these, and explicitly add a
visitation method.
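For reference, the pattern looks roughly like this; GCPtr and Visitor below are stand-ins for the real GC types, and the member names are invented:

    struct Document;
    struct Callback;

    template<typename T>
    struct GCPtr {
        T* ptr { nullptr };
    };

    struct Visitor {
        template<typename T>
        void visit(GCPtr<T> const&)
        {
            // mark the pointee as reachable
        }
    };

    class ExampleClient {
    public:
        void visit_edges(Visitor& visitor)
        {
            // One visitation method keeps all GC-allocated members alive,
            // instead of each member being its own JS::Handle root.
            visitor.visit(m_document);
            visitor.visit(m_callback);
        }

    private:
        GCPtr<Document> m_document;
        GCPtr<Callback> m_callback;
    };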
Some of this code is older than the widespread use of GCPtr. The fact that
these functions return raw pointers has been a point of confusion at times,
so let's just indicate that they are non-null.
Cookies have a minimum expiry resolution of 1 second. So to test cookie
expiration, the test had to idle for at least a second, which is quite a
noticeable delay now that LibWeb tests are parallelized.
Instead, we can add an internal API to expire cookies with a time offset
to avoid this idle delay.
Contrary to the spec, the Set Timeouts endpoint should update the existing
timeouts configuration in-place, rather than replacing it. WPT expects this,
and other browsers already implement this endpoint this way.
Set the connection timeout, which only limits the connection phase of the
request.
Previously, CURLOPT_TIMEOUT would apply to all transfer operations, which
could result in legitimate upload or download operations being cancelled.
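With libcurl, that distinction corresponds to something like the following sketch (the timeout value is arbitrary, and this is not the exact RequestServer code):

    #include <curl/curl.h>

    void configure_timeouts(CURL* easy)
    {
        // Limit only the connection phase of the request...
        curl_easy_setopt(easy, CURLOPT_CONNECTTIMEOUT_MS, 90'000L);

        // ...and leave CURLOPT_TIMEOUT unset, since it would cap the entire
        // transfer and cancel legitimate long-running uploads/downloads.
    }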
The spec says we don't need to await navigations if we navigate to the
same URL that we are already on, but at least in our implementation, we
should still await the page load. Otherwise, we will invoke WebDriver
endpoints on the wrong page.
This reverts commit 556a0936dd.
This was causing a large slowdown in WPT, and a crash on macOS during
session shutdown when running WebDriver manually.
We are currently trying to access the current parent and top-level
browsing contexts from the current BC itself. However, if the current BC
is closed, its association to the parent and top-level BCs is lost, and
we are no longer able to handle WebDriver endpoints involving those BCs.
Instead, let's store the parent and top-level BCs separately, and update
them in accordance with the spec.
Implements https://github.com/whatwg/html/pull/10007, which essentially moves
style, layout, and painting from the HTML event loop processing model into an
HTML task with the "rendering" source.
The biggest difference is that now we no longer schedule HTML event loop
processing whenever we might need a repaint, but instead queue a global
rendering task 60 times per second that will check if any documents
need a style/layout/paint update.
That is a great simplification of our repaint scheduling model. Before, we had:
- An optional timer that schedules animation updates at 60 Hz
- An optional timer that schedules rAF updates
- A PaintWhenReady state to schedule a paint if the navigable doesn't have a
  rendering opportunity on the last event loop iteration
Now all of that is gone, replaced with a single timer that drives repainting
at 60 Hz, and we don't have to worry about excessive repaints.
In the future, the hard-coded 60 Hz refresh interval could be replaced with
CADisplayLink on macOS and a similar API on Linux, to drive repainting in
synchronization with the display's refresh rate.
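Conceptually, the new model boils down to one periodic task, roughly like this sketch (the types and names are illustrative, not the real event loop API):

    #include <chrono>
    #include <vector>

    struct Document {
        bool needs_style_update { false };
        bool needs_layout_update { false };
        bool needs_repaint { false };
        void update_style() {}
        void update_layout() {}
        void paint() {}
    };

    // A single repeating 60 Hz timer replaces the separate animation, rAF,
    // and PaintWhenReady scheduling paths.
    constexpr auto rendering_interval = std::chrono::milliseconds(1000 / 60);

    // Invoked every rendering_interval; only documents that actually need
    // work get a style/layout/paint update.
    void rendering_task(std::vector<Document*> const& documents)
    {
        for (auto* document : documents) {
            if (document->needs_style_update)
                document->update_style();
            if (document->needs_layout_update)
                document->update_layout();
            if (document->needs_repaint)
                document->paint();
        }
    }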
For example, in the following abbreviated test HTML:
<span>some text</span>
<script>println("whf")</script>
We would have to craft the expectation file to include the "some text"
segment, usually with some leading whitespace. This is a bit annoying,
and makes it difficult to manually craft expectation files.
So instead of comparing the expectation against the entire DOM inner
text, we now send the inner text of just the <pre> element containing
the test output when we invoke `internals.signalTextTestIsDone`.
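A toy illustration of the new comparison, reusing the example above (the helper name is made up):

    #include <string>

    // Before: the expectation had to match the whole DOM inner text,
    // e.g. "some text\nwhf\n". Now it only has to match the <pre>'s text,
    // e.g. "whf\n".
    bool text_test_passes(std::string const& pre_inner_text,
                          std::string const& expectation)
    {
        return pre_inner_text == expectation;
    }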
There is an issue where GIFs with many frames cannot be loaded, as each
bitmap is sent over IPC using a separate file descriptor, and there is a
limit on the maximum number of descriptors per IPC message. Thus, trying to
load GIFs with more than 64 frames (the current limit) causes the image
decoder process to die.
This commit introduces the BitmapSequence class, which is a thin wrapper
around the type Vector<Optional<NonnullRefPtr<Gfx::Bitmap>>> and
provides an IPC encode/decode routine that collates all bitmap data into
a single buffer so that only a single file descriptor is required per
IPC transfer, even if multiple frames are being sent.
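The shape of the wrapper is roughly as follows; the member layout is a sketch, and the IPC encode/decode specializations are omitted:

    #include <AK/NonnullRefPtr.h>
    #include <AK/Optional.h>
    #include <AK/Vector.h>
    #include <LibGfx/Bitmap.h>

    // Sketch of the wrapper's shape; the IPC encode/decode routines, which
    // flatten all frame data into a single buffer so only one file
    // descriptor crosses the IPC boundary, are not shown here.
    struct BitmapSequence {
        Vector<Optional<NonnullRefPtr<Gfx::Bitmap>>> bitmaps;
    };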
Similar to script execution, this spins the WebDriver process until the
action is complete (rather than spinning the WebContent process, which
we've seen result in deadlocks).
In particular, we need to convert web element references to the actual
element. The AO isn't fully implemented because we will need to work out
mixing JsonValue types with JS value types, which currently isn't very
straightforward with our JSON clone algorithm.
After closing a window, it is the client's job to switch to another window
before executing any other command. Currently, if that has not happened, we
will crash when we try to send an IPC message to a window handle that we no
longer hold. This patch makes us return a "no such window" error instead.
The exceptions to this new check are the "Switch to Window" and "Get
Window Handles" commands.
We are currently returning a JSON object of the form:

    {
        "name": "element-6066-11e4-a52e-4f735466cecf",
        "value": "foo"
    }

Instead, we are expected to return an object of the form:

    {
        "element-6066-11e4-a52e-4f735466cecf": "foo"
    }
It's difficult to know what we need to implement if we silently ignore
these endpoints. Let's log the endpoints and their parameters, and clean
up the wall of FIXME comments to be easier to grok.
Previously, some otherwise unimplemented WebDriver endpoints were indicating
that they had executed successfully, which was causing a large number of Web
Platform Tests to time out when they should have failed.
The IPCs to request a page's text, layout tree, etc. are currently all
synchronous. This can result in a deadlock when WebContent also makes
a synchronous IPC call, as both ends will be waiting on each other.
This replaces the page info IPCs with a single, asynchronous IPC. This
new IPC is promise-based, much like our screenshot IPC.
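The general shape of such a promise-based request, glossing over the actual IPC plumbing (all names here are illustrative):

    #include <cstdint>
    #include <functional>
    #include <string>
    #include <unordered_map>

    // Illustrative sketch: the UI process records a callback, fires one
    // asynchronous IPC, and resolves the callback when the (also
    // asynchronous) reply arrives. Neither process ever blocks waiting on
    // the other, so the deadlock described above cannot occur.
    class PageInfoClient {
    public:
        void request_page_text(std::function<void(std::string)> on_result)
        {
            auto request_id = m_next_request_id++;
            m_pending[request_id] = std::move(on_result);
            send_async_request(request_id); // fire-and-forget IPC
        }

        // Called when the asynchronous reply arrives from WebContent.
        void did_receive_page_text(std::uint64_t request_id, std::string text)
        {
            if (auto it = m_pending.find(request_id); it != m_pending.end()) {
                it->second(std::move(text));
                m_pending.erase(it);
            }
        }

    private:
        void send_async_request(std::uint64_t) { /* IPC plumbing elided */ }

        std::uint64_t m_next_request_id { 0 };
        std::unordered_map<std::uint64_t, std::function<void(std::string)>> m_pending;
    };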
If we already destroyed our timer during destruction, and then curl
tries to flush its timeouts when we tear down the multi, we can just
ignore the timer callbacks.
We were only looking at the current top-level navigable and its children
when searching for the specified window handle. We need to search *all*
known navigables if the handle belongs to a window not in the current
tree.
This is one of the few endpoints that does not ensure a top-level BC is
open. It's a bit of an implementation-defined endpoint, so let's protect
against a non-existent BC explicitly.
When we create a WebDriverConnection object, we currently hand it the
page client for which it was opened, and perform all actions on that
client. However, some WebDriver endpoints change the browsing context
(and therefore page client) on which future commands should be executed.
For example, the switch-frame endpoint will switch the current browsing
context to a frame/iframe context.
This patch implements the current browsing context (and current top-
level browsing context) concepts. They are initialized to that of the
original page. Most of this patch is making sure we execute actions on
the correct context.
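In broad strokes, it amounts to something like this sketch (the member and method names are illustrative, not the real WebDriverConnection API):

    // Sketch: the connection tracks which contexts commands apply to,
    // instead of always using the page it was originally created for.
    struct BrowsingContext;

    class WebDriverConnectionSketch {
    public:
        // e.g. the Switch To Frame endpoint updates this.
        void set_current_browsing_context(BrowsingContext& context)
        {
            m_current_browsing_context = &context;
        }

        void set_current_top_level_browsing_context(BrowsingContext& context)
        {
            m_current_top_level_browsing_context = &context;
        }

    private:
        // Initialized to the original page's contexts; most endpoints act on
        // m_current_browsing_context rather than the original page client.
        BrowsingContext* m_current_browsing_context { nullptr };
        BrowsingContext* m_current_top_level_browsing_context { nullptr };
    };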
We currently spin the platform event loop while awaiting scripts to
complete. This causes WebContent to hang if another component is also
spinning the event loop. The particular example that instigated this
patch was the navigable's navigation loop (which spins until the fetch
process is complete), triggered by a form submission to an iframe.
So instead of spinning, we now return immediately from the script
executors, after setting up listeners for either the script's promise to
be resolved or for a timeout. The HTTP request to WebDriver must finish
synchronously though, so now the WebDriver process spins its event loop
until WebContent signals that the script completed. This should be ok -
the WebDriver process isn't expected to be doing anything else in the
meantime.
Also, as a consequence of these changes, we now actually handle timeouts. We
were previously creating the timeout timer, but not starting it.
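Schematically, the WebContent side now does something like the following sketch; the callback-based helpers are stand-ins for the real LibJS promise reactions and LibWeb timers:

    #include <functional>
    #include <string>

    // Illustrative only: set up both completion paths and return
    // immediately, rather than spinning the platform event loop.
    void execute_script_sketch(std::function<void(std::function<void(std::string)>)> run_script_async,
                               std::function<void(std::function<void()>)> start_timeout_timer,
                               std::function<void(std::string)> notify_webdriver)
    {
        // Path 1: the script's promise settles -> send the result over IPC.
        run_script_async([notify_webdriver](std::string result) {
            notify_webdriver(std::move(result));
        });

        // Path 2: the timeout fires first -> report a script timeout.
        // (Previously the timer was created but never started.)
        start_timeout_timer([notify_webdriver] {
            notify_webdriver("error: script timeout");
        });

        // No event-loop spinning here; the WebDriver process spins until one
        // of the two notifications above arrives.
    }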
UI event handlers currently return a boolean where false means the event
was cancelled by a script on the page, or otherwise dropped. It has been
a point of confusion for some time now, as it's not particularly clear
what should be returned in some special cases, or how the UI process
should handle the response.
This adds an enumeration with a few states that indicate exactly how the
WebContent process handled the event. This should remove all ambiguity,
and let us properly handle these states going forward.
There should be no behavior change with this patch. It's meant to only
introduce the enum, not change any of our decisions based on the result.
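For instance, such an enumeration might look like the sketch below; the exact states and names in the codebase may differ:

    // Illustrative states only.
    enum class EventResult {
        Handled,   // WebContent fully handled the event
        Cancelled, // a script on the page cancelled it
        Dropped,   // the event was dropped without being handled
    };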
Added the following Routes, IPC definitions, and boilerplate for the missing
endpoints:
- Switch To Frame
- Switch To Parent Frame
- Element Clear
- Element Send Keys
You can now build with STYLE_INVALIDATION_DEBUG and get a debug stream
of reasons why style invalidations are happening and where.
I've rewritten this code many times, so instead of throwing it away once
again, I figured we should at least have it behind a flag.
This change updates `ExecuteScript::execute_script()` and
`ExecuteScript::execute_async_script()` to bring their behavior in line with
each other and the current specification text.
Instances of the variable `timeout` have also been renamed to
`timeout_ms`, for clarity.
Even though the underlying time zone is already cached by LibUnicode, JS
performs additional expensive lookups with that time zone. There's no
need to do those lookups again until the system time zone has changed.
Choosing options from the `<select>` will load and display that style
sheet's source text, with some checks to make sure that the text that
just loaded is the one we currently want.
The UI is a little goofy when scrolling, as it uses `position: sticky`
which we don't implement yet. But that's just more motivation to
implement it! :^)
This will be used by the inspector, for showing style sheet contents.
Identifying a specific style sheet is a bit tricky. Depending on where
it came from, a style sheet may have a URL, it might be associated with
a DOM element, both, or neither. This varied information is wrapped in
a new StyleSheetIdentifier struct.
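A rough idea of what such a struct could hold; the fields and enum values below are a guess for illustration, not the actual definition:

    #include <optional>
    #include <string>

    // Sketch: enough information to pick out one style sheet, whether it
    // came from a URL, a DOM element (e.g. a <style> tag), both, or neither.
    struct StyleSheetIdentifier {
        enum class Type {
            StyleElement, // <style> in the document
            LinkElement,  // <link rel="stylesheet">
            ImportRule,   // @import
            UserAgent,    // built-in UA sheet
        };

        Type type { Type::StyleElement };
        std::optional<std::string> dom_element_unique_id; // if element-backed
        std::optional<std::string> url;                   // if it has a URL
    };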
This creates a TimeZoneWatcher in the UI process to inform all open
WebContent processes when the time zone changes. The WebContent process
will clear its time zone cache to retrieve a fresh zone the next time
it is asked for one.
- Expose table from console object
- Add new Table log level
- Create a JS object that represents table rows and columns
- Print table as HTML using WebContentConsoleClient
When working on the Inspector's HTML, it's often kind of tricky to debug
when an element is styled / positioned incorrectly. We don't have a way
to inspect the Inspector itself.
This adds a button to the Inspector to export its HTML/CSS/JS contents
to the downloads directory. This allows for more easily testing changes,
especially by opening the exported HTML in another browser's dev tools.
We will ultimately likely remove this button (or make it hidden) by the
time we are production-ready. But it's quite useful for now.
We use instances of `Gfx::Bitmap` to move pixel data all the way from
raw image bytes up to the Skia renderer. A vital piece of information
for correct blending of bitmaps is the alpha type, i.e. are we dealing
with premultiplied or unpremultiplied color values?
Premultiplied means that the RGB colors have been multiplied with the
associated alpha value, i.e. RGB(255, 255, 255) with an alpha of 2% is
stored as RGBA(5, 5, 5, 2%).
Unpremultiplied means that the original RGB colors are stored,
regardless of the alpha value. I.e. RGB(255, 255, 255) with an alpha of
2% is stored as RGBA(255, 255, 255, 2%).
It is important to know how the color data is stored in a
`Gfx::Bitmap`, because correct blending depends on knowing the alpha
type: premultiplied blending uses `S + (1 - A) * D`, while
unpremultiplied blending uses `A * S + (1 - A) * D`.
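To make the difference concrete, here is a standalone sketch (not the Gfx::Bitmap API) converting one unpremultiplied pixel to premultiplied storage:

    #include <cstdint>

    struct Pixel {
        std::uint8_t r, g, b, a;
    };

    // Unpremultiplied -> premultiplied: scale each color channel by alpha.
    // E.g. RGBA(255, 255, 255, 5) (about 2% alpha) becomes RGBA(5, 5, 5, 5).
    Pixel premultiply(Pixel p)
    {
        auto scale = [&](std::uint8_t channel) {
            return static_cast<std::uint8_t>(channel * p.a / 255);
        };
        return Pixel { scale(p.r), scale(p.g), scale(p.b), p.a };
    }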
This adds the alpha type information to `Gfx::Bitmap` across the board.
It isn't used anywhere yet.
We don't want to set the intrinsic Console object's client to a non-top-level
client, such as one created for a subframe. We also want to make sure the
Console client is updated if the top-level document has changed.