This commit changes the variables used to represent the size and
progress of downloads from u32 to u64. This allows `pro` and
`Browser` to report the total size and progress of a download
correctly for downloads larger than 4GiB.
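A rough sketch of the shape of the change (the callback and field names below are illustrative, not the exact ones in the tree):

```cpp
#include <AK/Function.h>
#include <AK/Optional.h>
#include <AK/Types.h>

// Illustrative only: progress reporting now uses 64-bit sizes, so totals
// above 4 GiB no longer overflow the way they did with u32.
struct DownloadProgress {
    Optional<u64> total_size;  // unknown if the server sent no Content-Length
    u64 downloaded_size { 0 }; // bytes received so far
};

using OnDownloadProgress = Function<void(Optional<u64> total_size, u64 downloaded_size)>;
```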
Introduce has_ongoing_navigation(), which allows checking whether the
resource state in FrameLoader is still pending. This API is going to be
used in an upcoming fix for wait_for_navigation_to_complete() in
WebDriver.
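A minimal sketch of the new accessor (the member and enum names below are assumptions):

```cpp
// Illustrative sketch: expose whether FrameLoader is still waiting on a
// pending resource, so WebDriver can poll it.
class FrameLoader {
public:
    bool has_ongoing_navigation() const
    {
        // Hypothetical member; the real check inspects the pending resource's state.
        return m_resource_state == ResourceState::Pending;
    }

private:
    enum class ResourceState { Idle, Pending };
    ResourceState m_resource_state { ResourceState::Idle };
};
```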
That's what this class really is; in fact that's what the first line of
the comment says it is.
This commit does not rename the main files, since those will contain
other time-related classes in a little bit.
We achieve this by adding a new Layout::ImageProvider class and having
both HTMLImageElement and HTMLObjectElement inherit from it.
The HTML spec is vague on how object image loading should work, which
is why this first pass is focusing on image elements.
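Sketched out, the shared interface looks roughly like this (the method name and return type are illustrative):

```cpp
namespace Layout {

// Illustrative sketch: layout code asks an ImageProvider for the current
// bitmap without caring which DOM element is behind it.
class ImageProvider {
public:
    virtual ~ImageProvider() = default;
    virtual RefPtr<Gfx::Bitmap const> current_image_bitmap() const = 0;
};

}

// Both element types then inherit from the provider (existing base classes elided):
class HTMLImageElement : public Layout::ImageProvider /* , ... */ { /* ... */ };
class HTMLObjectElement : public Layout::ImageProvider /* , ... */ { /* ... */ };
```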
The check on the currently active document for an existing favicon has
been removed. It caused an issue during navigation because the active
document is only updated after the load, so the favicon check was
executed against the old document.
We never clear content filters on either end of the Browser-WebContent
IPC connection. So when the filters change, we re-append all filters to
the Vector holding them. This incidentally makes it impossible to remove
a filter.
Change both sides to clear their filter lists when receiving a new set
of filters.
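The shape of the fix, as a sketch (the handler name is hypothetical):

```cpp
// Illustrative sketch: replace the filter list wholesale instead of appending,
// so removed filters actually disappear on both ends of the IPC connection.
void ContentFilterHandler::set_content_filters(Vector<String> const& filters)
{
    m_filters.clear_with_capacity();
    for (auto const& filter : filters)
        m_filters.append(filter);
}
```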
This now defaults to serializing the path with percent-decoded segments
(which is what all callers expect), but has an option not to. This fixes
`file://` URLs with spaces in their paths.
The method has also been renamed to serialize_path() to make it clearer
that it generates a new string on each call (except in the
cannot_be_a_base_url() case). A few callers have been updated to avoid
calling this function repeatedly.
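Conceptually, the new API looks something like this (the enum and parameter names are assumptions):

```cpp
class URL {
public:
    enum class PercentDecodeSegments { Yes, No };

    // Builds a fresh string on every call (except in the cannot_be_a_base_url()
    // case), with path segments percent-decoded by default.
    String serialize_path(PercentDecodeSegments = PercentDecodeSegments::Yes) const;
    // ...
};

// Callers that used to re-serialize repeatedly now cache the result:
auto path = url.serialize_path();
if (path.ends_with(".css"sv))
    dbgln("stylesheet path: {}", path);
```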
Fixes an issue where the XML parser fails when the loader passes it
input prefixed with a byte order mark.
It also generally makes sense to pass the text source through an
encoding decoder before parsing. We would probably even want to
introduce a method similar to `create_with_uncertain_encoding` in
`HTMLParser`, but for `XMLParser`, to make it harder to unconsciously
pass non-UTF-8 input to the XML parser.
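Something along these lines (the decoder calls are approximate, shown only to illustrate the idea):

```cpp
// Illustrative sketch: run the raw bytes through a text decoder first (which
// takes care of a leading byte order mark), then hand UTF-8 to the XML parser.
auto decoder = TextCodec::decoder_for("UTF-8"sv);
auto utf8_source = decoder->to_utf8(StringView { raw_bytes });
XML::Parser parser(utf8_source);
auto result = parser.parse();
```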
If a subresource fails to load, we don't care that we got some custom
404 page. The subresource should still be considered failed.
This is an ad-hoc solution that unbreaks Acid2. This code will
eventually be replaced by fetch mechanisms.
This prevents us from setting up the document of a removed browsing
context container (BCC, e.g. <iframe>). Doing so would cause a crash if
the document contains a script that inserts another BCC, since that
would reuse the stale browsing context previously set up for the removed
container, even if it's reinserted.
Required by Prebid.js, which triggers this by inserting an `<iframe>`
into a `<div>` in the active document via innerHTML, then transferring
it to the `<html>` element:
7b7389c5ab/src/utils.js (L597)
This is done in the spec by removing all tasks and aborting all fetches
when a document is destroyed:
https://html.spec.whatwg.org/multipage/document-lifecycle.html#destroy-a-document
See the code comments for a simplified example.
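A very rough sketch of the idea (all names below are hypothetical; see the actual code comments for the real simplified example):

```cpp
// Hypothetical sketch: if the browsing context container backing this document
// has already been removed from the DOM, don't set the document up at all.
// This mirrors the spec's "destroy a document" step of dropping the document's
// tasks and aborting its fetches.
void FrameLoader::set_up_document(DOM::Document& document)
{
    auto* container = document.browsing_context_container();
    if (!container || !container->is_connected())
        return;
    // ... continue with the normal document setup ...
}
```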
If an HTTP response fails with an error code (e.g. 403) but still has
body content, we now render the content.
We only fall back to our own built-in error page if there's no body.
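Roughly (the function names here are assumptions):

```cpp
// Illustrative sketch: prefer the server's own error page when it sent one.
void FrameLoader::handle_failed_response(unsigned status_code, ByteBuffer const& body)
{
    if (!body.is_empty()) {
        // e.g. a custom 403/404 page from the server: render it as-is.
        load_document_from_body(body);
        return;
    }
    // No body at all, so fall back to our built-in error page.
    load_error_page(status_code);
}
```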
There is currently a memory leak with these file request objects due to
the callback on_file_request_finish referencing itself in its capture
list. This object does not need to be reference counted or allocated on
the heap. It is only ever stored in a HashMap until a response is
received from the browser, and it is not shared.
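The leak pattern being removed looks roughly like this (simplified; the callback name is from the text above, the rest is illustrative):

```cpp
#include <AK/Error.h>
#include <AK/Function.h>
#include <AK/NonnullRefPtr.h>
#include <AK/RefCounted.h>
#include <AK/Types.h>

// Illustrative sketch: a ref-counted object whose own callback captures a strong
// reference to it can never drop to zero references, hence the leak.
struct FileRequest : public RefCounted<FileRequest> {
    Function<void(ErrorOr<i32>)> on_file_request_finish;
};

static void setup(NonnullRefPtr<FileRequest> request)
{
    // Capturing `request` by value stores a strong reference inside a member of
    // the very object it keeps alive: a reference cycle.
    request->on_file_request_finish = [request](ErrorOr<i32>) {
        // ...
    };
}
```

Since the object is only ever kept in a HashMap until the response arrives, storing it there by value sidesteps the cycle entirely.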
`Stream` will be qualified as `AK::Stream` until we remove the
`Core::Stream` namespace. `IODevice` now reuses the `SeekMode` that is
defined by `SeekableStream`, since defining its own would require us to
qualify it with `AK::SeekMode` everywhere.
We weren't properly creating a `LoadRequest`, which resulted in `m_page`
not having a value in certain situations inside
`ResourceLoader::load(LoadRequest&)`.
As per Fetch, we are supposed to store cookies from Set-Cookie as soon
as we receive response headers for any HTTP response, even in error
cases.
Required by Twitter to log in, as it sets cookies via XHR.
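A hypothetical sketch of where that hook sits (none of these names are the real API):

```cpp
// Hypothetical sketch: cookies are stored when headers arrive, not when the
// request completes, so error responses still get their cookies persisted.
request->on_headers_received = [&page, url](auto const& headers) {
    for (auto const& set_cookie : headers.values("Set-Cookie"sv))
        page.store_cookie(url, set_cookie);
};
```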
This generally seems like a better name, especially if we somehow also
need a better name for "read the entire buffer, but not the entire file"
somewhere down the line.
This will make it easier to support both string types at the same time
while we convert code, and to track down remaining uses.
One big exception is Value::to_string() in LibJS, where the name is
dictated by the ToString AO.
We have a new, improved string type coming up in AK (OOM aware, no null
state), and while it's going to use UTF-8, the name UTF8String is a
mouthful - so let's free up the String name by renaming the existing
class.
Making the old one have an annoying name will hopefully also help with
quick adoption :^)
Previously we labeled redirects as normal FrameLoader::Type::Navigation.
Now we introduce a new FrameLoader::Type::Redirect and label redirects
with it. This will allow us to handle redirects differently in the
browser (such as overwriting the latest history entry when a redirect
happens) :^)
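Sketched as an enum (the neighboring values are shown only for context and may not match the real list exactly):

```cpp
// Illustrative sketch of the new load type:
enum class Type {
    Navigation,
    Reload,
    Redirect, // new: lets Browser treat a redirect differently, e.g. by
              // replacing the latest history entry instead of pushing one
    IFrame,
};
```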
These lambdas were marked mutable because they captured a Ptr wrapper
class by value, and its previously const-qualified pointer operators
only returned const references to the pointee. Nothing in the lambdas'
state is actually mutated here, and now that the Ptr operators don't add
extra const qualifiers, the mutable keywords can be removed.
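A self-contained sketch of the pattern (a generic wrapper, not the actual Ptr class):

```cpp
// Illustrative sketch: with const-qualified operators that still return T*/T&,
// a by-value capture no longer forces the lambda to be `mutable`.
template<typename T>
class Ptr {
public:
    explicit Ptr(T* ptr)
        : m_ptr(ptr)
    {
    }

    // Previously these handed out const-qualified references when called on a
    // const Ptr (such as a by-value lambda capture), which is what forced the
    // `mutable` keyword on callers. They now return non-const access.
    T* operator->() const { return m_ptr; }
    T& operator*() const { return *m_ptr; }

private:
    T* m_ptr { nullptr };
};

struct Widget {
    void refresh() { }
};

void example(Ptr<Widget> widget)
{
    // No `mutable` needed: nothing in the lambda's state is modified.
    auto callback = [widget] { widget->refresh(); };
    callback();
}
```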