This abstraction will help us to support multiple IPC transport
mechanisms going forward. For now, we only have a TransportSocket that
implements the same behavior we previously had, using Unix Sockets for
IPC.
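As a hedged illustration of what such an abstraction could look like (the interface, method names, and members below are assumptions, not the actual LibIPC classes): an abstract transport that the IPC layer talks to, with a socket-backed implementation preserving today's Unix-socket behavior.
```
// Hypothetical shape of the abstraction; names are illustrative only.
#include <cstddef>
#include <sys/types.h>
#include <unistd.h>

class Transport {
public:
    virtual ~Transport() = default;
    virtual ssize_t send(void const* data, size_t size) = 0;
    virtual ssize_t receive(void* buffer, size_t size) = 0;
};

class TransportSocket final : public Transport {
public:
    explicit TransportSocket(int socket_fd)
        : m_fd(socket_fd)
    {
    }
    virtual ~TransportSocket() override { ::close(m_fd); }

    virtual ssize_t send(void const* data, size_t size) override { return ::write(m_fd, data, size); }
    virtual ssize_t receive(void* buffer, size_t size) override { return ::read(m_fd, buffer, size); }

private:
    int m_fd { -1 };
};
```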
Set the connection timeout, which only limits the connection phase of the
request.
Previously, CURLOPT_TIMEOUT would apply to all transfer operations, which
could result in legitimate upload or download operations being
cancelled.
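For illustration, a small sketch of the distinction using plain libcurl (the option names are real libcurl options; the 30-second value is an arbitrary example, not taken from the commit):
```
#include <curl/curl.h>

static void configure_timeouts(CURL* easy)
{
    // Bound only the connection phase of the request.
    curl_easy_setopt(easy, CURLOPT_CONNECTTIMEOUT, 30L);

    // CURLOPT_TIMEOUT, by contrast, caps the entire transfer and could cut
    // off a legitimate long-running upload or download:
    // curl_easy_setopt(easy, CURLOPT_TIMEOUT, 30L);
}
```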
If we already destroyed our timer during destruction, and then curl
tries to flush its timeouts when we tear down the multi, we can just
ignore the timer callbacks.
That usually happens in "exceptional" states when the client exits
unexpectedly (crash, force quit mid-load, etc.), leading to the job flush
timer firing and attempting to write to a nonexistent object (the
client).
This commit makes RS simply cancel such jobs; cancelled jobs in this
state go away quietly instead of sending notifications around.
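A minimal sketch of the early-return guard described above, assuming a hypothetical object whose timer may already have been torn down (the names and surrounding structure are illustrative, not the actual RequestServer code):
```
#include <curl/curl.h>
#include <memory>

struct Timer { /* ... */ };

struct ConnectionManager {
    std::unique_ptr<Timer> m_timer;

    // Registered via CURLMOPT_TIMERFUNCTION.
    static int timer_callback(CURLM*, long timeout_ms, void* user_data)
    {
        auto& self = *static_cast<ConnectionManager*>(user_data);
        // During destruction the timer is already gone; curl may still try to
        // flush its timeouts while we tear down the multi, so just ignore it.
        if (!self.m_timer)
            return 0;
        // ... otherwise (re)arm the timer for timeout_ms ...
        (void)timeout_ms;
        return 0;
    }
};
```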
Instead of using a HashMap<ByteString, ByteString, CaseInsensitive...>
everywhere, we now encapsulate this in a class.
Even better, the new class also allows keeping track of multiple headers
with the same name! This will make it possible for HTTP responses to
actually retain all their headers on the perilous journey from
RequestServer to LibWeb.
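A rough sketch of what such a class could look like (hypothetical shape and names, not the actual implementation): lookups are case-insensitive, and duplicate names are preserved so a response can keep every header it was given.
```
#include <algorithm>
#include <cctype>
#include <optional>
#include <string>
#include <vector>

class HeaderMap {
public:
    struct Header {
        std::string name;
        std::string value;
    };

    // Appending rather than overwriting keeps duplicate header names around.
    void set(std::string name, std::string value)
    {
        m_headers.push_back({ std::move(name), std::move(value) });
    }

    // Case-insensitive lookup of the first header with this name.
    std::optional<std::string> get(std::string const& name) const
    {
        for (auto const& header : m_headers) {
            if (equals_ignoring_case(header.name, name))
                return header.value;
        }
        return {};
    }

    std::vector<Header> const& headers() const { return m_headers; }

private:
    static bool equals_ignoring_case(std::string const& a, std::string const& b)
    {
        return a.size() == b.size()
            && std::equal(a.begin(), a.end(), b.begin(), [](unsigned char x, unsigned char y) {
                   return std::tolower(x) == std::tolower(y);
               });
    }

    std::vector<Header> m_headers;
};
```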
Most of IPC::Connection is thread-safe, but the responsiveness timer is
very much not, so this commit makes sure only one thread at a time can
send data through IPC, avoiding threads racing in IPC::Connection.
This is simply less work than making sure everything IPC::Connection
uses (now and later) is thread-safe.
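As a hedged illustration of the idea (not the actual IPC::Connection internals; the names below are made up), serializing senders behind one lock keeps the non-thread-safe pieces, like the responsiveness timer, on a single thread at a time:
```
#include <mutex>
#include <vector>

class Connection {
public:
    void post_message(std::vector<unsigned char> const& bytes)
    {
        // Only one thread at a time may go through the send path.
        std::lock_guard<std::mutex> lock(m_send_lock);
        write_to_socket(bytes);
        restart_responsiveness_timer(); // not thread-safe on its own
    }

private:
    void write_to_socket(std::vector<unsigned char> const&) { /* ... */ }
    void restart_responsiveness_timer() { /* ... */ }

    std::mutex m_send_lock;
};
```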
Previously, RS handled all the requests on an event loop, which meant
connections could be started in the middle of other connections being
started (potentially blowing up the stack), ultimately causing requests
to be delayed because of other requests.
This commit reworks the way we handle these (specifically starting
connections) by first serialising the requests, and then performing them
concurrently on multiple threads, which yields a significant increase in
loading performance and reliability.
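A hypothetical sketch of that shape (illustrative names only, not the actual RS code): requests are queued first, and worker threads drain the queue, so starting one connection can no longer re-enter the start of another.
```
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

class RequestQueue {
public:
    explicit RequestQueue(size_t worker_count)
    {
        for (size_t i = 0; i < worker_count; ++i)
            m_workers.emplace_back([this] { worker_loop(); });
    }

    ~RequestQueue()
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_done = true;
        }
        m_wake.notify_all();
        for (auto& worker : m_workers)
            worker.join();
    }

    // Serialize the request instead of starting it on the spot.
    void enqueue(std::function<void()> start_request)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_pending.push_back(std::move(start_request));
        }
        m_wake.notify_one();
    }

private:
    void worker_loop()
    {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(m_mutex);
                m_wake.wait(lock, [this] { return m_done || !m_pending.empty(); });
                if (m_pending.empty())
                    return;
                job = std::move(m_pending.front());
                m_pending.pop_front();
            }
            job(); // perform the connection start off the main event loop
        }
    }

    std::mutex m_mutex;
    std::condition_variable m_wake;
    std::deque<std::function<void()>> m_pending;
    std::vector<std::thread> m_workers;
    bool m_done { false };
};
```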
Add factory functions to distinguish between when the owner of the File
wants to transfer ownership to the new IPC object (adopt) or to send a
copy of the same fd to the IPC peer (clone).
This is more intuitive than the previous behavior. Previously, an
IPC::File would default to a shallow clone of the file descriptor,
only *actually* calling dup(2) for the fd when encoding it into an
IPC MessageBuffer. Now the dup(2) for the fd is explicit in the clone_fd
factory function.
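A hedged sketch of the two factory functions; only clone_fd is named above, so its adopt counterpart and the surrounding class shape are assumptions for illustration:
```
#include <unistd.h>

class File {
public:
    // The caller hands over ownership of fd; no dup() happens.
    static File adopt_fd(int fd) { return File(fd); }

    // The caller keeps its fd; the copy sent to the peer is dup(2)'d here,
    // explicitly, rather than later during message encoding.
    static File clone_fd(int fd) { return File(::dup(fd)); }

    ~File()
    {
        if (m_fd >= 0)
            ::close(m_fd);
    }

    File(File&& other)
        : m_fd(other.m_fd)
    {
        other.m_fd = -1;
    }
    File(File const&) = delete;

    int fd() const { return m_fd; }

private:
    explicit File(int fd)
        : m_fd(fd)
    {
    }

    int m_fd { -1 };
};
```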
This URL library ends up being a relatively fundamental base library of
the system, as LibCore depends on LibURL.
This change has two main benefits:
* Moving AK back more towards being an agnostic library that can
be used between the kernel and userspace. URL has never really fit
that description - and is not used in the kernel.
* URL _should_ depend on LibUnicode, as it needs punycode support.
However, it's not really possible to do this inside of AK as it can't
depend on any external library. This change brings us a little closer
to being able to do that, but unfortunately we aren't there quite
yet, as the code generators depend on LibCore.
This makes it so the clients don't have to wait for RS to become
responsive, potentially allowing them to do other things while RS
handles the connections.
Fixes #23306.
Fixes #22582.
Previously, the job (and the cache of them) would lead to a UAF: after
`.start()` was called on the job, it would be immediately destroyed.
Example of previous bug:
```
// Note: due to the cache, &jobA == &jobB
auto& jobA = Job::ensure("https://r.bing.com/");
auto& jobB = Job::ensure("https://r.bing.com/");
// Previously, the first .start() freed the job
jobA.start();
// So the second .start() was a UAF
jobB.start();
```
This commit un-deprecates DeprecatedString, and repurposes it as a byte
string.
As the null state has already been removed, there are no other
particularly hairy blockers in repurposing this type as a byte string
(what it _really_ is).
This commit is auto-generated:
$ xs=$(ack -l '\bDeprecatedString\b|deprecated_string' AK Userland \
Meta Ports Ladybird Tests Kernel)
$ perl -pi -e 's/\bDeprecatedString\b/ByteString/g;
s/deprecated_string/byte_string/g' $xs
$ clang-format --style=file -i \
$(git diff --name-only | grep '\.cpp\|\.h')
$ gn format $(git ls-files '*.gn' '*.gni')
In order to follow the spec text to achieve this, we need to change the
underlying representation of a host in AK::URL to the deserialized format.
Before this, we were parsing the host and then immediately serializing
it again.
Making that change resulted in a whole bunch of fallout.
After this change, callers can access the serialized data through
this concept-host-serializer. The functional end result of this
change is that IPv6 hosts are now correctly serialized to be
surrounded with '[' and ']'.
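To illustrate the end result, a small sketch of a host serializer (the types and helper names are assumptions, and the IPv6 formatting below skips the spec's zero-run compression): the brackets are added at serialization time, from the deserialized host representation.
```
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <string>
#include <variant>

using IPv6Address = std::array<uint16_t, 8>;
// The deserialized host: either a regular (string) host or an IPv6 address.
using Host = std::variant<std::string, IPv6Address>;

// Naive group-by-group formatting; the real spec algorithm also compresses
// the longest run of zero groups into "::".
static std::string serialize_ipv6(IPv6Address const& address)
{
    std::string result;
    for (std::size_t i = 0; i < address.size(); ++i) {
        char buffer[8];
        std::snprintf(buffer, sizeof(buffer), "%x", static_cast<unsigned>(address[i]));
        result += buffer;
        if (i != address.size() - 1)
            result += ':';
    }
    return result;
}

std::string serialize_host(Host const& host)
{
    // IPv6 hosts get their surrounding '[' and ']' here, at serialization time.
    if (auto const* ipv6 = std::get_if<IPv6Address>(&host))
        return "[" + serialize_ipv6(*ipv6) + "]";
    return std::get<std::string>(host);
}
```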
We have a new, improved string type coming up in AK (OOM aware, no null
state), and while it's going to use UTF-8, the name UTF8String is a
mouthful - so let's free up the String name by renaming the existing
class to DeprecatedString.
Making the old one have an annoying name will hopefully also help with
quick adoption :^)
URL had properly named replacements for protocol(), set_protocol() and
create_with_file_protocol() already. This patch removes these functions
and updates all call sites to use the functions named according to the
specification.
See https://url.spec.whatwg.org/#concept-url-scheme
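For illustration, the shape of the call-site updates (the spec-named counterparts shown on the right are presumed from the scheme terminology above; exact signatures are omitted):
```
url.protocol()                        ->  url.scheme()
url.set_protocol("https")             ->  url.set_scheme("https")
URL::create_with_file_protocol(path)  ->  URL::create_with_file_scheme(path)
```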
If the request is stopped, RequestServer::did_finish_request() will crash
on the VERIFY() call, since request.total_size.has_value() returns false.
Let us instead check whether it has a value with a conditional expression
and only then call async_request_finished().
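A minimal sketch of the guard, with illustrative stand-in types (the real did_finish_request does considerably more):
```
#include <optional>

struct Request {
    std::optional<unsigned long long> total_size;
};

// Stand-in for the IPC notification sent back to the client.
static void async_request_finished(Request const&) { }

void did_finish_request(Request const& request)
{
    // Previously a VERIFY(request.total_size.has_value()) here would abort
    // when the request was stopped before a total size was ever known.
    if (request.total_size.has_value())
        async_request_finished(request);
}
```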