Both calls essentially only differ in one boolean, which dictates
whether to print the value in uppercase or lowercase.
Move the long function call into a new function and pass in the
"uppercase" boolean seperately to avoid having to write everything
twice.
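As a minimal illustration of the pattern, with invented names (the real
call and types come from the surrounding code):

    #include <cstdio>

    // The long call is written once; only the differing boolean is
    // passed in.
    static void print_value(unsigned value, bool uppercase)
    {
        std::printf(uppercase ? "%X\n" : "%x\n", value);
    }

    int main()
    {
        print_value(0xbeef, true);  // prints "BEEF"
        print_value(0xbeef, false); // prints "beef"
    }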
Those functions only differ by the input type of `number`. No other
wrapper does this, as they rely on adjusting the type of the argument on
the caller side instead.
Avoid over-specializing by doing the same for signed numbers.
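Continuing the illustrative sketch above, a signed caller simply casts:

    // Rather than a second wrapper specialized for signed values, the
    // caller adjusts the argument type at the call site (illustrative):
    static void print_signed(int signed_number, bool uppercase)
    {
        print_value(static_cast<unsigned>(signed_number), uppercase);
    }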
We were decoding and then re-encoding the query string in URLs.
This round-trip caused us to lose information about plus ('+')
ASCII characters encoded as "%2B".
A change was previously made to percent-encode plus signs in order to
fix an issue with the Google cookie consent page. Unfortunately, that
treated a symptom of the problem rather than the root cause, and is
incorrect behavior.
When we want to use the find_first_index that base Vector provides, we
need to provide an element of the real contained type. That's impossible
for OwnPtr, however, and even with RefPtr there might be instances where
we have a raw reference to the object we want to find, but no smart
pointer. Therefore, overloading this function (with an identical body,
the magic is done by the find_index templatization) with `T const&` as a
parameter allows these use cases.
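A rough sketch of the shape, with std types standing in for Vector and
OwnPtr:

    #include <cstddef>
    #include <memory>
    #include <vector>

    // Overload taking T const& (illustrative): lets a caller search a
    // vector of smart pointers with only a raw reference in hand.
    template<typename T>
    long find_first_index(std::vector<std::unique_ptr<T>> const& vector, T const& value)
    {
        for (std::size_t i = 0; i < vector.size(); ++i) {
            if (vector[i].get() == &value)
                return static_cast<long>(i);
        }
        return -1;
    }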
On oss-fuzz, the LibJS REPL is provided a file encoded with Windows-1252
with the following contents:
/ô¡°½/
The REPL assumes the input file is UTF-8. So in Windows-1252, the above
is represented as [0x2f 0xf4 0xa1 0xb0 0xbd 0x2f]. The inner 4 bytes are
actually a valid UTF-8 encoding if we only look at the most significant
bits to parse leading/continuation bytes. However, it decodes to the
code point U+121C3D, which is not a valid code point.
This commit adds additional validation to ensure the decoded code point
itself is also valid.
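A sketch of this kind of check (the surrounding decoder is elided;
bounds are per the Unicode spec):

    #include <cstdint>

    // A decoded value is a valid code point only if it is in range and
    // is not a UTF-16 surrogate. U+121C3D from the input above fails
    // the range check.
    bool is_valid_code_point(std::uint32_t code_point)
    {
        if (code_point > 0x10FFFF)
            return false;
        if (code_point >= 0xD800 && code_point <= 0xDFFF)
            return false;
        return true;
    }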
These functions are _very_ misleading, as `first()` and `last()` return
references, but `{first,last}_matching()` return copies of the values.
This commit makes it so that they now return Optional<T&>, eliminating
the copy and the confusion.
This implements Optional<T&> as a T*, a specialization that has been
missing since the early days of Optional.
A lot of find_foo() APIs return an Optional<T>, which imposes a
pointless copy on the underlying value and can sometimes be very
misleading. With this change, those APIs can return Optional<T&>
instead.
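The gist of such a specialization, as an illustrative shape rather than
the exact interface:

    // Optional<T&> stores a nullable pointer; no copy of T is ever made.
    template<typename T>
    class Optional;

    template<typename T>
    class Optional<T&> {
    public:
        Optional() = default;
        Optional(T& value)
            : m_pointer(&value)
        {
        }
        bool has_value() const { return m_pointer != nullptr; }
        T& value() const { return *m_pointer; }

    private:
        T* m_pointer { nullptr };
    };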
This method exploits the fact that the values themselves hold the tree
pointers, and as a result this lets us skip the O(log n) traversal
down to the matching Node for a key-value pair.
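The trick, sketched with invented names:

    // Because each value embeds its own node, a reference to the value
    // yields the node (and thus its tree links) directly, so removal
    // can unlink it without an O(log n) descent from the root.
    struct Node {
        Node* parent { nullptr };
        Node* left { nullptr };
        Node* right { nullptr };
    };

    struct Value {
        int key { 0 };
        Node node; // the tree pointers live inside the value itself
    };

    Node* node_of(Value& value) { return &value.node; }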
Adds a new optional parameter 'reserved_chars' to
AK::URL::percent_encode, allowing the caller to specify custom
characters to be percent-encoded. This is then used to percent-encode
plus signs in HttpRequest::to_raw_request.
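An illustrative stand-in for the behavior (not the actual signature):

    #include <cstdio>
    #include <string>
    #include <string_view>

    // Percent-encode any character that appears in reserved_chars.
    std::string percent_encode_reserved(std::string_view input, std::string_view reserved_chars)
    {
        std::string output;
        for (unsigned char c : input) {
            if (reserved_chars.find(static_cast<char>(c)) != std::string_view::npos) {
                char buffer[4];
                std::snprintf(buffer, sizeof(buffer), "%%%02X", static_cast<unsigned>(c));
                output += buffer;
            } else {
                output += static_cast<char>(c);
            }
        }
        return output;
    }

    // percent_encode_reserved("a+b", "+") yields "a%2Bb".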
As seen on TV, HashTable can get "thrashed", i.e. it has a bunch of
deleted buckets that count towards the load factor. This means that hash
tables which are actually large enough for their contents still get
resized.
This was fixed in 9d8da16 with a workaround that shrinks the HashTable
back down in these cases, as after the resize and re-hash the load
factor is very low again. However, that's not a good solution. If you
insert and remove repeatedly around a size boundary, you might get
frequent resizes, which involve frequent re-allocations.
The new solution is an in-place rehashing algorithm that I came up with.
(Do complain to me, I'm at fault.) Basically, it iterates the buckets
and re-hashes the used buckets while marking the deleted slots empty.
The issue arises with collisions in the re-hash. For this reason, there
are two kinds of used buckets during the re-hashing: the normal "used"
buckets, which are old and are treated as free space, and the
"re-hashed" buckets, which are new and treated as used space, i.e. they
trigger probing. Therefore, the procedure for relocating a bucket's
contents is as follows:
- Locate the "real" bucket of the contents with the hash. That bucket is
the starting point for the target bucket, and the current (old) bucket
is the bucket we want to move.
- While we still need to move the bucket:
- If we're the target, something strange happened last iteration or we
just re-hashed to the same location. We're done.
- If the target is empty or deleted, just move the bucket. We're done.
- If the target is a re-hashed full bucket, we probe by double-hashing
our hash as usual, moving our target for the next iteration.
- If the target is an old full bucket, we swap the target and to-move
buckets. Therefore, the bucket to move is at the correct location and
the former target, which still needs to find a new place, is now in
the bucket to move. So we can just continue with the loop; the target
is re-obtained from the bucket to move.

This happens for each and every bucket, though some buckets are
"coincidentally" moved before their point of iteration is reached.
Either way, this guarantees full in-place movement (even without stack
storage) and therefore space complexity of O(1). Time complexity is
amortized O(2n) assuming a good hashing function.
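A compact sketch of the procedure (a minimal illustration: int keys,
linear probing instead of double hashing, and a local temporary where
the real version swaps fully in place):

    #include <cstddef>
    #include <utility>
    #include <vector>

    enum class State { Free, Used, Rehashed };

    struct Bucket {
        State state { State::Free };
        int value { 0 };
    };

    static std::size_t home_slot(int value, std::size_t capacity)
    {
        return static_cast<std::size_t>(value) % capacity;
    }

    // Assumes deleted buckets have already been marked Free, and that
    // at least one Free bucket exists (load factor < 1).
    void rehash_in_place(std::vector<Bucket>& buckets)
    {
        std::size_t capacity = buckets.size();
        for (std::size_t i = 0; i < capacity; ++i) {
            if (buckets[i].state != State::Used)
                continue;
            buckets[i].state = State::Free;
            int to_move = buckets[i].value;
            std::size_t target = home_slot(to_move, capacity);
            for (;;) {
                if (buckets[target].state == State::Free) {
                    // Empty or formerly deleted target: move in, done.
                    buckets[target] = { State::Rehashed, to_move };
                    break;
                }
                if (buckets[target].state == State::Rehashed) {
                    // Re-hashed buckets count as used space: keep probing.
                    target = (target + 1) % capacity;
                    continue;
                }
                // Old full bucket: swap, then find a home for the evicted
                // entry; its target is re-computed on the next iteration.
                std::swap(to_move, buckets[target].value);
                buckets[target].state = State::Rehashed;
                target = home_slot(to_move, capacity);
            }
        }
        // Finally, downgrade Rehashed back to plain Used.
        for (auto& bucket : buckets) {
            if (bucket.state == State::Rehashed)
                bucket.state = State::Used;
        }
    }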
This leads to a performance improvement of ~30% on the benchmark
introduced with the last commit.
Co-authored-by: Hendiadyoin1 <leon.a@serenityos.org>
The hash table buckets had three different state booleans that are in
fact exclusive. In preparation for further states, this commit
consolidates them into one enum. This has the added benefit of not
relying on the compiler's boolean packing anymore; we now definitely
need only one byte for the bucket state.
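Roughly, with illustrative member names (the actual flags and states
may differ):

    // Before: three exclusive bools, packed at the compiler's discretion.
    struct BucketBefore {
        bool used;
        bool deleted;
        bool end;
    };

    // After: a single one-byte state with room for further values.
    enum class BucketState : unsigned char {
        Free,
        Used,
        Deleted,
        End,
    };

    struct BucketAfter {
        BucketState state { BucketState::Free };
    };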
Currently this can parse XML and resolve external resources/references,
and read a DTD (but not apply or verify its rules).
That's good enough for _most_ XHTML documents as the HTML 5 spec
enforces its own rules about document well-formedness, and does not make
use of XML DTDs (aside from a list of predefined entities).
An accompanying `xml` utility is provided that can read and dump XML
documents, and can also run the XML conformance test suite.
This is an enum-like type that works with arbitrarily sized storage
beyond u64, the limit for a regular enum class, which caps it at 64
members when bit-field behavior is needed.
Co-authored-by: Ali Mohammad Pur <mpfard@serenityos.org>
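The idea behind such a type, as a minimal sketch using std::bitset for
the wide storage (not the actual interface):

    #include <bitset>
    #include <cstddef>

    // Enum-like flags backed by 128 bits: more than 64 members become
    // possible, which a u64-backed enum class cannot represent.
    struct WideFlag {
        std::bitset<128> bits;

        WideFlag operator|(WideFlag const& other) const { return { bits | other.bits }; }
        bool operator&(WideFlag const& other) const { return (bits & other.bits).any(); }
    };

    static WideFlag flag(std::size_t index)
    {
        WideFlag f;
        f.bits.set(index);
        return f;
    }

    static const WideFlag FeatureA = flag(0);
    static const WideFlag Feature100 = flag(100); // beyond what u64 allows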
Previously, case-insensitively searching the haystack "Go Go Back" for
the needle "Go Back" would return false:
1. Match the first three characters. "Go ".
2. Notice that 'G' and 'B' don't match.
3. Skip ahead 3 characters, plus 1 for the outer for-loop.
4. Now, the haystack is effectively "o Back", so the match fails.
Reducing the skip by 1 fixes this issue. I'm not 100% convinced this
fixes all cases, but I haven't been able to find any cases where it
doesn't work now. :^)
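A simplified sketch of the corrected scan (illustrative, not the actual
implementation; as noted above, full generality is not proven):

    #include <cctype>
    #include <cstddef>
    #include <string_view>

    bool contains_case_insensitive(std::string_view haystack, std::string_view needle)
    {
        if (needle.empty())
            return true;
        for (std::size_t i = 0; i + needle.size() <= haystack.size(); ++i) {
            std::size_t matched = 0;
            while (matched < needle.size()
                && std::tolower(static_cast<unsigned char>(haystack[i + matched]))
                    == std::tolower(static_cast<unsigned char>(needle[matched])))
                ++matched;
            if (matched == needle.size())
                return true;
            if (matched > 1)
                i += matched - 1; // previously `i += matched`, which, with
                                  // the loop's ++i, skipped one too many
        }
        return false;
    }

    // contains_case_insensitive("Go Go Back", "Go Back") is now true:
    // the scan resumes at the second "Go" instead of jumping past it.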
Day and month name constants are defined in numerous places. This
pulls them together into a single place and eliminates the
duplication. It also ensures they are `constexpr`.
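The consolidated constants might look roughly like this (names and
layout assumed):

    #include <array>
    #include <string_view>

    // One shared, compile-time table instead of scattered duplicates.
    inline constexpr std::array<std::string_view, 7> long_day_names {
        "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"
    };

    inline constexpr std::array<std::string_view, 12> short_month_names {
        "Jan", "Feb", "Mar", "Apr", "May", "Jun",
        "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"
    };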
Even though the StringView(char*, size_t) constructor only runs its
overflow check when evaluated in a runtime context, the code generated
here could prevent the compiler from optimizing invocations from the
StringView user-defined literal (verified on Compiler Explorer).
This changes the user-defined literal declaration to be consteval to
ensure it is evaluated at compile time.
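The shape of the change, as a sketch; the `_sv` suffix here stands in
for the actual literal:

    #include <cstddef>

    struct StringView {
        char const* characters { nullptr };
        std::size_t length { 0 };
    };

    // consteval forces evaluation at compile time, so the runtime
    // overflow-check path in the (char*, size_t) constructor can never
    // constrain codegen for literals.
    consteval StringView operator""_sv(char const* cstring, std::size_t length)
    {
        return StringView { cstring, length };
    }

    static constexpr auto greeting = "Hello, friends!"_sv;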
C++20 provides the `requires` clause which simplifies the ability to
limit overload resolution. Prefer it over `EnableIf`.
With all uses of `EnableIf` removed, also remove the implementation so
future devs are not tempted to reach for it.
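Before and after, sketched with a standard-library trait (the actual
spellings in AK differ):

    #include <type_traits>

    // Old style, via an EnableIf-like alias (shape assumed):
    //   template<typename T, typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
    //   void handle(T value);

    // New style: the constraint is stated directly on the template.
    template<typename T>
    requires std::is_integral_v<T>
    void handle(T value)
    {
        (void)value; // ...
    }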