This was mostly seen on macOS: when we toggle playback of a media
element, we need to update its layout node's display to ensure the
change is reflected on the playback button. Similarly, when setting the
element's display time, we need to update the display to ensure the
change is reflected on the media timeline.
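As a rough illustration (with invented names, not the actual LibWeb
types), the idea is to invalidate the layout node whenever playback
state or the display time changes:
```cpp
// Simplified, self-contained sketch: repaint the controls whenever the
// playback state or the display time changes.
struct LayoutNode {
    bool needs_display { false };
    void set_needs_display() { needs_display = true; }
};

struct MediaElement {
    void toggle_playback()
    {
        paused = !paused;
        invalidate_controls(); // Repaint the playback button.
    }

    void set_current_time(double time)
    {
        current_time = time;
        invalidate_controls(); // Repaint the media timeline.
    }

    void invalidate_controls()
    {
        if (layout_node)
            layout_node->set_needs_display();
    }

    LayoutNode* layout_node { nullptr };
    bool paused { true };
    double current_time { 0 };
};
```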
Previously, an audio loader could succeed for an HTMLVideoElement that
contains a video file, which caused the duration to be set to the bogus
duration of the audio loader instead of the correct duration from the
video container. Instead of always setting the duration from the audio
loader, set it to the video duration when we are creating a video
element.
This implements the ability to drag the timeline and volume buttons on
UA-rendered media controls. The two behave a bit differently:
Volume is updated as the user drags the volume button. This isn't a very
expensive operation, so updating in real-time and hearing the volume
change feels nice.
The current time, on the other hand, is not committed until the user
releases the mouse button. Performing a seek every time we get a mouse-
move event is pretty laggy, especially for video. However, we still want
to render updates on the timeline itself (so the position of the button
and the timestamp update as you drag). To do so, we internally pause the
media and override the timestamp provided to the layout node.
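A minimal sketch of this split behavior, using invented names rather
than the actual LibWeb code: the seek is deferred to mouse-up, while the
displayed timestamp is overridden during the drag.
```cpp
// Simplified sketch of deferred seeking while scrubbing the timeline.
struct MediaControls {
    void on_mouse_down()
    {
        scrubbing = true;
        was_playing = !paused;
        paused = true; // Internally pause while the user drags.
    }

    void on_mouse_move(double timeline_position_in_seconds)
    {
        if (!scrubbing)
            return;
        // Override the timestamp shown by the layout node so the button
        // and timestamp track the drag, without an expensive seek.
        display_time_override = timeline_position_in_seconds;
    }

    void on_mouse_up()
    {
        if (!scrubbing)
            return;
        scrubbing = false;
        seek(display_time_override); // Commit the seek once, on release.
        display_time_override = -1;
        if (was_playing)
            paused = false;
    }

    void seek(double) { /* elided in this sketch */ }

    bool scrubbing { false };
    bool was_playing { false };
    bool paused { true };
    double display_time_override { -1 };
};
```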
In the future, we may be able to seek video periodically to provide some
visual feedback. For example, we can seek after every N seconds of
scrubbing, or when the user pauses scrubbing for a while.
Now that we support audio, we can start correctly reporting that we
support certain audio MIME types!
This is required by sites like Gartic Phone, which failed to load audio
because we didn't return a non-empty string for `audio/mpeg`:
```
"https://garticphone.com/sounds/turnundefined", Error: Load failed: 404
```
```js
(new Audio).canPlayType("audio/mpeg") && (this._extension = ".mp3"),
```
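A sketch of the idea behind canPlayType(), not the actual LibWeb code:
per the HTML spec, the method returns "" (not supported), "maybe", or
"probably"; the exact set of types handled here is an assumption.
```cpp
#include <string>
#include <string_view>

std::string can_play_type(std::string_view mime_type)
{
    // Without codec parameters we cannot promise "probably", but for
    // audio containers we can now demux, return "maybe" instead of the
    // empty string so scripts like the one above take their happy path.
    if (mime_type == "audio/mpeg")
        return "maybe";
    return ""; // The empty string signals "not supported".
}
```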
It can take some time to download / decode a media resource. During this
time, its duration is set to NaN. The media control box would then have
some odd rendering glitches as it tried to treat NaN as an actual time.
Once we do have a duration, we also must ensure the media control box is
updated.
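A minimal sketch of the guard, with a hypothetical format_timestamp()
helper standing in for the real rendering code:
```cpp
#include <cmath>
#include <cstdio>
#include <string>

// Hypothetical helper: format seconds as M:SS.
static std::string format_timestamp(double seconds)
{
    int total = static_cast<int>(seconds);
    char buffer[16];
    std::snprintf(buffer, sizeof(buffer), "%d:%02d", total / 60, total % 60);
    return buffer;
}

std::string timestamp_text(double current_time, double duration)
{
    // While the media resource is still downloading or decoding, its
    // duration is NaN; render a placeholder instead of treating NaN as
    // an actual time.
    if (std::isnan(duration))
        return "--:-- / --:--";
    return format_timestamp(current_time) + " / " + format_timestamp(duration);
}
```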
Now that the processResponseConsumeBody algorithm receives the internal
response body of the fetched object, we do not need to go out of our way
to read its body from outside of fetch.
However, several elements do still need to manually inspect the internal
response for other data, such as response headers and status. Note that
HTMLScriptElement already performs this inspection as a proper spec step.
That's what this class really is; in fact that's what the first line of
the comment says it is.
This commit does not rename the main files, since those will contain
other time-related classes in a little bit.
Rather than queueing microtasks ad nauseam to check if a media element
has a new source candidate, let the media element tell us when it might
have a new child to inspect. This removes endless CPU churn in cases
where there is never a candidate that we can play.
Rather than setting the src attribute on the HTMLMediaElement, websites
may append a list of HTMLSourceElement nodes to the media element. There
is a series of "try the next source" steps to attempt to fetch/load each
source until we find one that works.
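A simplified, self-contained sketch of the notification-based approach
(invented names, not the actual LibWeb classes): the element sleeps
until the DOM tells it a child was inserted, instead of re-queueing a
microtask to poll for new source candidates.
```cpp
#include <string>
#include <vector>

class MediaElement {
public:
    // Called by the DOM tree whenever a child node is inserted.
    void children_changed()
    {
        // Only wake the resource selection algorithm if it is blocked
        // waiting for a candidate; otherwise this is a no-op.
        if (m_waiting_for_candidate && !m_pending_sources.empty()) {
            m_waiting_for_candidate = false;
            try_next_source();
        }
    }

    void insert_source(std::string url)
    {
        m_pending_sources.push_back(std::move(url));
        children_changed();
    }

private:
    void try_next_source()
    {
        // The "try the next source" steps: attempt to fetch/load each
        // candidate in turn (fetching elided here). If every candidate
        // fails, go back to waiting rather than spinning the CPU.
        while (!m_pending_sources.empty()) {
            auto url = m_pending_sources.front();
            m_pending_sources.erase(m_pending_sources.begin());
            if (load(url))
                return;
        }
        m_waiting_for_candidate = true;
    }

    bool load(std::string const&) { return false; } // Placeholder.

    bool m_waiting_for_candidate { true };
    std::vector<std::string> m_pending_sources;
};
```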
After the EventLoop changes, we do not need to override LibVideo's timer
with a Qt timer for Ladybird. The timer callback provided here also does
not need the JS::SafeFunction wrapper that Platform::Timer provides.
We are currently using the fetch controller's terminate() method to stop
ongoing fetches when the HTMLMediaElement load algorithm is invoked.
This method ultimately causes the fetch response to be a network error,
which we propagate through the HTMLMediaElement's error event. This can
cause websites, such as Steam, to avoid attempting to play any video.
The spec does not actually specify what it means to "stop" or "cancel" a
fetching process. But we should not use terminate(), as that is a
spec-defined method, and the spec tends to indicate when that method
should be used (as it does in XMLHttpRequest, for example).
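One way to picture the distinction, as a rough sketch with invented
names: a dedicated stop flag lets the media element abandon a fetch
quietly, whereas the spec-defined terminate() turns the response into a
network error that would fire the element's error event.
```cpp
#include <functional>

struct MediaFetchState {
    bool stopped { false };

    void stop() { stopped = true; }

    void on_fetch_callback(std::function<void()> const& handle_data)
    {
        // After stop(), silently drop further fetch callbacks instead
        // of propagating a network error to the HTMLMediaElement.
        if (stopped)
            return;
        handle_data();
    }
};
```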
We currently use Time::to_seconds() to report a video's duration. The
duration, however, may have sub-second precision. For example, the
video used by the video test page is 12.05 seconds long.
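A minimal sketch using std::chrono in place of the engine's Time type:
convert to milliseconds before dividing, so a 12.05-second duration is
reported as 12.05 rather than truncated to 12 by a to_seconds()-style
conversion.
```cpp
#include <chrono>

double duration_in_seconds(std::chrono::milliseconds duration)
{
    // e.g. duration_in_seconds(std::chrono::milliseconds(12050)) == 12.05
    return static_cast<double>(duration.count()) / 1000.0;
}
```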
This has several advantages over the manual demuxing we currently
perform. PlaybackManager hides the specific demuxer being used, which
will allow more codecs to be added transparently to LibWeb. It also
provides buffering and playback rate control for us.
Further, it will allow us to much more easily implement the "media
timeline" to render a timestamp and implement seeking.
Note that the default value of the attribute is true. We were previously
autoplaying videos as soon as they loaded; this will prevent that from
happening until the paused attribute is set to false.