If the BufferedStream is able to fill its entire circular buffer in
populate_read_buffer() and is later asked to read a line or read until
a delimiter, it could erroneously return EMSGSIZE if the caller's buffer
was smaller than the internal buffer. In this case, all we really care
about is whether the caller's buffer is big enough for however much data
we're going to copy into it, which needs to take the candidate into
account.
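A minimal sketch of the corrected check, in plain C++ with std types
standing in for the AK ones (all names below are illustrative, not the
real `BufferedStream` internals):

```cpp
#include <cerrno>
#include <cstddef>
#include <cstring>
#include <span>

// Hypothetical reader: `buffered` is whatever populate_read_buffer()
// managed to pull into the (possibly full) internal buffer.
struct Reader {
    std::span<const std::byte> buffered;

    // Copy a delimited candidate of `candidate_size` bytes out to the
    // caller. The EMSGSIZE check compares against the candidate, i.e.
    // what we will actually copy, not the total amount of buffered data.
    int read_candidate(std::span<std::byte> out, size_t candidate_size)
    {
        if (out.size() < candidate_size)
            return -EMSGSIZE; // genuinely too small for the data to copy
        std::memcpy(out.data(), buffered.data(), candidate_size);
        buffered = buffered.subspan(candidate_size);
        return static_cast<int>(candidate_size);
    }
};
```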
This method (unlike can_read_line) ensures that the delimiter is present
in the buffer, and doesn't return true after EOF when the delimiter is
absent.
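A sketch of that behavior (simplified to a free function over the
buffered bytes; the real method works on the stream's internal buffer):

```cpp
#include <algorithm>
#include <cstddef>
#include <span>

// Returns true only if the delimiter byte is actually buffered; reaching
// EOF without the delimiter is not enough, unlike can_read_line() at EOF.
bool can_read_up_to_delimiter(std::span<const std::byte> buffered, std::byte delimiter)
{
    return std::find(buffered.begin(), buffered.end(), delimiter) != buffered.end();
}
```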
...requested size"
This reverts commit 13573a6c4b.
Some clients of `BufferedStream` expect `read_some` to perform a
non-blocking read, which the commit above made impossible by potentially
performing a blocking read instead. For example, the following command
hangs:

    pro http://ipecho.net/plain
This is caused by the HTTP job expecting to read the body of the
response, which is already present in the active buffer; but since the
buffered data size is less than the size of the buffer passed into
`read_some`, another blocking read is performed before anything is
returned.
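A sketch of the contract the revert restores, with hypothetical names:
return whatever is already buffered instead of blocking to satisfy the
full request.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <span>

struct BufferedReader {
    std::span<const std::byte> buffered;

    // Non-blocking as long as any data is buffered: a short read is
    // returned immediately rather than performing another (blocking)
    // underlying read.
    size_t read_some(std::span<std::byte> out)
    {
        if (!buffered.empty()) {
            size_t n = std::min(out.size(), buffered.size());
            std::memcpy(out.data(), buffered.data(), n);
            buffered = buffered.subspan(n);
            return n;
        }
        return blocking_refill_and_read(out); // only when nothing is buffered
    }

    size_t blocking_refill_and_read(std::span<std::byte>)
    {
        // Would block on the underlying stream; elided in this sketch.
        return 0;
    }
};
```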
By reverting this commit, all tests still pass and `pro` no longer
hangs. Note that because of another bug related to `stdout` / `stderr`
mixing and the absence of a line ending, the command above produces no
output except a progress update.
Just like with input buffered streams, we don't currently have a use
case for output buffered streams which aren't seekable, since the main
application is files.
This class, in a similar fashion to what has been done with
`InputBufferedStream`, postpones writes to the stream until an internal
buffer is full.
This patch also adds the `OutputBufferedFile` alias.
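A minimal sketch of the buffering idea, using a plain `FILE*` as the
underlying stream (illustrative only, not the real AK class):

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

class OutputBuffered {
public:
    explicit OutputBuffered(FILE* underlying, size_t capacity = 16 * 1024)
        : m_underlying(underlying)
        , m_capacity(capacity)
    {
        m_buffer.reserve(capacity);
    }

    ~OutputBuffered() { flush(); }

    // Writes accumulate in the internal buffer; the underlying stream is
    // only touched once the buffer is full (or on an explicit flush()).
    void write(char const* data, size_t size)
    {
        for (size_t i = 0; i < size; ++i) {
            m_buffer.push_back(data[i]);
            if (m_buffer.size() == m_capacity)
                flush();
        }
    }

    void flush()
    {
        if (!m_buffer.empty())
            fwrite(m_buffer.data(), 1, m_buffer.size(), m_underlying);
        m_buffer.clear();
    }

private:
    FILE* m_underlying { nullptr };
    std::vector<char> m_buffer;
    size_t m_capacity { 0 };
};
```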
When BufferedFile.can_read_line() was invoked on files with no newlines,
it incorrectly returned a false result for this single line that, even
though it doesn't end with a newline character, is still a line. Since
this method is usually used in tandem with read_line(), users would miss
reading this line (and hence all the file contents).
This commit fixes this corner case by adding another check after a
negative result from finding a newline character. This new check does
the same as the check that is done *before* looking for newlines, which
takes care of this problem, but only works for files that have at least
one newline (hence the buffer has already been filled).
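A simplified sketch of the flow after the fix (in-memory stand-ins for
the file and the buffer; names illustrative):

```cpp
#include <string>

struct LineReader {
    std::string buffered;  // data already in the internal buffer
    std::string remaining; // unread data in the underlying file

    bool at_eof() const { return remaining.empty(); }

    void fill_buffer()
    {
        // Simplified: pull in everything that is left.
        buffered += remaining;
        remaining.clear();
    }

    bool can_read_line()
    {
        if (at_eof() && !buffered.empty())
            return true;                      // pre-existing check, before filling
        fill_buffer();                        // may reach EOF right here
        if (buffered.find('\n') != std::string::npos)
            return true;
        return at_eof() && !buffered.empty(); // new check: trailing line at EOF
    }
};
```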
A new unit test has been added that shows the use case. Without the
changes in this commit the test fails, which demonstrates that this
commit really fixes the underlying issue.
We currently only fill a buffer when it is empty. So if it has 1 byte
and 16 KB was requested, only that 1 byte would be returned. Instead,
attempt to refill the buffer when its size is less than the requested
size.
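Sketched with hypothetical names, the change amounts to loosening the
refill condition:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <span>

struct Reader {
    std::span<const std::byte> buffered;

    void populate_read_buffer()
    {
        // Would pull more data from the underlying stream; elided here.
    }

    size_t read(std::span<std::byte> out)
    {
        // Previously: `if (buffered.empty())` -- so 1 buffered byte
        // blocked a refill even for a 16 KB request. Now we top up on
        // any shortfall.
        if (buffered.size() < out.size())
            populate_read_buffer();
        size_t n = std::min(out.size(), buffered.size());
        std::memcpy(out.data(), buffered.data(), n);
        buffered = buffered.subspan(n);
        return n;
    }
};
```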
When reading, we currently only fill a BufferedStream's buffer when it
is empty, and only with 1 KB of data. This means that while the buffer
defaults to a size of 16 KB, at least 15 KB is always unused.
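A sketch of the corresponding refill (illustrative names): read as much
as fits into the free part of the buffer instead of a fixed 1 KB chunk.

```cpp
#include <cstddef>
#include <span>

// `read_from_stream` stands in for the underlying read and may return
// less than requested.
size_t populate_read_buffer(std::span<std::byte> buffer, size_t& used,
                            size_t (*read_from_stream)(std::span<std::byte>))
{
    auto free_space = buffer.subspan(used); // all remaining capacity, not 1 KB
    size_t nread = read_from_stream(free_space);
    used += nread;
    return nread;
}
```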
Similar to POSIX read, the basic read and write functions of AK::Stream
do not have a lower limit on how much data they read or write (apart
from "none at all").
Rename the functions to "read some [data]" and "write some [data]" (with
"data" being omitted, since everything here is reading and writing data)
to make them sufficiently distinct from the functions that ensure the
entire buffer is used (which should be the go-to functions for most
usages).
No functional changes, just a lot of new FIXMEs.
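The distinction, sketched with an in-memory stream (the name of the
whole-buffer variant below is illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <span>

struct MemoryStream {
    std::span<const std::byte> data;

    // Like POSIX read(2): may transfer fewer bytes than requested.
    size_t read_some(std::span<std::byte> out)
    {
        size_t n = std::min(out.size(), data.size());
        std::memcpy(out.data(), data.data(), n);
        data = data.subspan(n);
        return n;
    }

    // Ensures the caller's entire buffer is used by looping over read_some().
    void read_entire_buffer(std::span<std::byte> out)
    {
        size_t total = 0;
        while (total < out.size()) {
            size_t n = read_some(out.subspan(total));
            if (n == 0)
                break; // EOF; a real implementation would report an error
            total += n;
        }
    }
};
```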
Similar to the return values earlier, a signed value doesn't really make
sense here. Relying on the much more standard `size_t` makes it easier
to use Stream in all contexts.
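Illustrative before/after signatures (not the exact AK declarations);
with errors reported out of band, the count never needs to be negative:

```cpp
#include <cstddef>
#include <span>

// Before: a signed count, with negative values doubling as error codes.
// ssize_t read_some(std::span<std::byte>);

// After: errors travel separately (AK reports them via ErrorOr), so
// size_t fits naturally.
size_t read_some(std::span<std::byte>); // always a valid byte count
```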