This also makes the editor clean up as many lines as the search took;
for instance, in the case of <C-r><C-c>ls<tab>, two lines should be
cleaned up, not just one.
Fixes #3413.
Previously, it was kept as just a time_t and the sub-second
offset was inferred from the monotonic clock. This meant that
sub-second time adjustments were ignored.
Now that `ntpquery -s` can pass in a time with sub-second
precision, it makes sense to keep time at that granularity
in the kernel.
After this, `ntpquery -s` immediately followed by `ntpquery` shows
an offset of 0.02s (that is, on the order of network roundtrip time)
instead of up to 0.75s previously.
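
A rough sketch of the shape of the change, with an invented WallClock
type rather than the kernel's actual time-keeping code, assuming the
wall-clock epoch is stored as a full timespec and "now" is derived from
the monotonic clock:

#include <time.h>

struct WallClock {
    // Wall-clock time at monotonic zero, kept as seconds + nanoseconds
    // instead of a bare time_t, so sub-second adjustments are not lost.
    timespec m_epoch { 0, 0 };

    void set_time(const timespec& wall_now, const timespec& monotonic_now)
    {
        m_epoch.tv_sec = wall_now.tv_sec - monotonic_now.tv_sec;
        m_epoch.tv_nsec = wall_now.tv_nsec - monotonic_now.tv_nsec;
        if (m_epoch.tv_nsec < 0) {
            m_epoch.tv_nsec += 1'000'000'000;
            m_epoch.tv_sec -= 1;
        }
    }

    timespec now(const timespec& monotonic_now) const
    {
        timespec result;
        result.tv_sec = m_epoch.tv_sec + monotonic_now.tv_sec;
        result.tv_nsec = m_epoch.tv_nsec + monotonic_now.tv_nsec;
        if (result.tv_nsec >= 1'000'000'000) {
            result.tv_nsec -= 1'000'000'000;
            result.tv_sec += 1;
        }
        return result;
    }
};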
Using a pool.ntp.org server seems nicer and more open-source-y,
but until our pool use is approved, let's put in a default value
that works.
(time.google.com serves smeared time instead of doing leap seconds.
pool.ntp.org doesn't serve smeared time. I intend to implement
client-side leap second smearing since nobody likes jumpy timestamps.
For now, we get this for free.)
g++ seems to generate calls to __cxa_guard_* functions when we use
non-trivial initialization of local statics.
Requiring these symbols breaks some ports, since they are defined in
libstdc++, which we do not link against when building ports.
This commit turns some local statics in netdb.cpp into global statics
so that libc no longer needs these symbols.
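
To illustrate the difference (with made-up names, not the actual
netdb.cpp code); only the function-local form makes g++ reference the
guard symbols:

#include <string>
#include <vector>

// Function-local static with a non-trivial initializer: g++ wraps the
// first-use initialization in __cxa_guard_acquire()/__cxa_guard_release()
// to make it thread-safe, and those symbols come from the C++ runtime.
static std::vector<std::string>& services_local()
{
    static std::vector<std::string> services { "echo", "discard" };
    return services;
}

// The same object at namespace scope is initialized before main() by the
// static initializers, so no __cxa_guard_* calls are emitted for it.
static std::vector<std::string> s_services { "echo", "discard" };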
Previously, we were including the whole of <getopt.h> in <unistd.h>.
However, according to POSIX, <unistd.h> doesn't have to contain
everything we had in <getopt.h>.
This fixes some symbol collisions that broke the git port.
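
Roughly, the split looks like this (declarations paraphrased from POSIX
and the GNU extension, not copied from the Serenity headers):

// <unistd.h>: POSIX only requires the plain getopt() interface here.
extern char* optarg;
extern int optind, opterr, optopt;
int getopt(int argc, char* const argv[], const char* optstring);

// <getopt.h>: getopt_long() and struct option stay in their own header,
// so programs only pull them in by including <getopt.h> explicitly.
struct option {
    const char* name;
    int has_arg;
    int* flag;
    int val;
};
int getopt_long(int argc, char* const argv[], const char* optstring,
                const struct option* longopts, int* longindex);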
Instead of FileDescriptor branching on the type of File it's wrapping,
add a File::stat() function that can be overridden to provide custom
behavior for the stat syscalls.
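
A simplified sketch of the pattern (the class shapes here are invented,
not the real Kernel interfaces): the base class supplies a default
stat() and subclasses override it, so FileDescriptor can delegate
instead of branching on the file type:

#include <sys/stat.h>

class File {
public:
    virtual ~File() = default;

    // Default metadata; most File types are fine with this.
    virtual int stat(struct stat& buffer)
    {
        buffer = {};
        buffer.st_mode = S_IFREG;
        return 0;
    }
};

class SocketFile final : public File {
public:
    // A socket reports socket-specific metadata instead.
    virtual int stat(struct stat& buffer) override
    {
        buffer = {};
        buffer.st_mode = S_IFSOCK;
        return 0;
    }
};

class FileDescriptor {
public:
    explicit FileDescriptor(File& file)
        : m_file(file)
    {
    }

    // No more switching on the wrapped type; just ask the File.
    int fstat(struct stat& buffer) { return m_file.stat(buffer); }

private:
    File& m_file;
};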
This fetches information from /var/run/utmp and shows the interactive
sessions in a little list. More things can be added here to make it
more interesting. :^)
To keep track of ongoing terminal sessions, we now have a sort-of
traditional /var/run/utmp file, like other Unix systems.
Unlike other Unix systems however, ours is of course JSON. :^)
The /bin/utmpupdate program is used to update the file, which is
not writable by regular user accounts. This helper program is
set-GID "utmp".
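
Purely as an illustration of the shape (the field names below are
invented, not the actual format written by utmpupdate), an entry might
look something like:

// Hypothetical /var/run/utmp entry, keyed by tty; real fields may differ.
const char* example_utmp_entry = R"(
{
    "/dev/pts/0": {
        "pid": 42,
        "uid": 100,
        "login_at": "2020-08-30T12:34:56Z"
    }
}
)";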
For now, the new DOM::EventDispatcher is very simple: it just iterates
over the set of listeners on an EventTarget and invokes the callbacks
as it goes.
This simplifies EventTarget subclasses since they no longer have to
implement the callback mechanism themselves.
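
In spirit, the dispatch loop is about this simple (simplified stand-ins
below, not the actual LibWeb classes):

#include <functional>
#include <string>
#include <utility>
#include <vector>

struct Event {
    std::string type;
};

class EventTarget {
public:
    using Callback = std::function<void(Event&)>;

    struct Listener {
        std::string type;
        Callback callback;
    };

    void add_event_listener(std::string type, Callback callback)
    {
        m_listeners.push_back({ std::move(type), std::move(callback) });
    }

    const std::vector<Listener>& listeners() const { return m_listeners; }

private:
    std::vector<Listener> m_listeners;
};

class EventDispatcher {
public:
    // Walk the target's listeners and invoke every callback registered
    // for this event's type; EventTarget subclasses don't do any of this.
    static void dispatch(EventTarget& target, Event& event)
    {
        for (auto& listener : target.listeners()) {
            if (listener.type == event.type)
                listener.callback(event);
        }
    }
};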
The streaming operator doesn't short-circuit; consider the following
snippet:
void foo(InputStream& stream) {
    int a, b;
    stream >> a >> b;
}
If the first read fails, the second is called regardless, so it should
be well-defined what happens in that case: nothing.
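
A minimal sketch of that behavior (a stand-in class, not the actual AK
stream implementation): once a read fails, the stream remembers the
error and every later formatted read becomes a no-op:

#include <cstddef>
#include <cstring>

class InputStream {
public:
    InputStream(const void* data, std::size_t size)
        : m_data(static_cast<const unsigned char*>(data))
        , m_size(size)
    {
    }

    bool has_any_error() const { return m_error; }

    bool read_or_error(void* buffer, std::size_t count)
    {
        // Short-circuit: an earlier failure poisons all later reads.
        if (m_error)
            return false;
        if (m_size - m_offset < count) {
            m_error = true;
            return false;
        }
        std::memcpy(buffer, m_data + m_offset, count);
        m_offset += count;
        return true;
    }

    InputStream& operator>>(int& value)
    {
        // Leaves `value` untouched if this or any previous read failed.
        read_or_error(&value, sizeof(value));
        return *this;
    }

private:
    const unsigned char* m_data { nullptr };
    std::size_t m_size { 0 };
    std::size_t m_offset { 0 };
    bool m_error { false };
};

With that in place, the foo() snippet above leaves both a and b
untouched when the first read fails.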
I suspected an error in CircularDuplexStream::read(Bytes, size_t). This
does not appear to be the case, but this test case is useful regardless.
The following script was used to generate the test:
import gzip

uncompressed = []
for _ in range(0x100):
    uncompressed.append(1)
for _ in range(0x7e00):
    uncompressed.append(0)
for _ in range(0x100):
    uncompressed.append(1)

compressed = gzip.compress(bytes(uncompressed))
compressed = ", ".join(f"0x{byte:02x}" for byte in compressed)

print(f"""\
TEST_CASE(gzip_decompress_repeat_around_buffer)
{{
    const u8 compressed[] = {{
        {compressed}
    }};
    u8 uncompressed[0x8011];
    Bytes{{ uncompressed, sizeof(uncompressed) }}.fill(0);
    uncompressed[0x8000] = 1;
    const auto decompressed = Compress::GzipDecompressor::decompress_all({{ compressed, sizeof(compressed) }});
    EXPECT(compare({{ uncompressed, sizeof(uncompressed) }}, decompressed.bytes()));
}}
""", end="")