To make it a little clearer what this is for: this is an RAII helper
class for adding an Interpreter to (and removing it from) a VM's list
of currently active (i.e. executing code) Interpreters.
Taking a big step towards a world of multiple global objects, this
patch adds a new JS::VM object that houses the JS::Heap.
This means that the Heap moves out of Interpreter, and the same Heap
can now be used by multiple Interpreters, and can also outlive them.
The VM keeps a stack of Interpreter pointers. We push/pop on this
stack when entering/exiting execution with a given Interpreter.
This allows us to make this change without disturbing too much of
the existing code.
There is still a 1-to-1 relationship between Interpreter and the
global object. This will change in the future.
Ultimately, the goal here is to make Interpreter a transient object
that only needs to exist while you execute some code. Getting there
will take a lot more work though. :^)
Note that in LibWeb, the global JS::VM is called main_thread_vm(),
to distinguish it from future worker VMs.
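
A rough sketch of the shape this takes (stub types for illustration
only, not LibJS's actual classes): the VM owns the heap and a stack of
active Interpreters, and the RAII helper does the push/pop when
execution is entered and left.

#include <AK/Vector.h>

class Interpreter;

class Heap {
    // Garbage-collected storage would live here.
};

class VM {
public:
    Heap& heap() { return m_heap; }

    void push_interpreter(Interpreter& interpreter) { m_interpreters.append(&interpreter); }
    void pop_interpreter() { m_interpreters.take_last(); }

    // The innermost currently-executing Interpreter, if any.
    Interpreter* current_interpreter() { return m_interpreters.is_empty() ? nullptr : m_interpreters.last(); }

private:
    Heap m_heap; // Owned by the VM, shared by (and outliving) all Interpreters.
    Vector<Interpreter*> m_interpreters;
};

// The RAII helper: pushes an Interpreter onto the VM's stack of active
// Interpreters on construction and pops it again on destruction.
class InterpreterExecutionScope {
public:
    InterpreterExecutionScope(VM& vm, Interpreter& interpreter)
        : m_vm(vm)
    {
        m_vm.push_interpreter(interpreter);
    }
    ~InterpreterExecutionScope() { m_vm.pop_interpreter(); }

private:
    VM& m_vm;
};

class Interpreter {
public:
    explicit Interpreter(VM& vm)
        : m_vm(vm)
    {
    }

    void run(/* some program */)
    {
        // Entering execution: mark this Interpreter as active on the VM.
        InterpreterExecutionScope scope(m_vm, *this);
        // ... execute code, allocating from m_vm.heap() ...
    }

private:
    VM& m_vm;
};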
We can now see at which time a packet was received by the network
adapter, instead of having to take a timestamp in user space after
receiving the packet.
This means the destination timestamp is no longer affected by in-kernel
queuing delays, which can be tens of milliseconds when the system
is under load.
It also means that if ntpquery grows a message queue that waits on
replies from several requests, the time used processing one response
won't be incorrectly included in the destination timestamp of the
next response (in case two responses arrive at the network adapter
at roughly the same time).
NTP's calculations work better if send and receive latencies are
roughly equal, and this only removes in-kernel queuing and context
switch delays on the receive side. But the two latencies aren't very
equal anyway, because $network. Also, maybe we can add another API for
setting the send timestamp in the outgoing packet in kernel space
right before (or when) it hits the network adapter, and use that here
too. So this still seems like progress.
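
For reference, a minimal sketch of how a client can pick up the
kernel-provided receive timestamp (general SO_TIMESTAMP/SCM_TIMESTAMP
usage, not ntpquery's actual code; the availability and exact names of
these options on a given system are an assumption here):

#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>

// Returns true if a datagram was received and a kernel receive
// timestamp was found in the ancillary data.
static bool receive_with_timestamp(int fd, void* buffer, size_t buffer_size, timeval& receive_time)
{
    // Normally this would be enabled once, before sending the request.
    int enable = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_TIMESTAMP, &enable, sizeof(enable)) < 0)
        return false;

    char control[CMSG_SPACE(sizeof(timeval))];
    iovec iov { buffer, buffer_size };
    msghdr msg {};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = control;
    msg.msg_controllen = sizeof(control);

    if (recvmsg(fd, &msg, 0) < 0)
        return false;

    // Walk the ancillary data looking for the receive timestamp the
    // kernel attached to this packet.
    for (cmsghdr* cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
        if (cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_TIMESTAMP) {
            memcpy(&receive_time, CMSG_DATA(cmsg), sizeof(timeval));
            return true;
        }
    }
    return false;
}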
Consider the following snippet:
void foo(InputStream& stream)
{
    if (!stream.eof()) {
        u8 byte;
        stream >> byte;
    }
}
There is a very subtle bug in this snippet: for some input streams,
eof() might return false even if no more data can be read. In that
case, an error flag would be set on the stream.
Until now I've always ensured that this is not the case, but this made
the implementation of eof() unnecessarily complicated.
InputFileStream::eof had to keep a ByteBuffer around just to make this
possible. That meant a ton of unnecessary copies just to get a reliable
eof().
In most cases it isn't actually necessary to have a reliable eof()
implementation. And in most of the remaining cases, a reliable eof()
is available anyway, because for streams like InputMemoryStream it is
very easy to implement.
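
So a caller that wants the behavior the snippet above intended has to
check the stream's error state after reading, instead of trusting
eof() alone. A minimal sketch, assuming error-handling methods along
the lines of has_any_error()/handle_any_error():

#include <AK/Stream.h>
#include <AK/Types.h>

void foo(InputStream& stream)
{
    if (stream.eof())
        return;

    u8 byte;
    stream >> byte;

    // eof() may have been overly optimistic: the extraction above can
    // fail and set the stream's error flag, which the caller then has
    // to acknowledge. (Method names are an assumption here.)
    if (stream.has_any_error()) {
        stream.handle_any_error();
        return;
    }

    // ... use `byte` ...
}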
It has been possible for a signal to arrive between us checking
g_interrupted and exiting, making us exit successfully even though we
were interrupted.
This way, if a signal arrives before we reset the disposition, we
will reliably check for it later; if it arrives afterwards, it'll
kill us automatically.
No difference in practice, but it's tidier and protects us
if we ever use g_interrupted earlier in main -- in that case,
the compiler might choose to keep the variable in a register without
the volatile.
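
For reference, a minimal sketch of the general pattern (not the
shell's actual code):

#include <signal.h>
#include <stdio.h>

// volatile so the compiler cannot cache the flag in a register across
// the signal handler's write and main's read.
static volatile sig_atomic_t g_interrupted = 0;

static void handle_sigint(int)
{
    g_interrupted = 1;
}

int main()
{
    signal(SIGINT, handle_sigint);

    // ... do the interruptible work here ...

    // Restore the default disposition first, then check the flag:
    // - if SIGINT arrived before this point, g_interrupted is set and
    //   we report failure below;
    // - if it arrives after this point, the default action terminates
    //   us automatically.
    signal(SIGINT, SIG_DFL);
    if (g_interrupted) {
        fprintf(stderr, "Interrupted\n");
        return 1;
    }
    return 0;
}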
Found by @bugaevc, thanks!
With this, hitting ctrl-c twice in `for i in $(seq 10) { sleep 1 }`
terminates the loop as expected (...well, I'd expect it to quit after
just one ctrl-c, but serenity's shell makes a single ctrl-c only
quit the current loop iteration).
Part of #3419.
Using a pool.ntp.org server seems nicer and more open-source-y,
but until our pool use is approved, let's put in a default value
that works.
(time.google.com serves smeared time instead of doing leap seconds.
pool.ntp.org doesn't serve smeared time. I intend to implement
client-side leap second smearing since nobody likes jumpy timestamps.
For now, we get this for free.)
This fetches information from /var/run/utmp and shows the interactive
sessions in a little list. More things can be added here to make it
more interesting. :^)
To keep track of ongoing terminal sessions, we now have a sort-of
traditional /var/run/utmp file, like other Unix systems.
Unlike other Unix systems however, ours is of course JSON. :^)
The /bin/utmpupdate program is used to update the file, which is
not writable by regular user accounts. This helper program is
set-GID "utmp".
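
Purely as an illustration of the idea (the actual field names and
layout used in /var/run/utmp may differ), an entry could be built with
AK's JSON classes roughly like this:

#include <AK/JsonObject.h>
#include <stdio.h>

int main()
{
    // Hypothetical shape of a single utmp entry; field names here are
    // made up for illustration.
    JsonObject entry;
    entry.set("pid", 42);
    entry.set("uid", 100);
    entry.set("tty", "/dev/pts/0");
    entry.set("login_at", "2020-08-25T12:34:56Z");

    // In the real system, the set-GID "utmp" helper would write this
    // into /var/run/utmp; here we just print it.
    printf("%s\n", entry.to_string().characters());
    return 0;
}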
I suspected an error in CircularDuplexStream::read(Bytes, size_t).
That does not appear to be the case, but this test case is useful
regardless.
The following script was used to generate the test:
import gzip

uncompressed = []
for _ in range(0x100):
    uncompressed.append(1)
for _ in range(0x7e00):
    uncompressed.append(0)
for _ in range(0x100):
    uncompressed.append(1)

compressed = gzip.compress(bytes(uncompressed))
compressed = ", ".join(f"0x{byte:02x}" for byte in compressed)

print(f"""\
TEST_CASE(gzip_decompress_repeat_around_buffer)
{{
    const u8 compressed[] = {{
        {compressed}
    }};
    u8 uncompressed[0x8011];
    Bytes{{ uncompressed, sizeof(uncompressed) }}.fill(0);
    uncompressed[0x8000] = 1;
    const auto decompressed = Compress::GzipDecompressor::decompress_all({{ compressed, sizeof(compressed) }});
    EXPECT(compare({{ uncompressed, sizeof(uncompressed) }}, decompressed.bytes()));
}}
""", end="")
Addresses #3394, which was caused by dereferencing a null `test_name`
when no arguments were given to /bin/tt. Additionally, arguments other
than the defined tests now print usage and exit, to avoid running the
default test.
This only queries a single NTP server, only does a point-to-point
request, doesn't do any filtering, doesn't display the response in any
useful format, and is generally very bare-bones.
But maybe, over time, it can learn to query more servers, do
filtering, run as a service that keeps state over time to improve
filtering, adjust the system time, and eventually even learn to run
as an NTP server itself.