Previously lsof would crash by incorrectly parsing valid file names
that contain spaces.
For example, established TCP socket file descriptors would cause an
lsof crash:
socket:127.0.0.1:33985 / 127.0.0.1:8080 (connected)
This commit fixes the issue by no longer splitting on whitespace when
setting the file name.
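Roughly what the fix amounts to (a sketch, not the actual lsof code;
the helper name and offset parameter are made up):

#include <cstddef>
#include <string>

// Take the entire remainder of the line as the file name instead of
// tokenizing on spaces, since names like
// "socket:127.0.0.1:33985 / 127.0.0.1:8080 (connected)" contain spaces.
static std::string parse_name_field(std::string const& line, std::size_t name_column_offset)
{
    if (line.size() <= name_column_offset)
        return {};
    return line.substr(name_column_offset);
}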
Second batch of detectable formats, this time with various offsets, as
enabled by the previous commit.
This adds tar, and the three signature variants for iso-9660 image
files.
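The general shape of offset-based detection (a sketch: the struct and
function names are made up, and the offsets are the classic
magic-table values):

#include <cstddef>
#include <cstring>

struct Signature {
    char const* magic;
    std::size_t length;
    std::size_t offset;
    char const* mime_type;
};

// tar stores "ustar" at offset 257; ISO-9660 stores "CD001" at one of
// three offsets depending on the image layout.
static constexpr Signature signatures[] = {
    { "ustar", 5, 257, "application/x-tar" },
    { "CD001", 5, 0x8001, "application/x-iso9660-image" },
    { "CD001", 5, 0x8801, "application/x-iso9660-image" },
    { "CD001", 5, 0x9001, "application/x-iso9660-image" },
};

static char const* detect(unsigned char const* data, std::size_t size)
{
    for (auto const& signature : signatures) {
        if (size >= signature.offset + signature.length
            && memcmp(data + signature.offset, signature.magic, signature.length) == 0)
            return signature.mime_type;
    }
    return nullptr;
}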
Problem:
- Function-local `constexpr` variables do not need to be
  `static`; the `static` keyword consumes unnecessary memory and can
  prevent some optimizations.
Solution:
- Remove `static` keyword.
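For illustration (not code from the tree), the change amounts to:

int lookup(int index)
{
    // was: static constexpr int table[] = { 1, 2, 4, 8 };
    constexpr int table[] = { 1, 2, 4, 8 };
    return table[index & 3];
}

The compiler can fold the plain constexpr array directly into its
uses, while `static` forces a persistent object into the binary.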
As the parser now flattens out the instructions and inserts synthetic
nesting/structured instructions where needed, we can treat the whole
thing as a simple parsed bytecode stream.
This currently knows how to execute the following instructions:
- unreachable
- nop
- local.get
- local.set
- {i,f}{32,64}.const
- block
- loop
- if/else
- branch / branch_if
- i32_add
- i32_and/or/xor
- i32_ne
This also extends the 'wasm' utility to optionally execute the first
function in the module, with optional user-supplied arguments.
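Since the instruction stream is flat, structured control flow can be
lowered to jumps between precomputed indices; ignoring that part, the
execution loop is conceptually something like this (a sketch with
made-up types, not the actual interpreter):

#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

enum class Op { Nop, I32Const, I32Add, LocalGet, LocalSet, Unreachable };

struct Instruction {
    Op op;
    int64_t arg { 0 };
};

void execute(std::vector<Instruction> const& code, std::vector<int32_t>& locals)
{
    std::vector<int32_t> stack;
    for (std::size_t ip = 0; ip < code.size(); ++ip) {
        auto const& instruction = code[ip];
        switch (instruction.op) {
        case Op::Nop:
            break;
        case Op::I32Const:
            stack.push_back(static_cast<int32_t>(instruction.arg));
            break;
        case Op::I32Add: {
            auto rhs = stack.back();
            stack.pop_back();
            auto lhs = stack.back();
            stack.pop_back();
            stack.push_back(lhs + rhs);
            break;
        }
        case Op::LocalGet:
            stack.push_back(locals[instruction.arg]);
            break;
        case Op::LocalSet:
            locals[instruction.arg] = stack.back();
            stack.pop_back();
            break;
        case Op::Unreachable:
            abort();
        }
    }
}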
By constraining two implementations, the compiler will select the
best-fitting one. All this requires is duplicating the implementation and
simplifying for the `void` case.
This constraining also informs both the caller and compiler by passing
the callback parameter types as part of the constraint
(e.g.: `IterationFunction<int>`).
Some `for_each` functions in LibELF only take functions which return
`void`. This is a minimal correctness check, as it removes one way for a
function to do something incompletely.
It also enables a possible idiom: inside such a lambda, `return;`
behaves like `continue;` in a for-loop.
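The pattern, sketched with approximated concept and function names
(the real AK concepts differ, and bool stands in for the decision
type here):

#include <concepts>
#include <type_traits>
#include <vector>

template<typename F, typename T>
concept DecisionFunction = std::invocable<F, T>
    && std::same_as<std::invoke_result_t<F, T>, bool>;

template<typename F, typename T>
concept VoidFunction = std::invocable<F, T>
    && std::same_as<std::invoke_result_t<F, T>, void>;

// Overload for callbacks that may stop the iteration early:
template<DecisionFunction<int> F>
void for_each(std::vector<int> const& values, F callback)
{
    for (auto value : values)
        if (!callback(value))
            return;
}

// The duplicated implementation, simplified for the `void` case:
template<VoidFunction<int> F>
void for_each(std::vector<int> const& values, F callback)
{
    for (auto value : values)
        callback(value);
}

With this, a bool-returning callback selects the first overload, and a
void-returning callback selects the second, where a bare `return;`
simply moves on to the next element.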
This implementation of netstat presents an overview of existing network
sockets. It directly reads the exposed sockets from /proc/net/. Current
support is limited to information regarding TCP and UDP connections.
Future improvements could include presenting information regarding
local sockets.
It's technically not specified by POSIX, but it appears most Unix-like
systems worth mentioning put those definitions there. Also, it's more
logical since the dev_t type is defined there.
This adds very basic support for module instantiation/allocation, as
well as a stub for an interpreter (and execution APIs).
The 'wasm' utility is further expanded to instantiate the module and
attempt to execute its first non-imported function.
Note that as the execution is a stub, the expected result is a zero.
Regardless, this will allow future commits to implement the JS
WebAssembly API. :^)
You can now watch the clipboard for changes and run a command each time
the clipboard contents change like this:
$ paste --watch some command here
The command will be spawned each time the clipboard contents change. It
can read the clipboard contents from its stdin, and the CLIPBOARD_STATE
environment variable will be set to "data" if there is data to be read,
or to "nil"/"clear" if the clipboard has been cleared.
It's much better to tell the user "hey, the magic numbers don't check
out" than "oh there was a problem with your input" :P
Also refactors some stuff to make it possible to efficiently use the
parser error enum without it getting in the way.
This can currently parse a really simple module.
Note that it cannot parse the DataCount section, and it's still missing
almost all of the instructions.
This commit also adds a 'wasm' test utility that tries to parse a given
webassembly binary file.
It currently does nothing but exit when the parse fails, but it's a
start :^)
When moving multiple files using *, e.g. `mv * /new_path/`, an error
while moving one file would cause the next file in the list to have its
destination path incorrectly set.
- Fixed the mv loop to always append the correct path to the
  destination path.
- Added a proper error message when mv fails.
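The shape of the fix, sketched with std::filesystem (the real mv uses
Serenity's own path utilities; the function here is made up):

#include <cstdio>
#include <filesystem>
#include <system_error>

namespace fs = std::filesystem;

void move_all(int count, char** sources, fs::path const& destination_dir)
{
    for (int i = 0; i < count; ++i) {
        fs::path source = sources[i];
        // Recompute the full destination every iteration instead of
        // appending to a shared, mutated path.
        fs::path destination = destination_dir / source.filename();
        std::error_code error;
        fs::rename(source, destination, error);
        if (error)
            fprintf(stderr, "mv: cannot move '%s': %s\n",
                source.c_str(), error.message().c_str());
    }
}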
When using open() with the O_CREAT flag we must specify the mode
argument. Otherwise we'll incorrectly set the new file's mode
to whatever is left on the stack:
courage:~ $ dd if=/dev/zero of=test count=1
1+0 blocks in
1+0 blocks out
512 bytes copied.
courage:~ $ ls -l test
--w-r-x--- 1 anon users 512 2021-05-08 01:29:52 test
courage:~ $
This also specifies the mode argument when opening files for
reading. This however is harmless because in those cases the argument
is ignored.
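The fix in a nutshell (illustrative, not the exact dd source): always
pass a mode when O_CREAT may create the file.

#include <fcntl.h>

int open_output(char const* path)
{
    // Without the third argument, the mode is whatever garbage happens
    // to be on the stack; 0666 (adjusted by the umask) is the
    // conventional choice.
    return open(path, O_WRONLY | O_CREAT | O_TRUNC, 0666);
}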
Fixes #6923
I noticed while testing `find` that the output of `find` contains extra
forward slashes if the root path has a trailing slash. This patch fixes
that issue by passing the root path through LexicalPath before
proceeding.
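A minimal stand-in for what the LexicalPath round-trip achieves (the
helper name is made up):

#include <string>

std::string normalize_root(std::string root)
{
    // "/usr/" becomes "/usr", so joining child names no longer
    // produces doubled slashes like "/usr//bin".
    while (root.size() > 1 && root.back() == '/')
        root.pop_back();
    return root;
}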
This unix classic attempts to classify and identify information about
given files based on various heuristics. In this case, we're relying on
the Core::MimeData detector for file type and LibGfx::ImageDecoder for
additional metadata if the given file is an image.
It's very simple for now, but adding new detectors should be quite easy.
This avoids extensive copying of data around, and also makes head(1) in
bytes mode read exactly as much data as it needs.
Also, rename --characters to --bytes: that's exactly what it does
(actual character counting is way more complicated), and that's what
the option is called in GNU coreutils.
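Conceptually, bytes mode now boils down to this (a sketch; the helper
name is made up):

#include <cstddef>
#include <cstdio>

void copy_first_n_bytes(FILE* in, FILE* out, std::size_t remaining)
{
    char buffer[4096];
    while (remaining > 0) {
        // Never request more than we still need.
        auto to_read = remaining < sizeof(buffer) ? remaining : sizeof(buffer);
        auto nread = fread(buffer, 1, to_read, in);
        if (nread == 0)
            break; // EOF or error
        fwrite(buffer, 1, nread, out);
        remaining -= nread;
    }
}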
Fixes https://github.com/SerenityOS/serenity/issues/6852
This changes client methods so that they return the IPC response's
return value directly - instead of the response struct - for IPC
methods which only have a single return value.
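As a made-up illustration, a client call for an IPC method with a
single return value used to look like this:

auto response = server.get_title();
auto title = response.title();

and now looks like this:

auto title = server.get_title();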
When the default build location was moved from /Build to the new
architecture-specific directory, /Build/i686, this code broke.
All file names are now relative to a path one additional level up.
So continue the hack, and introduce another dummy directory to
make the relative paths resolve correctly.
We had some inconsistencies before:
- Sometimes "The", sometimes "the"
- Sometimes trailing ".", sometimes no trailing "."
I picked the most common one (lowercase "the", trailing ".") and applied
it to all copyright headers.
By using the exact same string everywhere we can ensure nothing gets
missed during a global search (and replace), and that these
inconsistencies are not spread any further (as copyright headers are
commonly copied to new files).
The current ProtocolServer was really only used for requests, and with
the recent introduction of the WebSocket service, long-lasting
connections with another server are not part of it. To better reflect
this, this commit renames it to RequestServer.
This commit also changes the existing 'protocol' portal to 'request',
the existing 'protocol' user and group to 'request', and most mentions
of the 'download' aspect of the request to 'request' when relevant, to
make everything consistent across the system.
Note that LibProtocol still exists as-is, but the more generic Client
class and the more specific Download class have both been renamed to a
more accurate RequestClient and Request to match the new names.
This commit only changes names, not behavior.
This implements more of the dlfcn functionality. Most notably:
* It's now possible to dlopen() libraries which were already
loaded at program startup time. This does not cause those
libraries to be loaded twice.
* Errors are reported via dlerror() rather than by crashing
the program.
* Calls to the dl*() functions are thread-safe.
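Standard POSIX usage that now behaves as expected (library and symbol
names are illustrative):

#include <cstdio>
#include <dlfcn.h>

int main()
{
    // Opening an already-loaded library returns a handle to the
    // existing copy instead of loading it a second time.
    void* handle = dlopen("libm.so", RTLD_NOW);
    if (!handle) {
        // Errors are reported here instead of crashing the program.
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    void* symbol = dlsym(handle, "cos");
    if (!symbol)
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
    dlclose(handle);
    return 0;
}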
Instead of storing the function names (in a badly named Vector<String>)
and source ranges separately, consolidate them into a new struct:
TracebackFrame. This makes it both easier to use now and easier to
extend in the future.
Unlike before, we now keep each call frame's current node source range
in the traceback frame next to the function name, meaning we can display
line and column numbers outside of the VM and after the call stack is
emptied.
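Roughly the consolidated shape, sketched with std:: types (the real
struct uses AK/LibJS types):

#include <string>

struct SourceRange {
    unsigned start_line { 0 };
    unsigned start_column { 0 };
    unsigned end_line { 0 };
    unsigned end_column { 0 };
};

// The function name and its current source range now travel together,
// one TracebackFrame per call frame.
struct TracebackFrame {
    std::string function_name;
    SourceRange source_range;
};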
The 'syscall-arguments' positional arg being required was
breaking the scenario where the user just passes the
'--list-syscalls' argument.
Instead, make the argument not required, and manually handle
the error path ourselves.
Closes: #6574
SPDX License Identifiers are a more compact / standardized
way of representing file license information.
See: https://spdx.dev/resources/use/#identifiers
This was done with the `ambr` search and replace tool.
ambr --no-parent-ignore --key-from-file --rep-from-file key.txt rep.txt *
Move LibCompress unit tests to LibCompress/Tests directory and register
them with CMake's add_test. This allows us to run these tests with
ninja test instead of running a separate executable.
Also split the existing tests in 3 test files that better follow the
source code structure (inspired by AK tests).
This adds a simple REPL command line utility for (eventually) executing
SQL statements / files. Currently, it just validates statements from
stdin and prints any errors.
Often you want to enable profiling and then disable it after a period
of time.
To make this slightly more ergonomic, add an option to wait for user
input after enabling profiling; profiling is then disabled on exit
after the key press.
This flag warns on classes which have `virtual` functions but do not
have a `virtual` destructor.
This patch adds both the flag and missing destructors. The access level
of the destructors was determined by two rules of thumb:
1. A destructor should have a similar or lower access level to that of a
constructor.
2. Having a `private` destructor implicitly deletes the default
constructor, which is probably undesirable for "interface" types
(classes with only virtual functions and no data).
In short, most of the added destructors are `protected`, unless the
compiler complained about access.
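The pattern the flag enforces, following the rules of thumb above:

class Interface {
public:
    virtual void do_something() = 0;

protected:
    // Per the rules above: protected rather than private, and not
    // public since nobody should delete through this interface type.
    virtual ~Interface() = default;
};

class Impl final : public Interface {
public:
    virtual ~Impl() override = default;
    virtual void do_something() override { }
};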
This required changing the load_sync API to take a LoadRequest instead
of just a URL. Since HTMLScriptElement was the only (non-test) user of
this API, it didn't seem useful to instead add an overload of load_sync
for this.
I hereby declare these to be full nouns that we don't split,
neither by space nor by underscore:
- Breadcrumbbar
- Coolbar
- Menubar
- Progressbar
- Scrollbar
- Statusbar
- Taskbar
- Toolbar
This patch makes everything consistent by replacing every other variant
of these with the proper one. :^)
The previous handling of the name and message properties specifically
was breaking websites that created their own error types and relied on
the error prototype working correctly - not assuming a JS::Error this
object, that is.
The way it works now, and the way it is supposed to work, is:
- Error.prototype.name and Error.prototype.message just have initial
string values and are no longer getters/setters
- When constructing an error with a message, we create a regular
property on the newly created object, so a lookup of the message
property will either get it from the object directly or go through the
prototype chain
- Internal m_name/m_message properties are no longer needed and removed
This makes printing errors slightly more complicated, as we can no
longer rely on the (safe) internal properties, and cannot trust a
property lookup either - get_without_side_effects() is used to solve
this; it's not perfect, but it's something we can revisit later.
I did some refactoring along the way, there was some really old stuff in
there - accessing vm.call_frame().arguments[0] is not something we (have
to) do anymore :^)
Fixes#6245.
According to the Single UNIX Specification, Version 2, that's where
those macros should be defined. This fixes the libiconv port.
This also fixes some (but not all) build errors for the diffutils and nano ports.
We now leverage the VM's promise rejection tracker callbacks and print a
warning in either of these cases:
- A promise was rejected without any handlers
- A handler was added to an already rejected promise
Almost a year after first working on this, it's finally done: an
implementation of Promises for LibJS! :^)
The core functionality is working and closely following the spec [1].
I mostly took the pseudocode and transformed it into C++ - if you read
and understand it, you will know how the spec implements Promises; and
if you read the spec first, the code will look very familiar.
Implemented functions are:
- Promise() constructor
- Promise.prototype.then()
- Promise.prototype.catch()
- Promise.prototype.finally()
- Promise.resolve()
- Promise.reject()
For the tests I added a new function to test-js's global object,
runQueuedPromiseJobs(), which calls vm.run_queued_promise_jobs().
By design, queued jobs normally only run after the script was fully
executed, making it impossible to test handlers in individual test()
calls by default [2].
Subsequent commits include integrations into LibWeb and js(1) -
pretty-printing, running queued promise jobs when necessary.
This has an unusual number of dbgln() statements, all hidden behind the
PROMISE_DEBUG flag - I'm leaving them in for now as they've been very
useful while debugging this, things can get quite complex with so many
asynchronously executed functions.
I've not extensively explored use of these APIs for promise-based
functionality in LibWeb (fetch(), Notification.requestPermission()
etc.), but we'll get there in due time.
[1]: https://tc39.es/ecma262/#sec-promise-objects
[2]: https://tc39.es/ecma262/#sec-jobs-and-job-queues
This utility traces the route packets take to a user-specified host.
QEMU user networking (the type of networking we currently have setup)
does not support sending "real" raw IPv4 packets, and as such we cant
specify a custom TTL value. As a result, this utility will only work
on real hardware, qemu setup with tun networking (requires root) and
other hypervisors that support bridged adapters (VirtualBox/VMWare).
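The core mechanism, for reference: probes are sent with increasing
TTLs, and each hop's ICMP Time Exceeded reply reveals its address.
Setting the TTL is a single setsockopt() call (a sketch; error
handling omitted):

#include <cstddef>
#include <netinet/in.h>
#include <sys/socket.h>

void send_probe_with_ttl(int sock, int ttl, sockaddr_in const& dest,
    void const* payload, std::size_t length)
{
    // Each router decrements the TTL; the one that drops it to zero
    // sends back ICMP Time Exceeded, identifying that hop.
    setsockopt(sock, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl));
    sendto(sock, payload, length, 0,
        reinterpret_cast<sockaddr const*>(&dest), sizeof(dest));
}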
Instead of crashing on a missing input file, looping forever on an
invalid gzip compressed file, and crashing on permissions issues in
the output file, handle all issues gracefully by logging and returning.
This adds jp, a JSON pretty-printer with a customizable indentation
level.
An example: cat /proc/net/adapters | jp
Another example: cat /proc/all | jp -i 2 (indents set to 2 spaces
instead of the default 4)
This parser should be a little bit more modern and a little more
resilient to zip files from other operating systems. As a side
effect we now also support extracting zip files that are using
DEFLATE compression (using our own LibCompress).
If bar in 'foo(); bar();' fails, we'd get bar's error but then foo's
return value - that's probably not something anyone expects.
Also make sure to return non-success so the process will exit with 1.
Very incompressible data could sometimes produce no backreferences,
which would result in no distance huffman code being created (as it
was not needed), so only VERIFY that the code exists if it is actually
needed for writing the stream.
This completes our tar utility by implementing the -c option
for archive creation using TarOutputStream and optionally
GzipCompressor for compression via the -z option.
We can't always rely on the initial URL's path(), so use either the
user-specified argument or the URL path for determining the realpath,
depending on whether we got a file:// URL argument.
Fixes#4950.
Instead of assuming that we should use the OSC 9 progress messages
whenever we run on serenity, add a show-progress=[true|false] option.
This lets us avoid seeing esc sequence spam in GitHub Actions logs.
The test-js reporter is arguably the nicest test runner / reporter that
exists in the Serenity code base. Toward the goal of leveling up all the
other unit test environments, start a new LibTest library so that we
can share code and reporting utilities to make all the test systems
look and behave similarly.
This is basically just for consistency; it's quite strange to see
multiple AK container types next to each other, some with and some
without the namespace prefix - we're 'using AK::Foo;' a lot and should
leverage that. :^)
(...and ASSERT_NOT_REACHED => VERIFY_NOT_REACHED)
Since all of these checks are done in release builds as well,
let's rename them to VERIFY to prevent confusion, as everyone is
used to assertions being compiled out in release.
We can introduce a new ASSERT macro that is specifically for debug
checks, but I'm doing this wholesale conversion first since we've
accumulated thousands of these already, and it's not immediately
obvious which ones are suitable for ASSERT.
Thanks to @trflynn89 for the neat implicit consteval ctor trick!
This allows us to basically slap `CheckedFormatString` on any
formatting function, and have its format argument checked at compile time.
Note that there is a validator bug where it doesn't parse inner replaced
fields like `{:~>{}}` correctly (what should be 'left align with next
argument as size' is parsed as `{:~>{` following a literal closing
brace), so the compile-time checks are disabled on these temporarily by
forcing them to be StringViews.
This commit also removes the now unused `AK::StringLiteral` type (which
was introduced for use with NTTP strings).
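The gist of the trick, heavily simplified - a toy that only checks
brace balance, where the real validator parses full format specifiers:

#include <cstddef>

struct CheckedFormatString {
    template<std::size_t N>
    consteval CheckedFormatString(char const (&fmt)[N])
        : string(fmt)
    {
        // Reaching the throw during constant evaluation makes the
        // program ill-formed, so a malformed literal fails the build.
        int depth = 0;
        for (std::size_t i = 0; i + 1 < N; ++i) {
            if (fmt[i] == '{')
                ++depth;
            else if (fmt[i] == '}')
                --depth;
        }
        if (depth != 0)
            throw "unbalanced braces in format string";
    }
    char const* string;
};

void log_message(CheckedFormatString fmt) { (void)fmt; }

// log_message("{}");    // compiles
// log_message("{oops"); // fails at compile time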
There's no point in using different, seemingly randomly sized buffers as
the required size for storing an IPv4 address representation is well
known (16 bytes).
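For reference, 16 bytes is exactly "255.255.255.255" plus the NUL
terminator, which is what POSIX's INET_ADDRSTRLEN denotes:

#include <arpa/inet.h>
#include <netinet/in.h>

void format_ipv4(in_addr const& addr, char (&buffer)[INET_ADDRSTRLEN])
{
    // INET_ADDRSTRLEN is 16, the one well-known size for the job.
    inet_ntop(AF_INET, &addr, buffer, sizeof(buffer));
}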
This achieves two things:
- Programs can now intentionally perform arbitrary syscalls by calling
syscall(). This allows us to work on things like syscall fuzzing.
- It restricts the ability of userspace to make syscalls to a single
4KB page of code. In order to call the kernel directly, an attacker
must now locate this page and call through it.
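Usage as described, sketched with the Linux spelling (Serenity's
header and syscall constant names differ; treat the names here as
placeholders):

#include <cstdio>
#include <sys/syscall.h>
#include <unistd.h>

int main()
{
    // An intentional arbitrary syscall via the wrapper; a fuzzer would
    // pass arbitrary numbers and arguments instead.
    long pid = syscall(SYS_getpid);
    printf("pid via raw syscall: %ld\n", pid);
    return 0;
}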
Since this is useful in many places, let's have a common implementation
of walking the stack of a given thread via /proc and symbolicating each
of the frames.
The /boot directory is only accessible to root by default, but anyone
wanting access to kernel symbols for development can get them by making
/boot/Kernel accessible to the "symbol" user.
Usage: bt <PID>
This program will print a symbolicated backtrace for the main thread of
the process with the given PID. It uses SymbolServer for the
symbolication.
There's a lot of room for improvement in this command, but it is pretty
neat already. :^)
If an exception was thrown while printing the last computed value in
the REPL, it would always assert on the next input.
Something like this would always assert:
> a=[];Object.defineProperty(a,"0",{get:()=>{throw ""}})
> 1 + 2
This parser will be used by the C++ language server to provide better
auto-complete (& maybe also other things in the future).
It is designed to be error tolerant, and keeps track of the position
spans of the AST nodes, which should be useful later for incremental
parsing.
Core::IODevice (which Core::File inherits from) does not have a
reasonable way to block for a line. grep was spinning on
IODevice::read_line, passing endless empty strings to the matcher
lambda. Use getline instead, which will at least block in the Kernel for
characters to be available on stdin and only return full lines (or EOF).
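The blocking read loop, roughly (POSIX getline; the matcher plumbing
is made up):

#include <cstdio>
#include <cstdlib>
#include <sys/types.h>

void match_lines_from_stdin(void (*match)(char const* line))
{
    char* line = nullptr;
    size_t capacity = 0;
    ssize_t nread;
    // getline() blocks until a full line (or EOF) is available,
    // instead of busily returning empty strings.
    while ((nread = getline(&line, &capacity, stdin)) != -1)
        match(line);
    free(line);
}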
This was just an alias for "unix" that I added early on back when there
was some belief that we might be compatible with OpenBSD. We're clearly
never going to be compatible with their pledges so just drop the alias.
Now that we've moved to atomic replacement of these files when altering
them, we don't need to keep them open for the lifetime of
Core::Account, so just simplify this and close them when they are not
needed.
Before this patch, we had a nasty race condition when changing a user's
password: there was a time window between truncating /etc/shadow and
writing out its new contents, where you could simply "su" to root
without using a password.
Instead of writing directly to /etc/passwd and /etc/shadow, we now
create temporary files in /etc and fill them with the new contents.
Those files are then atomically renamed to /etc/passwd and /etc/shadow.
Sadly, fixing this race requires giving the passwd program a lot more
privileges. This is something we can and should improve upon. :^)
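The atomic-replacement pattern, sketched (the real code also carries
over ownership and permissions, and handles more errors):

#include <cstdio>
#include <cstdlib>
#include <unistd.h>

bool replace_passwd_atomically(char const* new_contents)
{
    // The temporary must live on the same filesystem as the target
    // for rename() to be atomic.
    char temp_path[] = "/etc/passwd.XXXXXX";
    int fd = mkstemp(temp_path);
    if (fd < 0)
        return false;
    FILE* file = fdopen(fd, "w");
    if (!file) {
        close(fd);
        return false;
    }
    fputs(new_contents, file);
    fclose(file);
    // Readers see either the old file or the complete new one, never
    // a truncated in-between state.
    return rename(temp_path, "/etc/passwd") == 0;
}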
Now, `chres 640 480 2` can set the UI to HighDPI 640x480 at runtime. A
real GUI for changing the display factor will come later.
(`chres 640 480 2` followed by `chres 1280 960` is very fast since
we don't have to re-allocate the framebuffer since both modes use
the exact same number of physical pixels.)
This API was a mostly gratuitous deviation from POSIX that gave up some
portability in exchange for avoiding the occasional strlen().
I don't think that was actually achieving anything valuable, so let's
just chill out and have the same open() API as everyone else. :^)