Apologies for the enormous commit, but I don't see a way to split this
up nicely. In the vast majority of cases it's a simple change. A few
extra places can use TRY instead of manual error checking though. :^)
Consider the situation where two shared libraries, libA and libB, both
depend (as in having a NEEDED dtag) on libC. libA is dlopen()-ed first,
which causes libC to be mapped and linked. When libB is then
dlopen()-ed, the DynamicLinker would re-map and re-link libC,
invalidating any previous references to its old location. If libA's PLT
has been patched to point to libC's symbols, any further call into libA
would jump to a virtual address that is no longer mapped, causing a
crash. This situation was reported in #10014, although the setup in the
ticket was more convoluted.
This commit fixes the issue by distinguishing between a main program
load performed by Loader.so and a dlopen() call. The main difference
between the two cases is that in the former the s_globals_objects map
is always empty, while in the latter it might already contain
dependencies of the library being dlopen()-ed. Hence, when collecting
dependencies to map and link, dlopen() skips those already present in
the global map, avoiding the issue described above.
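Roughly, the dependency-collection step looks like the sketch below
(names such as collect_dependencies_to_map() are illustrative and the
shape is simplified, not the exact DynamicLinker code):

    #include <AK/HashMap.h>
    #include <AK/NonnullRefPtr.h>
    #include <AK/String.h>
    #include <AK/Vector.h>
    #include <LibELF/DynamicObject.h>

    // Sketch only: s_globals_objects stands in for the global map mentioned
    // above, and collect_dependencies_to_map() is a hypothetical helper.
    static HashMap<String, NonnullRefPtr<ELF::DynamicObject>> s_globals_objects;

    static Vector<String> collect_dependencies_to_map(ELF::DynamicObject const& object, bool is_dlopen)
    {
        Vector<String> to_map;
        object.for_each_needed_library([&](StringView needed_name) {
            // During dlopen(), a dependency may already be mapped and linked
            // from an earlier load; re-mapping it would invalidate existing
            // references (e.g. already-patched PLT entries), so skip it.
            if (is_dlopen && s_globals_objects.contains(String(needed_name)))
                return;
            to_map.append(String(needed_name));
        });
        return to_map;
    }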
With this patch the original issue seen in #10014 is gone, with all
python3 modules (so far) loading correctly.
A unit test reproducing a simplified version of the issue is also
included in this commit. The unit test builds two dynamic libraries, A
and B, both depending on libline.so (with B also depending on A); the
test then dlopen()s libA, invokes one of its functions, and then does
the same with libB.
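In terms of dlfcn calls, the test flow is roughly the following
(hypothetical library and symbol names, error handling elided):

    #include <dlfcn.h>

    // Sketch of the test flow; "liba.so"/"libb.so" and func_a/func_b are
    // illustrative names, not necessarily the ones the test uses.
    static void exercise_dlopen_twice()
    {
        void* liba = dlopen("liba.so", RTLD_NOW | RTLD_GLOBAL);
        auto func_a = reinterpret_cast<int (*)()>(dlsym(liba, "func_a"));
        func_a(); // Goes through libA's PLT into libline.so.

        // Opening libB must not re-map libline.so (or libA); otherwise the
        // call below would jump through a now-stale PLT entry and crash.
        void* libb = dlopen("libb.so", RTLD_NOW | RTLD_GLOBAL);
        auto func_b = reinterpret_cast<int (*)()>(dlsym(libb, "func_b"));
        func_b();
    }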
When loading libraries, each library must use the same instance of each
symbol, and that instance must come from the executable if the
executable defines one. Getting this wrong is barely noticeable, except
that it completely breaks RTTI on Clang. This switches the hash map to
an ordered one; tested to work for Clang by @Bertaland
The System V ABI for both x86 and x86_64 requires that the stack pointer
is 16-byte aligned on entry. Previously we did not align the stack
pointer properly.
As far as "main" was concerned, the stack alignment was correct even
without this patch, due to how the C++ _start function and the kernel
interacted: the kernel left the stack misaligned with respect to the
ABI (though properly aligned for a regular function call), and our
_start function actually expected that misalignment.
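Conceptually, the fix amounts to rounding the stack pointer down to a
16-byte boundary before transferring control; a minimal sketch (not the
actual entry code):

    #include <AK/Types.h>

    // Round a prospective stack pointer down to a 16-byte boundary so that
    // the callee observes the alignment the System V ABI mandates on entry.
    static FlatPtr align_stack_pointer_for_entry(FlatPtr stack_pointer)
    {
        return stack_pointer & ~FlatPtr(15);
    }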
Previously, we assumed that the `.text` segment was loaded at vaddr 0 in
all dynamic libraries, so we used the dynamic object's base address with
`msyscall`. This did not work with the LLVM toolchain, as it likes to
shuffle these segments around.
This now also correctly handles the case where, for some reason, there
are multiple text segments.
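The change boils down to walking the program headers and marking every
executable segment, instead of assuming a single text segment at the
base address. A rough sketch with simplified names (the msyscall()
prototype shown is an assumption for illustration):

    #include <LibELF/Image.h>
    #include <elf.h>

    // Assumed prototype for this sketch: marks the mapped region containing
    // the given address as allowed to contain syscall instructions.
    extern "C" int msyscall(void* address);

    static void mark_text_segments(ELF::Image const& image, FlatPtr base_address)
    {
        image.for_each_program_header([&](auto const& program_header) {
            if (program_header.type() != PT_LOAD || !(program_header.flags() & PF_X))
                return;
            // There may be more than one executable PT_LOAD segment, and it
            // is not necessarily mapped at offset 0 from the base address.
            msyscall(reinterpret_cast<void*>(base_address + program_header.vaddr().get()));
        });
    }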
The LexicalPath instance methods dirname(), basename(), title() and
extension() will be changed to return StringView const& in a later
commit. Because of this, users creating temporary LexicalPath objects
just to call one of those getters would receive a StringView const&
pointing to a possibly freed buffer.
To avoid this, static methods for those APIs have been added, which
return a String by value. All cases where temporary LexicalPath objects
were used as described above have been changed to use the static APIs.
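For illustration, the hazard and the replacement pattern look like this
(hypothetical call site):

    #include <AK/LexicalPath.h>
    #include <AK/String.h>

    // After the return type change, this would bind a view to data owned by
    // the temporary LexicalPath, which dies at the end of the expression:
    //
    //     auto basename = LexicalPath(path).basename(); // dangling view
    //
    // The static helper returns a String by value, so the result owns its
    // data:
    static String basename_of(String const& path)
    {
        return LexicalPath::basename(path);
    }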
This implements the dladdr() function which lets the caller look up
the symbol name, symbol address as well as library name and library
base address for an arbitrary address.
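Typical usage (standard dladdr() semantics; fields that cannot be
resolved are null):

    #include <dlfcn.h>
    #include <stdio.h>

    static void describe_address(void* address)
    {
        Dl_info info {};
        if (dladdr(address, &info) == 0) {
            printf("no loaded object contains %p\n", address);
            return;
        }
        printf("library: %s (base %p)\n", info.dli_fname, info.dli_fbase);
        if (info.dli_sname)
            printf("nearest symbol: %s at %p\n", info.dli_sname, info.dli_saddr);
    }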
By constraining the two implementations, the compiler will select the
best-fitting one. All this requires is duplicating the implementation
and simplifying it for the `void` case.
This constraining also informs both the caller and compiler by passing
the callback parameter types as part of the constraint
(e.g.: `IterationFunction<int>`).
Some `for_each` functions in LibELF only take functions which return
`void`. This is a minimal correctness check, as it removes one way for a
function to incompletely do something.
This also suggests an idiom: inside such a lambda, a `return;` acts
like a `continue;` in a for-loop.
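A minimal, self-contained sketch of the pattern, using simplified
stand-ins for the concepts described above:

    #include <concepts>
    #include <vector>

    enum class IterationDecision { Continue, Break };

    // Simplified stand-in: callback taking Args... and returning
    // IterationDecision.
    template<typename Func, typename... Args>
    concept IterationFunction = requires(Func f, Args... args) {
        { f(args...) } -> std::same_as<IterationDecision>;
    };

    // Simplified stand-in: callback taking Args... and returning void.
    template<typename Func, typename... Args>
    concept VoidFunction = requires(Func f, Args... args) {
        { f(args...) } -> std::same_as<void>;
    };

    // Overload for callbacks that may stop the iteration early.
    template<IterationFunction<int> Callback>
    void for_each_value(std::vector<int> const& values, Callback callback)
    {
        for (int value : values) {
            if (callback(value) == IterationDecision::Break)
                return;
        }
    }

    // Overload for void callbacks; here a plain `return;` in the lambda
    // simply moves on to the next element, like `continue;` in a for-loop.
    template<VoidFunction<int> Callback>
    void for_each_value(std::vector<int> const& values, Callback callback)
    {
        for (int value : values)
            callback(value);
    }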
This enables us to use keys of type NonnullRefPtr in HashMaps and
HashTables.
This commit also includes fixes in various places that used
HashMap<T, NonnullRefPtr<U>>::get() and expected to get an
Optional<NonnullRefPtr<U>> and now get an Optional<U*>.
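Illustrative before/after at a call site (Thing is a hypothetical
type):

    #include <AK/HashMap.h>
    #include <AK/NonnullRefPtr.h>
    #include <AK/RefCounted.h>
    #include <AK/String.h>

    // Hypothetical example type.
    class Thing : public RefCounted<Thing> {
    public:
        void frobnicate() { }
    };

    static void lookup_and_use(HashMap<String, NonnullRefPtr<Thing>>& things, String const& name)
    {
        // get() now yields Optional<Thing*> rather than
        // Optional<NonnullRefPtr<Thing>>.
        auto maybe_thing = things.get(name);
        if (maybe_thing.has_value())
            maybe_thing.value()->frobnicate();
    }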
When loading a library at runtime with dlopen(), we now check that:
1. The library's TLS size does not overflow the size of the allocated
TLS block.
2. The library's TLS data is all zeroed.
We check for both of these cases because we currently do not support
them correctly. When we do add support for them, we can remove these
checks.
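In pseudocode, the two checks amount to something like this
(illustrative names; the ELF iteration helpers are simplified):

    #include <AK/Types.h>
    #include <LibELF/Image.h>
    #include <elf.h>

    // Sketch only: reject a dlopen()ed library whose TLS segment we cannot
    // currently handle.
    static bool tls_segment_is_supported(ELF::Image const& image, size_t remaining_tls_capacity)
    {
        bool ok = true;
        image.for_each_program_header([&](auto const& program_header) {
            if (program_header.type() != PT_TLS)
                return;
            // 1. The library's TLS must fit in the already-allocated block.
            if (program_header.size_in_memory() > remaining_tls_capacity)
                ok = false;
            // 2. The initialized portion of the TLS image must be all zeroes,
            //    since we don't copy initial TLS data for dlopen()ed
            //    libraries yet.
            auto const* data = reinterpret_cast<u8 const*>(program_header.raw_data());
            for (size_t i = 0; i < program_header.size_in_image(); ++i) {
                if (data[i] != 0)
                    ok = false;
            }
        });
        return ok;
    }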
This changes the TLS offset calculation logic to be based on the
symbol's size instead of the total size of the TLS.
Because of this change, we no longer need to pipe "m_tls_size" to so
many functions.
Also, after this patch, the TLS data of the main program lives at the
"end" of the TLS block (highest addresses).
This fixes a part of #6609.
Previously, TLS data was always zero-initialized.
To support initializing the values of TLS data, sys$allocate_tls now
receives a buffer with the desired initial data, and copies it to the
master TLS region of the process.
The DynamicLinker gathers the initial TLS image and passes it to
sys$allocate_tls.
We also now require the size passed to sys$allocate_tls to be
page-aligned, to make things easier. Note that this doesn't waste memory
as the TLS data has to be allocated in separate pages anyway.
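On the userspace side the call now looks roughly like this (sketch; the
wrapper prototype shown is an assumption for illustration):

    #include <cstddef>

    // Assumed wrapper prototype for this sketch: takes the initial TLS image
    // and its (page-aligned) size, returning the master TLS region address.
    extern "C" void* allocate_tls(char const* initial_data, size_t size);

    static void* set_up_master_tls(char const* initial_tls_image, size_t total_tls_size)
    {
        // The size must now be page-aligned; since TLS data is backed by
        // whole pages anyway, rounding up wastes no memory.
        // (initial_tls_image is assumed to be at least this large.)
        size_t const page_size = 4096;
        size_t aligned_size = (total_tls_size + page_size - 1) & ~(page_size - 1);
        return allocate_tls(initial_tls_image, aligned_size);
    }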
This fixes a regression that was introduced in f40ee1b and caused the
tls_offset of all objects other than the main program to be 0.
After this fix, map_library's is_program argument is no longer used, so
it has been removed.
This implements more of the dlfcn functionality. Most notably:
* It's now possible to dlopen() libraries which were already
loaded at program startup time. This does not cause those
libraries to be loaded twice.
* Errors are reported via dlerror() rather than by crashing
the program.
* Calls to the dl*() functions are thread-safe.
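Callers can now use the usual dlfcn error-handling pattern instead of
the process dying:

    #include <dlfcn.h>
    #include <stdio.h>

    static void* open_or_report(char const* path)
    {
        void* handle = dlopen(path, RTLD_NOW);
        if (!handle) {
            // The failure reason is reported through dlerror() instead of
            // crashing the program.
            fprintf(stderr, "dlopen(%s) failed: %s\n", path, dlerror());
            return nullptr;
        }
        return handle;
    }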
SPDX License Identifiers are a more compact / standardized
way of representing file license information.
See: https://spdx.dev/resources/use/#identifiers
This was done with the `ambr` search and replace tool.
ambr --no-parent-ignore --key-from-file --rep-from-file key.txt rep.txt *
Merge the load_elf() and commit_elf() functions into a single
load_main_executable() function that takes care of both things.
Also split "stage 3" into two separate stages, keeping the lazy
relocations in stage 3, and adding a stage 4 for calling library
initialization functions.
We also make sure to map the main executable before dealing with
any of its dependencies, to ensure that non-PIE executables get
loaded at their desired address.
(...and ASSERT_NOT_REACHED => VERIFY_NOT_REACHED)
Since all of these checks are done in release builds as well,
let's rename them to VERIFY to prevent confusion, as everyone is
used to assertions being compiled out in release.
We can introduce a new ASSERT macro that is specifically for debug
checks, but I'm doing this wholesale conversion first since we've
accumulated thousands of these already, and it's not immediately
obvious which ones are suitable for ASSERT.
When performing a global symbol lookup, we were recomputing the symbol
hashes once for every dynamic object searched. The hash function was
at the very top of a profile (15%) of program startup.
With this change, the hash function is no longer visible among the top
stacks in the profile. :^)
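For reference, these are the two hash functions in question (the
standard DT_GNU_HASH and DT_HASH algorithms); the lookup now computes
them once per query and reuses the values across all loaded objects:

    #include <AK/StringView.h>
    #include <AK/Types.h>

    // Hash used by DT_GNU_HASH sections.
    static u32 compute_gnu_hash(StringView name)
    {
        u32 hash = 5381;
        for (size_t i = 0; i < name.length(); ++i)
            hash = hash * 33 + static_cast<u8>(name[i]);
        return hash;
    }

    // Classic System V hash used by DT_HASH sections.
    static u32 compute_sysv_hash(StringView name)
    {
        u32 hash = 0;
        for (size_t i = 0; i < name.length(); ++i) {
            hash = (hash << 4) + static_cast<u8>(name[i]);
            u32 const high_nibble = hash & 0xf0000000u;
            hash ^= high_nibble >> 24;
            hash &= ~high_nibble;
        }
        return hash;
    }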
Let's use a stronger type than void* for this since we're talking
specifically about a virtual address and not necessarily a pointer
to something actually in memory (yet).
To support this, I had to reorganize the "load_elf" function into two
passes. First we map all the dynamic objects, to get their symbols
into the global lookup table. Then we link all the dynamic objects.
So many read-only GOT's! :^)
This achieves two things:
- Programs can now intentionally perform arbitrary syscalls by calling
syscall(). This allows us to work on things like syscall fuzzing.
- It restricts the ability of userspace to make syscalls to a single
4KB page of code. In order to call the kernel directly, an attacker
must now locate this page and call through it.
Also, before calling the main program entry function, inform the kernel
that no more syscall regions can be registered.
This effectively bans syscalls from everywhere except LibC and
LibPthread. Pretty neat! :^)
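For example, userspace can now issue a raw syscall through the wrapper
(sketch; the prototype shown is assumed for illustration):

    #include <stdio.h>

    // Assumed shape of the wrapper for this sketch: one variadic entry point
    // that takes the syscall number followed by its arguments.
    extern "C" unsigned long syscall(unsigned long function, ...);

    // Fuzzing-style use: invoke an arbitrary syscall number with arbitrary
    // arguments and observe the result.
    static long invoke_raw_syscall(unsigned long number, unsigned long a1, unsigned long a2, unsigned long a3)
    {
        long rc = static_cast<long>(syscall(number, a1, a2, a3));
        printf("syscall(%lu) -> %ld\n", number, rc);
        return rc;
    }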
load_from_image() becomes map() and link(). This allows us to map
an object before mapping its dependencies.
This solves an issue where fixed-position executables (like GCC)
would clash with the ASLR placement of their own shared libraries.
Refactor DynamicLoader construction with a try_create() helper so that
we can call mmap() before making a loader. This way the loader doesn't
need to have an "mmap failed" state.
This patch also takes care of determining the ELF file size in
try_create() instead of expecting callers to provide it.
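The factory shape is roughly this (sketch with simplified names, error
handling and members elided):

    #include <AK/NonnullRefPtr.h>
    #include <AK/RefCounted.h>
    #include <AK/RefPtr.h>
    #include <AK/String.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    // Sketch: everything that can fail (fstat, mmap) happens before the
    // object is constructed, so a DynamicLoader never exists in an
    // "mmap failed" state.
    class DynamicLoader : public RefCounted<DynamicLoader> {
    public:
        static RefPtr<DynamicLoader> try_create(int fd, String filename)
        {
            // Determine the file size here instead of asking the caller.
            struct stat st = {};
            if (fstat(fd, &st) < 0)
                return {};
            void* data = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
            if (data == MAP_FAILED)
                return {};
            return adopt_ref(*new DynamicLoader(fd, move(filename), data, st.st_size));
        }

    private:
        DynamicLoader(int fd, String filename, void* data, size_t size);
        // ... members elided ...
    };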