This is a bit sad, but with the Allocators as static globals, their
destructors were running before some user code. That doesn't make much
sense, since none of the members of the Allocators (at least the basic
one) do any real heavy lifting or hold resources that need RAII.
To avoid the problem, just mmap the memory for the global arrays of
Allocators in __malloc_init and let the Kernel collect the memory when
we're done with the process.
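
A minimal sketch of the idea, assuming a placeholder Allocator type and
size-class count (the real malloc internals differ):

    #include <cstddef>
    #include <new>
    #include <sys/mman.h>

    struct Allocator {
        size_t chunk_size { 0 };
        // ... per-size-class block lists live here ...
    };

    static constexpr size_t num_size_classes = 8; // placeholder count
    static Allocator* s_allocators;               // backed by mmap, never destroyed

    void __malloc_init()
    {
        // Back the global Allocator array with anonymous mmap'ed memory instead
        // of a static array: no destructors run at exit, and the kernel simply
        // reclaims the pages when the process dies.
        void* memory = mmap(nullptr, num_size_classes * sizeof(Allocator),
            PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        s_allocators = static_cast<Allocator*>(memory);
        for (size_t i = 0; i < num_size_classes; ++i)
            new (&s_allocators[i]) Allocator(); // placement-new, never destructed
    }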
This extends the opportunistic protection of empty-but-kept-around blocks
to also cover BigAllocationBlocks. Since we only cache 4KB BABs at the
moment, this sees limited use, but it does work.
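
Roughly, the cached-BAB path could look like this (the single 4KB cache
slot and the names here are illustrative, not the actual implementation):

    #include <cstddef>
    #include <sys/mman.h>

    struct BigAllocationBlock {
        size_t size { 0 };
        // ... header followed by the user allocation ...
    };

    static BigAllocationBlock* s_cached_4kb_block; // single-slot cache, illustrative

    static void cache_big_block(BigAllocationBlock* block, size_t block_size)
    {
        s_cached_4kb_block = block;
        // While the block sits in the cache, make it inaccessible so that any
        // use-after-free of the old allocation crashes immediately.
        mprotect(block, block_size, PROT_NONE);
    }

    static BigAllocationBlock* reuse_cached_big_block(size_t block_size)
    {
        auto* block = s_cached_4kb_block;
        if (!block)
            return nullptr;
        s_cached_4kb_block = nullptr;
        // Make the pages accessible again before handing the block back out.
        mprotect(block, block_size, PROT_READ | PROT_WRITE);
        return block;
    }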
We now keep a separate queue of empty ChunkedBlocks in each allocator.
The underlying memory for each block is mprotect'ed with PROT_NONE to
provoke crashes on use-after-free.
This is not going to catch *all* use-after-frees, but if it catches
some, that's still pretty nice. :^)
The malloc memory region names are now updated to reflect their reuse
status: "malloc: ChunkedBlock(size) (free/reused)"
This one is a bit mysterious. I can't find any authoritative answer on what
the correct behavior is, but it seems reasonable to me that free() shouldn't
step on errno: it returns void, so the caller has no way of knowing whether
errno is meaningful after the call anyway.
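
One easy way to provide that guarantee is to save and restore errno around
whatever free() does internally (munmap()/mprotect() of emptied blocks can
set it on failure). A sketch, with free_impl() as a hypothetical stand-in
for the real deallocation path:

    #include <cerrno>

    // Hypothetical internal helper standing in for the real deallocation path;
    // munmap()/mprotect() inside it may set errno on failure.
    static void free_impl(void* ptr)
    {
        (void)ptr;
    }

    extern "C" void free(void* ptr)
    {
        if (!ptr)
            return;
        // Preserve the caller's errno: free() returns void, so the caller has
        // no way of knowing whether errno means anything after the call.
        int saved_errno = errno;
        free_impl(ptr);
        errno = saved_errno;
    }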