This broke in c2e24a2fa1, where the boot
drive was removed from `SERENITY_MACHINE`. We now add the boot drive to
the common QEMU arguments, so it gets included in the aarch64 run
configuration as well.
This in turn enables `./Meta/serenity.sh test aarch64` and the CI
scripts to work with the AArch64 port.
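A minimal sketch of the idea, assuming run.sh keeps its shared arguments
in a variable like SERENITY_COMMON_QEMU_ARGS and the image path in
SERENITY_DISK_IMAGE (the names here are illustrative):

    # The boot drive now lives in the arguments shared by every machine
    # type, so the aarch64 configuration picks it up as well.
    SERENITY_COMMON_QEMU_ARGS="$SERENITY_COMMON_QEMU_ARGS
    -drive file=${SERENITY_DISK_IMAGE},format=raw,index=0,media=disk
    "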
As the RPi doesn't have a debugcon-like device, we create two serial
devices. The system console, UART0, is redirected to `debug.log`, while
UART1 is made available to userspace and is used as the stdout for
the test runner script.
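In QEMU terms this is roughly the following, assuming the raspi machine
maps the first -serial option to UART0 and the second to UART1:

    -serial file:debug.log   # UART0: system console, captured to debug.log
    -serial stdio            # UART1: visible to userspace, test runner stdout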
We are not yet able to run the full test suite, as the kernel panics due
to some unimplemented features.
Note that QEMU `master` or our patched QEMU build is required for
`SystemServer` to recognize the `system_mode=self-test` parameter.
Newer versions of QEMU prevent the user from running a GL-rendered
display while a SPICE display is active due to incompatibilities.
Since there is no way to disable QEMU's (apparently implicit) SPICE
display, make sure that we only enable SPICE support if the user
requested running with SPICE specifically. In this case, QEMU picks the
default SPICE client instead of rendering a display using whatever our
default on that platform would be.
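A sketch of the gating, assuming a SERENITY_SPICE-style opt-in variable
(the variable name and port are illustrative):

    if [ "$SERENITY_SPICE" = "1" ]; then
        # Only request SPICE when explicitly asked for; QEMU then uses the
        # default SPICE client instead of a GL-rendered display.
        SERENITY_QEMU_ARGS="$SERENITY_QEMU_ARGS -spice port=5930,disable-ticketing=on"
    fi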
Some hardware/software configurations crash KVM as soon as we try to
start Serenity. The exact cause is currently unknown, so just fully
revert it for now.
This reverts commit 897c4e5145.
The new baked image is the Prekernel and the Kernel baked together, so
essentially we no longer need to pass the Prekernel as -kernel and the
actual kernel image as -initrd to QEMU, leaving the option to pass an
actual initrd or initramfs module later on with multiboot.
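Sketched with illustrative paths (the exact image locations depend on
the build tree):

    # Before: Prekernel via -kernel, the actual kernel image via -initrd
    -kernel Kernel/Prekernel/Prekernel -initrd Kernel/Kernel
    # After: only the baked image, leaving -initrd free for a real
    # initrd/initramfs module later on
    -kernel Kernel/Kernel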
When we had 32-bit support in the OS kernel and userland, the bare
minimum CPU we supported was the Pentium 3, but now the CPU is only
required to support x86-64 long mode, so the exact model is not very
important.
I chose the QEMU64 virtual CPU model, because the whole concept of the
QEMU ISA-PC machine is to check how the kernel handles an arbitrarily
old hardware setup.
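On the QEMU command line this is simply:

    # Generic 64-bit CPU model: guarantees long mode, nothing fancier
    -machine isapc -cpu qemu64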
It appears that QEMU on macOS doesn't have the VirtIO GPU variants that
support VGA functionality. Those variants are not especially important
to us, because we don't use any kind of VGA functionality in our kernel
anyway.
Therefore, on macOS, we can simply use the virtio-gpu-gl-pci and
virtio-gpu-pci devices instead.
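For illustration, the substitution looks like this (all device names are
upstream QEMU ones; availability depends on how the binary was built):

    -device virtio-gpu-gl-pci   # instead of virtio-vga-gl, when GL is enabled
    -device virtio-gpu-pci      # instead of virtio-vga, otherwise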
Commit 2c84466ad8 ("Kernel/Storage: Introduce new boot device
addressing modes") changed the way we pass the boot device parameter.
That commit missed updating the boot parameter in the run.sh script for
NVMe boot devices.
10ms (the default) is ridiculous and causes all kinds of glitches if we
actually want to have a low-latency queue.
<https://gitlab.com/qemu-project/qemu/-/issues/1076#note_996636777>
suggests 2ms (and no lower than 1ms). This improves audio glitch
resistance at our current 512-sample buffer size, but going lower is
still not possible.
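As a sketch, the timer period is a per-audiodev property given in
microseconds; the backend and sound device below are just examples:

    -audiodev pa,id=snd0,timer-period=2000   # 2ms instead of the 10ms default
    -device AC97,audiodev=snd0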
With this device added, we can now boot into graphics mode on these
platforms too. For the ISA-PC machine this is basically the only viable
option, but in the future we should remove this device for the microvm
machine type, as that machine should let us determine better options
and detect them through a supplied device tree blob.
We don't need this AHCI controller to be present as we already have one
in the Q35 machine. This will help with using the correct boot device
in GRUB setups later on.
This commit bumps the required QEMU version to 6.2 and updates the
version checking logic in Meta/run.sh to support checking against
major and minor version numbers instead of checking against the major
version only.
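Such a check could look roughly like this in shell, assuming the QEMU
binary path is already in SERENITY_QEMU_BIN (the variable name and error
handling are illustrative):

    installed="$("$SERENITY_QEMU_BIN" -version | head -n1 | grep -oE '[0-9]+\.[0-9]+' | head -n1)"
    major="${installed%%.*}"
    minor="${installed#*.}"
    if [ "$major" -lt 6 ] || { [ "$major" -eq 6 ] && [ "$minor" -lt 2 ]; }; then
        echo "Please install QEMU version 6.2 or newer (found $installed)" >&2
        exit 1
    fi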
This lets us test the VMWare SVGA adapter easily. We already use the
std VGA device (which is compatible with bochs-display, save that the
latter lacks VGA support) on the i440FX QEMU machine, so we keep testing
it there too, and on the Q35 machine we use a bochs-display device as a
secondary display.
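Sketched per machine type (the exact device options in run.sh may
differ):

    # i440FX: keep exercising the std VGA device
    -device VGA
    # Q35: VMWare SVGA as the primary display, bochs-display as a secondary one
    -device vmware-svga -device bochs-display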
Before, we wouldn't enable virtualization on Windows anymore unless
SERENITY_VIRTUALIZATION_SUPPORT was set explicitly. As far as we know,
there's no automatic way of detecting whether WHPX is enabled or not. So
we'll just enable virtualization on Windows by default, and if that
doesn't work the user can still disable it manually with
SERENITY_VIRTUALIZATION_SUPPORT=0.
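A sketch of the defaulting logic, with illustrative variable handling:

    # Try WHPX on Windows unless the user opted out; fall back to TCG if
    # it turns out to be unavailable.
    if [ "${SERENITY_VIRTUALIZATION_SUPPORT:-1}" != "0" ]; then
        SERENITY_QEMU_ARGS="$SERENITY_QEMU_ARGS -accel whpx -accel tcg"
    fi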
4k logical blocks are better for block devices in QEMU, as they align
with the underlying filesystem, which typically uses 4k logical blocks
(as our ext2 filesystem does).
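QEMU exposes this through the standard block device properties; for
example (the device type is just an illustration, and drive=disk refers
to a -drive defined elsewhere):

    -device virtio-blk-pci,drive=disk,logical_block_size=4096,physical_block_size=4096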
run.sh builds i686 by default, and the aarch64 port of Serenity
isn't very far along yet.
Without this change, `run.sh` without arguments unceremoniously
fails with:
[0/1] cd .../serenity/Build/i686 && /usr...
ENITY_ARCH=i686 /home/thakis/src/serenity/Meta/run.sh
qemu-system-i386: invalid accelerator kvm
That's because /dev/kvm exists, but that's no good on a non-Intel host.
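The kind of guard that avoids this could look like the following (the
actual check in run.sh may differ):

    # Only request KVM when the host can accelerate an x86 guest at all;
    # otherwise fall back to plain TCG emulation.
    if [ "$(uname -m)" = "x86_64" ] && [ -w /dev/kvm ]; then
        SERENITY_QEMU_ARGS="$SERENITY_QEMU_ARGS -enable-kvm"
    fi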
As there is no need for a Prekernel on aarch64, the Prekernel code was
moved into Kernel itself. The functionality remains the same.
SERENITY_KERNEL_AND_INITRD in run.sh specifies a kernel and an initial
ramdisk to be used by the emulator. This is needed because aarch64 does
not need a Prekernel while the other architectures do.
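The variable might be filled in per architecture along these lines
(paths are illustrative):

    if [ "$SERENITY_ARCH" = "aarch64" ]; then
        # No Prekernel on aarch64; boot the kernel image directly.
        SERENITY_KERNEL_AND_INITRD="-kernel Kernel/Kernel"
    else
        SERENITY_KERNEL_AND_INITRD="-kernel Kernel/Prekernel/Prekernel -initrd Kernel/Kernel"
    fi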
The microvm machine type is a modern tool for kernel and firmware
developers to test their software against features like FDTs, a second
IOAPIC, the lack of legacy devices by default, the ability to use PCIe
without the x86 PCI IO ports, etc.
We can boot such a machine, but the functionality we currently support
for this type of virtual machine is limited.
The ISA-PC machine type provides no PCI bus support, no IOAPIC support,
nor other modern PC features of our generation.
This is mainly a good environment for testing abstractions in kernel
space, and it can help with improving them for the sake of porting the
OS to other chipsets and CPU architectures.
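Both are selected like any other machine type on the QEMU command line:

    -machine microvm   # minimal modern machine: FDT-described, no legacy devices by default
    -machine isapc     # legacy ISA-only PC: no PCI bus, no IOAPIC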
Previously we added it only if SPICE was available, but it's possible
to build QEMU with --disable-spice --enable-spice-protocol, which
provides qemu-vdagent but no spicevmc. In that case we still configured
qemu-vdagent to use the "vdagent" device, but never actually defined it,
so qemu-vdagent never worked.
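A sketch of defining the full device chain explicitly (option details
may differ from what run.sh ends up using):

    -chardev qemu-vdagent,id=vdagent,clipboard=on
    -device virtio-serial-pci
    -device virtserialport,chardev=vdagent,name=com.redhat.spice.0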
QEMU crashes on M1 Macs when using the `--accel hvf` option.
To solve this, detect the host's architecture and only add the
`--accel hvf` parameter if we are running on an "x86_64" machine.
This will allow "arm64" machines like M1 Macs to work correctly.
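That check could look like this (uname -m reports "arm64" on Apple
Silicon macOS):

    if [ "$(uname -m)" = "x86_64" ]; then
        # HVF only on Intel Macs; Apple Silicon hosts fall back to TCG.
        SERENITY_QEMU_ARGS="$SERENITY_QEMU_ARGS --accel hvf"
    fi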
Add an option to enable an NVMe storage device as the boot drive.
To enable NVMe support, run the following:
$ SERENITY_NVME_ENABLE=1 Meta/serenity.sh run i686 root=/dev/nvme0n1
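Behind that flag, the NVMe controller would be wired up with QEMU
arguments roughly like these (values are illustrative):

    -drive file=_disk_image,format=raw,if=none,id=disk
    -device nvme,serial=deadbeef,drive=disk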