/*
 * Copyright (c) 2020-2021, Liav A. <liavalb@hotmail.co.il>
 * Copyright (c) 2020-2021, Andreas Kling <kling@serenityos.org>
 * Copyright (c) 2022, the SerenityOS developers.
 *
 * SPDX-License-Identifier: BSD-2-Clause
 */

#include <AK/Format.h>
#include <AK/Platform.h>
#include <AK/StringView.h>
#include <AK/Try.h>
#include <Kernel/Interrupts/InterruptDisabler.h>
#if ARCH(X86_64)
#    include <Kernel/Arch/x86_64/Firmware/PCBIOS/Mapper.h>
#    include <Kernel/Arch/x86_64/IO.h>
#endif
#include <Kernel/Bus/PCI/API.h>
#include <Kernel/Debug.h>
#include <Kernel/Firmware/ACPI/Parser.h>
#include <Kernel/Library/StdLib.h>
#include <Kernel/Memory/TypedMapping.h>
#include <Kernel/Sections.h>

namespace Kernel::ACPI {

static Parser* s_acpi_parser;

Parser* Parser::the()
{
    return s_acpi_parser;
}

void Parser::must_initialize(PhysicalAddress rsdp, PhysicalAddress fadt, u8 irq_number)
{
    VERIFY(!s_acpi_parser);
    s_acpi_parser = new (nothrow) Parser(rsdp, fadt, irq_number);
    VERIFY(s_acpi_parser);
}

UNMAP_AFTER_INIT NonnullLockRefPtr<ACPISysFSComponent> ACPISysFSComponent::create(StringView name, PhysicalAddress paddr, size_t table_size)
{
    // FIXME: Handle allocation failure gracefully
    auto table_name = KString::must_create(name);
    return adopt_lock_ref(*new (nothrow) ACPISysFSComponent(move(table_name), paddr, table_size));
}

ErrorOr<size_t> ACPISysFSComponent::read_bytes(off_t offset, size_t count, UserOrKernelBuffer& buffer, OpenFileDescription*) const
{
    auto blob = TRY(try_to_generate_buffer());

    if ((size_t)offset >= blob->size())
        return 0;

    ssize_t nread = min(static_cast<off_t>(blob->size() - offset), static_cast<off_t>(count));
    TRY(buffer.write(blob->data() + offset, nread));
    return nread;
}

ErrorOr<NonnullOwnPtr<KBuffer>> ACPISysFSComponent::try_to_generate_buffer() const
{
    auto acpi_blob = TRY(Memory::map_typed<u8>((m_paddr), m_length));
    return KBuffer::try_create_with_bytes("ACPISysFSComponent: Blob"sv, Span<u8> { acpi_blob.ptr(), m_length });
}

UNMAP_AFTER_INIT ACPISysFSComponent::ACPISysFSComponent(NonnullOwnPtr<KString> table_name, PhysicalAddress paddr, size_t table_size)
    : SysFSComponent()
    , m_paddr(paddr)
    , m_length(table_size)
    , m_table_name(move(table_name))
{
}

UNMAP_AFTER_INIT void ACPISysFSDirectory::find_tables_and_register_them_as_components()
{
    size_t ssdt_count = 0;
    MUST(m_child_components.with([&](auto& list) -> ErrorOr<void> {
        ACPI::Parser::the()->enumerate_static_tables([&](StringView signature, PhysicalAddress p_table, size_t length) {
            if (signature == "SSDT") {
                auto component_name = KString::formatted("{:4s}{}", signature.characters_without_null_termination(), ssdt_count).release_value_but_fixme_should_propagate_errors();
                list.append(ACPISysFSComponent::create(component_name->view(), p_table, length));
                ssdt_count++;
                return;
            }
            list.append(ACPISysFSComponent::create(signature, p_table, length));
        });
        return {};
    }));

    MUST(m_child_components.with([&](auto& list) -> ErrorOr<void> {
        auto rsdp = Memory::map_typed<Structures::RSDPDescriptor20>(ACPI::Parser::the()->rsdp()).release_value_but_fixme_should_propagate_errors();
        list.append(ACPISysFSComponent::create("RSDP"sv, ACPI::Parser::the()->rsdp(), rsdp->base.revision == 0 ? sizeof(Structures::RSDPDescriptor) : rsdp->length));
        auto main_system_description_table = Memory::map_typed<Structures::SDTHeader>(ACPI::Parser::the()->main_system_description_table()).release_value_but_fixme_should_propagate_errors();
        if (ACPI::Parser::the()->is_xsdt_supported()) {
            list.append(ACPISysFSComponent::create("XSDT"sv, ACPI::Parser::the()->main_system_description_table(), main_system_description_table->length));
        } else {
            list.append(ACPISysFSComponent::create("RSDT"sv, ACPI::Parser::the()->main_system_description_table(), main_system_description_table->length));
        }
        return {};
    }));
}

UNMAP_AFTER_INIT NonnullLockRefPtr<ACPISysFSDirectory> ACPISysFSDirectory::must_create(SysFSFirmwareDirectory& firmware_directory)
{
    auto acpi_directory = MUST(adopt_nonnull_lock_ref_or_enomem(new (nothrow) ACPISysFSDirectory(firmware_directory)));
    acpi_directory->find_tables_and_register_them_as_components();
    return acpi_directory;
}

UNMAP_AFTER_INIT ACPISysFSDirectory::ACPISysFSDirectory(SysFSFirmwareDirectory& firmware_directory)
    : SysFSDirectory(firmware_directory)
{
}

void Parser::enumerate_static_tables(Function<void(StringView, PhysicalAddress, size_t)> callback)
{
    for (auto& p_table : m_sdt_pointers) {
        auto table = Memory::map_typed<Structures::SDTHeader>(p_table).release_value_but_fixme_should_propagate_errors();
        callback({ table->sig, 4 }, p_table, table->length);
    }
}

static bool validate_table(Structures::SDTHeader const&, size_t length);

UNMAP_AFTER_INIT void Parser::locate_static_data()
{
    locate_main_system_description_table();
    initialize_main_system_description_table();
    process_fadt_data();
    process_dsdt();
}

UNMAP_AFTER_INIT Optional<PhysicalAddress> Parser::find_table(StringView signature)
{
    dbgln_if(ACPI_DEBUG, "ACPI: Calling Find Table method!");
    for (auto p_sdt : m_sdt_pointers) {
        auto sdt_or_error = Memory::map_typed<Structures::SDTHeader>(p_sdt);
        if (sdt_or_error.is_error()) {
            dbgln_if(ACPI_DEBUG, "ACPI: Failed mapping Table @ {}", p_sdt);
            continue;
        }
        dbgln_if(ACPI_DEBUG, "ACPI: Examining Table @ {}", p_sdt);
        if (!strncmp(sdt_or_error.value()->sig, signature.characters_without_null_termination(), 4)) {
            dbgln_if(ACPI_DEBUG, "ACPI: Found Table @ {}", p_sdt);
            return p_sdt;
        }
    }
    return {};
}

bool Parser::handle_irq(RegisterState const&)
{
    TODO();
}

UNMAP_AFTER_INIT void Parser::enable_aml_parsing()
{
    // FIXME: When enabled, do other things to "parse AML".
    m_can_process_bytecode = true;
}

UNMAP_AFTER_INIT void Parser::process_fadt_data()
{
    dmesgln("ACPI: Initializing Fixed ACPI data");

    VERIFY(!m_fadt.is_null());
    dbgln_if(ACPI_DEBUG, "ACPI: FADT @ {}", m_fadt);

    auto sdt = Memory::map_typed<Structures::FADT>(m_fadt).release_value_but_fixme_should_propagate_errors();
    dmesgln("ACPI: Fixed ACPI data, Revision {}, length: {} bytes", (size_t)sdt->h.revision, (size_t)sdt->h.length);
    m_x86_specific_flags.cmos_rtc_not_present = (sdt->ia_pc_boot_arch_flags & (u8)FADTFlags::IA_PC_Flags::CMOS_RTC_Not_Present);

    // FIXME: QEMU doesn't report that we have an i8042 controller in these flags, even if it should (when FADT revision is 3),
    // Later on, we need to make sure that we enumerate the ACPI namespace (AML encoded), instead of just using this value.
    m_x86_specific_flags.keyboard_8042 = (sdt->h.revision <= 3) || (sdt->ia_pc_boot_arch_flags & (u8)FADTFlags::IA_PC_Flags::PS2_8042);

    m_x86_specific_flags.legacy_devices = (sdt->ia_pc_boot_arch_flags & (u8)FADTFlags::IA_PC_Flags::Legacy_Devices);
    m_x86_specific_flags.msi_not_supported = (sdt->ia_pc_boot_arch_flags & (u8)FADTFlags::IA_PC_Flags::MSI_Not_Supported);
    m_x86_specific_flags.vga_not_present = (sdt->ia_pc_boot_arch_flags & (u8)FADTFlags::IA_PC_Flags::VGA_Not_Present);

    m_hardware_flags.cpu_software_sleep = (sdt->flags & (u32)FADTFlags::FeatureFlags::CPU_SW_SLP);
    m_hardware_flags.docking_capability = (sdt->flags & (u32)FADTFlags::FeatureFlags::DCK_CAP);
    m_hardware_flags.fix_rtc = (sdt->flags & (u32)FADTFlags::FeatureFlags::FIX_RTC);
    m_hardware_flags.force_apic_cluster_model = (sdt->flags & (u32)FADTFlags::FeatureFlags::FORCE_APIC_CLUSTER_MODEL);
    m_hardware_flags.force_apic_physical_destination_mode = (sdt->flags & (u32)FADTFlags::FeatureFlags::FORCE_APIC_PHYSICAL_DESTINATION_MODE);
    m_hardware_flags.hardware_reduced_acpi = (sdt->flags & (u32)FADTFlags::FeatureFlags::HW_REDUCED_ACPI);
    m_hardware_flags.headless = (sdt->flags & (u32)FADTFlags::FeatureFlags::HEADLESS);
    m_hardware_flags.low_power_s0_idle_capable = (sdt->flags & (u32)FADTFlags::FeatureFlags::LOW_POWER_S0_IDLE_CAPABLE);
    m_hardware_flags.multiprocessor_c2 = (sdt->flags & (u32)FADTFlags::FeatureFlags::P_LVL2_UP);
    m_hardware_flags.pci_express_wake = (sdt->flags & (u32)FADTFlags::FeatureFlags::PCI_EXP_WAK);
    m_hardware_flags.power_button = (sdt->flags & (u32)FADTFlags::FeatureFlags::PWR_BUTTON);
    m_hardware_flags.processor_c1 = (sdt->flags & (u32)FADTFlags::FeatureFlags::PROC_C1);
    m_hardware_flags.remote_power_on_capable = (sdt->flags & (u32)FADTFlags::FeatureFlags::REMOTE_POWER_ON_CAPABLE);
    m_hardware_flags.reset_register_supported = (sdt->flags & (u32)FADTFlags::FeatureFlags::RESET_REG_SUPPORTED);
    m_hardware_flags.rtc_s4 = (sdt->flags & (u32)FADTFlags::FeatureFlags::RTC_s4);
    m_hardware_flags.s4_rtc_status_valid = (sdt->flags & (u32)FADTFlags::FeatureFlags::S4_RTC_STS_VALID);
    m_hardware_flags.sealed_case = (sdt->flags & (u32)FADTFlags::FeatureFlags::SEALED_CASE);
    m_hardware_flags.sleep_button = (sdt->flags & (u32)FADTFlags::FeatureFlags::SLP_BUTTON);
    m_hardware_flags.timer_value_extension = (sdt->flags & (u32)FADTFlags::FeatureFlags::TMR_VAL_EXT);
    m_hardware_flags.use_platform_clock = (sdt->flags & (u32)FADTFlags::FeatureFlags::USE_PLATFORM_CLOCK);
    m_hardware_flags.wbinvd = (sdt->flags & (u32)FADTFlags::FeatureFlags::WBINVD);
    m_hardware_flags.wbinvd_flush = (sdt->flags & (u32)FADTFlags::FeatureFlags::WBINVD_FLUSH);
}

UNMAP_AFTER_INIT void Parser::process_dsdt()
{
    auto sdt = Memory::map_typed<Structures::FADT>(m_fadt).release_value_but_fixme_should_propagate_errors();

    // Add DSDT-pointer to expose the full table in /sys/firmware/acpi/
    m_sdt_pointers.append(PhysicalAddress(sdt->dsdt_ptr));

    auto dsdt_or_error = Memory::map_typed<Structures::DSDT>(PhysicalAddress(sdt->dsdt_ptr));
    if (dsdt_or_error.is_error()) {
        dmesgln("ACPI: DSDT is unmappable");
        return;
    }
    dmesgln("ACPI: Using DSDT @ {} with {} bytes", PhysicalAddress(sdt->dsdt_ptr), dsdt_or_error.value()->h.length);
}

bool Parser::can_reboot()
{
    auto fadt_or_error = Memory::map_typed<Structures::FADT>(m_fadt);
    if (fadt_or_error.is_error())
        return false;
    if (fadt_or_error.value()->h.revision < 2)
        return false;
    return m_hardware_flags.reset_register_supported;
}

void Parser::access_generic_address(Structures::GenericAddressStructure const& structure, u32 value)
{
    switch ((GenericAddressStructure::AddressSpace)structure.address_space) {
    case GenericAddressStructure::AddressSpace::SystemIO: {
#if ARCH(X86_64)
        IOAddress address(structure.address);
        dbgln("ACPI: Sending value {:x} to {}", value, address);
        switch (structure.access_size) {
        case (u8)GenericAddressStructure::AccessSize::QWord: {
            dbgln("Trying to send QWord to IO port");
            VERIFY_NOT_REACHED();
            break;
        }
        case (u8)GenericAddressStructure::AccessSize::Undefined: {
            dbgln("ACPI Warning: Unknown access size {}", structure.access_size);
            VERIFY(structure.bit_width != (u8)GenericAddressStructure::BitWidth::QWord);
            VERIFY(structure.bit_width != (u8)GenericAddressStructure::BitWidth::Undefined);
            dbgln("ACPI: Bit Width - {} bits", structure.bit_width);
            address.out(value, structure.bit_width);
            break;
        }
        default:
            address.out(value, (8 << (structure.access_size - 1)));
            break;
        }
#endif
        return;
    }
    case GenericAddressStructure::AddressSpace::SystemMemory: {
        dbgln("ACPI: Sending value {:x} to {}", value, PhysicalAddress(structure.address));
        switch ((GenericAddressStructure::AccessSize)structure.access_size) {
        case GenericAddressStructure::AccessSize::Byte:
            *Memory::map_typed<u8>(PhysicalAddress(structure.address)).release_value_but_fixme_should_propagate_errors() = value;
            break;
        case GenericAddressStructure::AccessSize::Word:
            *Memory::map_typed<u16>(PhysicalAddress(structure.address)).release_value_but_fixme_should_propagate_errors() = value;
            break;
        case GenericAddressStructure::AccessSize::DWord:
            *Memory::map_typed<u32>(PhysicalAddress(structure.address)).release_value_but_fixme_should_propagate_errors() = value;
            break;
        case GenericAddressStructure::AccessSize::QWord: {
            *Memory::map_typed<u64>(PhysicalAddress(structure.address)).release_value_but_fixme_should_propagate_errors() = value;
            break;
        }
        default:
            VERIFY_NOT_REACHED();
        }
        return;
    }
    case GenericAddressStructure::AddressSpace::PCIConfigurationSpace: {
        // According to https://uefi.org/specs/ACPI/6.4/05_ACPI_Software_Programming_Model/ACPI_Software_Programming_Model.html#address-space-format,
        // PCI addresses must be confined to devices on Segment group 0, bus 0.
        auto pci_address = PCI::Address(0, 0, ((structure.address >> 24) & 0xFF), ((structure.address >> 16) & 0xFF));
        dbgln("ACPI: Sending value {:x} to {}", value, pci_address);
        u32 offset_in_pci_address = structure.address & 0xFFFF;
        if (structure.access_size == (u8)GenericAddressStructure::AccessSize::QWord) {
            dbgln("Trying to send QWord to PCI configuration space");
            VERIFY_NOT_REACHED();
        }
        VERIFY(structure.access_size != (u8)GenericAddressStructure::AccessSize::Undefined);
        auto& pci_device_identifier = PCI::get_device_identifier(pci_address);
        PCI::raw_access(pci_device_identifier, offset_in_pci_address, (1 << (structure.access_size - 1)), value);
        return;
    }
    default:
        VERIFY_NOT_REACHED();
    }
    VERIFY_NOT_REACHED();
}
|
2020-04-09 16:15:02 +00:00
|
|
|
|
2022-01-21 09:55:45 +00:00
|
|
|
bool Parser::validate_reset_register(Memory::TypedMapping<Structures::FADT> const& fadt)
|
2020-03-22 00:12:45 +00:00
|
|
|
{
|
2021-05-31 17:25:27 +00:00
|
|
|
// According to https://uefi.org/specs/ACPI/6.4/04_ACPI_Hardware_Specification/ACPI_Hardware_Specification.html#reset-register,
|
|
|
|
// the reset register can only be located in I/O bus, PCI bus or memory-mapped.
|
2020-04-09 16:15:02 +00:00
|
|
|
return (fadt->reset_reg.address_space == (u8)GenericAddressStructure::AddressSpace::PCIConfigurationSpace
    || fadt->reset_reg.address_space == (u8)GenericAddressStructure::AddressSpace::SystemMemory
    || fadt->reset_reg.address_space == (u8)GenericAddressStructure::AddressSpace::SystemIO);
|
|
|
|
}
|
|
|
|
|
|
|
|
void Parser::try_acpi_reboot()
|
|
|
|
{
|
|
|
|
InterruptDisabler disabler;
|
|
|
|
if (!can_reboot()) {
|
2021-03-12 10:27:59 +00:00
|
|
|
dmesgln("ACPI: Reboot not supported!");
|
2020-04-09 16:15:02 +00:00
|
|
|
return;
|
|
|
|
}
|
2021-03-12 10:27:59 +00:00
|
|
|
dbgln_if(ACPI_DEBUG, "ACPI: Rebooting, probing FADT ({})", m_fadt);
|
2020-04-09 16:15:02 +00:00
|
|
|
|
2022-01-21 09:55:45 +00:00
|
|
|
auto fadt_or_error = Memory::map_typed<Structures::FADT>(m_fadt);
|
|
|
|
if (fadt_or_error.is_error()) {
|
|
|
|
dmesgln("ACPI: Failed probing FADT {}", fadt_or_error.error());
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
auto fadt = fadt_or_error.release_value();
|
|
|
|
VERIFY(validate_reset_register(fadt));
|
2020-04-09 16:15:02 +00:00
|
|
|
access_generic_address(fadt->reset_reg, fadt->reset_value);
|
2020-07-06 13:27:22 +00:00
|
|
|
Processor::halt();
|
2020-04-09 16:15:02 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
void Parser::try_acpi_shutdown()
|
|
|
|
{
|
2021-03-12 10:27:59 +00:00
|
|
|
dmesgln("ACPI: Shutdown is not supported with the current configuration, aborting!");
|
2020-04-09 16:15:02 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
size_t Parser::get_table_size(PhysicalAddress table_header)
|
|
|
|
{
|
|
|
|
InterruptDisabler disabler;
|
2021-05-01 19:10:08 +00:00
|
|
|
dbgln_if(ACPI_DEBUG, "ACPI: Checking SDT Length");
|
2022-01-13 16:20:22 +00:00
|
|
|
return Memory::map_typed<Structures::SDTHeader>(table_header).release_value_but_fixme_should_propagate_errors()->length;
|
2020-04-09 16:15:02 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
u8 Parser::get_table_revision(PhysicalAddress table_header)
|
|
|
|
{
|
|
|
|
InterruptDisabler disabler;
|
2021-05-01 19:10:08 +00:00
|
|
|
dbgln_if(ACPI_DEBUG, "ACPI: Checking SDT Revision");
|
2022-01-13 16:20:22 +00:00
|
|
|
return Memory::map_typed<Structures::SDTHeader>(table_header).release_value_but_fixme_should_propagate_errors()->revision;
|
2020-04-09 16:15:02 +00:00
|
|
|
}
|
|
|
|
|
2021-02-19 20:29:46 +00:00
|
|
|
UNMAP_AFTER_INIT void Parser::initialize_main_system_description_table()
|
2020-04-09 16:15:02 +00:00
|
|
|
{
|
2021-05-01 19:10:08 +00:00
|
|
|
dbgln_if(ACPI_DEBUG, "ACPI: Checking Main SDT Length to choose the correct mapping size");
|
2021-02-23 19:42:32 +00:00
|
|
|
VERIFY(!m_main_system_description_table.is_null());
|
2020-04-09 16:15:02 +00:00
|
|
|
auto length = get_table_size(m_main_system_description_table);
|
|
|
|
auto revision = get_table_revision(m_main_system_description_table);
|
|
|
|
|
2022-01-13 16:20:22 +00:00
|
|
|
auto sdt = Memory::map_typed<Structures::SDTHeader>(m_main_system_description_table, length).release_value_but_fixme_should_propagate_errors();
|
2020-04-09 16:15:02 +00:00
|
|
|
|
2021-03-12 10:27:59 +00:00
|
|
|
dmesgln("ACPI: Main Description Table valid? {}", validate_table(*sdt, length));
|
2020-04-09 16:15:02 +00:00
|
|
|
|
|
|
|
if (m_xsdt_supported) {
|
2022-04-01 17:58:27 +00:00
|
|
|
auto& xsdt = (Structures::XSDT const&)*sdt;
|
2021-03-12 10:27:59 +00:00
|
|
|
dmesgln("ACPI: Using XSDT, enumerating tables @ {}", m_main_system_description_table);
|
|
|
|
dmesgln("ACPI: XSDT revision {}, total length: {}", revision, length);
|
|
|
|
dbgln_if(ACPI_DEBUG, "ACPI: XSDT pointer @ {}", VirtualAddress { &xsdt });
|
2020-04-09 16:15:02 +00:00
|
|
|
for (u32 i = 0; i < ((length - sizeof(Structures::SDTHeader)) / sizeof(u64)); i++) {
|
2021-02-07 12:03:24 +00:00
|
|
|
dbgln_if(ACPI_DEBUG, "ACPI: Found new table [{0}], @ V{1:p} - P{1:p}", i, &xsdt.table_ptrs[i]);
|
2020-04-09 16:15:02 +00:00
|
|
|
m_sdt_pointers.append(PhysicalAddress(xsdt.table_ptrs[i]));
|
|
|
|
}
|
|
|
|
} else {
|
2022-04-01 17:58:27 +00:00
|
|
|
auto& rsdt = (Structures::RSDT const&)*sdt;
|
2021-03-12 10:27:59 +00:00
|
|
|
dmesgln("ACPI: Using RSDT, enumerating tables @ {}", m_main_system_description_table);
|
|
|
|
dmesgln("ACPI: RSDT revision {}, total length: {}", revision, length);
|
2021-02-07 12:03:24 +00:00
|
|
|
dbgln_if(ACPI_DEBUG, "ACPI: RSDT pointer @ V{}", &rsdt);
|
2020-04-09 16:15:02 +00:00
|
|
|
for (u32 i = 0; i < ((length - sizeof(Structures::SDTHeader)) / sizeof(u32)); i++) {
|
2021-02-07 12:03:24 +00:00
|
|
|
dbgln_if(ACPI_DEBUG, "ACPI: Found new table [{0}], @ V{1:p} - P{1:p}", i, &rsdt.table_ptrs[i]);
|
2020-04-09 16:15:02 +00:00
|
|
|
m_sdt_pointers.append(PhysicalAddress(rsdt.table_ptrs[i]));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-02-19 20:29:46 +00:00
|
|
|
UNMAP_AFTER_INIT void Parser::locate_main_system_description_table()
|
2020-04-09 16:15:02 +00:00
|
|
|
{
|
2022-01-13 16:20:22 +00:00
|
|
|
auto rsdp = Memory::map_typed<Structures::RSDPDescriptor20>(m_rsdp).release_value_but_fixme_should_propagate_errors();
|
2020-04-09 16:15:02 +00:00
|
|
|
if (rsdp->base.revision == 0) {
|
|
|
|
m_xsdt_supported = false;
|
|
|
|
} else if (rsdp->base.revision >= 2) {
|
|
|
|
if (rsdp->xsdt_ptr != (u64) nullptr) {
|
|
|
|
m_xsdt_supported = true;
|
|
|
|
} else {
|
|
|
|
m_xsdt_supported = false;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (!m_xsdt_supported) {
|
|
|
|
m_main_system_description_table = PhysicalAddress(rsdp->base.rsdt_ptr);
|
|
|
|
} else {
|
|
|
|
m_main_system_description_table = PhysicalAddress(rsdp->xsdt_ptr);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-09-10 13:45:12 +00:00
|
|
|
UNMAP_AFTER_INIT Parser::Parser(PhysicalAddress rsdp, PhysicalAddress fadt, u8 irq_number)
|
|
|
|
: IRQHandler(irq_number)
|
|
|
|
, m_rsdp(rsdp)
|
|
|
|
, m_fadt(fadt)
|
2020-04-09 16:15:02 +00:00
|
|
|
{
|
2021-03-12 10:27:59 +00:00
|
|
|
dmesgln("ACPI: Using RSDP @ {}", rsdp);
|
2020-04-09 16:15:02 +00:00
|
|
|
locate_static_data();
|
|
|
|
}
|
|
|
|
|
2022-04-01 17:58:27 +00:00
|
|
|
static bool validate_table(Structures::SDTHeader const& v_header, size_t length)
|
2020-04-09 16:15:02 +00:00
|
|
|
{
|
|
|
|
u8 checksum = 0;
|
2022-04-01 17:58:27 +00:00
|
|
|
auto* sdt = (u8 const*)&v_header;
|
2020-04-09 16:15:02 +00:00
|
|
|
for (size_t i = 0; i < length; i++)
|
|
|
|
checksum += sdt[i];
|
|
|
|
return checksum == 0;
|
|
|
|
}
|
|
|
|
|
2020-02-16 00:27:42 +00:00
|
|
|
}
|