# Test scenarios

This directory contains both the scenarios used by the C++ unit tests and the scenarios that are themselves WML unit tests.

## C++ unit tests

For the C++ unit tests, it is recommended to reuse the same scenario file as much as possible and just inject WML into it.

Injection can be done by adding a `config` object containing event code and then manually registering it with `game_events`.
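As an illustration, the WML injected this way is typically a small event block. The event name and contents below are only a hypothetical sketch, not taken from an actual test:

```
# Hypothetical WML fragment that a C++ test could build as a config
# object and register with game_events:
[event]
    name=start
    [message]
        speaker=narrator
        message=_"injected event fired"
    [/message]
[/event]
```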

## Manual tests

The `manual_tests` subdirectory contains scenarios that expect to be run interactively, either by binding a hotkey for the main menu's "Choose Test Scenario" option, or with the command line argument `-t <testname>`.

Many of these are closer to workbenches than tests, allowing developers to do some action that isn't automated, and then to find out whether the result matched the expectation.

## Automated WML unit tests

WML unit tests are self-contained scenario files to test a specific area of WML.

The test result is a status code from the `unit_test_result` enum found in `game_launcher.hpp`, or in rare cases tests expect to be timed out by the test runner. They can be run individually with Wesnoth's `-u <testname>` command line argument, but are usually run by the `run_wml_tests` script based on the list in `wml_test_schedule`.

They are unlikely to return the same status if run with `-t <testname>`. Running them with `-t` can still be helpful for debugging.
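For example, a typical workflow might look like the following. The test name is hypothetical, and the commands assume the binaries are on your path or in the current directory:

```
# Run a single WML unit test headlessly and check the exit status:
./wesnoth -u some_wml_test

# Run the whole schedule via the wrapper script:
./run_wml_tests

# Re-run the same scenario interactively for debugging
# (the status code will likely differ from the -u run):
./wesnoth -t some_wml_test
```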

### Guidelines for writing new automated tests

Tests are generally implemented with the `GENERIC_UNIT_TEST` macro, which provides two leaders called Alice and Bob on separate keeps. If your test needs them to be adjacent to each other, consider using `COMMON_KEEP_A_B_UNIT_TEST` instead, which places their starting locations next to each other so the test doesn't need to move them.
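A minimal test might be sketched as follows. The test name and event body here are purely illustrative; check the existing tests and the macro definitions in this directory for the exact macro arguments and the result macros that are available:

```
# Hypothetical minimal WML unit test: GENERIC_UNIT_TEST sets up the
# scenario with Alice and Bob, and the body is ordinary event WML
# that reports the test result.
{GENERIC_UNIT_TEST "hypothetical_pass_test" (
    [event]
        name=start
        {SUCCEED}
    [/event]
)}
```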

Most tests expect the result `PASS`, and new tests should generally be written to result in a `PASS`. The testing mechanism supports other expectations too; however, the optimisation that runs a batch of tests in a single instance of Wesnoth currently only supports batching for tests that return `PASS`.

Tests that shouldn't `PASS` should have a name that makes that expectation obvious. However, the existing tests don't conform to this goal yet.

Names containing `_fail_` or ending in `_fail` are reserved for tests that should not `PASS`. However, they may expect a status that is neither `PASS` nor `FAIL`.

Names containing `break` or `error` might still be for tests that are expected to `PASS`. Some of these test loops, and others test error handling that is expected to handle the error.