wesnoth/data/test/scenarios
gfgtdf 851c909cd3
Fix #8460 [effect] apply_to=variation (#8475)

Previously the code could apply the variation effect last,
so that WML like
```
[effect]
  apply_to=variation
  ..
[/effect]
[effect]
  apply_to=hitpoints
  heal_full=yes
[/effect]
```
would not set the unit's hitpoints to the new variation's
hitpoints, because the variation effect was applied after
the healing effect.

In 1.16 this worked because healing was applied a little
too often, but that also led to bugs like #8342.

* Create modification_effect_type_variation.cfg

* Update wml_test_schedule
2024-03-07 01:25:59 +01:00
| Name | Last commit | Date |
| --- | --- | --- |
| behavioral_tests | Fix indentation in unit tests using sync.evaluate_* | 2023-01-20 02:28:54 +01:00 |
| cve_tests | Reindent unit tests and unit test macros | 2023-01-20 02:28:54 +01:00 |
| lua_tests | allow units.remove_modifications to remove multiple types | 2023-12-13 17:01:28 +01:00 |
| macro_tests | Add priority and filter to overwrite specials (#7746) | 2023-10-08 10:09:31 -05:00 |
| manual_tests | Fix deprecation warning when setting [endlevel]end_credits= | 2024-02-02 16:04:27 +01:00 |
| test_tests | Tests cleanup. | 2022-12-21 12:30:04 -06:00 |
| wml_tests | Fix #8460 [effect] apply_to=variation (#8475) | 2024-03-07 01:25:59 +01:00 |
| README.md | Convert readme in data/test/scenarios to Markdown, and add docs | 2023-05-07 11:10:06 +02:00 |

Test scenarios

This directory contains both the scenarios used by the C++ unit tests and those that are WML unit tests.

C++ unit tests

For the C++ unit tests, it is recommended to reuse the same scenario file as much as possible and just inject WML into it.

Injection can be done by adding a config object containing event code and then manually registering it with game_events.

Manual tests

The manual_tests subdirectory contains scenarios that expect to be run interactively, either by binding a hotkey for the main menu's "Choose Test Scenario" option, or with the command line argument -t <testname>.

Many of these are closer to workbenches than tests, allowing developers to do some action that isn't automated, and then to find out whether the result matched the expectation.

Automated WML unit tests

WML unit tests are self-contained scenario files to test a specific area of WML.

The test result is a status code from the unit_test_result enum found in game_launcher.hpp; in rare cases a test instead expects to be timed out by the test runner. Tests can be run individually with Wesnoth's -u <testname> command line argument, but are usually run by the run_wml_tests script based on the list in wml_test_schedule.

They are unlikely to return the same status if run with -t <testname>. Running them with -t can still be helpful for debugging.

Guidelines for writing new automated tests

Tests are generally implemented with the GENERIC_UNIT_TEST macro, which sets up two leaders called Alice and Bob on separate keeps. If your test needs them to be adjacent, consider using COMMON_KEEP_A_B_UNIT_TEST instead, which puts their starting locations next to each other so the test doesn't have to move them.
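
For orientation, a minimal test tends to look something like the sketch below. This is only a sketch, not a definitive template: the exact parameters of GENERIC_UNIT_TEST, the RETURN helper macro, and the lowercase unit id alice are assumptions based on the pattern of existing tests, so copy the conventions from a neighbouring file in wml_tests.

```
# Sketch of a minimal WML unit test. The macro signatures and the unit id
# are assumptions; follow an existing test for the current conventions.
{GENERIC_UNIT_TEST "example_alice_exists" (
    [event]
        name=start
        # RETURN is assumed to end the scenario with PASS if the condition
        # holds and FAIL otherwise.
        {RETURN (
            [have_unit]
                id=alice
            [/have_unit]
        )}
    [/event]
)}
```

A new test file does nothing for the automated runs on its own; it also has to be listed in wml_test_schedule so that run_wml_tests picks it up.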

Most will expect the result PASS, and new tests should generally be written to result in a PASS. The testing mechanism supports other expectations too; however, the optimisation that runs a batch of tests in a single instance of Wesnoth currently only supports batching for tests that return PASS.

Tests that shouldn't PASS should have a name that makes that expectation obvious. However, the existing tests don't conform to this goal yet.

Names containing _fail_ or ending _fail are reserved for tests that should not PASS. However, they may expect a status that is neither PASS nor FAIL.

Names containing break or error might be for tests expected to PASS. Some of these are testing loops, or testing error-handling that is expected to handle the error.