It causes more harm than good in many singleplayer scenarios, for example when there is an allied AI keep close to the player's AI.
The candidate action is still part of the Experimental AI, and its macro remains available, so it can easily be included in a scenario if desired.
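Neither the CA nor its macro is named here; as a sketch, with {EXPAI_CA_EXAMPLE} standing in as a placeholder for the actual macro, pulling the CA back into a scenario side could look like this:

    [side]
        side=2                    # illustrative side number
        controller=ai
        [ai]
            # {EXPAI_CA_EXAMPLE} is a placeholder; substitute the macro
            # that belongs to this candidate action
            {EXPAI_CA_EXAMPLE}
        [/ai]
    [/side]

Where exactly the macro goes (top level of the [ai] tag or inside a [stage]) depends on what it expands to.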
The algorithm used in this CA is too simple to work reliably in a general setting: it tends to send whole groups of units toward small numbers of villages, or even toward individual ones. In its current version, it should not be used at all, not even in the Experimental AI. The recommended way to emphasize village hunting is to set the village_value aspect to a larger-than-default value and let the move_to_targets CA take care of it (see the sketch below).
We are, however, leaving the CA code and the macros in place for potential future work.
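A minimal sketch of that recommendation, assuming the aspect is set in a side's [ai] tag (2.0 is only an illustrative value):

    [side]
        side=2                    # illustrative
        controller=ai
        [ai]
            # A larger-than-default village_value makes villages more
            # attractive targets for the move_to_targets CA
            village_value=2.0
        [/ai]
    [/side]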
The old villages CA is quite a bit better at distributing multiple units across multiple villages. The advantage of the new grab_villages CA is its variable score, which lets it grab villages sometimes before and sometimes after attacks; that does not outweigh its shortcomings, though.
So for now, the default AI will continue to use the previous CA, while the Experimental AI will use the new one. Thus, the two AIs are no longer quite identical (but they remain very similar).
I also added a TODO comment that the grab_villages CA might be reinstated if it is improved.
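At the config level, the split looks roughly as follows; the C++ CA name matches mainline, while the Lua CA's location and all score values are assumptions:

    # Default AI: keep the previous (C++) villages CA
    [candidate_action]
        engine=cpp
        name=ai_default_rca::get_villages_phase
        max_score=110000                         # illustrative
    [/candidate_action]

    # Experimental AI: use the new Lua grab_villages CA instead
    [candidate_action]
        engine=lua
        name=grab_villages
        max_score=110000                         # illustrative
        location="ai/lua/ca_grab_villages.lua"   # assumed path
    [/candidate_action]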
The return scores were changed in commit 4999b20bd1, but the max_score values in the configurations have not been updated yet. Since the relative ranking was not changed, this should have no effect on gameplay.
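For reference, max_score is the upper bound a CA's evaluation may return, which lets the RCA loop skip evaluating a CA once a higher-scoring one has been found; the stale values live in tags like this (name, path, and value are hypothetical illustrations):

    [candidate_action]
        engine=lua
        name=example_ca                    # hypothetical
        # Stale cap: no longer matches the scores the evaluation returns
        # since commit 4999b20bd1, but the CAs' relative order is intact
        max_score=300000
        location="ai/lua/ca_example.lua"   # hypothetical
    [/candidate_action]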
This also sets the parameter's default value to zero, so that the default behavior is consistent with versions from before the parameter was introduced.
It also provides a macro so that the ExpAI can be used with custom parameters in a scenario config, and adds high_level_fraction as an optional parameter to the Recruit Rushers Micro AI.
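A minimal usage sketch for the Micro AI parameter; the side number and value are illustrative, and the parameter's exact semantics are assumed from its name:

    [micro_ai]
        side=2
        ai_type=recruit_rushers
        action=add
        # Optional: fraction of recruits that should be higher-level units
        # (assumed meaning); omitting it (default 0) keeps the old behavior
        high_level_fraction=0.25
    [/micro_ai]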
This means:
1. Adding the new CA to AI configs
2. Removing it whenever the combat CA is removed
3. Preventing conflicts for AIs that previously used overlapping scores (see the sketch after this list)
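As a sketch of points 1-3, assuming the new CA is registered like other RCA candidate actions (name, path, and score are hypothetical placeholders):

    [stage]
        id=main_loop
        name=ai_default_rca::candidate_action_evaluation_loop
        [candidate_action]
            engine=lua
            name=new_ca                      # hypothetical name
            # Score band chosen so it collides with no score already used
            # by AIs that previously occupied overlapping values
            max_score=99990                  # placeholder
            location="ai/lua/ca_new_ca.lua"  # hypothetical path
        [/candidate_action]
        {AI_CA_COMBAT}
    [/stage]

Configs that drop {AI_CA_COMBAT} would drop the new tag along with it (point 2).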
This changes the following:
- Fixes the Experimental AI without changing any of its code except for that in the [engine] tag
- Returns a dummy self from the dummy Lua engine, so that external CAs are more easily switched to using [params]
- Changes the order in which parameters are passed to AI component code; the order is now state/self, params, data (sketched after this list)
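Under the new convention, an external Lua CA thus receives the engine's self first, then the contents of [params], then its persistent data. A sketch of the WML side, with tag placement following my reading of this note (names and values are hypothetical):

    [ai]
        [engine]
            name=lua
            # The dummy engine now returns a (dummy) self table, giving
            # external CAs a well-defined first argument
            code=<< return {} >>
            [params]
                some_key=some_value    # arrives as the CAs' second argument
            [/params]
        [/engine]
        [stage]
            id=main_loop
            name=ai_default_rca::candidate_action_evaluation_loop
            [candidate_action]
                engine=lua
                name=external_ca_example                    # hypothetical
                location="ai/lua/ca_external_example.lua"   # hypothetical
            [/candidate_action]
        [/stage]
    [/ai]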
This may be in vain, though, as I read on the wiki that attack_depth does not work with the RCA AI and is deprecated. While this bug might break attack_depth on one difficulty level, it should not have broken it entirely, so it is probably not the reason attack_depth does not seem to be working.
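For reference, attack_depth is set like any other aspect, and the per-difficulty dependence comes from the usual preprocessor symbols; a sketch with illustrative values:

    [ai]
    #ifdef EASY
        attack_depth=2
    #endif
    #ifdef HARD
        attack_depth=5
    #endif
    [/ai]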
- Alter macro {AI_CA_RECRUITMENT} to point to the new CA (usage site sketched after this list)
- Create a new AI cfg file for a stronger AI
- Create a new AI cfg (dev) file for choosing the old recruitment CA in debug mode
- Alter macro {AI_NO_RECRUITMENT}
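For context, {AI_CA_RECRUITMENT} is spliced into the RCA main loop next to the other candidate-action macros, so repointing it swaps the recruitment CA for every config that uses it. A sketch of the usage site (the surrounding macro list is abbreviated and from memory):

    [ai]
        [stage]
            id=main_loop
            name=ai_default_rca::candidate_action_evaluation_loop
            {AI_CA_GOTO}
            {AI_CA_RECRUITMENT}    # now expands to the new recruitment CA
            {AI_CA_COMBAT}
            {AI_CA_VILLAGES}
            # ... remaining CA macros
        [/stage]
    [/ai]

Presumably {AI_NO_RECRUITMENT} assembles the same stage without a recruitment CA, hence the corresponding update.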