scouting and exploration in partial observability settings. A similar, but not as extreme, situation can be
seen in the 32×32 maps (map3 and map8), where
there were also a large number of ties.
Conclusions
This article has presented the results of the first μRTS
AI competition. The competition was organized
around three tracks, each of which used four open
and four hidden maps. Winners of the competition
are shown in table 1. Observing the results of all
three tracks, we can see that hard-coded bots
(POWorkerRush and POLightRush) perform very well
in standard maps, where their hard-coded rules
apply, but they perform worse on nonstandard maps.
These results are consistent with results from the
StarCraft AI competitions (Churchill et al. 2016),
where all maps are standard, and thus hard-coded
bots still dominate. We have also seen that game tree
search bots perform better on nonstandard maps, but
struggle with larger maps. Additionally, we have seen
that the concept used by StrategyTactics of integrating
long-term script-based search with low-level
game tree search can achieve very good performance,
being the only approach that outperformed the hard-
coded bots consistently. Finally, we have seen that
the one machine learning–based bot (SCV) performs
very well in scenarios similar to its training data, but
struggles when the maps look very different. One
strength of the machine learning approach is its low
CPU requirements, which leave ample free CPU time
for potential integration with other techniques.
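The two-level idea behind StrategyTactics described above can be sketched as follows. This is a toy Python illustration only: the function names, the fixed scoring table, and the state representation are all assumptions made for the example, not the μRTS API or the actual StrategyTactics implementation.

```python
# Toy sketch of a two-level RTS bot: a high-level search selects a
# whole-game scripted strategy, while a low-level search (stubbed out
# here) refines the actions of units currently in combat.
# All names and values are illustrative assumptions.

SCRIPTS = ["worker_rush", "light_rush", "ranged_rush"]

def evaluate_script(script, state):
    """Placeholder evaluation: in a real bot this would score a script
    by running game simulations; here a fixed table keeps the example
    deterministic."""
    table = {"worker_rush": 0.6, "light_rush": 0.8, "ranged_rush": 0.5}
    return table[script]

def choose_strategy(state):
    """High level: portfolio-style search over whole-game scripts."""
    return max(SCRIPTS, key=lambda s: evaluate_script(s, state))

def tactical_search(state, units_in_combat):
    """Low level: stand-in for a game-tree search over combat actions."""
    return {u: "attack_nearest" for u in units_in_combat}

def next_actions(state):
    """Combine both levels: combat units get tactically searched
    actions; idle units follow the chosen high-level script."""
    strategy = choose_strategy(state)
    actions = tactical_search(state, state.get("in_combat", []))
    for u in state.get("idle", []):
        actions[u] = strategy
    return actions

state = {"in_combat": ["u1"], "idle": ["u2", "u3"]}
print(next_actions(state))
# → {'u1': 'attack_nearest', 'u2': 'light_rush', 'u3': 'light_rush'}
```

The point of the split is that the expensive game-tree search only has to reason about the small set of units in combat, while long-term decisions are delegated to cheap, precomputed scripts.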
The strong performance of the hard-coded bots,
which a human player would have little difficulty
beating, indicates that there is significant room for
improvement, and plenty of work yet to be done, to
achieve human-level AI bots in RTS games.
From the results, we can also see that the largest
factors in determining the performance of a bot are
(1) the size of the map (scalability), (2) the type of
map (whether standard or not, and thus whether
hard-coded strategies will work or not), and (3) partial
observability. We observed that the small amount
of nondeterminism in RTS games did not have as
much effect as these three factors, which might pro-
vide hints as to which aspects to focus on in future
work.
A final thought, for future competitions, is that
although the bots presented in this edition represent
a wide spectrum of AI approaches, many techniques
that have performed well in related domains (such as
reinforcement learning or evolutionary algorithms)
were absent. The results of this first edition
of the competition, however, provide a good baseline
for comparison, and hint at the different strengths
and weaknesses of the different approaches.
Notes
1. github.com/santiontanon/microrts.
2. github.com/nbarriga/microRTSbot.
3. github.com/rubensolv/SCV.
4. Incorporated into the main branch of μRTS at
github.com/santiontanon/microrts.
5. SCV was referred to as PVAIML ED in the competition
website.
References
Barriga, N. A.; Stanescu, M.; and Buro, M. 2017a. Combining
Strategic Learning and Tactical Search in Real-Time
Strategy Games. In Proceedings of the Thirteenth Annual AAAI
Conference on Artificial Intelligence and Interactive Digital
Entertainment (AIIDE). Palo Alto, CA: AAAI Press.
Barriga, N. A.; Stanescu, M.; and Buro, M. 2017b. Game Tree
Search Based on Non-Deterministic Action Scripts in Real-Time
Strategy Games. IEEE Transactions on Computational
Intelligence and AI in Games PP(99).
doi.org/10.1109/TCIAIG.2017.2717902
Buro, M. 2003. Real-Time Strategy Games: A New AI
Research Challenge. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, 1534–1535.
San Francisco: Morgan Kaufmann.
Churchill, D., and Buro, M. 2013. Portfolio Greedy Search
and Simulation for Large-Scale Combat in StarCraft. In
Proceedings of the 2013 IEEE Conference on Computational
Intelligence in Games (CIG), 1–8. Piscataway, NJ: Institute of
Electrical and Electronics Engineers.
doi.org/10.1109/CIG.2013.6633643
Churchill, D.; Preuss, M.; Richoux, F.; Synnaeve, G.; Uriarte,
A.; Ontañón, S.; and Certicky, M. 2016. StarCraft Bots and
Competitions. Springer Encyclopedia of Computer Graphics and
Games. Berlin: Springer.
Corlett, R. A., and Todd, S. J. 1986. A Monte-Carlo Approach

Table 1. Competition Winners.

Track                  Open Maps        Hidden Maps    All Maps
Standard               POLightRush      NaïveMCTS      StrategyTactics
Nondeterminism         StrategyTactics  NaïveMCTS      StrategyTactics
Partially Observable   POLightRush      BS3NaïveMCTS   POLightRush