and plan recognition, as separate suites of tests. Even without these tests in place, new systems should at least be evaluated against representative related systems in order to show that they offer a nontrivial improvement.
Results published about complete systems are similarly difficult to compare with one another because of their varied methods of evaluation. Some of the only comparable results come from systems demonstrated against the inbuilt StarCraft AI, despite the fact that the inbuilt AI is a simple scripted strategy that average human players can easily defeat (Weber, Mateas, and Jhala 2010). Complete systems are tested more effectively in StarCraft AI competitions, but these are run infrequently, making quick evaluation difficult. An alternative method of evaluation is to test the bots automatically against other bots in a ladder tournament, such as the StarCraft Brood War Ladder for BWAPI Bots.26 In order to create a consistent
benchmark of bot strength, a suite of tests could be
formed from the top three bots from each of the
AIIDE StarCraft competitions on a selected set of
tournament maps. This would provide enough variety to give a general indication of bot strength, and it would allow results to be compared between papers and across different years. An alternative to testing bots against other bots is testing them in matches against humans, as Weber, Mateas, and Jhala (2010) did when they tested their bot on the ICCup.
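The benchmark suite proposed above could be automated with a small round-robin harness that plays a candidate bot against each reference bot on each tournament map and reports win rates. The sketch below is illustrative only: `play_match` is a hypothetical stand-in for launching an actual game through BWAPI and observing the winner, and the bot and map names are examples, not a prescribed benchmark.

```python
import itertools
import random

def play_match(bot_a, bot_b, game_map, seed):
    """Hypothetical stand-in for running one StarCraft game via BWAPI
    and returning the winner's name. Here a seeded coin flip is used
    so the harness can run without the game itself."""
    rng = random.Random(f"{bot_a}|{bot_b}|{game_map}|{seed}")
    return bot_a if rng.random() < 0.5 else bot_b

def benchmark(candidate, reference_bots, maps, games_per_pairing=10):
    """Play the candidate against every reference bot on every map,
    returning its overall win rate and a per-opponent win count."""
    wins, total = 0, 0
    per_bot = {}
    for ref, game_map in itertools.product(reference_bots, maps):
        for seed in range(games_per_pairing):
            total += 1
            if play_match(candidate, ref, game_map, seed) == candidate:
                wins += 1
                per_bot[ref] = per_bot.get(ref, 0) + 1
    return wins / total, per_bot

# Example run with illustrative competition bots and maps.
rate, per_bot = benchmark("MyBot",
                          ["Skynet", "UAlbertaBot", "AIUR"],
                          ["Python", "Destination"],
                          games_per_pairing=4)
```

Reporting the per-opponent breakdown alongside the overall rate matters: a bot that dominates one opponent but loses to the others can share an overall win rate with a bot of uniform moderate strength.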
Finally, it may be useful to have a standard evaluation method for goals other than finding the AI that is best at winning the game. For example, the game industry would be more interested in determining the AI that is most fun to play against, or the most humanlike. A possible evaluation for these alternative objectives was discussed earlier.
This article has reviewed the literature on artificial intelligence for real-time strategy games, focusing on StarCraft. It found significant research focus on tactical decision making, strategic decision making, plan recognition, and strategy learning. Three main areas were identified where future research could have a large positive impact. First, creating RTS AI that is more humanlike would be an interesting challenge and may help to bridge the gap between academe and industry. The other two research areas discussed were noted to be lacking in research contributions, despite being highly appropriate for real-time strategy game research: multiscale AI and cooperation. Finally, the article finished with a call for increased rigor and, ideally, standardization of evaluation methods, so that different techniques can be compared on even ground. Overall, the RTS AI field is small but very active, with the StarCraft agents showing continual improvement each year, as well as gradually becoming more based upon machine learning, learning from demonstration, and reasoning, instead of using scripted or fixed behaviors.
1. Blizzard Entertainment: StarCraft: blizzard.com/games/
2. Wargus: wargus.sourceforge.net.
3. Open RTS: skatgame.net/mburo/orts.
4. Brood War API: code.google.com/p/bwapi.
5. AIIDE StarCraft AI Competition: www.starcraftaicompetition.com.
6. CIG StarCraft AI Competition: ls11-www.cs.uni-dort-
7. Mad Doc Software. Website no longer available.
8. SparCraft: code.google.com/p/sparcraft/.
9. Blizzard Entertainment: Warcraft III: blizzard.com/
10. TimeGate Studios: Kohan II Kings of War: www.
11. Spring RTS: springrts.com.
12. International Cyber Cup: www.iccup.com.
13. See A. J. Champandard, This Year in Game AI: Analysis, Trends from 2010 and Predictions for 2011.
14. Blizzard Entertainment: StarCraft II: blizzard.com/
15. Evolution Chamber: code.google.com/p/evolution-chamber/.
16. See A. Turner, 2012, Soar-SC: A Platform for AI Research in StarCraft: Brood War. github.com/bluechill/Soar-SC/tree/master/Soar-SC-Papers.
17. Introversion Software: DEFCON: www.introversion
18. RoboCup: www.robocup.org.
19. Uber Entertainment: Planetary Annihilation: www.
20. Personal communication with M. Robbins, 2013. Robbins is a software engineer at Uber Entertainment, formerly game-play engineer at Gas Powered Games.
21. Also see L. Dicken’s 2011 blog, altdevblogaday.com
22. Personal communication with B. Schwab, 2013. Schwab
is a senior AI/game-play engineer at Blizzard Entertainment.
23. BotPrize: botprize.org.
24. See L. Dicken’s 2011 blog, A Turing Test for Bots. altde-
25. BWAPI Standard Add-on Library: code.google.
26. StarCraft Brood War Ladder for BWAPI Bots: bots-
Aamodt, A., and Plaza, E. 1994. Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches. AI Communications 7(1): 39–59.
Aha, D.; Molineaux, M.; and Ponsen, M. 2005. Learning to
Win: Case-Based Plan Selection in a Real-Time Strategy
Game. In Case-Based Reasoning: Research and Development,