Ponsen, M.; Muñoz-Avila, H.; Spronck, P.; and Aha, D. 2005.
Automatically Acquiring Domain Knowledge for Adaptive
Game AI Using Evolutionary Learning. In Proceedings, The
Twentieth National Conference on Artificial Intelligence and the
Seventeenth Innovative Applications of Artificial Intelligence
Conference, 1535–1540. Palo Alto, CA: AAAI Press.
Ponsen, M.; Muñoz-Avila, H.; Spronck, P.; and Aha, D. 2006.
Automatically Generating Game Tactics Through Evolutionary Learning. AI Magazine 27(3): 75–84.
Sailer, F.; Buro, M.; and Lanctot, M. 2007. Adversarial Planning Through Strategy Simulation. In Proceedings of the IEEE
Conference on Computational Intelligence and Games, 80–87.
Piscataway, NJ: Institute of Electrical and Electronics Engineers.
Sánchez-Pelegrín, R.; Gómez-Martín, M.; and Díaz-Agudo,
B. 2005. A CBR Module for a Strategy Videogame. Paper presented at the ICCBR-05 Workshop on Computer Gaming and Simulation Environments at the Sixth International Conference on Case-Based Reasoning (ICCBR), Chicago, IL, 23–26 August.
Schaeffer, J. 2001. A Gamut of Games. AI Magazine 22(3):
29–46.
Scott, B. 2002. The Illusion of Intelligence. In AI Game Programming Wisdom, volume 1, ed. S. Rabin, 16–20. Hingham,
MA: Charles River Media.
Shantia, A.; Begue, E.; and Wiering, M. 2011. Connectionist Reinforcement Learning for Intelligent Unit Micro Management in StarCraft. Paper presented at the 2011 International Joint Conference on Neural Networks (IJCNN), San Jose, CA, USA, 31 July–5 August.
Sharma, M.; Holmes, M.; Santamaria, J.; Irani, A.; Isbell, C.;
and Ram, A. 2007. Transfer Learning in Real-Time Strategy
Games Using Hybrid CBR/RL. In Proceedings of the 20th International Joint Conference on Artificial Intelligence. Palo Alto,
CA: AAAI Press.
Sutton, R. S., and Barto, A. G. 1998. Reinforcement Learning:
An Introduction. Cambridge, MA: The MIT Press.
Synnaeve, G., and Bessière, P. 2011a. A Bayesian Model for
Plan Recognition in RTS Games Applied to StarCraft. In
Proceedings of the Seventh AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 79–84. Palo Alto,
CA: AAAI Press.
Synnaeve, G., and Bessière, P. 2011b. A Bayesian Model for
RTS Units Control Applied to StarCraft. In Proceedings of
the 2011 IEEE Conference on Computational Intelligence
and Games, 190–196. Piscataway, NJ: Institute of Electrical
and Electronics Engineers.
Synnaeve, G., and Bessière, P. 2012. A Dataset for StarCraft
AI and an Example of Armies Clustering. In Artificial Intelligence in Adversarial Real-Time Games: Papers from the
2012 AIIDE Workshop, AAAI Technical Report WS-12-15.
Palo Alto, CA: AAAI Press.
Szczepanski, T., and Aamodt, A. 2009. Case-Based Reasoning for Improved Micromanagement in Real-Time Strategy
Games. Paper presented at the Workshop on Case-Based Reasoning for Computer Games at the 8th International Conference on Case-Based Reasoning, Seattle, WA, USA, 20–23 July.
Tozour, P. 2002. The Evolution of Game AI. In AI Game Programming Wisdom, volume 1, ed. S. Rabin, 3–15. Hingham,
MA: Charles River Media.
Uriarte, A., and Ontañón, S. 2012. Kiting in RTS Games
Using Influence Maps. In Artificial Intelligence in Adversarial Real-Time Games: Papers from the 2012 AIIDE Workshop, AAAI Technical Report WS-12-14. Palo Alto, CA: AAAI Press.
Weber, B., and Mateas, M. 2009. A Data Mining Approach to
Strategy Prediction. In Proceedings of the 2009 IEEE Symposium on Computational Intelligence and Games, 140–147. Piscataway, NJ: Institute of Electrical and Electronics Engineers.
Weber, B.; Mateas, M.; and Jhala, A. 2010. Applying Goal-Driven Autonomy to StarCraft. In Proceedings of the Sixth
AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 101–106. Palo Alto, CA: AAAI Press.
Weber, B.; Mateas, M.; and Jhala, A. 2011a. Building
Human-Level AI for Real-Time Strategy Games. In Advances
in Cognitive Systems: Papers from the AAAI Fall Symposium. Technical Report FS-11-01, 329–336. Palo Alto, CA:
AAAI Press.
Weber, B.; Mateas, M.; and Jhala, A. 2011b. A Particle Model for State Estimation in Real-Time Strategy Games. In
Proceedings of the Seventh AAAI Conference on Artificial Intelligence
and Interactive Digital Entertainment, 103–108. Palo Alto, CA:
AAAI Press.
Weber, B.; Mateas, M.; and Jhala, A. 2012. Learning from
Demonstration for Goal-Driven Autonomy. In Proceedings of
the Twenty-Sixth AAAI Conference on Artificial Intelligence,
1176–1182. Palo Alto, CA: AAAI Press.
Weber, B.; Mawhorter, P.; Mateas, M.; and Jhala, A. 2010.
Reactive Planning Idioms for Multi-Scale Game AI. In
Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games, 115–122. Piscataway, NJ: Institute of Electrical and Electronics Engineers.
Weber, B., and Ontañón, S. 2010. Using Automated Replay
Annotation for Case-Based Planning in Games. Paper presented at the Workshop on Case-Based Reasoning for Computer Games at the 8th International Conference on Case-Based Reasoning, Seattle, WA, USA, 20–23 July.
Wintermute, S.; Xu, J.; and Laird, J. 2007. SORTS: A Human-Level Approach to Real-Time Strategy AI. In Proceedings of the Third AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 55–60. Palo Alto, CA: AAAI Press.
Woodcock, S. 2002. Foreword. In AI Techniques for Game Programming, ed. M. Buckland. Portland, OR: Premier Press.
Ian Watson is an associate professor of artificial intelligence in the Department of Computer Science at the University of Auckland, New Zealand. With a background in expert systems, Watson became interested in case-based reasoning (CBR) as a way to reduce the knowledge engineering bottleneck. Watson has remained active in CBR, focusing on game
AI alongside other techniques. Watson also has an interest
in the history of computing, having written a popular science book called The Universal Machine.
Glen Robertson is a Ph.D. candidate at the University of
Auckland, working under the supervision of Ian Watson.
Robertson’s research interests are in machine learning and
artificial intelligence, particularly in unsupervised learning
for complex domains with large data sets.