ers (Weber, Mateas, and Jhala 2010; Weber et al.
Jaidee, Muñoz-Avila, and Aha (2011) integrate CBR and RL to make a learning version of GDA, allowing their system to improve its goals and domain knowledge over time. This means that less work is required from human experts to specify possible goals, states, and other domain knowledge, because missing knowledge can be learned automatically. Similarly, if the underlying domain changes, the learning system is able to adapt to the changes automatically. However, when applied to a simple domain, the system was unable to beat the performance of a nonlearning GDA agent (Jaidee, Muñoz-Avila, and Aha 2011).
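The GDA loop such systems build on can be sketched roughly as follows: a planner emits an action together with an expected outcome, and a discrepancy between the actual outcome and that expectation triggers the formulation of a new goal. Every name and the toy numeric domain below are hypothetical, invented for illustration rather than taken from any of the cited systems:

```python
# Illustrative GDA (goal-driven autonomy) loop over a toy numeric domain.
# States and goals are integers; the "world" occasionally defies expectations.

def planner(goal, state):
    # Trivial planner: step one unit toward the goal, and predict the result.
    action = 1 if goal > state else -1
    expectation = state + action
    return action, expectation

def world(state, action):
    # Environment model: normally applies the action, but the action
    # fails (has no effect) at state 2, producing a discrepancy.
    return state + action if state != 2 else state

def formulate_goal(outcome, expectation):
    # Hypothetical goal-formulation policy: adopt a recovery goal
    # just beyond the violated expectation.
    return expectation + 1

def gda_step(goal, state):
    action, expectation = planner(goal, state)   # plan: action + expectation
    outcome = world(state, action)               # act in the environment
    if outcome != expectation:                   # discrepancy detected
        goal = formulate_goal(outcome, expectation)  # explain, form new goal
    return outcome, goal
```

Running `gda_step(5, 0)` meets its expectation and keeps the goal, while `gda_step(5, 2)` hits the failing action, detects the discrepancy, and swaps in a new goal.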
State Space Planning
Automated planning and scheduling is a branch of
classic AI research from which heuristic state space
planning techniques have been adapted for planning in RTS game AI. In these problems, an agent is
given a start and goal state, and a set of actions that
have preconditions and effects. The agent must then
find a sequence of actions to achieve the goal from the start state.

Figure 7. GDA Conceptual Model. A planner produces actions and expectations from goals, and unexpected outcomes result in additional goals being produced (Weber, Mateas, and Jhala 2012).
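As a concrete illustration, a minimal STRIPS-style state-space planner can be written as a breadth-first search over states, where each action carries preconditions, an add list, and a delete list. The RTS-flavored action names below are invented for the example, not drawn from any particular system:

```python
from collections import deque

# Minimal STRIPS-style state-space planner (illustrative sketch).
# A state is a frozenset of facts; each action is a triple of
# (preconditions, add list, delete list).
ACTIONS = {
    "gather": (frozenset(), frozenset({"have_minerals"}), frozenset()),
    "build_barracks": (frozenset({"have_minerals"}),
                       frozenset({"barracks"}),
                       frozenset({"have_minerals"})),
    "train_marine": (frozenset({"barracks"}),
                     frozenset({"marine"}),
                     frozenset()),
}

def plan(start, goal):
    """Breadth-first search from the start state to any state
    that satisfies the goal; returns a list of action names."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:          # goal facts all hold in this state
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:       # preconditions satisfied
                nxt = (state - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                    # goal unreachable

print(plan(frozenset(), frozenset({"marine"})))
# → ['gather', 'build_barracks', 'train_marine']
```

Breadth-first search guarantees a shortest plan but scales poorly; the heuristic state-space planners adapted for RTS game AI replace the queue with a heuristic-guided priority queue to cope with larger state spaces.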