tions that give possible reasons for discrepancies and
create new goals to compensate for the change in
assumed situation. This system required substantial
domain engineering to define all of the possible goals,
expectations, and explanations needed for a domain as
complex as StarCraft.
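The discrepancy, explanation, and goal-formulation cycle described above can be sketched as a minimal loop. Everything here is illustrative; the function names and the toy domain are assumptions, not part of the original system:

```python
# Hypothetical sketch of one goal-driven autonomy (GDA) cycle: detect a
# discrepancy between the observed state and an expectation, explain it,
# and formulate a compensating goal. All names are illustrative.

def gda_step(state, expectation, explain, formulate_goal, goals):
    """Run one GDA cycle and return the (possibly extended) goal list."""
    discrepancy = expectation(state)          # None if state matches expectations
    if discrepancy is not None:
        explanation = explain(discrepancy)    # possible reason for the mismatch
        goals.append(formulate_goal(explanation))
    return goals

# Toy domain: we expect no enemy air units; observing one triggers a new goal.
expect = lambda s: "enemy_air" if s.get("enemy_air", 0) > 0 else None
explain = lambda d: "opponent went air"
new_goal = lambda e: "build_anti_air"

goals = gda_step({"enemy_air": 2}, expect, explain, new_goal, [])
print(goals)  # the new goal compensating for the unexpected observation
```

In a deployed system each of these three functions would be backed by the hand-engineered domain knowledge the paragraph above describes, which is exactly the engineering burden the later replay-learning work aimed to reduce.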
Later work added the ability for the GDA system to
learn domain knowledge for StarCraft by analyzing
replays offline (Weber, Mateas, and Jhala 2012). In
this modified system, a case library of sequential
game states was built from the replays, with each case
representing the player and opponent states as
numerical feature vectors. Then case-based goal for-
mulation was used to produce goals at run time. The
system forms predictions of the opponent's future
state (referred to as explanations in the article) by
finding a stored opponent state similar to the current
opponent state in the case library, computing the
change in the feature vectors between that stored state
and its state a set period of time later, and applying
that change to the current opponent state to produce
an expected opponent state. In a similar manner, it
produces a goal state by finding the expected future
player state, using the predicted opponent
[Figure 11. New observations update an opponent's possible plan execution statuses to determine which plans are potentially being followed (Kabanza et al. 2010).]
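The case-based prediction procedure described above can be sketched as follows. This is a minimal sketch under stated assumptions: cases are time-indexed lists of numerical feature vectors, similarity is Euclidean distance, and all function names are hypothetical:

```python
# Hypothetical sketch of case-based state prediction: find the most
# similar stored state, take the feature-vector change over `horizon`
# steps in that case, and apply the change to the current state.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_state(case_library, current_state, horizon):
    """Predict the state `horizon` steps ahead of `current_state`."""
    best, best_dist = None, float("inf")
    for case in case_library:                 # case: feature vectors over time
        for t in range(len(case) - horizon):
            d = distance(case[t], current_state)
            if d < best_dist:
                best_dist, best = d, (case, t)
    case, t = best
    # Difference between the similar state and its future, applied to now.
    delta = [f - p for f, p in zip(case[t + horizon], case[t])]
    return [c + d for c, d in zip(current_state, delta)]

# Toy library with one case in which feature 0 grows by 2 per time step.
library = [[[0.0, 1.0], [2.0, 1.0], [4.0, 1.0], [6.0, 1.0]]]
print(predict_state(library, [2.1, 1.0], horizon=2))  # roughly [6.1, 1.0]
```

The same mechanism produces the goal state: instead of matching against stored opponent states, the system finds the expected future player state conditioned on the predicted opponent state.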