There is also evidence that we humans continually
manage our own goals as we react to our evolving situation (Altmann and Trafton 2002). We may suspend
current goals to pursue others (for example, that
relate to biological, emotional, or social needs), or
even abandon goals we deem to be unachievable or
of lower priority. Such goal switching may occur, for
example, when we receive a request from our supervisor, when we unexpectedly encounter an old
friend, or when we are given tickets to attend a concert.
Similarly, intelligent agents may benefit from
deliberating about, and changing, their active goals
when warranted. This flexibility may allow them to
behave competently when they are not preencoded
with a model that dictates what goals they should
pursue in all encounterable situations.
I use the term goal reasoning (GR) to denote the process by
which intelligent agents continually reason about
the goals they are pursuing, which may lead to goal
change (Cox 2007; Muñoz-Avila et al. 2010; Klenk et
al. 2013; Vattam et al. 2013). This general topic has
been studied, using different terminology, in multi-
ple disciplines for several decades. In this article, I
summarize our group’s research on GR, which has
been strongly influenced by perspectives on cogni-
tive architectures and symbolic task planning.
Situating Goal Reasoning Agents
Figure 1 highlights a key property of interactive GR
agents, where we use an observe, orient, decide, act
(OODA) loop to frame the agent’s decision cycle.1 In
this figure, we assume a human operator can interact
with the agent, at least to provide an initial objective
or objectives. In contrast to some other agent models, GR agents
can deliberate on a space of goals, dynamically adjust
goal priorities, and perform goal-management functions (for example, formulation, commitment, and monitoring).
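To make this decision cycle concrete, the loop above can be sketched in code. The following is a minimal, hypothetical illustration (the class, its goal representation, and the toy formulation rule are my own assumptions, not an implementation from the literature): the agent orients by folding percepts into its beliefs, then decides by abandoning goals it believes unachievable, formulating a new goal when the situation warrants, and recommitting to its highest-priority goal.

```python
class GoalReasoningAgent:
    """Toy GR agent driven by an observe-orient-decide cycle (illustrative sketch)."""

    def __init__(self, initial_goals):
        # Goals are (priority, name) pairs; higher priority sorts first.
        self.goals = sorted(initial_goals, reverse=True)
        self.beliefs = {}

    def orient(self, percepts):
        # Orient: fold new observations into the belief state.
        self.beliefs.update(percepts)

    def decide(self):
        # Goal management: abandon goals now believed unachievable...
        self.goals = [g for g in self.goals
                      if g[1] not in self.beliefs.get("unachievable", set())]
        # ...formulate a new goal when the situation warrants it
        # (a hypothetical rule: low battery triggers a recharge goal)...
        if (self.beliefs.get("battery") == "low"
                and all(name != "recharge" for _, name in self.goals)):
            self.goals.append((9, "recharge"))
        # ...and recommit to the highest-priority surviving goal.
        self.goals.sort(reverse=True)
        return self.goals[0][1] if self.goals else None

    def step(self, percepts):
        # One observe-orient-decide cycle; acting is domain-specific.
        self.orient(percepts)
        return self.decide()


agent = GoalReasoningAgent([(5, "survey-area"), (3, "photograph-site")])
print(agent.step({"battery": "ok"}))                  # -> survey-area
print(agent.step({"battery": "low"}))                 # -> recharge (newly formulated)
print(agent.step({"unachievable": {"survey-area"}}))  # -> recharge (survey-area dropped)
```

The point of the sketch is that goal change happens inside the decide step: goals may be dropped, added, or reprioritized on every cycle, rather than being fixed when the agent is launched.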
Goals vary along many dimensions. For example, borrowing and adapting from van Riemsdijk et al.’s
(2008) taxonomy, these include (among others) type,
specificity, duration, purpose, condition, and persistence.
Type: Goals can be declarative (referring to belief
states) or procedural (referring to actions).
Specificity: Goals may refer to a concrete instance or an
abstraction (for example, region of belief states,
sequence of actions).
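One way to see how these dimensions might shape a goal representation is as fields of a record. The sketch below is a hypothetical data structure of my own devising (the field names are adapted from the taxonomy above; the string values are illustrative, not a fixed vocabulary):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Goal:
    """Illustrative goal record keyed to the dimensions discussed above."""
    content: str                    # what the goal refers to
    type: str                       # "declarative" (belief state) or "procedural" (actions)
    specificity: str                # "instance" or "abstraction"
    duration: Optional[str] = None  # e.g., achieve once vs. maintain over time
    persistence: bool = False       # should the goal survive suspension or replanning?

# A declarative, abstract maintenance goal:
patrol = Goal(content="area-secured", type="declarative",
              specificity="abstraction", duration="maintain",
              persistence=True)
print(patrol.type)  # -> declarative
```

Representing the dimensions explicitly, rather than leaving them implicit in a planner's goal formula, is what lets a GR agent reason about (for instance) which suspended goals are worth resuming.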
Figure 1. Goal Reasoning Agents Can Formulate Their Own Goals.