to change its objectives, plans, or actions are consistent with operator/command intent? Guarantees are
required that a GR agent will operate correctly, both
when an operator provides direct oversight and
when it acts autonomously; providing such guarantees
is complicated in complex environments where not
all situations can be predicted. This complexity is
exacerbated if these agents use online learning techniques to acquire or refine models of their environment and of other agents. As with the motivation
for and study of safe AI (Vassev 2016; Omohundro
2014), additional research is needed to develop best
practices for safe goal reasoning so that GR agents
can be deployed confidently as productive and
appropriately trusted partners in human-agent teams
and in autonomous settings.
Intelligent agents that dynamically deliberate on,
reprioritize, and self-select their goals have a long
history of study (for example, Norman and Long 1996;
Altmann and Trafton 2002; Cox 2007; Talamadupula
et al. 2010; Thangarajah et al. 2010; Weber, Mateas,
and Jhala 2012; Jaidee, Muñoz-Avila, and Aha 2013;
Klenk, Molineaux, and Aha 2013; Harland et al. 2014;
Dannenhauer and Muñoz-Avila 2015; Cox,
Dannenhauer, and Kondrakunta 2017).
Researchers studying these types of agents are motivated
by the challenges of deploying them in complex
environments, including to serve as members of
human-agent teams. Our group refers to these as goal
reasoning (GR) agents, and in this article I have described
some of our inspirations, foundations, and emerging
applications. Although few applications of these
agents exist, demand for them should increase
because GR can serve as the foundation of highly
autonomous and proactive approaches for vehicle
control and intelligent decision aids. Many important
research directions on GR require further attention,
in addition to those I noted previously. These
include, for example, representing and reasoning
with additional goal types (van Riemsdijk, Dastani,
and Winikoff 2008), dynamically recognizing other
agents’ goals (Vered and Kaminka 2017), recognizing
team intent (Franke et al. 2000), and methods for
learning goal priorities (Young and Hawes 2012).
Suitably constrained GR agents have tremendous
potential for applications of critical interest, but the
task of designing and developing them is AI-complete
(Shapiro 1992), as such agents must perform
comprehensive situation assessment and
decision-making tasks. For this reason, I encourage AI
researchers to consider how their work relates to GR,
and to contribute to this interesting topic.
This article is based on my Robert S. Engelmore
Memorial Lecture, given at IAAI 2017 in honor of
Engelmore’s extraordinary service to AAAI and
contributions to applied AI. I did not survey the broader
topic of GR, including many contributions from, for
example, Daniel Borrajo, Nick Hawes, Tom Hinrichs,
Gal Kaminka, Mary Lou Maher, and Okun Topçu. For
more information, please see, as a start, the 2018 AI
Communications special issue on goal reasoning and
the proceedings from GR workshops held at AAAI-10,
ACS-13, ACS-15, IJCAI-16, and IJCAI-17.
Thanks to the many colleagues who have contributed to our group’s work, including Ron Alford,
Tom Apker, Bryan Auslander, Dave Bonanno, Hayley
Borck, Dongkyu Choi, Alexandra Coman, Dustin
Dannenhauer, Michael Floyd, Keith Frazer, Kellen
Gillespie, Brian Houston, Ulit Jaidee, Ben Johnson,
Justin Karneeb, Matt Klenk, Michael Leece, Michael
Maynord, Jim McMahon, David Menager, Matt
Molineaux, Phil Moore, Héctor Muñoz-Avila, Jay
Powell, Mak Roberts, Vikas Shivashankar, Christine
Task, Son To, Swaroop Vattam, Mark Wilson, and
Artur Wolek. Thanks also to our sponsors (AFOSR,
DARPA, NRL, ONR, OSD ASD (R&E)), with special
thanks to Michael Cox for steering us toward this
topic, and to AAAI for providing this opportunity.
1. Although I use an OODA loop (pogoarchives.org/m/dni/
I do not intend this as a constraint. GR can be expressed in
many other agent reasoning frameworks.
2. I’m referring to agreeable, rather than rebel, GR agents
here. That is, while GR agency can be useful when an operator is available, it is particularly well motivated when the
operator is inaccessible during complex environment scenarios.
3. This should read belief state throughout, but is shortened
4. Using the situation calculus, Task et al. (2018) provide a
formalization of the solution space through which this
search takes place so as to inform the selection of future
6. oceanai.mit.edu/moos-ivp/docs/Guide To_iOceanServer-Comms.pdf.
7. Returning is important. In 2010, four Navy AUVs, with a
collective value of one million dollars, were lost during a
training exercise. They were found only after an intense
8. I often use goal as shorthand for goal node in this section.
9. GRIM was implemented using our group’s ActorSim platform (makro.ink/actorsim).
10. For example, the IJCAI-17 Workshop on Goal Reasoning
Aha, D. W., and Coman, A. 2017. The AI Rebellion: Changing
the Narrative. In Proceedings of the 31st AAAI Conference on
Artificial Intelligence, 4826–4830. Palo Alto, CA: AAAI Press.