efforts, but little attention has been paid to how to integrate them within a single system; how to adapt models for future success when the unexpected happens; or how best to respond to unexpected events.
The AAAI fall symposium on Integrating Planning,
Diagnosis, and Causal Reasoning brought together
researchers to explore these questions, with four
challenge talks about integrated systems and seven
paper presentations on relevant themes.
Christophe Guettier (SAFRAN) presented Planning
and Safety Challenges in Autonomous Driving Systems. J. Benton (NASA Ames) presented Autonomous
Air Vehicles. Mark Micire (NASA Ames) presented Distributed Spacecraft Autonomy. Jeremy Frank (NASA
Ames) presented The Europa Lander Mission: A Space
Exploration Challenge for Autonomous Operations.
We encouraged extended conversations about the themes
with small group brainstorming and large group discussions. Next we highlight themes related to models, system integration, problem solving, evaluation, and human interaction. Models define the representation used for problem
solving. The biggest challenges for integrating planning and diagnosis result from their differing models.
The fidelities of such models may be mismatched, exacerbating the adage "all models are wrong, some are useful," because model mismatch itself is not commonly researched. Integrating models with different input and output types and parameters also poses a substantial verification and validation challenge. Early model integration reduces risk:
analyzing interfaces between models or components can
identify not just differences in syntax (that is, Boolean
true versus 0/1) but also differences in semantics.
Ideally, a living document should describe what is
designed, implemented, and maintained over time,
reflecting how models may change and interrelate.
Mapping between models is not commonly considered with respect to execution monitoring. During execution, there is merit in maintaining independent state for each model, but there is potential for better mechanisms that monitor multiple models at varying fidelities. After execution, the mapping between models (and the individual models themselves) can be refined from observed traces, where errors are revealed by mismatches between the observed and predicted traces.
System integration challenges differ from model
integration; it is not clear how to determine what one
component needs from another. What kind of feedback about a fault, or its consequences, does a planner
require to replan? How much detail from a plan
should be provided to the execution system? These
questions can be resolved by defining and maintaining design principles that reduce risk in future project phases through a clear distribution of responsibility across the planner, executive, system health management, and all other planning/execution assets,
such as lower-level software (for example, hardware
controllers), support software (for example, additional
domain-specific planners), and, if relevant, additional
planners/executives on other systems.
Evaluation and testing of integrated systems poses
another major hurdle. How do we verify that the
developed system behaves like the designed system?
While validation and verification can be accomplished through scenario testing and stress testing to
ensure timely and well-behaved execution, it can be
difficult to answer, "What is good enough?" Unlike for human-operated deterministic systems, there is little precedent for the threshold of success for systems operated by an autonomous planner and executive. Defining a
priori thresholds will promote acceptance and deployment of autonomous systems.
Consideration of planning and diagnosis systems’
interactions with humans included topics such as
explanations, detecting and accounting for the differences between the user’s model and that used by
the system, and how to design systems that accomplish these objectives. Topics such as mixed-initiative systems and user interfaces gave way to broader and more nuanced ethical considerations and questions. Can a diagnosis system detect or manage cognitive impairment on the part of users? If so, when should it intervene? Should such systems be rebel agents (for
example, acting against human wishes in the interests
of safety)? Even when humans are not impaired,
human cognition is limited; how do we design interaction between computers and humans in light of these limits? The symposium was organized by Jeremy Frank
(NASA), Matt Molineaux (Wright State), and Mark
Roberts (Naval Research Laboratory). Summary contributions were provided by Christian Muise, Rashied
Amini, Michael Rubin, and Shakil Khan. Further information can be found at the symposium website.

Interactive Learning in Artificial Intelligence for Human-Robot Interaction

The fifth AAAI symposium on Artificial Intelligence
for Human-Robot Interaction was held in October
2018 under the theme of interactive learning. This
symposium provides a gathering place for researchers
working at the intersection of the fields of AI and
human-robot interaction (HRI) — an interdisciplinary
area that historically has presented unique challenges.
Accordingly, the previous iterations of the symposium
respectively focused on (1) creating a venue for work at this intersection, (2) improving interactions between the AI and HRI communities, (3) critically analyzing the nature of work conducted at this intersection, and (4) presenting new challenges for the AI and HRI communities. The 2018 symposium focused on one specific research challenge at this intersection, with the intention of attracting new attendees to the symposium and holding more focused research-oriented discussions rather than community-oriented discussions.
The chosen focus topic was interactive learning:
how robots can interact with humans to learn online.