approach would take into account concepts around
cybernetics, distributed cognition, the limits of narrow AI, and the complexity of human creativity.
Speculative design: Designers have a long history of
practice that is oriented not toward solving a problem but toward exploring a new domain or future
potential. Design speculation for AI can consider the
new opportunities; the ethical, cultural, and societal
impacts; and the potential hazards.
Cybernetics: As designers move from the design of
individual things to the complex ecologies of smart
things, cybernetics has renewed relevance for designers in terms of understanding the dynamics of systems, goals, feedback, and conversations.
Design tools: Given the differences in design goals
and strategies for autonomous systems, new tools
are needed that help designers build working prototypes for both exploration and application. These
tools should provide ways of working around the
difficult aspects of AI and let designers and others quickly experiment and iterate so
they can build understanding and design better systems.
Explainable AI: Beyond the technical challenges of
XAI, our discussion focused on the design issues
involved. What affordances can we make available so
the user can respond to explanations? How much
explanation should there be, and how much is too
much? What is the role of trust? What if an AI decision is nonintuitive? Does the public need to understand how AI makes decisions?
Elizabeth Churchill (Google), Mike Kuniavsky
(Xerox PARC), and Philip Van Allen (Art Center College of Design) served as the cochairs of this symposium, with the help of Molly Steenson (Carnegie Mellon University). The papers of the symposium were
published in the AAAI digital library.
Integrating Representation, Reasoning, Learning, and Execution for Goal-Directed Autonomy
Recent advances in AI and robotics have led to a
resurgence of interest in the objective of producing
intelligent agents that help us in our daily lives. Such
agents must be able to rapidly adapt to the changing
goals of their users, and the changing environments
in which they operate. These requirements lead to a
balancing act that most current systems have difficulty contending with: on the one hand, human interaction and computational scalability favor the use of abstracted models of problems and environment domains; on the other, generating goal-directed behavior in the real world typically requires accurate models that are difficult to obtain and computationally hard to reason with.
This symposium addressed the core research questions that arise in designing autonomous systems
that execute their actions in complex environments
using imprecise models. The sources of imprecision
may range from computational pragmatism to
imperfect knowledge of the actual problem domain.
The symposium brought together researchers
from a variety of subfields of AI such as robot planning, model error detection, reasoning with abstractions, statistical learning for sequential decision-making and robotics, and cognitive systems. The
symposium featured presentations of 25 accepted
papers in addition to the invited talks: short talks for position papers and longer presentations for full technical papers. The audience participated actively,
using the discussion time allocated in
each presentation session. The symposium also hosted three invited speakers: Jeremy Frank (NASA Ames
Research Center), David Aha (US Naval Research
Laboratory), and Emma Brunskill (Stanford University). Finally, the attendees visited the Stanford
Robotics Lab, where they were hosted by Oussama
Khatib and Mikael Jorda, who explained the
OceanOne robot and demonstrated haptic control.
One of the main themes of the symposium was the
notion of discrepancies, particularly discrepancies
between the expected state of the world according to
a model and the observed state of the world. Such
discrepancies can be used to trigger a correction to
the model or a refinement of the abstraction used in
creating the model. They could also be used to trigger goal reasoning, as they might imply that the goal
currently being pursued by the system is irrelevant,
or that there are more important goals to pursue.
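To make the idea concrete, the Python sketch below illustrates one common pattern in this space: compare the model's prediction against the observed state during execution, and on a mismatch correct the model, replan, and, if no plan survives the correction, fall back to goal reasoning. The corridor world, function names, and set-based model here are invented for illustration and are not drawn from the symposium papers.

    # A minimal, self-contained sketch (invented for this report, not taken
    # from any symposium paper) of discrepancy-driven execution monitoring.
    # The agent plans over a 1-D corridor using a model of blocked cells
    # that may disagree with the actual world.

    def plan(model_blocked, start, goal):
        """Greedy 1-D plan under the model: step toward the goal."""
        path, pos = [], start
        step = 1 if goal > start else -1
        while pos != goal:
            nxt = pos + step
            if nxt in model_blocked:
                return None  # no plan exists under the current model
            path.append(nxt)
            pos = nxt
        return path

    def execute_with_monitoring(world_blocked, model_blocked, start, goal):
        pos = start
        while pos != goal:
            path = plan(model_blocked, pos, goal)
            if path is None:
                # No plan survives the corrected model: hand off to goal
                # reasoning, which might select a different goal entirely.
                print("goal unreachable under corrected model; trigger goal reasoning")
                return pos
            for nxt in path:
                if nxt in world_blocked:  # expected free, observed blocked
                    print(f"discrepancy at cell {nxt}; correcting model, replanning")
                    model_blocked.add(nxt)  # correct the model
                    break
                pos = nxt  # expected and observed states agree
        return pos

    # The model wrongly believes the corridor is clear; cell 3 is blocked.
    stop = execute_with_monitoring(world_blocked={3}, model_blocked=set(),
                                   start=0, goal=5)
    print("execution stopped at cell", stop)

In this toy run, the agent discovers that cell 3 is blocked, corrects its model, finds the goal unreachable under the corrected model, and stops at cell 2, the point where a goal-reasoning component would take over.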
Siddharth Srivastava, Shiqi Zhang, Nick Hawes,
Erez Karpas, George Konidaris, Matteo Leonetti,
Mohan Sridharan, and Jeremy Wyatt served as
cochairs of this symposium. Siddharth Srivastava,
Shiqi Zhang, and Erez Karpas prepared this report.
The papers of the symposium were published in the
AAAI digital library.
Learning, Inference, and
Control of Multi-Agent Systems
Agents are and will be deployed in a range of environments. They will need to compete in marketplaces, to cooperate in teams, to communicate with
others, to coordinate their plans, and to negotiate
outcomes. Examples include self-driving cars interacting in traffic, personal assistants acting on behalf
of humans and negotiating with other agents,
swarms of unmanned aerial vehicles, financial trading systems, robotic teams, and household robots.
Multiagent systems can have desirable properties
such as robustness and scalability, but their design