understanding, to planning, to commonsense reasoning, mental modeling, creativity, and moral evaluation. As such, aspects of narrative intelligence turn
up in various subfields of artificial intelligence and
have even sparked subfields of their own. In convening this symposium, our aim was to provide a forum
to bring these aspects together. Our watchword was
story-enabled intelligence, the principle that the
mechanisms that enable humans to understand and
tell stories fundamentally enable many other aspects
of intelligent behavior.
Accordingly, the symposium gathered researchers
from overlapping fields such as planning, narratology, cognitive science, robotics, user interface design,
machine learning, and linguistics. The papers and
invited talks necessarily covered a broad range of topics but tended to center on one of two key themes.
The first theme that emerged was the explanatory
role of story intelligence: story intelligence as a tool for
rendering, for example, machine judgments, automated plans, or user interfaces intelligible to human
users. For example, our panel on public perception
of AI discussed how narratives can foster trust in otherwise opaque systems and what science journalists
can do to inform the public on issues relating to AI.
Kelly Neville (Soar, Inc.) demonstrated a narrative-based
support tool for augmenting the judgments of human
analysts; Cindy Bishop (MIT Media Lab) discussed
how art can be used to depict and communicate
about AI systems; Ted Selker showed how to improve
human-computer interfaces using systems that recognize users’ storied actions; Joshua Grossman (
Stanford) demonstrated conversational tutoring agents
in mathematics; and Ron Petrick (Heriot-Watt University) argued that effective, real-time planning
systems must sense and adapt to the emotional
responses of their users.
The second theme that emerged was how modeling story-understanding mechanisms can shed light
on various forms of human intelligence, such as plan
understanding, narrative comprehension, and social
reasoning. Pat Langley (Institute for the Study of
Learning and Expertise) discussed real-time planning in the context of disaster response; Danielle
Olson (Massachusetts Institute of Technology)
described how to tailor virtual reality narratives to
the experiences and biases of individual users; Risto
Miikkulainen (University of Texas at Austin) modeled
schizophrenic symptoms as breaks in story processing; and Yu-Jung Heo (Seoul National University)
argued for richer, gradated data sets for measuring
the performance of story understanding systems.
Stefan Sarkadi (King's College London), Eugene Shvarts
(October), Adam Amos-Binks (Applied Research Associates, Inc.), Mariya Yao (Metamaven), and Jongbin
Jung (Stanford) gave talks and organized panel discussions that connected story-enabled intelligence
to such wide-ranging topics as argumentation, decision making, trust, deception, and rebellion.
Throughout these discussions, participants weighed
the merits of various representations. Rogelio E.
Cardona-Rivera (University of Utah), Robert Kirby,
Morteza Behrooz (UC Santa Cruz), Andrew Gordon
(Institute for Creative Technologies), and Zhutian Yang
(Nanyang Technological University) discussed neural
networks, word embeddings, scripts, frames, ontological models of narrative, conceptual primitives,
and novel forms of causal reasoning and alignment-based learning through stories. John Mitros (
University College Dublin), Mary Ellen Foster (University
of Glasgow), and Taisuke Akimoto (Kyushu Institute
of Technology) focused on interpretability, explainability, and narrative generation.
In retrospect, Mark Finlayson (Florida International University) set the overarching agenda in his opening talk, outlining the history and limits of narrative fundamentalism in AI and calling for the interdisciplinary research needed to support such an enterprise. The resulting debate on whether narrative intelligence is fundamental or merely epiphenomenal resonated through the subsequent talks.
Starting from our deliberately all-embracing title,
Story-Enabled Intelligence, participants presented a variety of concrete applications, including planning, visualization, interpretability, computer games, autonomous robots, and art. Topics spanned theory
of mind, interpretability as storytelling, prospective cognition, and natural language generation for
robotics. By bringing together these diverse applications and fostering a shared vision of narrative-based intelligence, the symposium helped advance the field. Through a variety of talks, conversations, and panel debates, we articulated the fundamental role of stories in promoting better interaction
between computer agents and their human users
and in developing computational models that help
us humans better understand ourselves.
The symposium was organized by Leilani H. Gilpin
(Massachusetts Institute of Technology), Dylan
Holmes (Massachusetts Institute of Technology),
and Jamie C. Macbeth (Smith College). All three prepared this report. Papers from the symposium are
being prepared for publication in a CEUR Workshop Proceedings volume.
Toward AI for
Collaborative Open Science
The scientific community is undergoing a far-reaching
shift toward greater openness and interconnectivity. This trend is driven by a confluence of forces.
Spurred by the replication crisis in several branches
of science, scientists now place greater emphasis on
research transparency at every stage of the scientific
process. For example, it is becoming more common
to publish preregistered study designs, data sets, data
analysis code, preprint articles, and other nontraditional research artifacts. With the emergence of
the open access and citizen science movements, scientific research is also becoming more democratic.
Finally, data sets are becoming larger and more complex, because of new high-throughput measurement