Every cognitive architecture starts with a set of theoretical commitments. We have argued (Forbus 2016) that human-level
artificial intelligences will be built by creating sufficiently smart software social organisms. By that we
mean systems capable of interacting with people using natu-
ral modalities, operating and learning over extended periods
of time, as apprentices and collaborators, instead of as tools.
Just as we cannot directly access the internal representations
of the people and animals we work with, cognitive systems
should be able to work with us on our terms. But how does
one create such systems? We have two core hypotheses,
inspired by research in cognitive science:
Our first core hypothesis is that analogical reasoning and
learning are central to human cognition. There is evidence that
processes described by Gentner’s (1983) structure-mapping
theory of analogy and similarity operate throughout human
cognition, including visual perception (Sagi, Gentner, and
Lovett 2012), reasoning and decision making (Markman and
Medin 2002), and conceptual change (Gentner et al. 1997).
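The core idea of structure-mapping is that an analogy aligns relational structure between a base and a target, preferring deep, systematic structure over isolated surface matches. The toy sketch below illustrates that idea on the classic water-flow/heat-flow analogy; it is an illustrative simplification under invented predicate names, not the actual Structure-Mapping Engine.

```python
# Toy illustration of structure-mapping: align expressions from a base
# and a target that share relation names, preferring deeper (more
# systematic) structure. Predicates and domain names here are invented
# for illustration; this sketches the idea, not the SME algorithm.

def depth(expr):
    """Nesting depth of an expression; entities (strings) have depth 0."""
    if isinstance(expr, str):
        return 0
    return 1 + max(depth(arg) for arg in expr[1:])

def align(base_expr, target_expr, mapping):
    """Extend entity correspondences consistently, or return None."""
    if isinstance(base_expr, str) and isinstance(target_expr, str):
        if mapping.get(base_expr, target_expr) != target_expr:
            return None                      # conflicting correspondence
        mapping = dict(mapping)
        mapping[base_expr] = target_expr
        return mapping
    if isinstance(base_expr, str) or isinstance(target_expr, str):
        return None
    if base_expr[0] != target_expr[0] or len(base_expr) != len(target_expr):
        return None                          # relations must match identically
    for b_arg, t_arg in zip(base_expr[1:], target_expr[1:]):
        mapping = align(b_arg, t_arg, mapping)
        if mapping is None:
            return None
    return mapping

def best_mapping(base, target):
    """Greedy sketch of systematicity: align deepest base structure first."""
    best = {}
    for b in sorted(base, key=depth, reverse=True):
        for t in target:
            extended = align(b, t, best)
            if extended is not None:
                best = extended
                break
    return best

# Water flow (base) versus heat flow (target):
base = [("cause", ("greater", "pressure_beaker", "pressure_vial"),
         ("flow", "beaker", "vial", "water"))]
target = [("cause", ("greater", "temp_coffee", "temp_cube"),
           ("flow", "coffee", "cube", "heat"))]

print(best_mapping(base, target))
# → {'pressure_beaker': 'temp_coffee', 'pressure_vial': 'temp_cube',
#    'beaker': 'coffee', 'vial': 'cube', 'water': 'heat'}
```

Because the alignment is driven by shared relational structure rather than surface similarity, the sketch maps beaker to coffee and water to heat even though the entities themselves have nothing in common.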
Our second core hypothesis is that qualitative representations
(QRs) are a key building block of human conceptual structure.
Continuous phenomena and systems permeate our environment
and our ways of thinking about it. This includes the physical
world, where qualitative representations have a long track
record of providing human-level reasoning and performance
(Forbus 2014), but also social reasoning (for example, degrees
of blame [Tomai and Forbus 2007]). Qualitative representations
carve up continuous phenomena into symbolic
descriptions that serve as a bridge between perception and
cognition, facilitate everyday reasoning and communication,
and help ground expert reasoning.
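One way to see how a qualitative representation carves up a continuous quantity is to describe it only by its ordinal relation to landmark values and by the sign of its change, rather than by a precise number. The sketch below is a minimal illustration of that idea; the quantity, landmark names, and values are assumptions chosen for the example, not drawn from the article.

```python
# Minimal sketch of a qualitative representation: a continuous value
# is summarized by which landmark interval it falls in, plus the sign
# of its derivative. Landmarks and names are illustrative assumptions.

def qualitative_value(x, landmarks):
    """Map a continuous value to a landmark point or open interval.

    landmarks: list of (name, value) pairs, sorted by value.
    """
    for i, (name, v) in enumerate(landmarks):
        if x == v:
            return name                       # exactly at a landmark
        if x < v:
            below = landmarks[i - 1][0] if i > 0 else "-inf"
            return f"({below}, {name})"       # open interval between landmarks
    return f"({landmarks[-1][0]}, +inf)"

def change_sign(dx):
    """Qualitative direction of change."""
    return "increasing" if dx > 0 else "decreasing" if dx < 0 else "steady"

# Water temperature, described relative to freezing and boiling points:
landmarks = [("freezing", 0.0), ("boiling", 100.0)]
state = (qualitative_value(72.0, landmarks), change_sign(-0.5))
print(state)   # → ('(freezing, boiling)', 'decreasing')
```

The symbolic state ("between freezing and boiling, and cooling") supports the kind of everyday inference the paragraph describes, such as predicting that the water may eventually freeze, without committing to any precise numbers.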
Analogy and Qualitative Representations
in the Companion Cognitive Architecture

Kenneth D. Forbus, Thomas Hinrichs
The Companion cognitive architecture is aimed at reaching human-level
AI by creating software social organisms — systems that interact with people using natural modalities, working
and learning over extended periods of
time as collaborators rather than tools.
Our two central hypotheses about how
to achieve this are (1) analogical reasoning and learning are central to cognition, and (2) qualitative representations provide a level of description that
facilitates reasoning, learning, and
communication. This article discusses
the evidence we have gathered supporting these hypotheses from our experiments with the Companion architecture. Although we are far from our
ultimate goals, these experiments provide strong evidence for the utility of
analogy and qualitative representation
across a range of tasks. We also discuss
three lessons learned and highlight three
important open problems for cognitive
systems research more broadly.