one another, the goal of the symposium was to bring
together members from each of the major AI subfields to discuss our diverse abstractions, perspectives,
inspirations, and methodologies, thereby encouraging a cross-pollination of ideas among various levels and types of abstraction.
To our delight, the symposium was attended by researchers from most subfields of AI, including natural language processing, computational linguistics,
computational neuroscience, robotics, symbolic AI,
evolutionary algorithms, deep learning, reinforcement learning, and developmental robotics. We also
had attendees from a variety of other disciplines and
backgrounds, including statisticians, engineers, biologists, and entrepreneurs.
One controversial topic of conversation throughout the symposium was deep learning, a
bioinspired technique, modeled loosely on how the brain
processes information, that has recently shown impressive performance across a wide spectrum of machine-learning benchmarks. The symposium included keynotes from Andrew Ng (Stanford University)
and Randall O’Reilly (University of Colorado, Boulder) related to this topic. Andrew Ng provided background on deep learning and its accomplishments
from a more abstract machine-learning perspective,
while Randall O’Reilly took more direct inspiration
from the architecture of the human brain.
The multidisciplinary group of researchers also included biologists like Georg Striedter (University of
California, Irvine), author of the book Principles of
Brain Evolution, who gave a keynote describing how
brain functionality has been viewed historically,
from Plato to the present. In another keynote, Risto
Miikkulainen (University of Texas at Austin) described the field of neuroevolution, which
evolves artificial neural networks and creates cognitive architectures through bottom-up evolutionary
design instead of top-down human engineering.
Pierre-Yves Oudeyer (Inria, France) described his
work in developmental robotics in a keynote that focused on how robots can be motivated by a curiosity
to explore their world. In the process they build a
model of the world and develop impressive skill sets.
While the current winds of AI seem favorable to
subsymbolic approaches, proponents of symbolic AI
made convincing arguments for its continued relevance and promise. In particular, Gary Marcus (New
York University) argued that nonsymbolic approaches to AI may help us create AI at a nonhuman animal
level, but that symbolic manipulation (good old-fashioned AI) will be essential for human-level thinking such as understanding language and inference.
John Laird (University of Michigan) highlighted the
difference in goals and outlook between general symbolic AI and the more common approach of tailoring
AI to achieve a particular goal.
Through the course of the discussion, many remaining challenges for AI became evident that cut
across traditional boundaries. Different approaches
had different strengths, weaknesses, and focuses.
Their current abilities were clustered around three
main divisions: building features from raw perception, making reactive decisions based on features,
and higher-level cognitive reasoning. For example,
deep learning focuses mainly on building features
from raw perception, while reinforcement learning
techniques focus on making decisions given such
features. At the other end of the spectrum, symbolic approaches focus directly on cognitive reasoning.
However, no approach yet seemed able to encompass
all three levels.
The participants in general expressed interest in
attending a follow-up conference, and many reported
that they had gained a greater understanding of
AI as a whole, and in particular how the various
questions tackled within subfields connect and complement each other.
Sebastian Risi, Joel Lehman, and Jeff Clune served
as cochairs of this symposium. The papers of the
symposium were published as AAAI Press Technical
Integrated cognition is concerned with consolidating the fundamental functionality and phenomena
implicated in natural minds and brains or in artificial
cognitive systems, such as those key to building virtual humans, intelligent agents, and intelligent robots.
It captures a grand challenge central to both artificial
intelligence and cognitive science: how minds that
are capable of yielding human-level performance in
complex environments arise from the interactions
among their constituent parts and mechanisms.
Integrated cognition not only spans the traditional cognitive aspects — such as planning and problem
solving, knowledge representation and reasoning,
language and interaction, reflection/metacognition,
and learning — that have been the focus of unified
cognitive architectures, but also seeks a grand unification with key noncognitive aspects, such as perception and control, personality and emotion, and
motivation. It also concerns integration within and
across multiple levels of processing and representation, from high-level social and rational thought and
symbolic cognitive processes down to low-level biological, reactive, and subsymbolic processing.
In principle, several comprehensive conferences
overlap substantially with integrated cognition, including AAAI’s own annual conference. However, a
focus on individual capabilities rather than integrated systems, and on methods of evaluation that are
appropriate for the parts but not necessarily for the
whole, has made them a less natural fit than they
ought to be. A number of more specialized conferences have also arisen over the years that overlap
with integrated cognition — such as Advances in