vant things, because they pay attention to how their
interlocutors will be affected by what they say. The
navigator software that tells you “at the next roundabout, take the second exit” sounds stupid because it
doesn’t know that “go straight” would be a much more
compact and relevant message.
Aspirational issue: In reality, a particular user may
prefer the robotic message from the navigator. So, to
function at the human level, a cognitive system will have to learn a theory of mind for each member of its team and act accordingly. The theory
of mind must be extended to cover the mind of the
cognitive system itself, that is, to help model the
intelligent agent’s self-knowledge; an interesting
modeling angle here is that the agent’s self-image
may (and, as a rule, does) differ from how the agent
is viewed by other agents or from the “objective”
state of the world, as seen by an omniscient “demiurge” (that is, in reality, the system developer).
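To make this modeling angle concrete, the following minimal sketch (in Python; every name in it is hypothetical and invented for illustration, not drawn from any existing cognitive architecture) shows one way an agent could keep its self-image, its models of teammates, and the developer-supplied “objective” world state as separate, potentially divergent structures:

    # Illustrative toy only: nested mental models that are allowed to diverge.
    from dataclasses import dataclass, field

    @dataclass
    class MentalModel:
        beliefs: dict = field(default_factory=dict)  # proposition name -> believed value

    @dataclass
    class Agent:
        name: str
        self_model: MentalModel = field(default_factory=MentalModel)  # the agent's self-image
        models_of_others: dict = field(default_factory=dict)          # teammate name -> MentalModel

    # The "demiurge" view: ground truth as set up by the system developer.
    world = MentalModel(beliefs={"battery_low": True})

    robot = Agent(name="robot")
    robot.self_model.beliefs["battery_low"] = False            # self-image diverges from ground truth
    robot.models_of_others["driver"] = MentalModel(
        beliefs={"prefers_terse_directions": True})            # what the robot assumes about its user

    # The divergence the text points to is now explicit and inspectable:
    assert robot.self_model.beliefs["battery_low"] != world.beliefs["battery_low"]

In such a scheme, a user’s preference for a terse “go straight” over the full roundabout instruction would simply be one more belief stored in the agent’s model of that user.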
The ultimate criteria of success in building cognitive systems are (1) whether the resulting system behaves (understands, makes decisions) like a human and (2) whether its behavior can be explained in terms that make sense to the humans who interact with it. Reliable formal evaluation procedures for establishing this are currently expensive, as they require experimentation with human subjects or settings such as the Loebner Prize competition. Aspirational
issue: Developing better measures of progress that are
both specifically geared to cognitive systems and
accepted by the broad AI community is an urgently
required direction of research.
The purpose of this article is to present a bird’s-eye
view of a research community. It is clear that there will be omissions and a lack of detail. The rest of the
cognitive systems contributions in this issue will help
to fill at least some of these lacunae.
This article is not a call to stop working on data-driven AI. In fact, there is a two-way symbiotic relationship between data-driven and knowledge-based
AI. For example, corpus annotation by people is typically a prerequisite for developing statistical NLP systems.
Conversely, sophisticated analyses of large data sets
offer immense help to knowledge acquirers — both
human and, in the near future, automatic ones.
Investigating the potential of using both approaches
simultaneously in building AI systems is one of the
most promising ways of overcoming the knowledge
acquisition bottleneck of cognitive systems and the
narrow applicability and quality bottleneck of ML-based ones. Building orthotic systems would be the
first choice. But improvements may very well be as
tangible in prosthetic ones.
This research was supported in part by Grant
#N00014-16-1-2118 from the US Office of Naval
Research. Any opinions or findings expressed in this
material are those of the author and do not necessarily reflect the views of the Office of Naval Research.
Many thanks to Paul Bello and Marge McShane for
useful comments on an earlier draft. All remaining
misunderstandings and obscurities are mine.
Sergei Nirenburg is a professor of computer science and
electrical engineering at the University of Maryland, Baltimore County.