Laird, Pat Langley, and Sergei Nirenburg for discussion and comments that led to improvements in the
text. The authors would like to acknowledge support
from the Office of Naval Research under grants
N0001416WX01112, N0001417WX00153, and N0001416WX00762. The views expressed in this article are
solely those of the authors and should not be taken to
reflect any official policy or position of the United
States government or the Department of Defense.
1. We assume a broad view of sensing here that allows for
thoughts, memories, and other subjective features that
might be distracting.
2. It should go without saying that human agency, at least
typically, includes the capacity for moral agency.
3. The Supreme Court of Canada wrote in its judgment, “It
may be that some will regard the exoneration of an accused
through a defence of somnambulism as an impairment of
the credibility of our justice system. Those who hold this
view would also reject insane automatism as an excuse from
criminal responsibility. However, these views are contrary to
certain fundamental precepts of our criminal law: only those
who act voluntarily with the requisite intent should be punished
by criminal sanction” (italics added, quoted in Broughton et al. 1994).
Allport, D. A. 1987. Selection for Action: Some Behavioral
and Neurophysiological Considerations of Attention and
Action. In Perspectives on Perception and Action, ed. H. Heuer
and A. F. Sanders, 395–419. Hillsdale, NJ: Lawrence Erlbaum.
Bernays, E. A., and Wcislo, W. T. 1994. Sensory Capabilities,
Information Processing, and Resource Specialization. The
Quarterly Review of Biology 69(2): 187–204. doi.org/10.1086/
Bridewell, W., and Bello, P. 2016. A Theory of Attention for
Cognitive Systems. In Proceedings of the Fourth Annual Conference on Advances in Cognitive Systems, 1–16. Palo Alto, CA:
Cognitive Systems Foundation.
Broughton, R.; Billings, R.; Cartwright, R.; Doucette, D.;
Edmeads, J.; Edwardh, M.; Ervin, F.; Orchard, B.; Hill, R.; and
Turrell, G. 1994. Homicidal Somnambulism: A Case Report.
Sleep 17(3): 253–264.
Cohen, P. R., and Levesque, H. J. 1990. Intention Is Choice
with Commitment. Artificial Intelligence 42(2–3): 213–261.
Davis, R., and Buchanan, B. G. 1984. Meta-Level Knowledge.
In Rule-Based Expert Systems: The MYCIN Experiments of the
Stanford Heuristic Programming Project, ed. B. G. Buchanan
and E. H. Shortliffe, 507–530. Reading, MA: Addison-Wesley.
Fischer, J. M., and Ravizza, M. 1998. Responsibility and Control:
A Theory of Moral Responsibility. Cambridge, UK: Cambridge
University Press. doi.org/10.1017/CBO9780511814594
Forbus, K., and Hinrichs, T. 2017. Analogy and Relational
Representations in the Companion Cognitive Architecture.
AI Magazine 38(4). doi.org/10.1609/aimag.v27i2.1882
Hommel, B. 2010. Grounding Attention in Action Control:
The Intentional Control of Selection. In Effortless Attention:
A New Perspective in the Cognitive Science of Attention and
Action, ed. B. Bruya, 121–140. Cambridge, MA: The MIT Press.
Johnson, B.; Coman, A.; Floyd, M. W.; and Aha, D. W. 2017.
Goal Reasoning and Trusted Autonomy. In Foundations of
Trusted Autonomy, ed. H. Abbass, J. Scholz, and D. Reid.
Laird, J.; Lebiere, C.; and Rosenbloom, P. 2017. A Standard
Model of the Mind: Toward a Common Computational
Framework Across Artificial Intelligence, Cognitive Science,
Neuroscience, and Robotics. AI Magazine 38(4). doi.org/10.
Laird, J. E. 2012. The Soar Cognitive Architecture. Cambridge,
MA: The MIT Press.
Malle, B. F.; Guglielmo, S.; and Monroe, A. E. 2014. A Theory of Blame. Psychological Inquiry 25(2): 147–186.
National Science and Technology Council. 2016. Preparing
for the Future of Artificial Intelligence. October 2016. Washington, D.C.: Executive Office of the President, Committee on Technology. (www.whitehouse.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf.) Accessed October 12, 2016.
Newell, A. 1981. The Knowledge Level: Presidential Address.
AI Magazine 2(2): 1–20. doi.org/10.1609/aimag.v2i2.99
Russell, S. J., and Norvig, P. 2002. Artificial Intelligence: A
Modern Approach, 2nd edition. Upper Saddle River, NJ: Prentice Hall.
Scheutz, M. 2017. The Case for Explicit Ethical Agents. AI
Magazine 38(4). doi.org/10.1609/aimag.v38i4.2746
Simon, H. A. 1996. The Sciences of the Artificial, 3rd edition.
Cambridge, MA: The MIT Press.
Stankowich, T., and Blumstein, D. T. 2005. Fear in Animals:
A Meta-Analysis and Review of Risk Assessment. Proceedings
of the Royal Society B: Biological Sciences 272(1558): 2627–
Wu, W. 2011. Confronting Many-Many Problems: Attention
and Agentive Control. Noûs 45(1): 50–76. doi.org/
Paul F. Bello is the director of the Interactive Systems Section at the U.S. Naval Research Laboratory, and the former
director of the Cognitive Science program at the Office of
Naval Research. His research interests lie at the intersection of attention, perception, reasoning, and action, with a particular focus on consciousness and moral agency. He
received his Ph.D. in cognitive science, M.S. in computer science, and B.S. in both computer engineering and philosophy from Rensselaer Polytechnic Institute. He is the codesigner of the ARCADIA attention-driven cognitive system
and codirects the ARCADIA research program.
Will Bridewell is a computer scientist at the U.S. Naval
Research Laboratory. Formerly he was a research scientist at
Stanford University. His current research investigates the
relationship between attention and intentional action with
a broader interest in computational theories of consciousness. He holds Ph.D. and M.S. degrees in computer science
from the University of Pittsburgh and B.S. degrees in psychology, mathematics, and computer science from Northern Kentucky University. He is the codesigner of the ARCADIA cognitive system and codirects the ARCADIA research program.