8. The first condition is that the robot must never be placed in a position of danger to itself, or must be so easily replaceable that it did not matter whether it was destroyed or not. Second, it must be designed to respond automatically to certain stimuli with fixed responses, with nothing else expected of it, so that no order need ever be given it, and the fixed responses must never entail danger to human beings.
Arkin, R., and Ulam, P. 2009. An Ethical Adaptor: Behavioral Modification Derived from Moral Emotions. In Proceedings of the 2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA 2009), 381–387. Piscataway, NJ: Institute of Electrical and Electronics Engineers.
Arkin, R.; Wagner, A.; and Duncan, B. 2009. Responsibility and Lethality for Unmanned Systems: Ethical Pre-Mission Responsibility Advisement. Paper presented at the 2009 IEEE Workshop on Roboethics, Kobe, Japan, 17 May.
Arkin, R. C., and Balch, T. 1997. AuRA: Principles and Practice in Review. Journal of Experimental and Theoretical Artificial Intelligence 9(2): 175–189.
Arnold, T., and Scheutz, M. 2016. Feats Without Heroes: Norms, Means, and Ideal Robotic Action. Frontiers in Robotics and AI 3(32). doi.org/10.3389/frobt.2016.00032
Asimov, I. 1942. Runaround. Astounding Science Fiction.
Bello, P., and Bridewell, W. 2017. There Is No Agency Without Attention. AI Magazine 38(4). doi.org/10.1609/aimag.
Blass, J. A., and Forbus, K. D. 2015. Moral Decision-Making
by Analogy: Generalizations Versus Exemplars. In
Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX. Palo Alto, CA: AAAI Press.
Briggs, G., and Scheutz, M. 2013. A Hybrid Architectural
Approach to Understanding and Appropriately Generating
Indirect Speech Acts. In Proceedings of the Twenty-Seventh
AAAI Conference on Artificial Intelligence, Bellevue, WA, 1213–
1219. Palo Alto, CA: AAAI Press.
Briggs, G., and Scheutz, M. 2015. “Sorry, I Can’t Do That”: Developing Mechanisms to Appropriately Reject Directives in Human-Robot Interactions. In Artificial Intelligence for Human-Robot Interaction: Papers from the AAAI 2015 Fall Symposium, ed. B. Hayes and M. Gombolay, 32–36. Palo Alto, CA: AAAI Press.
Dehghani, M.; Tomai, E.; Iliev, R.; and Klenk, M. 2008. MoralDM: A Computational Model of Moral Decision-Making. In Proceedings of the 30th Annual Conference of the Cognitive Science Society, Washington, D.C. Austin, TX: Cognitive Science Society Inc.
Fasola, J., and Mataric, M. 2013. A Socially Assistive Robot Exercise Coach for the Elderly. Journal of Human-Robot Interaction 2(2): 3–32.
Forbus, K. D., and Hinrichs, T. R. 2017. Analogy and Relational Representations in the Companion Cognitive Architecture. AI Magazine 38(4). doi.org/10.1609/aimag.v27i2.
Gert, B. 2005. Morality: Its Nature and Justification. Oxford,
UK: Oxford University Press.
Gips, J. 1995. Toward the Ethical Robot. In Android Epistemology, ed. K. M. Ford, C. Glymour, and P. J. Hayes, 243–252.
Cambridge, MA: AAAI Press / The MIT Press.
Iba, W., and Langley, P. 2011. Exploring Moral Reasoning in
a Cognitive Architecture. In Proceedings of the Thirty-Third
Annual Meeting of the Cognitive Science Society, Boston, MA.
Austin, TX: Cognitive Science Society Inc.
Laird, J.; Lebiere, C.; and Rosenbloom, P. 2017. A Standard Model of the Mind: Toward a Common Computational Framework Across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. AI Magazine 38(4).
Licato, J.; Sun, R.; and Bringsjord, S. 2014. Structural Representation and Reasoning in a Hybrid Cognitive Architecture. In 2014 International Joint Conference on Neural Networks (IJCNN 2014), 891–898. Piscataway, NJ: Institute of Electrical and Electronics Engineers.
McShane, M. 2017. Natural Language Understanding (NLU, not NLP) in Cognitive Systems. AI Magazine 38(4).
Malle, B. F., and Scheutz, M. 2014. Moral Competence in Social Robots. In Proceedings of the IEEE 2014 International Symposium on Ethics in Engineering, Science, and Technology, 30–35. Piscataway, NJ: Institute of Electrical and Electronics Engineers.
Malle, B. F.; Scheutz, M.; and Austerweil, J. L. 2015. Networks of Social and Moral Norms in Human and Robot Agents. Paper presented at the International Conference on Robot Ethics (ICRE 2015), Lisbon, Portugal.
Mikhail, J. 2014. Any Animal Whatever? Harmful Battery and Its Elements as Building Blocks of Moral Cognition. Ethics 124(4): 750–786.
Moor, J. H. 2006. The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems 21(4): 18–21.
Moor, J. H. 2009. Four Kinds of Ethical Robots. Philosophy
Russell, S.; Dewey, D.; and Tegmark, M. 2015. Research Priorities for Robust and Beneficial Artificial Intelligence. AI Magazine 36(4): 61–70. doi.org/10.1609/aimag.v36i4.2577
Scassellati, B.; Admoni, H.; and Mataric, M. 2012. Robots for Use in Autism Research. Annual Review of Biomedical Engineering 14: 275–294. Palo Alto, CA: Annual Reviews.
Scheutz, M. 2012. The Inherent Dangers of Unidirectional
Emotional Bonds Between Humans and Social Robots. In
Anthology on Robo-Ethics, ed. P. Lin, G. Bekey, and K. Abney.
Cambridge, MA: The MIT Press.
Scheutz, M. 2016. The Need for Moral Competency in
Autonomous Agent Architectures. In Fundamental Issues of
Artificial Intelligence, ed. V. C. Müller, 517–527. Berlin:
Scheutz, M.; Schermerhorn, P.; Kramer, J.; and Anderson, D. 2007. First Steps Toward Natural Human-Like HRI. Autonomous Robots 22(4): 411–423.
Matthias Scheutz is a professor of cognitive and computer
science in the Department of Computer Science and Bernard
M. Gordon Senior Faculty Fellow in the School of Engineering at Tufts University. He has more than 250 peer-reviewed
publications in artificial intelligence, natural language processing, cognitive modeling, robotics, and human-robot
interaction. His current research focuses on complex cognitive robots with rudimentary moral competence.