We know instinctively that we must be very careful
about endowing machines with the power of choice.
Deontological ethics tells us in precisely what sense
we must be careful: We must ensure that our increasingly intelligent machines have the capacity for true
autonomy as well as independence.
One might object that we have offered no real engineering solutions to the threats potentially posed
by autonomous machines. Such an objection misunderstands our thesis. Our position is that a first
step toward a real solution is not more sophisticated
engineering, but a more sophisticated concept of
autonomy. We need a revolution in thought.
Notes

1. Contemporary philosophers Pettit and Smith (1996)
advance a related thesis that autonomy is inherently
responsive to reason. To distinguish reason-responsive autonomy from mere independence, they dub the former
“orthonomy.” Our reconceptualization of Kantian autonomy is somewhat similar to their orthonomy, but a detailed
comparison is beyond the scope of this essay.
2. The generalization principle is closely related to Kant’s
Categorical Imperative, which is notoriously subject to interpretation. The Imperative appears in our development
as the universality of reason, of which the generalization
principle is seen as a direct consequence. An interpretation
of the Imperative that is not based on the reasons for action
is LN4 of Parfit (2011, p. 317).
3. While behavior must be explicable as based on reasons to
qualify as action, this does not mean that irrational factors
like emotion or feelings can play no role in the rationale
for an action. Suppose I avoid driving over a certain bridge
because I had a serious accident there at some point in the
past. My avoidance of the bridge is an action if I can explain,
in some coherent fashion, why I avoid the bridge. Perhaps
the memory of my accident makes me nervous about driving
over the bridge, and it is unpleasant to feel nervous. My
aversion to driving over the bridge may be irrational in
some sense, particularly if the bridge is as safe as any other
route. Yet my rationale is a coherent explanation for my
avoidance. It may be ethical as well, unless (for instance) a
refusal to use the bridge prevents me from carrying out obligations on the other side. On the other hand, if I simply
avoid the bridge without adducing any reasons why —
reasons that can be checked for coherence — then my avoidance is not an action. In this case, my unpleasant memory
of the accident is merely a cause for my avoidance rather
than a reason for it. Similarly, the output of a robot’s neural network is a cause of the robot’s behavior, rather than
a reason for it. To be autonomous, the robot must generate
reasons for its behavior that can be put to the test ethically.
References

Anderson, M., and Anderson, S. L. 2007. Machine Ethics:
Creating an Ethical Intelligent Agent. AI Magazine 28(4): 15–26.
Anderson, S. L., and Anderson, M. 2011. A Prima Facie Duty
Approach to Machine Ethics: Machine Learning of Features
of Ethical Dilemmas, Prima Facie Duties, and Decision
Principles Through a Dialogue With Ethicists. In Machine
Ethics. M. Anderson, and S. L. Anderson, editors. 476–92.
New York: Cambridge University Press.
Anscombe, G. E. M. 1957. Intention. Oxford, UK: Basil Blackwell.
Beer, J. M.; Fisk, A. D.; and Rogers, W. A. 2012. Toward
a Psychological Framework for Levels of Robot Autonomy in
Human-Robot Interaction. Technical Report. Atlanta, GA: Georgia
Institute of Technology.
Bilgrami, A. 2006. Self-Knowledge and Resentment. Cambridge,
MA: Harvard University Press.
Bostrom, N. 2014. Superintelligence: Paths, Dangers, Strategies.
Oxford, UK: Oxford University Press.
Covrigaru, A. A., and Lindsay, R. K. 1991. Deterministic
Autonomous Systems. AI Magazine 12(3): 110.
Davidson, D. 1963. Actions, Reasons, and Causes. The Journal
of Philosophy 60(23): 685–700. doi.org/10.2307/2023177.
Donagan, A. 1984. Justifying Legal Practice in the Adversary
System. Lanham, MD: Rowman and Allanheld.
Franklin, S., and Graesser, A. 1996. Is It an Agent, or Just
a Program? A Taxonomy for Autonomous Agents. In
International Workshop on Agent Theories, Architectures, and Languages, 21–35. Berlin: Springer.
Hooker, J. N. 2018. Taking Ethics Seriously: Why Ethics Is
an Essential Tool for the Modern Workplace. Abingdon, UK:
Taylor & Francis. doi.org/10.4324/9781315097961.
Hooker, J. N., and Kim, T.-W. 2018. Toward Non-Intuition-Based Machine and Artificial Intelligence Ethics: A Deontological Approach Based on Modal Logic. In Proceedings
of the First Association for the Advancement of Artificial Intelligence (AAAI)/Association for Computing Machinery (ACM)
Conference on Artificial Intelligence, Ethics and Society (AIES).
New York: Association for Computing Machinery.
Huang, H.-M.; Pavek, K.; Ragon, M.; Jones, J.; Messina, E.;
and Albus, J. 2007. Characterizing Unmanned System
Autonomy: Contextual Autonomous Capability and Level
of Autonomy Analyses. In Unmanned Systems Technology IX.
Vol. 6561, 65611N. Bellingham, WA: International Society
for Optics and Photonics.
IEEE. 2018. Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent
Systems. Piscataway, NJ: Institute of Electrical and Electronics Engineers.
Kant, I. 1785. Groundwork of the Metaphysics of Morals.
Akademie edition. Vol. 4, 458. Berlin: Walter de Gruyter.
Korsgaard, C. M. 1996. The Sources of Normativity. Cambridge, UK: Cambridge University Press.
Luck, M., and d’Inverno, M. 1995. A Formal Framework
for Agency and Autonomy. In Proceedings of the First International Conference on Multiagent Systems. 254–60. Cambridge,
MA: The MIT Press.
Luck, M., and d’Inverno, M. 2001. A Conceptual Framework
for Agent Definition and Development. The Computer Journal
44(1): 1–20. doi.org/10.1093/comjnl/44.1.1.
Mueller, E. T. 2016. Transparent Computers: Designing Understandable Intelligent Systems. Scotts Valley, CA: CreateSpace
Independent Publishing Platform.
Nagel, T. 1986. The View from Nowhere. Oxford, UK: Oxford University Press.
Nelkin, D. K. 2000. Two Standpoints and the Belief in
Freedom. The Journal of Philosophy 97(10): 564–76.