The concept of an action plan also allows for interference when there is informed consent, because the consent in effect becomes part of the action plan.
Suppose a robot performs surgery on me that leads
to complications, thwarting my plan to travel next
month. However, I signed a release that permits surgery, knowing that complications could result. This
modified my action plan for travel, which became,
“If there are no complications from surgery, then
travel next month.” The robotic surgeon therefore
did not interfere with my action plan. So, we have
the principle of informed consent:
Principle of Informed Consent
Interfering with an agent’s action plan is no violation
of autonomy when that agent has given informed
consent to the possibility of interference, and giving
this consent is, itself, a coherent action plan.
Finally, the obligation to respect autonomy does
not forbid interfering with unethical behavior, in
the sense of behavior that violates other ethical
principles, because unethical behavior is not an
exercise of agency in the first place. This leads to
a companion principle, the interference principle:
Interference Principle
Coercion that prevents only unethical behavior does not compromise autonomy.
If your robot companion grabs your arm when
you attempt to steal someone’s smart phone, there
is no violation of your autonomy, because theft is
(normally) ungeneralizable and therefore unethical.
However, if your robot locks you in a closet to prevent
you from writing false numbers on your income
tax form, it violates your autonomy, even though
income tax evasion is unethical. Being locked in a
closet prevents you from performing any number of
ethical actions. A more extensive analysis of when
restraint is justified, based on a concept of joint autonomy, can be found in Hooker (2018).
Building Ethical Machines
Nothing in this essay is meant to imply that autonomous machines can or should be developed.
It argues only that if we move in this direction, we
can make sure the machines are ethical by making
them truly autonomous, as opposed to merely independent of human control. In the meantime,
deontological analysis can be a valuable guide to
building machines that are ethical but not yet autonomous (Hooker and Kim 2018). We need only
apply ethical principles to the human designer of
the machine rather than the machine itself.
It is useful to think about how to design an ethical
machine, because this will help us understand how a
truly autonomous machine must be structured. First,
for us to apply ethical principles, the machine must
be ultimately governed by action plans; that is, by
if-then rules that instruct the machine to perform
certain actions in certain circumstances. The antecedent (if-part) of a rule is interpreted as the reason for the action, and the ethical tests are applied to that reason. If we are to apply the tests properly, the antecedent
must capture the true reason for the action, in full
generality. Suppose, for example, that a self-driving
ambulance is instructed to use sirens and lights in
a medical emergency. This is acceptable, but the
designer has inserted additional instructions of the form:
If a patient needs nonemergency transport from location X to location Y between 9 and 10 AM, and if using siren and lights would result in faster delivery, then give the patient a ride with siren and lights using route Z.
There are instructions for each pair of locations
and each time of day because different routes are
optimal in each case. Nonemergency use of siren
and lights (to save time) violates the generalization
principle, because if it were generalized, other drivers would simply ignore ambulances, and the siren and lights would not save time. Yet each of the instructions is generalizable because the conditions
are so specific that they apply only occasionally and
would have no effect on the behavior of other drivers. The problem is that the scope of the antecedent
is too narrow. The real reason the sirens and lights
are to be used is to transport patients more rapidly.
The specific instructions are derived from the general action plan:
If a patient needs nonemergency transport, and if
using siren and lights would result in faster delivery,
then give the patient a ride using siren and lights
along the optimal route.
This is the action plan that must be subjected to
ethical scrutiny, and it is not generalizable.
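This contrast can be made concrete in code. The following Python sketch is purely illustrative: the rule representation, the simulated trips, and the 5 percent usage threshold are assumptions introduced here, not anything prescribed by the analysis above. It shows how a naive generalization test is satisfied by the narrow antecedent yet failed by the general plan that states the true reason.

import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class ActionPlan:
    # An if-then rule; the antecedent states the reason for the action.
    name: str
    antecedent: Callable[[dict], bool]
    action: str

def passes_naive_generalization_test(plan, trips, threshold=0.05):
    # Hypothetical test: sirens keep saving time only if the plan fires
    # rarely enough that other drivers still take sirens seriously.
    fired = sum(plan.antecedent(t) for t in trips)
    return fired / len(trips) <= threshold

random.seed(0)
trips = [{"emergency": False,
          "origin": random.choice("WXYZ"),
          "dest": random.choice("WXYZ"),
          "hour": random.randrange(24),
          "siren_is_faster": True}
         for _ in range(10_000)]

narrow = ActionPlan(
    "route-Z instruction",
    lambda t: t["origin"] == "X" and t["dest"] == "Y" and 9 <= t["hour"] < 10,
    "give the patient a ride with siren and lights using route Z")

general = ActionPlan(
    "true reason",
    lambda t: not t["emergency"] and t["siren_is_faster"],
    "ride with siren and lights along the optimal route")

for plan in (narrow, general):
    print(plan.name, passes_naive_generalization_test(plan, trips))

The narrow rule fires on a tiny fraction of trips and passes the test; the general plan fires on every trip and fails. This is precisely why scrutiny must be applied to the general plan rather than to its narrowly conditioned instances.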
AI systems frequently use multilayer neural networks to select actions. This might be captured in
an action plan like
If a neural network of a certain architecture, trained
in a certain way on a certain data set, indicates that a
patient should be transported using siren and lights,
then transport the patient using siren and lights.
To check this action plan for generalizability, the designer must investigate the results of operating all ambulances as dictated by the neural network. This could be difficult to assess, owing to the nontransparency of the network. Nonetheless, questions of this sort must be answered if deep learning is to provide
an ethical basis for machine behavior.
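When the antecedent is buried inside a network, an empirical probe of this kind is the only option. The sketch below is again hypothetical: policy_net stands in for a trained model, and the threshold it applies is an assumption made for illustration. It shows the fleet-wide question the designer must answer.

import random

def policy_net(trip):
    # Stand-in for a trained network (hypothetical): it recommends siren
    # and lights whenever it predicts more than two minutes saved.
    return trip["predicted_minutes_saved"] > 2.0

random.seed(1)
fleet_trips = [{"predicted_minutes_saved": random.uniform(0.0, 10.0)}
               for _ in range(10_000)]

# The network is opaque, so its antecedent cannot be read off and
# inspected; the designer can only probe the behavior it induces when
# every ambulance follows it.
usage = sum(policy_net(t) for t in fleet_trips) / len(fleet_trips)
print(f"fleet-wide siren usage: {usage:.0%}")

A high usage rate means that, generalized across the fleet, other drivers would learn to ignore sirens, and the plan would fail the generalization test even though each individual recommendation looks reasonable.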
Instructions must also be evaluated with respect to
whether they violate the autonomy of other agents.
To adapt an example from Anderson and Anderson
(2007, 2011), suppose a robotic assistant in a nursing
home administers medication to patients. It is given