If you offer prescribed medication X to patient Y at
the appropriate time, and the patient refuses to take
it, then inform the nursing staff.
The patient insists that she has the right to control what goes into her body and does not wish the
nursing staff to be informed of her refusal. Ignoring
her wishes may seem to be a violation of autonomy,
but it is not, because it neither compels her to take
the medication nor interferes with any other ethical
action plans. Her desire to keep her refusal secret is
not an action plan, ethical or otherwise. It is only a
desire, and the autonomy principle does not require
us to grant a wish simply because someone desires it.
Now suppose that medication X is necessary to
prevent patient Y from becoming disoriented.
The nursing staff confines disoriented patients to the
building, because otherwise they may suffer an
accident on the busy streets outside. If the patient plans
to leave the building while disoriented — perhaps she
has a coherent reason for taking the risk — the
aforementioned instruction violates the autonomy principle.
A modified instruction, however, could pass muster
due to the principle of informed consent:
If you offer prescribed medication X to patient Y at
the appropriate time, the patient refuses to take it, and
the patient autonomously gave informed consent to a
policy of informing the nursing staff of such refusals
when she voluntarily entered the nursing home, then
inform the nursing staff.
Further refinements of the instruction may be necessary in a realistic setting, but we at least have a
fairly precise guide for evaluating its ethical status.
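The modified instruction has the shape of a simple guard condition, which is part of what makes its ethical status precise enough to evaluate. As a minimal sketch (the `Patient` record and its field names are hypothetical, introduced only for illustration), it might be encoded as:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    """Hypothetical record of the facts the instruction conditions on."""
    refused_medication: bool
    consented_to_reporting: bool  # informed consent given on entering the home

def should_inform_nursing_staff(patient: Patient) -> bool:
    """Inform the nursing staff only if the patient refused the medication
    AND autonomously consented in advance to such refusals being reported."""
    return patient.refused_medication and patient.consented_to_reporting
```

Under this encoding, a refusal without prior consent yields no report, which is what the autonomy principle demands.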
Building an Autonomous Machine
A truly autonomous machine formulates action plans
as well as following them. To create an action plan,
the machine must supply the reasons that comprise
the antecedent of the action plan, and those reasons
must be coherent enough to explain why the
resulting action is undertaken. In particular, they
must satisfy the generalization principle and respect
the autonomy of other agents.
Transparency and explainability are therefore essential characteristics of an autonomous machine. If
a machine’s every action must result from an action
plan, then the machine must be reasons-responsive:
it must be able to supply a coherent reason for every
action in order to formulate the action plan. The practical
importance of transparency and explainability in AI
has been much discussed (Mueller 2016; Wortham,
Theodorou, and Bryson 2016a,b). We now see that
it is not only important but bound up in the very
concept of an autonomous agent.
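One way to picture what reasons-responsiveness requires is a plan structure that carries its antecedent reasons alongside the action it licenses. The sketch below is illustrative only; the class and field names are assumptions, not part of the original text:

```python
from dataclasses import dataclass, field

@dataclass
class ActionPlan:
    """An action together with the antecedent reasons offered for it."""
    action: str
    reasons: list[str] = field(default_factory=list)

def is_reasons_responsive(plan: ActionPlan) -> bool:
    """A minimal transparency check: the agent can state at least one
    reason for its action. A real system would also have to test the
    reasons for coherence and against the generalization principle."""
    return len(plan.reasons) > 0
```

The point of the structure is that the reasons are not an afterthought bolted onto the action choice; they are what makes the choice an action plan at all.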
We can also begin to see what kinds of abilities
are required for genuine autonomy. If an autonomous
AI system is to rely on deep learning and neural networks, for example, these networks must deliver
not only action choices but reasons for the actions.
Furthermore, the system must be able to determine
whether the resulting action plans (or more precisely, the overarching plans from which the more
specific plans derive) satisfy the generalization and
other principles. This requires that the system carry
out thought experiments, which in turn rely on its
ability to accumulate beliefs about matters of fact
and assess whether they are rational. For example,
a truly autonomous ambulance must be able to
determine whether it is rationally constrained to
believe that drivers would ignore ambulances if they
all abused the siren and lights.
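The ambulance example can be caricatured as a tiny thought experiment: universalize the reason behind the plan and check whether the action would still achieve its purpose. The model and the trust threshold below are assumptions invented purely for illustration:

```python
def siren_still_effective(fraction_abusing: float) -> bool:
    """Toy model (assumed): drivers yield to sirens only while most
    sirens signal genuine emergencies. Under universal abuse the siren
    carries no information, so the generalized plan defeats its purpose."""
    fraction_genuine = 1.0 - fraction_abusing
    return fraction_genuine > 0.5  # assumed threshold for driver trust
```

A plan to abuse the siren fails this generalization test: with `fraction_abusing` near 1.0 the siren is ignored, so the reason cannot be coherently universalized, whereas honest use passes.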
None of this implies that truly autonomous machines must acquire such human traits as feelings,
sympathy, loyalty, or intellectual curiosity. They need
only exhibit the formal properties of agency. Yet,
as we see, building these properties into a machine
is an extremely daunting challenge. The challenge
may eventually be met, but perhaps only in such
limited domains as driving, household chores, or
certain personal services.
Policy and Standards
Current laws define autonomous systems in terms
of independence. For instance, California Senate
Bill 1298 (Chapter 570), which authorized the
Department of Motor Vehicles to develop regulations
for the testing and operation of autonomous
vehicles, defines “autonomous technology” as
“technology that has the capability to drive a vehicle
without the active physical control or monitoring
of a human operator” and “autonomous vehicle”
as “any vehicle with autonomous technology”
(Division 16.6). Autonomous vehicles defined in this
manner can indeed present a threat to humans, as
discussed at the beginning of this essay.
Most major standards for the safety and ethics of
AI likewise equate autonomy with independence.
For instance, The IEEE Global Initiative on Ethics of
Autonomous and Intelligent Systems endorses the
definition of an autonomous weapon system offered
by the International Committee of the Red Cross:
… a system that can select (that is, search for or detect,
identify, track, select) and attack (that is, use force
against, neutralize, damage or destroy) targets without
human intervention (IEEE 2018, p. 116).
To guard against marauding machines, AI policies
and standards should take account of true autonomy
as well as independence, and make sure that one accompanies the other. Laws can mandate that a code
of ethics be programmed into machines, but to the
extent that the machines are independent, they can
ignore such admonitions.
The tension between autonomy (as popularly
conceived) and ethics can be resolved only through
a unified approach that recognizes the fundamental
connection between the two.