Nietzsche once said that “there are no moral phenomena at all, only a moral interpretation of phenomena.” That insight has implications for how we should see
ethical machines and ourselves. The political philosopher
John Gray has argued (2002) that we have little or no insight
into how we take decisions, moral or otherwise, and a great
deal of modern psychology agrees with him.
With the rise of machine learning as the core AI paradigm,
we are getting used to the idea that we do not know how our
programs make decisions either; hence the rise of research in
explainable AI (XAI), and the DARPA program devoted to producing it.
The European Commission has legislated a requirement (the
General Data Protection Regulation, 2016/679) specifying that deployed machine-learning systems must explain their decisions. The Commission
has done this even though no one knows how to provide
what they are requiring. What would follow if we and
machines were in roughly the same position with respect to
the transparency of our ethical decision-making?
I want to reintroduce the notion of orthosis into ethical
explanation: medically, an orthosis is an externally applied
device designed and fitted to the body to aid rehabilitation,
and usually contrasted with a prosthesis, which replaces a
missing part, like a foot or leg. Here, it will mean an explanatory software agent associated with a human or machine.
Could such an orthosis explain our own ethical behavior to
us, as well as that of machines?
Gray’s starting point is that professional discussions of ethical decision-making have little or nothing to do with how
humans or animals actually seem to act. He believes they act
simply “like machines” (and he means that in a positive
sense). For Gray, we do not calculate ethical rules or consequences before acting, as the ethics textbooks tend to assume
— and so neither should machines, he might have added. He
may be right about the conscious processes of humans in
Moral Orthoses: A New Approach to Human and Machine Ethics

I argue that both human and machine actions are more opaque than is generally realized and that the actions of both require explanation that an ethical orthosis might provide, as aspects of artificial Companions for both human and machine actors. These explanations might well be closer to ethical accounts based on moral sentiment or emotion, in the tradition of the primacy of sentiment over reason in this area of human and machine action.