AI MAGAZINE
The global race to develop artificial intelligence (AI) — systems that imitate aspects of human cognition — is likely to accelerate the development of highly capable, application-specific AI systems with national security implications. … AI-enhanced systems are likely to be trusted with increasing levels of autonomy and decision-making, presenting the world with a host of economic, military, ethical, and privacy challenges. Furthermore, interactions between multiple advanced AI systems could lead to unexpected outcomes that increase the risk of economic miscalculation or battlefield surprise.
In the following bet, we hope readers note that we are keenly aware of what is at stake: the risk if we scientists oversell the value of AI while ignoring its dangers, and the greater risk of sitting on the sidelines and not participating in the race described by Coats. At the same time, however, we recognize that at least one of our arguments is likely to be flawed; thus, we also welcome readers' comments, their clarifications — and their side bets, too.
The Bet
Within 5 years from the publication of this bet, humans will permit AI-enabled systems to self-authorize taking responsibility from their human operator in a noncontrived, nonacademic setting.
Adjudication Criteria
The real issue here is whether the machine can, or will, be permitted to take control against the will of the human operator. Evidence in favor of the bet includes recent implementations of AI assistance in commercial systems, such as lane assist and emergency braking in automobiles for distracted or errant drivers, and the Automatic Ground Collision Avoidance System (Auto-GCAS) for regaining control of military aircraft from unconscious fighter pilots. If an autonomous AI system could detect malevolent intent on the part of the human operator, for example, it could take control away from a suicidal and homicidal copilot, such as the copilot who committed suicide and mass murder in 2015 by crashing his Germanwings airliner, killing all aboard. The distinction to be made here is whether, within the next 5 years, an AI system will be designed to take control against the will of the human. With Auto-GCAS, the assumption is that the human fighter pilot is unconscious, disabled, or otherwise unable to fly the plane safely. Were the pilot able, the assumption is that the pilot would prefer not to crash the plane, avoiding the loss of his or her own life and possibly the lives of others. In a remarkable scene in 2001: A Space Odyssey, the deviant computer HAL 9000, when asked to open the pod bay doors, responded, “I’m sorry Dave, I’m afraid I can’t do that.” Many new vehicles have automatic steering correction built in, but the human can easily counter the motion if desired. In the case of the Germanwings crash, if the AI or autopilot had taken control away from the copilot to save lives, then such a system would address the key requirement: taking control to counter a malicious (or intentionally ignorant) human operator. A single example of an AI taking control from a human operator in a noncontrived, nonacademic setting to save lives will settle the bet in favor of the pro side; the lack of such an example will settle the bet in favor of the con side.
Pro bet: Adjudication criteria accepted.
Con bet: Adjudication criteria accepted.
For: W. F. Lawless
Living electromechanical entities, known as humans,
are at the beginning stages of teaming with mobile
electromechanical entities, known as machines or
robots. Humans, not machines, are the primary cause
of accidents. Humans, not machines, get distracted,
drowsy, inebriated, angry, suicidal …. In my view, as
part of a team, machines are more likely to save rather
than threaten human lives (Lawless et al. 2017). But
based on the adjudication criteria established by the
referee, will we humans allow machines to override
a willful human operator intent on harming others?
Before answering that question, I review what humans are doing now; afterward, I briefly consider accountability for the consequences of a machine acting against the will of its human-operator teammate.
Automobiles
AI integrated into the electromechanical systems of
cars is already helping humans with lane assist, predictive maintenance, insurance claims, and manufacturing. AI is protecting or saving human lives by
detecting drowsiness (Novosilska 2018), providing
emergency braking, and adjusting speed in construction zones (Krisher 2018). Moreover, after the National Highway Traffic Safety Administration reported that 10,874 deaths occurred in the United States because of drunk driving in 2017, Volvo announced that it is designing cars that limit speed or park in a safe place “to intervene if a clearly intoxicated or distracted driver does not respond to warning signals and is risking an accident involving serious injury or death” (Frangoul 2019).
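The escalating intervention Volvo describes can be sketched as a simple decision policy. The states, thresholds, and function names below are hypothetical illustrations for this bet's purposes, not Volvo's actual design:

```python
from enum import Enum, auto

class Action(Enum):
    NONE = auto()
    WARN = auto()
    LIMIT_SPEED = auto()
    PARK_SAFELY = auto()

def intervene(impaired: bool, warnings_ignored: int) -> Action:
    """Escalating response to an impaired (intoxicated or distracted) driver.

    Hypothetical sketch: warn first; if a warning is ignored, limit speed;
    if the driver still does not respond, bring the car to a safe stop.
    """
    if not impaired:
        return Action.NONE
    if warnings_ignored == 0:
        return Action.WARN
    if warnings_ignored == 1:
        return Action.LIMIT_SPEED
    return Action.PARK_SAFELY
```

The point of the sketch is the last branch: the system's final action overrides the driver's expressed intent rather than merely advising.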
US Air Force Fighter Planes
Before Auto-GCAS was deployed to save pilot lives, there were dozens of crashes caused by factors ranging from G-induced loss of consciousness to cockpit decompression, hypoxia, and spatial disorientation; since deployment, these systems have saved lives (Lemoine 2009). In the case of a probable ground collision of the new F-35, Auto-GCAS activates, takes control from the pilot, and returns the plane to a safe altitude and attitude until the pilot recovers (Casem 2018).
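The activate-recover-return behavior just described can be sketched as one decision step of a control loop. This is not the actual Auto-GCAS algorithm, which relies on terrain models and trajectory prediction; the single time-to-impact threshold and the function name here are stand-in assumptions:

```python
def autogcas_step(time_to_impact_s: float, pilot_responding: bool,
                  under_ai_control: bool, safe_attitude: bool) -> str:
    """One decision step of a hypothetical ground-collision-avoidance loop.

    Sketch only: a fixed time-to-impact threshold stands in for the real
    system's terrain-referenced trajectory prediction.
    """
    THRESHOLD_S = 5.0  # assumed last-moment recovery window
    if under_ai_control:
        # Hand back control only once the aircraft is safe and the pilot recovers.
        if safe_attitude and pilot_responding:
            return "return_control"
        return "fly_recovery"
    if time_to_impact_s < THRESHOLD_S and not pilot_responding:
        return "take_control"  # override the incapacitated pilot
    return "monitor"
```

Note that the override branch fires only when the pilot is unresponsive; a system that settled this bet's pro side would have to fire it against a responsive, willful pilot.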
What about a willful, malicious human operator?
In 2015, a Germanwings airliner was flown into the
ground by its copilot, who committed suicide and
killed all 150 aboard (French Civil Aviation Safety