Zell and Krizan [2014]). Putting aside that psychological tests can be financially lucrative, personality and other tests have been discredited as invalid, yet they have not been discontinued: for example, the Myers–Briggs personality test, the most lucrative test in psychology (Emre 2018); the implicit association test, used to determine implicit racism (Blanton 2009), even though implicit racism has proved resistant to treatment (Jussim 2017); and self-esteem tests, used in schools for decades (Baumeister et al. 2005).
The Future Determination of Shared Context in Real Time with Human-Machine Teams
The US Department of Defense is shifting to real-time operations, which makes the computational determination of context critical. Based on the RAND Corporation’s account of Stanislav Petrov, the Soviet officer who in 1983 judged a satellite warning of incoming missiles to be a false alarm, the biggest concern with AI is its use to determine the context of a nuclear confrontation when humans are not in the loop, a so-called Skynet situation (Lawless 2018). Of concern in RAND’s recent analysis is that AI systems may undermine the stability between nations and make catastrophic war more likely (see, for example, the work of Geist and Lohn [2018]).
China has demonstrated swarm intelligence algorithms that enable drones to hunt in packs. Russia
has announced plans for an underwater drone that
could guide itself across oceans to deliver a nuclear
warhead powerful enough to vaporize a major city.
Adding urgency to the determination of context in
real time, China and Russia have announced the
addition of hypersonic missiles to their military arsenals. Despite this urgency, “Americans seem generally
complacent about the dominance of their armed
forces ... creating a crisis of national security” (Edelman
and Roughead 2018, p. vi).
As with Uber and its vehicle operator in the death of a pedestrian (National Transportation Safety Board 2018), where the machine worked but the human operator failed, what if in future situations contexts change more rapidly than humans can process, so that at some point AI systems alone must determine context? Woo (2018), for example, notes that quicker, human-reflex-like responsiveness is thought to be likely with 5G.
How can we arrange human-machine teams to make the best possible decisions in real time, not only to protect national defense, to respond to medical emergencies, or to warn other cars while riding inebriated in a self-driving car, but also to accomplish these tasks more productively, efficiently, and safely than now? For example, can user interventions improve the learning of context for autonomous machines operating in unfamiliar environments or experiencing unanticipated and rapid events? Can autonomous machines be taught to explain contexts to humans by reasoning, with inferences about causality, and with decisions that rely on comprehensible explanations (Kambhampati 2018)? And for mutual context, can AI machines interdependently affect human awareness, teams, and society, and how might these machines be affected in turn? In short, in real time, can situational awareness of context be mutually constructed, mutually shared, and mutually trusted among machines and humans, and thus be productive, safe, efficient, and a benefit to society?
To address these questions, we need to know more about the effects of interdependence, which Jones (1998, p. 33) said characterized social interaction but was bewildering theoretically. Nonetheless, our knowledge about interdependence is growing (Lawless 2017a; Lawless 2017b). Interdependence not only determines context (Lawless et al. 2018); it is also a social state that is very sensitive to changes in context, exemplified by the instability that arises while two adversaries angrily press the two sides of a story, and by the determined context that follows once the adversaries compromise. (See, for example, the bipartisan legislation passed overwhelmingly in response to the 2018 nuclear posture review [Mattis 2018; Payne 2018].) The universal motivation is for convergence to a single story (however, removing an alternative interpretation increases uncertainty and risk [Lukianoff and Haidt 2018]) and for nonfactorability: for example, the struggle to write a successful screenplay that dramatizes a courtroom scene, to direct a winning political battle, or to describe an engineering innovation protectively in a patent.
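As a minimal sketch in notation of our own (an illustration of the term, not a formal model drawn from the sources cited above), nonfactorability means that the joint state of an interdependent team cannot be factored into a product of independent member states:

\[
\Psi_{\mathrm{team}}(A, B) \neq \psi(A)\,\psi(B)
\]

That is, no assignment of separate, independent states to members A and B reproduces the behavior of the interdependent whole, just as the elements of a successful screenplay, campaign, or patent resist being separated without loss.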
According to the National Academy of Sciences (Cooke and Hilton 2015), teams are interdependent, and the best teams are highly interdependent (Cummings 2015). Interdependence is associated with innovation (Lawless 2017a; Lawless 2017b). However, to maintain a state of interdependence, a leader must train or quickly replace poorly performing team members (Hackman 2011), such as when Verizon removed the architect of its struggling online-advertisement business (FitzGerald and Ramachandran 2018); keep a complex technology composed of numerous parts fully integrated, such as the US Army’s recent successful missile-defense system (Freedberg 2018); reduce uncertainty by sustaining an active competition among self-interested, two-sided perspectives, not only to reach the best decisions (for example, the “informed assessment of competing interests”4) but also to reduce human error (Lawless et al. 2017); and, finally, keep a team focused on collecting and analyzing the objective and statistical evidence that guides the search for vulnerabilities in a team and its opponents without becoming overly confident (Massey and Thaler 2005), such as when an overconfident CBS was defeated by Viacom (James 2018).
Theoretically (Lawless 2017b), it has been difficult
to explain why information obtained from observing the performance of the best teams seldom generalizes (for example, even veteran movie studios
with past successes can fail at the box office with a
movie sequel [Fritz 2017]). One reason is that, mathematically, by reducing the degrees of freedom, the