learning was that the models were developed from
poor data. As another example, AIS can be deliberately
spoofed. Furthermore, one could imagine an adversarial attack on AIS data (for example, Kessler, Craiger,
and Haass 2018). In such a case, AI should not be allowed
to authorize itself to make a decision even after concluding that a vessel was a potential threat based on
apparent anomalies in its kinematic patterns, because
those anomalies could have been planted by an adversary.
As of now, facial recognition technology works better
for men than for women and for people of lighter complexion than for people of color (Galston 2018). The
danger lies in false positives infected with these systematic
biases; they must not be ignored. Giving AI-enabled systems too much authority too soon is an overreach
that may impair the rights of citizens. Technology
companies fear a backlash from overreaching; that
is one reason they are developing codes of ethical behavior. The uneven application of these codes,
however, leaves a role for new laws and for government oversight.
HAL 9000, devised by the science fiction author
Arthur C. Clarke and later brought
to the screen by director Stanley Kubrick in
2001: A Space Odyssey, is a great example of a fictional
system that takes control
from a human but fails to protect human life because
of unintended consequences (Hintze 2017). In many
complex systems — the RMS Titanic, NASA’s space
shuttle, the Chernobyl nuclear power plant, the two
Boeing 737 Max planes — engineers stack together
one layer after another of various components.
In these and other cases, the engineers may have
known how each component of a system worked individually, but they did not understand well enough how all
the subsystems worked together. The result was complex systems that could never be fully understood
and that could fail in unexpected ways. In each
of these and many other tragedies — a sunken ship,
two shuttles lost, radioactive contamination spread
across Europe and Asia, two planes falling from the
sky — a set of relatively small system failures combined to create a catastrophe.
Summary for the Con Bet
When human lives are at stake, it would be ideal
to have a system that rescues and safeguards them
until authorities can take over, but that remains a
dream for the future. The biases in machine-learning
software, the common lack of quality data, and the
unexpected and spontaneous steps that an autonomous
system may take likely preclude human designers
from building a machine that can self-authorize taking responsibility from its human operator in a
noncontrived, nonacademic setting over the next
References
Bollacker, K.; Paritosh, P.; and Welty, C. 2018. The AI Bookie.
Place Your Bets: Adversarial Collaboration for Scientific
Advancement. AI Magazine 39(4): 84–87. doi.org/10.1609/
French Civil Aviation Safety Investigation Authority. 2016.
Final Report. Accident on March 24, 2015 at Prads-Haute-Bléone (Alpes-de-Haute-Provence, France) to the Airbus
A320-211 Registered D-AIPX Operated by Germanwings.
Paris: Government of France.
Casem, G. 2018. F-35s Begin Auto GCAS Test Flights.
Washington, DC: US Air Force Public Affairs.
Castelvecchi, D. 2019. AI Pioneer: The Dangers of Abuse
Are Very Real. Nature News Q&A (April 4). doi.org/10.1038/
Coats, D. R. 2019. Statement for the Record: Worldwide
Threat Assessment of the US Intelligence Community. Washington, DC: US Senate Select Committee on Intelligence.
Frangoul, A. 2019. Volvo to Put Cameras and Sensors in Its
Cars to Tackle Drunk Driving. CNBC (March 21). www.cnbc.
Galston, W. A. 2018. Why the Government Must Help Shape
the Future of AI. Washington, DC: Brookings Institution.
Hamada, S.; Yancey, K. G.; Pardo, Y.; Gan, M.; Vanatta, M.;
An, D.; Hu, Y.; Derrien, T. L.; Ruiz, R.; Liu, P.; Sabin, J.; and
Luo, D. 2019. Dynamic DNA Material with Emergent Locomotion Behavior Powered by Artificial Metabolism. Science
Robotics 4(29): eaaw3512.
Hintze, A. 2017. What an Artificial Intelligence Researcher
Fears about AI. The Conversation (July 13). theconversation.com/
Insurance Information Institute. 2019. Insurance Coverage for
Nuclear Accidents. New York: Insurance Information Institute.
International Maritime Organization. 2019. AIS Transponders.
London, UK: International Maritime Organization.
Kessler, G. C.; Craiger, J. P.; and Haass, J. C. 2018. A Taxonomy Framework for Maritime Cybersecurity: A Demonstration Using the Automatic Identification System. TransNav:
International Journal on Marine Navigation and Safety of Sea
Transportation 12(3). doi.org/10.12716/1001.12.03.01
Krisher, T. 2018. New Cars Are Quickly Getting Self-Driving
Safety Features. Phys.org (March 27). phys.org/news/2018-
Lawless, W. F.; Mittu, R.; Sofge, D.; and Russell, S., editors.
2017. Autonomy and Artificial Intelligence: A Threat or Savior?
New York: Springer. doi.org/10.1007/978-3-319-59719-5
Lawless, W. F.; Mittu, R.; Sofge, D. A.; and Hiatt, L. 2019. Artificial
Intelligence, Autonomy, and Human-Machine Teams: Interdependence, Context, and Explainable AI. AI Magazine 40(3).
Lemoine, C. W. 2009. What Exactly Is Auto GCAS? Fighter
Lipton, Z. C., and Steinhardt, J. 2019. Troubling Trends in
Machine Learning Scholarship. ACM Queue.
Magnuson, S. 2019. Hypersonic Jet Project Reaches Major
Milestone. National Defense (April 11). www.national-
Mishra, S. 2019. Could Unmanned Underwater Vehicles
Undermine Nuclear Deterrence? The Strategist (May 8).