product that, with just 10 minutes of audio,
can exactly replicate a person’s voice in limitless artificial audio (Carter, Kinnucan, and Elliot 2018).
16. Notably, China has shifted its strategic focus from yesterday’s informatized warfare to tomorrow’s intelligentized warfare, for which AI will be critical (Kania 2017). Russia has already demonstrated its willingness to engage in information warfare (Floridi and Taddeo 2014) during the 2016 US presidential election, as well as its ability to target more than 10,000 Twitter users in the US Defense Department (Calabresi 2017).
Airbus. 2017. A Statistical Analysis of Commercial Aviation Accidents 1958–2016. Annual Investigative Report. Blagnac Cedex, France: AIRBUS S.A.S. flightsafety.org/wp-
Allen, G., and Chan, T. 2017. Artificial Intelligence and National Security. A US Intelligence Advanced Research Projects Activity Study. Cambridge, MA: Belfer Center for Science and International Affairs, Harvard Kennedy School.
Amato, F.; López, A.; Peña-Méndez, E. M.; Vanhara, P.; Hampl, A.; and Havel, J. 2013. Artificial Neural Networks in Medical Diagnosis. Journal of Applied Biomedicine 11(2):
Anderson, J. M., and Matsumura, J. M. 2015.
Civilian Developments in Autonomous Vehicle Technology and Their Civilian and Military Policy Implications. In Autonomous Systems: Issues for Defence Policymakers, edited by
Andrew P. Williams and Paul D. Scharre, 127–
48. Technical Report AD10110077. The
Hague, Netherlands: NATO Communications
and Information Agency.
Angwin, J.; Larson, J.; Mattu, S.; and Kirchner, L. 2016. Machine Bias. ProPublica, May
23, 2016. www.propublica.org/article/
Baillie, J. C. 2016. Why AlphaGo Is Not AI.
IEEE Spectrum, March 17, 2016. spectrum.
Baker, M. 2016. 1,500 Scientists Lift the Lid
on Reproducibility. Nature 533(7604): 452–
54. doi.org/10.1038/533452a.
Banko, M., and Brill, E. 2001. Scaling to Very
Very Large Corpora for Natural Language
Disambiguation. In Proceedings of the 39th
Annual Meeting on Association for Computational Linguistics, 26–33. San Francisco: Morgan Kaufmann.
Beeby, D. 2018. Liberal Government Looks
to Update Fight Against Online Child Porn.
CBC News, January 10, 2018. www.cbc.ca/
Bogost, I. 2015. The Cathedral of Computation. The Atlantic, January 15, 2015. www.
Booth, S.; Tompkin, J.; Pfister, H.; Waldo, J.;
Gajos, K.; and Nagpal, R. 2017. Piggybacking Robots: Human-Robot Overtrust in University Dormitory Security. In Proceedings of
the 2017 ACM/IEEE International Conference
on Human-Robot Interaction, 426–34. New
York: Association for Computing Machinery. doi.org/10.1145/2909824.3020211.
Brundage, M.; Avin, S.; Clark, J.; Toner, H.;
Eckersley, P.; Garfinkel, B.; Dafoe, A.;
Scharre, P.; Zeitzoff, T.; Filar, B.; et al. 2018.
The Malicious Use of Artificial Intelligence:
Forecasting, Prevention, and Mitigation.
Workshop Report. arXiv preprint arXiv:1802.07228 [cs.AI]. Oxford, UK: Future of Humanity Institute; Centre for the Study of Existential Risk; Centre for the Future of Intelligence.
Buchanan, B., and Miller, T. 2017. Machine
Learning for Policymakers: What It Is and
Why It Matters. The Cyber Security Project.
Cambridge, MA: Harvard Kennedy School,
Belfer Center for Science and International Affairs.
Buolamwini, J., and Gebru, T. 2018. Gender
Shades: Intersectional Accuracy Disparities in
Commercial Gender Classification.
Proceedings of Machine Learning Research 81: 1–15.
Calabresi, M. 2017. Inside Russia’s Social
Media War on America. Time, May 18, 2017.
Campolo, A.; Sanfilippo, M.; Whittaker, M.;
and Crawford, K. 2017. AI Now 2017 Report.
Edited by A. Selbst and S. Barocas. New
York: New York University, AI Now Institute. assets.contentful.com/8wprhhvnpfc0/
Canada’s National Statement. 2016. Presented at the Experts Meeting on Lethal Autonomous Weapons Systems, Convention on Certain Conventional Weapons (CCW). Geneva, Switzerland, April 11–15.
Carter, W. A.; Kinnucan, E.; and Elliot, J.
2018. A National Machine Intelligence Strategy
for the United States: A Report of the CSIS Technology Policy Program. Washington, DC: Center for Strategic and International Studies.
7. See the Montreal Declaration on the
Responsible Development of Artificial Intelligence, produced at the 2017 Forum on the
Socially Responsible Development of Artificial Intelligence, nouvelles.umontreal.ca/
8. See deepmind.com/applied/deepmind-ethics-society.
9. See www.partnershiponai.org.
10. We have chosen to use a broad definition of ethics because evidence-informed policymaking also has a broad base, considering elements of societal and political pressures, resources, safety and security, and
11. Machine autonomy exists on a spectrum; our definitions of autonomy adopt the following approach. Semiautonomous, or human in the loop, indicates that a weapons system waits for human command and permission before taking action. Supervised autonomy, or human on the loop, refers to systems that may track, target, and act defensively, but that are supervised by humans who can monitor and, if necessary, intervene in the weapon’s operation; an example is the Phalanx Close-In Weapons System, which is used to defend ships against incoming enemy missiles. Full autonomy, or human out of the loop, refers to cases in which human input activates a weapon that then selects and engages targets without further operator intervention, as with the Harpy drone. Full autonomy based on AGI refers to LAWS. While it is accurate to say that a number of weapon systems in existence today can perform independent actions, these systems act in accordance with a defined rule set based on complex sensor input, and thus would be better described as automated.
12. Other technologies we have addressed in brief overviews include object recognition, facial recognition, and gait recognition; using AI to monitor mental health; sentiment analysis; AI for dis/misinformation; robotic casualty evacuation; robotic telesurgery; robotics and sensors for IED, explosive, and chemical detection; and
13. Please note that our research supports
future policy development, which is why we
included a “policy implications” category.
14. Canada officially supports the term “appropriate human involvement” (Canada’s National Statement 2016), introduced in 2016 as a bridge between the terms “meaningful human control” and “appropriate human judgment.”
15. In 2017, Adobe demonstrated a new