4 AI MAGAZINE
The first article to be published is an interview
with the current president of the Association for the
Advancement of Artificial Intelligence, Yolanda Gil,
whose long view on successful AI research helps us
see beyond the recent excitement around AI and
appreciate both how the AI community is evolving and
what has remained constant. We are inspired by Gil’s
approaches to thinking big and measuring progress,
and hope that our readers will be as well.
Meanwhile, the field of AI is facing a reproducibility crisis (Hutson 2018). Producing reproducible research is a cornerstone for making scientific progress
and is arguably an essential measure of successful research. Computer scientist Odd Erik Gundersen has
performed an assessment of this crisis in the AI community based on research presented at top AI conferences and proposes a methodology for addressing
this crisis (Gundersen and Kjensmo 2018). His conclusion is that “We are not standing on each other’s
shoulders. It is more like we are standing on each
other’s feet. The quality of documentation of empirical AI research must clearly improve.” Thankfully,
he also includes a discussion of potential barriers
to reproducible research, allowing each of us to reflect on the potential role that we can play in overcoming them.
It would be remiss to overlook the role that government plays in enabling successful applications of
AI. Therefore, we are also publishing an interview
with Arvind Gupta on the Indian government’s investment in AI innovations for business and social
causes. Gupta has over two decades of experience in
leadership, policy, and entrepreneurial roles, in both
Silicon Valley and India. His interview provides
insights on how the Indian government defines
success and its plans for achieving it.
Additionally, in a subsequent issue, Aaron Mannes,
a senior policy advisor to the U.S. Department of
Homeland Security, will share his perspective on
how governments can use hard and soft governance
methods for preventing harmful AI research from
being conducted or deployed, while also protecting
useful AI research from the blocking effects of unfounded public fear.
For researchers in search of the twin-win, forming
partnerships between industry and academe may be
part of the answer. In light of the rapid developments
in AI technology and the equally dynamic business
climate, the barriers between where technologies are
developed and where they are deployed need to be
lowered. However, forming effective partnerships is
far from straightforward. A careful consideration of
incentives, sustainability, and market forces is particularly crucial in AI research today, when there is a
tremendous uptick in the investment of resources in
AI research, across academe and industry (Columbus
2019). Based on first-hand knowledge, Lisa Amini
and her co-authors Ching-Hua Chen, David Cox,
Aude Oliva, and Antonio Torralba, in industry and academe, will reflect on three large academe-industry
initiatives; their article is intended to help spawn a
dialog on the motivations that drive such collabora-
tions, and the execution challenges that shape their
designs. Importantly, their experiences reveal the nu-
anced decision-making around designing organiza-
tional structures for promoting successful AI research.
While Aaron Mannes’s article articulates how
political governance can influence the success of AI
research endeavors, Jeanna Matthews describes a set
of antipatterns of behavior that AI researchers engage
in that increase the risk of harm from AI. Both
Mannes and Matthews drive home the message that
all those involved in creating and deploying AI technologies need to feel more personally accountable
for ensuring that the AI is used responsibly.
We find the phrase be careful what you ask for to be
especially pertinent in AI research, as we invent technologies that are designed to automatically do what
we ask. In this issue, we hope that the collection of
articles and interviews devoted to this topic is educational, thought-provoking, and even controversial.
We don’t expect the AI community to ever converge
around a universal notion of success; however, we
hope to see a continuous and healthy discussion
around this topic.
Columbus, L. 2019. 10 Charts That Will Change Your Perspec-
tive on Artificial Intelligence’s Growth. Forbes.com. Accessed
on January 12, 2019. www.forbes.com/sites/louiscolumbus/
Gundersen, O. E., and Kjensmo, S. 2018. State of the Art:
Reproducibility in Artificial Intelligence. Proceedings of the
Thirty-Second AAAI Conference on Artificial Intelligence, 1-8.
Palo Alto, CA: AAAI Press. www.aaai.org/ocs/index.php/
Hutson, M. 2018. Artificial Intelligence Faces Reproducibility
Crisis. Science 359(6377): 725-6. science.sciencemag.org/
Shneiderman, B., and Hendler, J. 2017. It’s the Partnership,
Stupid. Issues in Science and Technology 33(4): Summer.
Ching-Hua Chen is a research staff member at the T.J.
Watson Research Center in Yorktown Heights, New York.
She manages the Health Behavior and Decision Science
group within the Center for Computational Health. She
graduated from Penn State University with a dual-title PhD
in Business Administration and Operations Research.
Jim Hendler is the Tetherless World Professor of Computer,
Web and Cognitive Sciences at Rensselaer Polytechnic Institute. He was the recipient of the 2017 Association for the Advancement of Artificial Intelligence Distinguished Service
Award and is a fellow of the Association for the Advancement of
Artificial Intelligence, Association for Computing Machinery,
Institute of Electrical and Electronics Engineers, and the
National Academy of Public Administration.
Sabbir Rashid is a graduate student working with Deborah
McGuinness at Rensselaer Polytechnic Institute on research
related to the semantic web. Rashid has contributed to technologies involving data annotation and harmonization,