Hoffman, R. R.; Mueller, S. T.; and Klein, G. 2017. Explaining
Explanation, Part 2: Empirical Foundations. IEEE Intelligent
Systems 32(4): 78–86. doi.org/10.1109/MIS.2017.3121544
Hu, R.; Andreas, J.; Rohrbach, M.; Darrell, T.; and Saenko, K.
2017. Learning to Reason: End-to-End Module Networks for
Visual Question Answering. In Proceedings of the IEEE International Conference on Computer Vision, 804–13. New York: IEEE.
Huang, S. H.; Bhatia, K.; Abbeel, P.; and Dragan, A. 2018.
Establishing Appropriate Trust via Critical States. Presented at the 13th Annual ACM/IEEE International
Conference on Human-Robot Interaction Workshop on
Explainable Robot Behavior. Madrid, Spain, October 1–5.
Kim, J., and Canny, J. 2017. Interpretable Learning for
Self-Driving Cars by Visualizing Causal Attention. In
Proceedings of the International Conference on Computer Vision, 2942–50. New York: IEEE. doi.org/10.1109/ICCV.
Klein, G. 2018. Explaining Explanation, Part 3: The Causal
Landscape. IEEE Intelligent Systems 33(2): 83–88. doi.org/10.
Letham, B.; Rudin, C.; McCormick, T. H.; and Madigan, D.
2015. Interpretable Classifiers Using Rules and Bayesian
Analysis: Building a Better Stroke Prediction Model. Annals of
Applied Statistics 9(3): 1350–71. doi.org/10.1214/15-
Marazopoulou, K.; Maier, M.; and Jensen, D. 2015. Learning
the Structure of Causal Models with Relational and Temporal
Dependence. In Proceedings of the Thirty-First Conference on
Uncertainty in Artificial Intelligence, 572–81. Association for
Uncertainty in Artificial Intelligence.
Miller, T. 2017. Explanation in Artificial Intelligence: Insights
from the Social Sciences. arXiv preprint. arXiv:1706.07269v1
[cs.AI]. Ithaca, NY: Cornell University Library.
Park, D. H.; Hendricks, L. A.; Akata, Z.; Rohrbach, A.; Schiele,
B.; Darrell, T.; and Rohrbach, M. 2018. Multimodal Explanations: Justifying Decisions and Pointing to the Evidence.
In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition. New York: IEEE. doi.org/10.1109/CVPR.
Pfeffer, A. 2016. Practical Probabilistic Programming.
Greenwich, CT: Manning Publications.
Qi, Z., and Li, F. 2017. Learning Explainable Embeddings for
Deep Networks. Paper presented at the NIPS Workshop on
Interpreting, Explaining and Visualizing Deep Learning.
Long Beach, CA, December 9.
Ramanishka, V.; Das, A.; Zhang, J.; and Saenko, K. 2017. Top-Down Visual Saliency Guided by Captions. In Proceedings of the
30th IEEE Conference on Computer Vision and Pattern Recognition,
7206–15. New York: IEEE.
Ras, G.; van Gerven, M.; and Haselager, P. 2018. Explanation
Methods in Deep Learning: Users, Values, Concerns and
Challenges. arXiv preprint. arXiv:1803.07517v2 [cs.AI].
Ithaca, NY: Cornell University Library.
Ribeiro, M. T.; Singh, S.; and Guestrin, C. 2016. “Why Should
I Trust You?”: Explaining the Predictions of Any Classifier. In
Proceedings of the 22nd ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, 1135–44. New York:
Association for Computing Machinery. doi.org/10.1145/
She, L., and Chai, J. Y. 2017. Interactive Learning for Acquisition of Grounded Verb Semantics towards Human-Robot Communication. In Proceedings of the 55th Annual
Meeting of the Association for Computational Linguistics, vol. 1,
1634–44. Stroudsburg, PA: Association for Computational Linguistics.
Vicol, P.; Tapaswi, M.; Castrejon, L.; and Fidler, S.
2018. MovieGraphs: Towards Understanding Human-Centric Situations from Videos. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition. New York: IEEE. doi.org/10.
Vong, W.-K.; Sojitra, R.; Reyes, A.; Yang, S. C.-H.; and Shafto,
P. 2018. Bayesian Teaching of Image Categories. Paper presented at the 40th Annual Meeting of the Cognitive Science
Society (CogSci). Madison, WI, July 25–28.
Yang, S. C.-H., and Shafto, P. 2017. Explainable Artificial Intelligence via Bayesian Teaching. Paper presented at the 31st
Conference on Neural Information Processing Systems
Workshop on Teaching Machines, Robots and Humans. Long
Beach, CA, December 9.
Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; and Torralba, A.
2015. Object Detectors Emerge in Deep Scene CNNs. Paper
presented at the International Conference on Learning Representations. San Diego, CA, May 7–9.
David Gunning is a program manager in DARPA's Information Innovation Office, on assignment from the Pacific Northwest National Labs under the Intergovernmental Personnel Act.
Gunning has more than 30 years of experience in developing
AI technology. In prior DARPA tours he managed the PAL
project that produced Siri and the CPOF project that the US
Army adopted as its C2 system for use in Iraq and Afghanistan. Gunning was also a program director at PARC, a
senior research manager at Vulcan, senior vice president at
SET Corporation, vice president of Cycorp, and a senior
scientist at the Air Force Research Labs. Gunning holds an MS
in computer science from Stanford University and an MS in
cognitive psychology from the University of Dayton.
David W. Aha is acting director of NRL’s Navy Center for
Applied Research in AI in Washington, D.C. His interests
include goal reasoning, XAI, case-based reasoning, and
machine learning, among other topics. He has coorganized
many events on these topics (for example, the IJCAI-18 XAI
Workshop), launched the UCI Repository for ML Databases,
served as an AAAI Councilor, and leads DARPA XAI’s evaluation team.