confidence levels; (2) causal and counterfactual reasoning, realized by extracting causal diagrams from STC-PGs to predict what would have happened if alternative actions had been taken; and (3) utility explanations, which explain why the system made certain decisions.
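To make the counterfactual component concrete, the toy Python sketch below intervenes on a hand-built causal relationship, holding the observed circumstances fixed while substituting an alternative action. The variables and structural equation are invented for illustration and are not UCLA’s actual STC-PG machinery.

```python
# Toy counterfactual query over a hand-built causal relationship.
# Illustrative only; not UCLA's STC-PG representation.

def outcome(action, obstacle_present):
    """Structural equation: the result depends on the chosen action
    and on the (exogenous) state of the environment."""
    if obstacle_present and action == "go_straight":
        return "collision"
    return "goal_reached"

def counterfactual(observed_action, observed_obstacle, alternative_action):
    """What would have happened had the agent acted differently,
    with the observed circumstances held fixed?"""
    factual = outcome(observed_action, observed_obstacle)
    # do(action = alternative_action): replace the decision, keep the world.
    counterfact = outcome(alternative_action, observed_obstacle)
    return factual, counterfact

# "What would have happened if the robot had turned instead?"
print(counterfactual("go_straight", True, "turn_left"))
# -> ('collision', 'goal_reached')
```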
UCLA is addressing both XAI challenge problem areas using a common framework of representation and inference. For data analytics, UCLA demonstrated their system using a network of video cameras for scene understanding and event analysis. For autonomy, UCLA demonstrated it in scenarios using robots executing tasks on physics-realistic virtual reality platforms and in an autonomous-vehicle driving game.
Explanation-Informed Acceptance Testing of Deep Adaptive Programs
Oregon State University (OSU) is developing tools for explaining learned agents that perform sequential decision making and is identifying best principles for designing explanation user interfaces. OSU’s explainable agent model employs explainable deep adaptive programs (xDAPs), which combine adaptive programs, deep reinforcement learning (RL), and explainability. With xDAPs, programmers can create agents by writing programs that include choice points, which represent decisions that are automatically optimized via deep RL through simulator interaction. For each choice point, deep RL attaches a trained deep decision neural network (dNN), which can yield high performance but is inherently unexplainable. A sketch of this programming style appears below.
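The following Python sketch pairs each choice point with a pluggable decision policy. The ChoicePoint class and its choose method are hypothetical stand-ins, not OSU’s actual xDAP syntax or training interface.

```python
import random

class ChoicePoint:
    """A decision the programmer leaves open; deep RL later attaches
    a trained decision network (dNN) as the policy."""
    def __init__(self, name, options):
        self.name = name
        self.options = options
        self.policy = None  # set by the (not shown) deep RL training loop

    def choose(self, state):
        if self.policy is None:           # before training: explore randomly
            return random.choice(self.options)
        return self.policy(state)         # after training: query the dNN

# Ordinary program control flow surrounds the choice points.
attack_target = ChoicePoint("attack_target", ["nearest", "weakest"])
retreat_route = ChoicePoint("retreat_route", ["north", "south"])

def agent_step(state):
    if state["under_fire"]:
        return ("retreat", retreat_route.choose(state))
    return ("attack", attack_target.choose(state))

print(agent_step({"under_fire": False}))
```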
After initial xDAP training, OSU’s xACT framework trains an explanation neural network (Qi and Li 2017) for each dNN. These networks provide a sparse set of explanation features (x-features) that encode properties of a dNN’s decision logic. Such x-features, which are themselves neural networks, are not initially human interpretable. To address this, xACT enables domain experts to attach interpretable descriptions to x-features, and xDAP programmers to annotate environment reward types and other concepts, which are automatically embedded into the dNNs as “annotation concepts” during learning.
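One simplified way to picture sparse feature extraction is to fit an L1-regularized surrogate to a dNN’s decisions, so that only a few input features carry weight. The scikit-learn sketch below, with a fake stand-in for the dNN, illustrates only the sparsity idea; it is not the explanation-network method of Qi and Li (2017), in which the x-features are themselves learned networks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                      # inputs fed to the "dNN"
decisions = (X[:, 3] - 2 * X[:, 7] > 0).astype(int)  # stand-in dNN output

# The L1 penalty drives most coefficients to zero, leaving a sparse set
# of features that summarize the decision logic.
surrogate = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
surrogate.fit(X, decisions)

x_features = np.flatnonzero(surrogate.coef_[0])
print("sparse explanatory features (indices):", x_features)  # e.g. [3 7]
# Domain experts would then attach interpretable descriptions to each one.
```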
The dNN decisions can then be explained via the descriptions of the relevant x-features and annotation concepts, and understood further through neural network saliency visualization tools. OSU is investigating the utility of saliency computations for explaining sequential decision making.
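For intuition about the saliency computations mentioned above, the PyTorch sketch below computes generic input-gradient saliency for a single decision. The untrained stand-in network is an assumption for illustration; OSU’s dNNs and visualization tools are not shown here.

```python
import torch
import torch.nn as nn

# Stand-in for a trained decision network (dNN).
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))

state = torch.randn(1, 8, requires_grad=True)   # the agent's input state
scores = net(state)                             # one score per action
chosen = scores.argmax(dim=1).item()

# Gradient of the chosen action's score with respect to the input:
# large-magnitude entries mark the state features most salient
# to this particular decision.
scores[0, chosen].backward()
saliency = state.grad.abs().squeeze()
print(saliency)
```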
OSU’s explanation user interface allows users to navigate thousands of learned agent decisions and obtain visual and natural language (NL) explanations. Its design is based on information foraging theory (IFT), enabling a user to efficiently drill down to the most useful explanatory information at any moment. Assessing the rationales for learned decisions in this way may help users identify flaws in the agent’s decision making more efficiently and may improve user trust.
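One illustrative reading of the IFT-based design, not OSU’s implementation, is to rank candidate explanation views by expected information “scent” per unit interaction cost, so the interface surfaces the most promising drill-down first. The view names and numbers in this Python sketch are invented.

```python
def rank_views(views):
    """views: (name, scent, cost) triples, where scent estimates how
    relevant a view's cue looks and cost estimates the effort to
    inspect it. Higher scent per unit cost is foraged first."""
    return sorted(views, key=lambda v: v[1] / v[2], reverse=True)

candidates = [
    ("decision timeline", 0.4, 2.0),
    ("saliency map for one decision", 0.9, 1.0),
    ("full game replay", 0.8, 8.0),
]
for name, scent, cost in rank_views(candidates):
    print(f"{name}: scent/cost = {scent / cost:.2f}")
```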
OSU is addressing the autonomy challenge problem area and has demonstrated xACT in scenarios using a custom-built real-time strategy game engine. Pilot studies have informed the explanation user interface design by characterizing how users navigate AI-agent game play and how they tend to explain game decisions (Dodge et al. 2018).
Common Ground Learning and Explanation
The Palo Alto Research Center (PARC) team (including
researchers from Carnegie Mellon University, the Army
Cyber Institute, the University of Edinburgh, and the
University of Michigan) is developing an interactive
sensemaking system that can explain the learned capabilities of an XAI system that controls a simulated
unmanned aerial system.
An XAI system’s explanations should communicate
what information it uses to make decisions, whether it
Figure 6. XAI Program Schedule. The figure charts an FY2017–FY2020 timeline of the program kickoff, progress reports, technology demonstrations, and three evaluations, with parallel tracks for the 11 developer teams; the 1 team summarizing, developing, refining, and testing psychological theories and models of explanation to support system development and evaluation; and the evaluator (NRL), which defines the evaluation framework, prepares for and analyzes each evaluation, and finally accepts the software libraries and toolkits.