Eric Horvitz, a key figure in the statistical and probabilistic turn in artificial intelligence research, shared this same vision of the importance of explanation, especially in contemporary work:
Working to provide people with insights or explanations about the rationale behind the inferences made by reasoning systems is a really fabulous area for research. I expect to see ongoing discussions and a stream of innovations in this realm. As an example, one approach being explored for making machine-learned models and their inferences more inspectable is a representation developed years ago in the statistics community named generalized additive models. With this approach, models used for inferences are restricted to a sum of terms, where each term is a simple function of one or a few observables. The representation allows people in some ways to “see” and better understand how different observations contribute to a final inference. These models are more scrutable than trying to understand the contributions of thousands of distributed weights and links in top-performing multilayered neural networks or forests of decision trees.

There’s been a sense that the most accurate models must be less understandable than the simpler models. Recent work with inferences in healthcare shows that it’s possible to squeeze out most of the accuracy shown by the more complex models with use of the more understandable generalized additive models. But even so, we are far from the types of rich explanations provided by chains of logic developed during the expert systems era. Working with statistical classifiers is quite different than production systems, but I think we can still make progress.
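For readers unfamiliar with the representation Horvitz describes, the sketch below shows a generalized additive model in miniature: the prediction is an intercept plus a sum of one-dimensional shape functions, one per feature. The specifics here (piecewise-constant shape functions fit by backfitting, and all function and variable names such as fit_gam) are illustrative assumptions, not drawn from the interview; production GAM tools typically fit splines or boosted trees per feature.

import numpy as np

# Minimal GAM sketch: prediction = intercept + sum of per-feature
# "shape functions," so each feature's contribution to a score can
# be read off (and plotted) directly. Shape functions here are
# piecewise-constant bin averages, fit by backfitting.

def fit_gam(X, y, n_bins=8, n_passes=5):
    n, d = X.shape
    intercept = y.mean()
    # Quantile bin edges and per-bin contributions for each feature.
    edges = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)) for j in range(d)]
    contrib = [np.zeros(n_bins) for _ in range(d)]

    def bin_index(j, xj):
        return np.clip(np.searchsorted(edges[j], xj, side="right") - 1, 0, n_bins - 1)

    def term(j, xj):
        return contrib[j][bin_index(j, xj)]

    for _ in range(n_passes):  # backfitting: refit each feature against the residual
        for j in range(d):
            others = intercept + sum(term(k, X[:, k]) for k in range(d) if k != j)
            resid = y - others
            idx = bin_index(j, X[:, j])
            for b in range(n_bins):
                mask = idx == b
                if mask.any():
                    contrib[j][b] = resid[mask].mean()

    def predict(Xnew):
        return intercept + sum(term(j, Xnew[:, j]) for j in range(d))

    return predict, contrib

Plotting each contrib[j] against its feature’s bin edges recovers the per-feature curves that make such models inspectable in the way Horvitz describes, in contrast to the distributed weights of a deep network.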
Feigenbaum too stressed the importance of explanation — intelligibility — not just in the motivations behind artificial intelligence systems, but also, with Davis and Horvitz, as part of their instrumentality, their value in use:
I’ve been engaged in giving extended tutorials to a group of lawyers at the very, very top of the food chain in law. And the message is: we (lawyers) need a story. That’s how we decide things. And we (lawyers) understand about those networks and — we understand about, at the bottom, you pass up .825 and then it changes into .634 and then it changes into .345. That’s not a story. We (lawyers) need a story or we can’t assess liability, we can’t make judgments. We need that explanation in human terms.
While Horvitz is most associated with the statistical turn in artificial intelligence that is seen as adding profound new challenges to explanation and transparency, his route to this stance was through his engagement with and deep interest in expert systems. Horvitz explained:
I came to Stanford University very excited about the principles and architectures of cognition, and I was excited about work being done on expert systems of the day. Folks were applying theorem-proving technologies to real-world tasks, helping people in areas like medicine. I was curious about deeper reasoning systems. I remember talking to John McCarthy early on. I was curious about his efforts in commonsense reasoning. In my first meeting with him, I happened to mention inferences in medicine and John very quietly raised his hand and pointed to the left and said, “I think you should go see Bruce Buchanan.”
And so [I] went to see Bruce and then met Ed [Feigenbaum], Ted Shortliffe, and others. I shared their sense of excitement about moving beyond toy illustrations to build real systems that could augment people’s abilities. Ted and team had wrestled with the complexity of the real world, working to deliver healthcare decision support with the primordial, inspiring MYCIN system. Ted had introduced a numerical representation of uncertainty, called “certainty factors,” on top of a logic-based production system used in MYCIN.

I was collaborating with David Heckerman, a fellow student who had become a close friend around our shared pursuit of principles of intelligence. David and I were big fans of the possibilities of employing probabilities in reasoning systems. We started wondering how certainty factors related to probabilities … David showed how certainty factors could be mapped into a probabilistic representation … We found that certainty factors and their use in chains of reasoning were actually similar to ideas about belief updating in a theory of scientific confirmation described by philosopher Rudolf Carnap in the early 20th century.
Relaxing the independence assumptions in proba-
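The mapping between certainty factors and probability that Horvitz mentions was later published by Heckerman. A minimal sketch follows, assuming the form given in that later work — a certainty factor read as a monotone transform of a likelihood ratio — which is not spelled out in the interview itself.

# Hedged sketch of Heckerman's published reading of MYCIN-style
# certainty factors: a CF is a monotone transform of the likelihood
# ratio lam = P(E|H) / P(E|not-H) that evidence E lends hypothesis H.
# (The exact form is an assumption drawn from Heckerman's later
# paper, not from the interview quoted above.)

def cf_from_likelihood_ratio(lam: float) -> float:
    """Certainty factor in (-1, 1) from a likelihood ratio lam > 0."""
    return (lam - 1.0) / lam if lam >= 1.0 else lam - 1.0

def likelihood_ratio_from_cf(cf: float) -> float:
    """Inverse mapping: recover the likelihood ratio from a CF."""
    return 1.0 / (1.0 - cf) if cf >= 0.0 else cf + 1.0

# Under this reading, MYCIN's rule for combining two positive CFs,
# cf1 + cf2 - cf1 * cf2, corresponds exactly to multiplying the two
# likelihood ratios, i.e., adding log-likelihood "weights of
# evidence" -- the connection to Bayesian belief updating (and to
# Carnap's confirmation theory) that Horvitz describes.
cf1, cf2 = 0.6, 0.5
lam = likelihood_ratio_from_cf(cf1) * likelihood_ratio_from_cf(cf2)
assert abs(cf_from_likelihood_ratio(lam) - (cf1 + cf2 - cf1 * cf2)) < 1e-12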