from a starting point characterized by labels like fast,
recognition, look-up, bottom-up, data-driven.
For example, fast versus slow. The focus of Daniel
Kahneman’s 2011 book, Thinking, Fast and Slow, is a
dichotomy between these two modes of thought:
“fast, instinctive, and emotional” and “slow, more
deliberative, and more logical” (Kahneman 2011).
Alternatively, Herb Simon put it this way: “The situation has provided a cue; this cue has given the
expert access to information stored in memory, and
the information provides the answer. Intuition is
nothing more and nothing less than recognition”
(Simon 1992). Fast corresponds to recognition. Slow
corresponds to cognition or search. In this regard, compare the recognition approach of the human chess
master to the search approach of Deep Blue (
Campbell, Hoane, and Hsu 1999). (Indeed, a typical
grandmaster does roughly six orders of magnitude less
search per move than Deep Blue did.)
Another example, well known to American football fans, is that of the Manning brothers, Peyton and
Eli. It has been widely reported that their father
Archie started the boys learning football and quarterbacking at the earliest possible age. This maximized the time they had to store millions of the small
chunks of recognition knowledge, later buttressed by
countless hours spent studying game film.
Rod Brooks championed what he called a new
approach to artificial intelligence and robot design —
which can be called “bottom-up” — as an alternative
to the “top-down” model-driven approach of the pioneers (Brooks 1991).
Today, some authors seem to see a conflict between
“data-driven” (new think) systems and “
model-driven” (old think) systems as if the “good” applications
today are all data driven and work well, in contrast
with the “bad” model-driven applications of the old
days that didn’t work well.
Many AI apps have combined reasoning from
opposite starting points, going way back to the early
days. The Hearsay II speech-understanding system
combined top-down and bottom-up processing
(Erman et al. 1980). Mycin used backward and forward rule chaining (Buchanan and Shortliffe 1984).
And the Dipmeter Advisor was both data driven, converting raw signals to patterns, and model driven,
using rules to classify stratigraphic and tectonic features from the patterns (Smith and Baker 1983).
Overall accuracy depended on the contributions of
all the components — data driven and model driven.
We also don’t accept the criticism that the early AI
community was too focused on model-driven
approaches when it should have been focused on
data-driven approaches. We believe the pioneers were
doing the best they could with the machines and
data available to them. They were forced into cognitive
approaches in some cases (for example, vision)
because they had to do something to finesse the need
for orders of magnitude more processing power, storage,
and sensors than were available to them at the time.
The good news these days is that all the components are substantially more powerful, thanks to the
computing and data revolutions. We are not restricted to either a “fast” or a “slow” starting point. We
can have both.
That said, it is important for developers to give due
consideration to the new possibilities offered by the
substantial increases in processor speed and memory
available today — and not to remain implicitly stuck in
the “slow” thinking mindset of the early days.
Going forward, there is the possibility of storing massively larger knowledge bases that are composed of
small chunks of very specific domain and task
knowledge, retrieved by fast recognition processes
(more of what Simon was referring to).
Thus, a knowledge base for a domain would have
powerful rules (as in the past, thousands of them)
plus these small chunks of very specific experiential
knowledge (millions of them). With modern sensors,
the small chunks may be very easy to capture. Certainly, there will be things missing that might have
been implied by rules (that is, not everything possible is actually observed and remembered as a chunk).
But overall, knowledge acquisition will have become
far easier and cheaper. These “hybrid” knowledge base
architectures will come to dominate applications. This
also seems like a fruitful avenue for reconsidering
older models of human cognition. (The
authors thank Ed Feigenbaum for this observation.)
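To make the hybrid architecture concrete, here is a minimal, purely illustrative sketch (all names are hypothetical, not from any deployed system): a store of small, very specific chunks retrieved by fast cue-based recognition, backed by a smaller set of general rules that are searched only when recognition fails.

```python
# Hypothetical sketch of a hybrid knowledge base: millions of small,
# very specific chunks retrieved by fast recognition (the "fast" path),
# plus general rules applied by slower search (the "slow" path).
# Class and method names are illustrative assumptions only.

class HybridKB:
    def __init__(self):
        self.chunks = {}   # cue -> answer: Simon-style recognition
        self.rules = []    # (condition, conclusion): rule-based fallback

    def add_chunk(self, cue, answer):
        self.chunks[cue] = answer

    def add_rule(self, condition, conclusion):
        self.rules.append((condition, conclusion))

    def query(self, situation):
        # Fast path: the situation provides a cue, and recognition
        # retrieves the stored answer directly.
        if situation in self.chunks:
            return self.chunks[situation]
        # Slow path: fall back to rules for situations never actually
        # observed and remembered as a chunk.
        for condition, conclusion in self.rules:
            if condition(situation):
                return conclusion
        return None

kb = HybridKB()
kb.add_chunk("overdrawn-checking", "flag-for-review")
kb.add_rule(lambda s: s.startswith("overdrawn"), "flag-for-review")

print(kb.query("overdrawn-checking"))  # answered by a recognized chunk
print(kb.query("overdrawn-savings"))   # no chunk; covered by a rule
```

The division of labor mirrors the fast/slow dichotomy above: recognition handles the cases that have been seen and stored, while rules cover what recognition misses.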
Checklist for Tomorrow’s AI Applications
Our examination of nearly 30 IAAI conferences, our
personal experiences, and stories related to us by
colleagues and friends lead to the checklist in table 3.
We briefly explain each entry in the following.
As will be apparent to experienced application
developers, much of this advice mirrors general software engineering best practice. But some of the
points are even more important for AI systems. We
invite your feedback and your own lessons learned.
Select Problems with a Solid Business Case
Successful IT applications in general start with a focus
on the business case and the customer — not the technology. This is particularly true for AI applications. In
the early days of AI applications, the mind share of
the developers tended more heavily to the technology (the knowledge-representation methods and the
reasoning machinery) than it did to the customer
need. In retrospect, this was to be expected. The early
implementers were almost always AI researchers,
encroaching on an IT community that was by and large
skeptical of the hype and the baggage that came along
with the technology — nonstandard hardware and
software, methods that were not understood by the