In January 2011, the museum opened its permanent
exhibition, called Revolution. Here is a quote from
the museum’s press release:
Ten years in the making, Revolution is the product of
the Museum’s professional staff collaborating with
designers, content producers and more than 200
experts, pioneers and historians around the world.
Revolution showcases 20 different areas of computers, computer science, semiconductors, and communications, from early history to futuristic visions.
Among those 20 areas is one called AI and Robotics.
For each area, the museum staff has chosen one
historical artifact to be the icon exhibit for the area.
For AI and Robotics, the icon is Shakey the Robot,
beautifully exhibited.
The museum could have chosen any one of a
dozen or more landmark AI artifacts. It could have
chosen AI’s first heuristic problem-solving program
(the Logic Theorist of Newell, Shaw, and Simon); or a
speech-understanding program from Reddy; or one
of the early expert systems from our Stanford group;
or Deep Blue, the AI system that beat the world’s
chess champion. It could have … but in the end the
museum chose Shakey.
So let’s put the first story and the second story
together to make a:
Third Story
SRI’s Shakey work was a decade or two ahead of its
time in demonstrating the power of integrating AI
with robotics. Remarkably, even today, when robotics is being taught to high school students, and computing and sensors cost almost nothing, most robots
in labs and companies do not have the AI capabilities
that Shakey had in the 1970s.
Historians of the field have given Shakey deserved
recognition, but the field of AI had not. It took a
while for an AAAI national program committee to
recognize this and make room for this celebration. I
want to thank the AAAI-15 program committee, and
hope that this will be a model for bringing forth other important parts of AI’s history.
Fourth Story
The Shakey project ran from 1966 to 1972.
What was AI and computer technology like before
and during that period?
There is a generation of younger researchers who have no idea how few powerful ideas the first decade of AI (1956 to 1966) provided to build new AI systems upon. Nor can that younger generation envision how underpowered the computers were on which we had to build these systems.
But there was no lack of enthusiasm and excitement, and no lack of interaction, because almost everyone in the field knew almost everyone else; and we all read each other’s papers, tech reports, and books. That’s what it’s like when a field is small and emerging.
The science of AI had a workable set of ideas about how to use heuristic search to solve problems, but proving things about heuristic search had to wait until later (the Shakey group’s A*). Some powerful, successful experiments had been done: the Logic Theorist, Gelernter’s Geometry Theorem-Proving program, and Slagle’s calculus problem-solving programs are examples. These were all on the “cognitive” side of AI work. On this side, much discussion and energy were focused on generality in problem solving: Newell and Simon with means-ends analysis; McCarthy and other “logicists” with theorem proving.
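For readers who have never seen it written down, here is a minimal sketch of the kind of best-first heuristic search that the Shakey group later formalized and proved properties about as A*. The grid world, unit step costs, and Manhattan-distance heuristic below are illustrative assumptions of mine, not details of the Shakey system.

```python
import heapq

def a_star(start, goal, neighbors, cost, heuristic):
    """Minimal A*: returns a list of states from start to goal, or None.

    neighbors(s) -> iterable of successor states of s
    cost(a, b)   -> nonnegative step cost of moving from a to b
    heuristic(s) -> estimate of remaining cost from s to goal
    """
    frontier = [(heuristic(start), 0, start)]  # priority queue ordered by f = g + h
    came_from = {start: None}                  # parent pointers for path reconstruction
    g_score = {start: 0}                       # best known cost-so-far for each state

    while frontier:
        _, g, current = heapq.heappop(frontier)
        if current == goal:
            path = []                          # walk parent pointers back to start
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for nxt in neighbors(current):
            new_g = g + cost(current, nxt)
            if new_g < g_score.get(nxt, float("inf")):
                g_score[nxt] = new_g
                came_from[nxt] = current
                heapq.heappush(frontier, (new_g + heuristic(nxt), new_g, nxt))
    return None

# Illustrative use: shortest path on a 5-by-5 grid with unit step costs and a
# Manhattan-distance heuristic (admissible and consistent for this cost model).
goal = (4, 4)

def grid_neighbors(p):
    x, y = p
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    return [(x + dx, y + dy) for dx, dy in steps
            if 0 <= x + dx <= 4 and 0 <= y + dy <= 4]

path = a_star((0, 0), goal, grid_neighbors,
              cost=lambda a, b: 1,
              heuristic=lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
print(path)  # e.g. [(0, 0), (1, 0), ..., (4, 4)]
```

With an admissible, consistent heuristic such as this one, the first path returned has lowest cost, which is the kind of guarantee the A* analysis established.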
On the “perceptual” side of AI work, a similar story can be told about research on vision. There were
several basic workable techniques involving line finding, curve finding, and putting elements together
into logical descriptions of objects. Generality of the
techniques was also an issue, as it still is today.
What did we have with which to do this work? Our
programming languages were great! List processing
was invented at CMU and then made more powerful
and beautiful in LISP at the Massachusetts Institute of
Technology (MIT). But there was almost no interaction between people and computers. Time-shared
interaction did not become available to most
researchers in this first decade.
Try to imagine the computer processing power and memory of that era. I did my thesis work on an IBM 650 computer in the late 1950s: a maximum of 2,500 operations per second, and a memory of 20,000 digits (what we would now call bytes). Not only your program but also your language interpreter had to fit into this memory. There was no virtual memory.
In 1959, IBM’s large multimillion-dollar transistorized computer was introduced. It ran at 100K FLOPS and had about 150K bytes of main memory.
The largest DEC computer that would have been available in 1966 for the Shakey group to buy was the PDP-6, which operated at 250,000 additions per second with a memory of about 150K bytes.
Compare these numbers with, say, today’s Apple MacPro at four gigaops/sec with a memory of 16 gigabytes; or even today’s smartphones at about 1 gigaop/sec but with memories going up to 128 gigabytes.
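To make the gap concrete, here is a back-of-the-envelope comparison of the PDP-6 figures with the MacPro figures above; it simply takes the quoted numbers at face value, which is of course a very rough way to compare machines separated by five decades.

```python
# Rough, order-of-magnitude arithmetic on the figures quoted above
# (treating "operations" loosely across very different machines).
pdp6_ops_per_sec    = 250_000          # PDP-6: additions per second
pdp6_memory_bytes   = 150 * 1024       # about 150K bytes

macpro_ops_per_sec  = 4e9              # roughly 4 gigaops/sec
macpro_memory_bytes = 16 * 1024**3     # 16 gigabytes

print(f"speed:  ~{macpro_ops_per_sec / pdp6_ops_per_sec:,.0f}x")    # ~16,000x
print(f"memory: ~{macpro_memory_bytes / pdp6_memory_bytes:,.0f}x")  # ~112,000x
```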
Fifth Story
All projects end, even the great ones. The DARPA
funding pendulum for support of AI swung away
from robotics and toward both knowledge-based systems and the national speech understanding project.
As funding shifted, SRI continued to do world-class
work in both of these other themes of the 1970s.
Final Story
The Shakey project, as cutting-edge work in computer science, inspired young people to do great things.
In an email to Eric Horvitz, former president of AAAI,