Table 2. Sample Open Source AI Libraries and Toolkits.

TensorFlow (Google): Machine learning toolkit
OpenCV (Itseez): Computer vision library
Sphinx (CMU): Speech recognition toolkit
Drools (Red Hat): Rule-driven expert system shell, planning engine
GATE (University of Sheffield): Natural language processing toolkit
Robot Operating System (Open Source Robotics Foundation): Platform for integrating various algorithms and libraries related to robotics
Figure 2. Google Trends Rankings for Various Search Terms.
The y-axis represents smoothed relative interest; the x-axis spans 2004 to 2016.
numeric computation, early AI systems relied on
heuristics encoded symbolically, even in data-heavy
tasks such as computer vision. Though significant
progress continues to be made in symbolic reasoning, it is clear that the power available to process vast
amounts of data — even at high data rates — has
enabled practical deployment of machine-learning
techniques and resulted in a wide diversity of successful applications from speech recognition and face
recognition to self-driving cars. It is also worth noting that machine learning
dominates the popular perspective of AI today, just as
early AI researchers considered it an
essential component of intelligence (Minsky 1961).
We find evidence of this in Google Trends data,
shown in figure 2. Interest in AI appeared to wane
after the dot-com crash and hit a low around 2009.
The search term computer science follows a similar
trend. Recently, interest has renewed and appears to
be supported by machine learning and recent work
in deep learning.
Reduced Business Risk
These changes compound into another one. Because of
the greater computing power and the ready availability of data sets and software, today there is far less need to
build massive technology platforms from scratch, so it is
cheaper to build AI systems. More effort can be spent
on solving specific business problems, thereby reducing the risk associated with artificial intelligence.
Compared to 1989, today it is orders of magnitude
easier to integrate AI systems into a company’s overall IT portfolio. The reasons include: modern AI systems utilize standard hardware and software (in
many cases); they integrate more easily into existing
architectures; the iterative development process pioneered in AI projects has become common across IT;
and, the success of high-profile AI systems such as
Watson and Siri means that most people know that
AI can work in the real world. (The authors thank
Neil Jacobstein for this insight.)
Distributions and Trends
from the IAAI Conferences
At the outset, IAAI included only applications that
had been deployed; that is, for which there was experience based on actual use, and for which payoff
could be estimated. In 1997, an emerging applications track was added to bridge the gap between AI
research and AI application development. The goal
was to support information sharing among
researchers and system builders: researchers could see
which techniques proved fruitful in deployed applications, and builders could learn of emerging techniques that had yet to be proven in the field but showed promise.
An analysis of the topics covered by IAAI articles is
shown in figures 3 and 4. This analysis is provided by
i2k Connect (i2kconnect.com), whose goal is to help
organizations find, filter, and analyze unstructured