application that is required to integrate multiple
sources and types of information. The architecture of
such an AI application should make such integration
feasible by, for example, separating different processing tasks into distinct modules and supporting a common interface for communication among the components. The Robot Operating System (ROS) is a paradigmatic
example of such an architecture. Different robots may
have vastly different components and purposes, yet
ROS offers high-level abstractions that enable various
sensors, actuators, and algorithms to communicate
using a common language.
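The common-interface idea can be sketched in miniature. The following is a hypothetical stand-in for ROS topics, not the actual ROS API: a minimal publish/subscribe bus through which independently written sensor and algorithm modules exchange messages without knowing about one another.

```python
# Minimal illustration of a common message interface between modules.
# Hypothetical sketch only; real ROS uses rospy/rclpy nodes and topics.

from collections import defaultdict
from typing import Callable

class MessageBus:
    """A tiny publish/subscribe hub: modules share only topic names
    and message payloads, never direct references to one another."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message) -> None:
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus()
readings = []

# A "planner" module subscribes to sensor data without importing the sensor.
bus.subscribe("lidar/scan", lambda msg: readings.append(msg))

# A "sensor" module publishes on the shared topic.
bus.publish("lidar/scan", {"range_m": 4.2})

print(readings)  # [{'range_m': 4.2}]
```

Because the sensor and planner share only the topic name and message format, either can be swapped out without touching the other, which is the decoupling property the architecture above calls for.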
AI Applications Must Integrate
into Existing Workflows
Perhaps the most important lesson learned by AI system builders is that success depends on integrating
into existing workflows — the human context of
actual use. It is rare to replace an existing work flow
completely. Thus, the application must play nicely
with the other tools that people use. Put another way,
ease of use delivered by the human interface is the
“license to operate.” Unless designers get that part
right, people may not ever see the AI power under the
hood; they will have already walked away.
As AI systems began to function well enough that
they were able to play in the center ring, so to speak,
risk mitigation, project management, and budgetary
control became more important. The systems were
no longer in a “research” or “proof of concept”
phase. In other words, standard IT rules — and consumer mobile app acceptance rules — apply. Many AI
practitioners have made these points in the context
of AI applications in particular. But the rules are valid
for all applications of information technology.
In the early days, we talked as if AI systems had a
big box of AI — the important stuff — and a small
box of all that other messy IT stuff. We quickly
learned that in real-world systems, it was mostly the
other way around. The AI was a piece of the puzzle,
and sometimes not a very big piece.
Consider the Dipmeter Advisor (Smith and Baker
1983), started at Schlumberger in the early 1980s and
based on the knowledge of the legendary oil finder,
Al Gilreath, shown in figure 7. The Dipmeter Advisor
demonstrated the challenges of infrastructure: getting
the data from the field systems was a bigger problem
than originally anticipated. It also demonstrated the
challenges of technology transfer: nontraditional hardware
(D-Machines) and software (Interlisp-D) became major
stumbling blocks, though without these technologies
Schlumberger would have had no system at all.
The amount of effort that had to be devoted to the
non-AI components was dominant. The user interface accounted for almost half the code. The rule
engine and knowledge base accounted for 30 percent.
Of course, lines of code do not necessarily tell the
whole story, but the numbers are consistent with the
development effort expended. Much of the coding
effort went into the interactive graphics system, not
the AI. For some clients, interactive graphics was the
most important element.
Security and privacy have become increasingly
crucial over time, and the application’s performance
characteristics in the deployed setting must meet
industry or consumer expectations.
Additionally, change management is unavoidable
(Hiatt 2006). But the amount of change management
required is inversely proportional to the power of the
new technology. It is also directly proportional to the
amount of change in existing workflows required to
take advantage of it. Convincing people to make
substantial changes to their existing workflows to
adopt a new technology that is only marginally better
than the old one requires a great deal of change
management effort. On the other hand, convincing
people to make small changes to their existing
workflows to adopt a new technology that is an order
of magnitude better than the old one requires only
modest change management effort.
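This proportionality argument can be made concrete with a toy formula. The function, its scale factor, and the sample inputs below are illustrative assumptions, not measurements: effort grows with the workflow disruption imposed and shrinks with the technology's advantage over the status quo.

```python
# Illustrative heuristic only: effort is proportional to
# workflow_change / technology_gain. The constant k and all
# inputs are hypothetical, not empirical values.

def change_management_effort(workflow_change: float,
                             technology_gain: float,
                             k: float = 100.0) -> float:
    """workflow_change: fraction of the existing workflow that
    must change (0 to 1).
    technology_gain: how many times better the new technology
    is than the old one (> 1)."""
    return k * workflow_change / technology_gain

# Big disruption, marginal improvement: a hard sell.
hard = change_management_effort(workflow_change=0.8, technology_gain=1.2)

# Small tweak, order-of-magnitude improvement: an easy sell.
easy = change_management_effort(workflow_change=0.1, technology_gain=10.0)

print(hard > easy)  # True
```

The two sample calls mirror the two cases in the text: large workflow change with a small gain demands far more change management effort than a small change paired with a tenfold gain.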
As Mehmet Goker put it in a private communication
to the authors: “Applications with a small and
flexible core that solve a real-world problem have the
biggest impact and are the easiest to put into the workflow.”
To summarize, in any large organization, standard
IT rules apply and the AI application should fit into the
broader IT infrastructure to ensure successful adoption.
Management, end-user, and IT support and participation
are essential. Budget approval will be challenging
without business unit management support, deployment
into a company’s existing infrastructure is not
possible without support from the IT organization, and
adoption is unlikely without continuous end-user
participation in system development.
In the real world of applications, our experience
also suggests that the dichotomy suggested by
Markoff (2015) between artificial intelligence and
intelligence augmentation or amplification does not
exist. They are two ends of a spectrum that meet in
most applications. The successful systems enable
people to do what people do best and use computers
to do what computers do best.
A Way Around the Knowledge
Acquisition Bottleneck
Machine learning offers a way around the knowledge
acquisition bottleneck ... but success depends
on human insight folded into the methods, like the
choice of features.
One thing has not changed over the history of
IAAI. It is still very hard to build, curate, and maintain large knowledge bases by hand. The manual
knowledge-acquisition bottleneck is still firmly in place.
Aside: This is a special case of a larger point. Manual
information governance is not sustainable. Very few