The idea of an artificial superintelligence (SI) goes back at least to I. J. Good’s “ultraintelligent machine” (1965) and Turing’s (1951) warning: “If a machine can think,
it might think more intelligently than we do, and then where
should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at
strategic moments, we should, as a species, feel greatly humbled. This new danger is certainly something which can give
us anxiety.” It has also been a staple of science fiction, most
famously HAL 9000, the malign spaceship computer in
Kubrick’s film of Arthur C. Clarke’s story 2001. Science fiction
is relevant here because I will argue that it has been a
motivation in the revival of the notion in Nick
Bostrom’s (2016) best-selling book Superintelligence, which
has been applauded by Elon Musk, Bill Gates, and Stephen
Hawking among others, all of whom share something of
Bostrom’s pessimistic view about AI’s futures.
Will There Be Superintelligence and Would It Hate Us?

This article examines Nick Bostrom’s notion of superintelligence and argues that, although we should not be sanguine about the future of AI or its potential for harm, superintelligent AI is highly unlikely to come about in the way Bostrom imagines.