Author | Ray Kurzweil |
---|---|
Language | English |
Publisher | Viking |
Publication date | September 2005 |
Publication place | United States |
Pages | 652 |
ISBN | 978-0-670-03384-3 |
OCLC | 57201348 |
Dewey Decimal | 153.9 |
LC Class | QP376 .K85 |
Preceded by | The Age of Spiritual Machines |
Followed by | How to Create a Mind (next book); The Singularity Is Nearer (sequel) |
The Singularity Is Near: When Humans Transcend Biology is a 2005 non-fiction book about artificial intelligence and the future of humanity by inventor and futurist Ray Kurzweil. A sequel, The Singularity Is Nearer, was released on June 25, 2024. [1]
The book builds on the ideas introduced in Kurzweil's previous books, The Age of Intelligent Machines (1990) and The Age of Spiritual Machines (1999). In the book, Kurzweil embraces the term "the singularity", which was popularized by Vernor Vinge in his 1993 essay "The Coming Technological Singularity." [2]
Kurzweil describes his Law of Accelerating Returns, which predicts an exponential increase in technologies like computers, genetics, nanotechnology, robotics and artificial intelligence. Once the singularity has been reached, Kurzweil says that machine intelligence will be infinitely more powerful than all human intelligence combined. The singularity is also the point at which machine intelligence and humans would merge; Kurzweil predicts the date: "I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045". [3]
Kurzweil characterizes evolution throughout all time as progressing through six epochs, each one building on the one before. He says the four epochs which have occurred so far are Physics and Chemistry, Biology and DNA, Brains, and Technology. Kurzweil predicts the singularity will coincide with the next epoch, The Merger of Human Technology with Human Intelligence. After the singularity he says the final epoch will occur, The Universe Wakes Up. [4]
Kurzweil explains that evolutionary progress is exponential because of positive feedback: the results of one stage are used to create the next stage. Exponential growth is deceptive, nearly flat at first until it hits what Kurzweil calls "the knee in the curve," after which it rises almost vertically. [5] In fact, Kurzweil believes evolutionary progress is super-exponential because more resources are deployed to the winning process. As an example of super-exponential growth Kurzweil cites the computer chip business. The overall budget for the whole industry increases over time, since the fruits of exponential growth make it an attractive investment; meanwhile the additional budget fuels more innovation, which makes the industry grow even faster, effectively an example of "double" exponential growth. [6]
Kurzweil writes that evolutionary progress looks smooth, but that it is really divided into paradigms, specific methods of solving problems. Each paradigm starts with slow growth, builds to rapid growth, and then levels off. As one paradigm levels off, pressure builds to find or develop a new paradigm. So what looks like a single smooth curve is really a series of smaller S-curves. [7] For example, Kurzweil notes that when vacuum tubes stopped getting faster and cheaper, transistors became popular and continued the overall exponential growth. [8]
Kurzweil calls this exponential growth the law of accelerating returns, and he believes it applies to many human-created technologies such as computer memory, transistors, microprocessors, DNA sequencing, magnetic storage, the number of Internet hosts, Internet traffic, decrease in device size, and nanotech citations and patents. [9] Kurzweil cites two historical examples of exponential growth: the Human Genome Project and the growth of the Internet. [10] Kurzweil claims the whole world economy is in fact growing exponentially, although short term booms and busts tend to hide this trend. [11]
A fundamental pillar of Kurzweil's argument is that, to get to the singularity, computational capacity is as much of a bottleneck as other things like the quality of algorithms and understanding of the human brain. Moore's Law predicts that the capacity of integrated circuits grows exponentially, but not indefinitely. Kurzweil feels the increase in the capacity of integrated circuits will probably slow by the year 2020. [8] He feels confident that a new paradigm will debut at that point to carry on the exponential growth predicted by his law of accelerating returns. Kurzweil describes four paradigms of computing that came before integrated circuits: electromechanical, relay, vacuum tube, and transistor. [8] What technology will follow integrated circuits, to serve as the sixth paradigm, is unknown, but Kurzweil believes nanotubes are the most likely alternative among a number of possibilities:
nanotubes and nanotube circuitry, molecular computing, self-assembly in nanotube circuits, biological systems emulating circuit assembly, computing with DNA, spintronics (computing with the spin of electrons), computing with light, and quantum computing. [12]
Since Kurzweil believes computational capacity will continue to grow exponentially long after Moore's Law ends, it will eventually rival the raw computing power of the human brain. Kurzweil looks at several different estimates of how much computational capacity is in the brain and settles on 10^16 calculations per second and 10^13 bits of memory. He writes that $1,000 will buy computer power equal to a single brain "by around 2020" [13] while by 2045, the onset of the singularity, he says the same amount of money will buy one billion times more power than all human brains combined today. [3] Kurzweil admits the exponential trend in increased computing power will hit a limit eventually, but he calculates that limit to be trillions of times beyond what is necessary for the singularity. [14]
Kurzweil notes that computational capacity alone will not create artificial intelligence. He asserts that the best way to build machine intelligence is to first understand human intelligence. The first step is to image the brain, to peer inside it. Kurzweil claims imaging technologies such as PET and fMRI are increasing exponentially in resolution, [15] while he predicts even greater detail will be obtained during the 2020s, when it becomes possible to scan the brain from the inside using nanobots. [16] Once the physical structure and connectivity information are known, Kurzweil says researchers will have to produce functional models of sub-cellular components and synapses all the way up to whole brain regions. [17] He maintains that the human brain is "a complex hierarchy of complex systems, but it does not represent a level of complexity beyond what we are already capable of handling". [18]
Beyond reverse engineering the brain in order to understand and emulate it, Kurzweil introduces the idea of "uploading" a specific brain with every mental process intact, to be instantiated on a "suitably powerful computational substrate". He writes that general modeling requires 10^16 calculations per second and 10^13 bits of memory, but then explains uploading requires additional detail, perhaps as many as 10^19 cps and 10^18 bits. Kurzweil says the technology to do this will be available by 2040. [19] Rather than an instantaneous scan and conversion to digital form, Kurzweil feels humans will most likely experience gradual conversion as portions of their brain are augmented with neural implants, increasing their proportion of non-biological intelligence slowly over time. [20]
Kurzweil believes there is "no objective test that can conclusively determine" the presence of consciousness. [21] Therefore, he says nonbiological intelligences will claim to have consciousness and "the full range of emotional and spiritual experiences that humans claim to have"; [22] he feels such claims will generally be accepted.
Kurzweil says revolutions in genetics, nanotechnology and robotics will usher in the beginning of the singularity. [23] Kurzweil feels that with sufficient genetic technology it should be possible to maintain the body indefinitely, reversing aging while curing cancer, heart disease and other illnesses. [24] Much of this will be possible thanks to nanotechnology, the second revolution, which entails the molecule-by-molecule construction of tools which themselves can "rebuild the physical world". [25] Finally, the revolution in robotics will really be the development of strong AI, defined as machines which have human-level intelligence or greater. [26] This development will be the most important of the century, "comparable in importance to the development of biology itself". [27]
Kurzweil concedes that every technology carries with it the risk of misuse or abuse, from viruses and nanobots to out-of-control AI machines. He believes the only countermeasure is to invest in defensive technologies, for example by allowing new genetics and medical treatments, monitoring for dangerous pathogens, and creating limited moratoriums on certain technologies. As for artificial intelligence Kurzweil feels the best defense is to increase the "values of liberty, tolerance, and respect for knowledge and diversity" in society, because "the nonbiological intelligence will be embedded in our society and will reflect our values". [28]
Kurzweil touches on the history of the singularity concept, tracing it back to John von Neumann in the 1950s and I. J. Good in the 1960s. He compares his singularity to a mathematical or astrophysical singularity. While his singularity is not actually infinite, he says it looks that way from any limited perspective. [29]
During the singularity, Kurzweil predicts that "human life will be irreversibly transformed" [30] and that humans will transcend the "limitations of our biological bodies and brain". [31] He looks beyond the singularity to say that "the intelligence that will emerge will continue to represent the human civilization." Further, he feels that "future machines will be human-like, even if they are not biological". [32]
Kurzweil claims that once nonbiological intelligence predominates, the nature of human life will be radically altered: [33] there will be radical changes in how humans learn, work and play. [34] Kurzweil envisions nanobots which allow people to eat whatever they want while remaining thin and fit, provide copious energy, fight off infections or cancer, replace organs, and augment their brains. Eventually people's bodies will contain so much augmentation that they will be able to alter their "physical manifestation at will". [35]
Kurzweil says the law of accelerating returns suggests that once a civilization develops primitive mechanical technologies, it is only a few centuries before it achieves everything outlined in the book, at which point it will start expanding outward, saturating the universe with intelligence. Since people have found no evidence of other civilizations, Kurzweil believes humans are likely alone in the universe. Thus Kurzweil concludes it is humanity's destiny to do the saturating, enlisting all matter and energy in the process. [36] [37]
As for individual identities during these radical changes, Kurzweil suggests people think of themselves as an evolving pattern rather than a specific collection of molecules. Kurzweil says evolution moves towards "greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity, and greater levels of subtle attributes such as love". [38] He says that these attributes, in the limit, are generally used to describe God. That means, he continues, that evolution is moving towards a conception of God and that the transition away from biological roots is in fact a spiritual undertaking. [38]
Kurzweil does not include an actual written timeline of the past and future, as he did in The Age of Intelligent Machines and The Age of Spiritual Machines; however, he still makes many specific predictions. Kurzweil writes that by 2010 a supercomputer will have the computational capacity to emulate human intelligence [39] and that "by around 2020" this same capacity will be available "for one thousand dollars". [13] After that milestone he expects human brain scanning to contribute to an effective model of human intelligence "by the mid-2020s". [39] These two elements will culminate in computers that can pass the Turing test by 2029. [40] By the early 2030s the amount of non-biological computation will exceed the "capacity of all living biological human intelligence". [41] Finally, the exponential growth in computing capacity will lead to the singularity. Kurzweil spells out the date very clearly: "I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045". [3]
A common criticism of the book relates to the "exponential growth fallacy". As an example, in 1969 humans landed on the Moon. Extrapolating exponential growth from there, one would expect huge lunar bases and crewed missions to distant planets by now. Instead, exploration stalled or even regressed after that. Paul Davies writes that "the key point about exponential growth is that it never lasts", [42] often due to resource constraints. On the other hand, it has been shown that global acceleration until recently followed a hyperbolic rather than an exponential pattern. [43]
Theodore Modis says "nothing in nature follows a pure exponential" and suggests the logistic function is a better fit for "a real growth process". The logistic function looks like an exponential at first but then tapers off and flattens completely. For example, world population and the United States' oil production both appeared to be rising exponentially, but both have leveled off because they were logistic. Kurzweil says "the knee in the curve" is the time when the exponential trend is going to explode, while Modis claims that if the process is logistic, then when you hit the "knee" the quantity you are measuring will only increase by about a factor of 100 more. [44]
While some critics complain that the law of accelerating returns is not a law of nature, [42] others question the religious motivations or implications of Kurzweil's singularity. The buildup towards the singularity is compared with Judeo-Christian end-of-time scenarios. Alex Beam calls it "a Buck Rogers vision of the hypothetical Christian Rapture", [45] and John Gray says "the Singularity echoes apocalyptic myths in which history is about to be interrupted by a world-transforming event". [46]
The radical nature of Kurzweil's predictions is often discussed. Anthony Doerr says that before you "dismiss it as techno-zeal", consider that "every day the line between what is human and what is not quite human blurs a bit more". He lists technology of the day, in 2006, like computers that land supersonic airplanes or in vitro fertility treatments, and asks whether brain implants that access the internet or robots in our blood really are that unbelievable. [47]
In regard to reverse engineering the brain, neuroscientist David J. Linden writes that "Kurzweil is conflating biological data collection with biological insight". He feels that data collection might be growing exponentially, but insight is increasing only linearly. For example, the speed and cost of sequencing genomes are also improving exponentially, but our understanding of genetics is growing very slowly. As for nanobots, Linden believes the spaces available in the brain for navigation are simply too small. He acknowledges that someday humans will fully understand the brain, just not on Kurzweil's mythical timetable. [48]
Paul Davies wrote in Nature that The Singularity Is Near is a "breathless romp across the outer reaches of technological possibility" while warning that the "exhilarating speculation is great fun to read, but needs to be taken with a huge dose of salt." [42]
Anthony Doerr in The Boston Globe wrote: "Kurzweil's book is surprisingly elaborate, smart, and persuasive. He writes clean methodical sentences, includes humorous dialogues with characters in the future and past, and uses graphs that are almost always accessible." [47] His colleague Alex Beam, however, points out that "Singularitarians have been greeted with hooting skepticism". [45] Janet Maslin in The New York Times wrote that "The Singularity Is Near is startling in scope and bravado", but says "much of his thinking tends to be pie in the sky". She observes that he is more focused on optimistic outcomes than on the risks. [49]
In 2006, Barry Ptolemy and his production company Ptolemaic Productions licensed the rights to The Singularity Is Near from Kurzweil. Inspired by the book, Ptolemy directed and produced the film Transcendent Man , which went on to bring more attention to the book.
Kurzweil also directed his own film adaptation, produced in partnership with Terasem; The Singularity Is Near mixes documentary interviews with a science-fiction story involving his robotic avatar Ramona's transformation into an artificial general intelligence. It was screened at the World Film Festival, the Woodstock Film Festival, the Warsaw International FilmFest and the San Antonio Film Festival in 2010, and at the San Francisco Indie Film Festival in 2011, before receiving a general release on July 20, 2012. [50] It is available on DVD or digital download. [51] [52]