2081: A Hopeful View of the Human Future

Author: Gerard K. O'Neill
Illustrator: Cal Sacks
Country: United States
Subject: Futurology
Publisher: Simon & Schuster
Publication date: 1981
Media type: Print (hardcover)
Pages: 284 pp (first edition)
ISBN: 978-0-671-24257-2
OCLC: 7205001
Dewey Decimal: 303.4/19
LC Class: CB161 .O53

2081: A Hopeful View of the Human Future is a 1981 book by Princeton physicist Gerard K. O'Neill. The book is an attempt to predict the social and technological state of humanity 100 years in the future. O'Neill's positive attitude towards both technology and human potential distinguished this book from the gloomy predictions of a Malthusian catastrophe made by contemporary scientists. Paul R. Ehrlich wrote in 1968 in The Population Bomb, "in the 1970s and 1980s hundreds of millions of people will starve to death". The Club of Rome's 1972 Limits to Growth predicted a catastrophic end to the Industrial Revolution within 100 years from resource exhaustion and pollution.


O'Neill's contrary view had two main components. First, he analyzed the previous attempts to predict the future of society—including many catastrophes that had not materialized. Second, he extrapolated historical trends under the assumption that the obstacles identified by other authors would be overcome by five technological "Drivers of Change". He extrapolated an average American family income in 2081 of $1 million/year.
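As an illustration of the kind of trend extrapolation involved (the starting income below is an assumed round figure for 1981, not taken from the book), reaching $1 million per year over 100 years implies a sustained growth rate of roughly 4% per year:

```python
# Back-calculate the constant annual growth rate implied by extrapolating
# an assumed ~$20,000 family income in 1981 to O'Neill's $1 million in 2081.
income_1981 = 20_000      # assumed starting figure, for illustration only
income_2081 = 1_000_000   # O'Neill's extrapolated figure
years = 100

# Solve income_2081 = income_1981 * (1 + r)**years for r.
rate = (income_2081 / income_1981) ** (1 / years) - 1
print(f"Implied annual growth rate: {rate:.2%}")  # ~3.99%
```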

Two developments based on his own research were responsible for much of his optimism. In The High Frontier: Human Colonies in Space O'Neill described solar power satellites that provide unlimited clean energy, making it far easier for humanity to reach and exceed present developed-world living standards. Overpopulation pressures would be relieved as billions of people eventually emigrate to colonies in free space. These colonies would offer an Earth-like environment but with vastly higher productivity for industry and agriculture. These colonies and satellites would be constructed from asteroid or lunar materials launched into the desired orbits cheaply by the mass drivers O'Neill's group developed.

Part I: The Art of Prophecy

Previous futurist authors he cites:

Clarke

Arthur C. Clarke's Profiles of the Future included a long list of predictions, many of which O'Neill endorsed. Two of Clarke's maxims, which O'Neill quotes,[1] sum up O'Neill's own attitude as well:

Anything that is theoretically possible will be achieved in practice, no matter what the technical difficulties, if it is desired greatly enough.

We can never run out of energy or matter, but we can all too easily run out of brains.

Part II: The Drivers of Change

Sections are included on the five key "Drivers of Change" that O'Neill believed would be the focus of future development.

O'Neill applied basic physics to understand the limits of possible change, using the history of each technology to extrapolate likely progress. He drew on the history of computing to reason about how people and institutions would shape, and be shaped by, the likely changes. He predicted that future computers would have to run at very low voltage because of heat dissipation. The main basis of his technology extrapolation for computers was Moore's law, one of the great successes of trend estimation in predicting technological progress.
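A minimal sketch of that kind of extrapolation, assuming the classic Moore's-law doubling period of about two years (the doubling period here is a conventional assumption, not a figure from the book):

```python
# Extrapolate computing capacity under a simple Moore's-law model:
# capacity doubles every `doubling_years`.
def moores_law_factor(years: float, doubling_years: float = 2.0) -> float:
    """Growth factor after `years` of doubling every `doubling_years`."""
    return 2.0 ** (years / doubling_years)

# Over the century O'Neill considered (1981-2081), steady doubling
# every two years would compound to an enormous factor:
factor = moores_law_factor(100)
print(f"Growth over 100 years: {factor:.2e}")  # ~1.13e+15
```

The point of the sketch is how sensitive such extrapolations are to the assumed doubling period: stretching it from two years to three cuts the century-long factor by roughly five orders of magnitude.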

He also predicted the social aspects of the future of computers. He identified computers as the most certain of his five "drivers of change", because their adoption could be driven primarily by individual or local decisions, while the other four, such as space colonies, depended on large-scale decision-making. He observed the success of minicomputers, calculators, and the first home computers, and predicted that every home would have a computer in a hundred years. With the aid of speculations by computer pioneers such as John von Neumann and the writers of "tracts" such as Zamyatin's We, O'Neill also predicted that privacy would be under siege from computers in 2081.

O'Neill predicted that software engineering issues and the intractability of artificial intelligence problems would require massive programming efforts and very powerful processors to achieve truly usable computers. His prediction was based on the difficulties and failures of computer use he had observed in 1981, including a candid horror story of his own Princeton University library's attempt to computerize its operations. His computers of the future, represented by the robot butler his visitor to Earth encounters in 2081, included speaker-independent speech recognition and natural language processing. O'Neill correctly pointed out the huge difference between computers and human brains, and stated that, while a more human-like artificial brain is a worthy goal, computers will be vastly improved descendants of today's rather than truly intelligent and creative artificial brains.

Part III: The World in 2081

This section was written as a series of dispatches home from "Eric C. Rawson", a native of a distant space colony called "Fox Cluster". By analogy with American religious colonists such as the Puritans and Mormons, O'Neill suggests that such a colony might have been founded by a group of pacifists who chose to live about twice as far from the Sun as Pluto in order to avoid involvement in Earth's wars. His calculations indicate that colonies at this distance could have Earth-level sunlight using a mirror the same weight as the colony itself. Eric pays a visit to the Earth of 2081 to take care of family business and explore a world that is nearly as foreign to him as it is to us.
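The mirror sizing follows from the inverse-square law. Taking Pluto's mean distance as roughly 39.5 AU (a standard astronomical figure, not one quoted from the book), a colony at twice that distance receives only a small fraction of Earth's sunlight per unit area, so a concentrating mirror would need a collecting area several thousand times the colony's cross-section:

```python
# Inverse-square estimate of the mirror area needed for Earth-level
# sunlight at twice Pluto's distance from the Sun.
pluto_au = 39.5                # Pluto's mean distance in AU (assumed figure)
colony_au = 2 * pluto_au       # distance of the hypothetical "Fox Cluster"

# Sunlight intensity falls as 1/r^2 (r in AU; Earth at 1 AU has intensity 1).
intensity_ratio = 1 / colony_au ** 2
area_multiplier = colony_au ** 2
print(f"Sunlight at {colony_au:.0f} AU: {intensity_ratio:.2e} of Earth's")
print(f"Mirror area needed: ~{area_multiplier:,.0f}x colony cross-section")
```

A very thin film mirror can have enormous area at modest mass, which is how a collector thousands of times the colony's cross-section can still weigh about the same as the colony, consistent with O'Neill's claim.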

After each dispatch, O'Neill added a section that described his reasoning for each situation the visitor described, such as riding a "floater" train going thousands of miles per hour in vacuum, interacting with a household robot or visiting a fully enclosed Pennsylvania city with a tropical climate in midwinter. Each section was written from his perspective as a physicist. For example, his description of "Honolulu, Pennsylvania" included multiple roof layers that could be retracted in good weather. The city enjoyed an artificial tropical climate all year because of internal climate controls and advanced insulation. He also proposed magnetically levitated "floater" trains moving in very-low-pressure tunnels that would replace airplanes on heavily traveled routes.

Part IV: Wild Cards

This section explores not the most probable outcomes but "the limits of the possible": scenarios O'Neill considered less likely, how likely they actually are, and what they might mean. These included nuclear annihilation, the attainment of immortality, and contact with extraterrestrial civilizations. For the last case, he presents a thought experiment in which a hypothetical alien civilization, the "Primans", explores the galaxy with self-replicating robots, monitoring every planetary system without betraying its own position and destroying intelligent life (by building giant mirrors to incinerate the offending planet) if it felt threatened. The experiment suggests that conflict, or even surprise contact, with an intelligent alien life form, that staple of science fiction, is highly unlikely.

See also

Prediction

Technologies discussed


References

Bibliography