| Author | James Lovelock (with Bryan Appleyard) [1] |
| --- | --- |
| Language | English |
| Subject | Environmentalism; superintelligence |
| Genre | Non-fiction |
| Publisher | Penguin Books Limited |
| Publication date | July 4, 2019 |
| Publication place | United Kingdom |
| Media type | Hardcover |
| Pages | 160 |
| ISBN | 9780241399361 |
Novacene: The Coming Age of Hyperintelligence is a 2019 non-fiction book by scientist and environmentalist James Lovelock, co-authored with journalist Bryan Appleyard. [4] It was published by Penguin Books/Allen Lane in the UK [2] and republished by the MIT Press. [3] The book predicts that a benevolent, eco-friendly artificial superintelligence will someday become the dominant lifeform on the planet, and argues that humanity is on the brink of a new era: the Novacene.
Lovelock builds upon his Gaia hypothesis, in which he views Earth's systems, together with the organisms living on it, as one cooperating superorganism. [1] This system, Gaia, regulates and protects itself against external threats such as the Sun's increasing heat output or asteroid impacts. Another assertion of the hypothesis is that Gaia has an (unintentional) evolutionary strategy to sustain itself by sprouting life capable of countering these hazards. Lovelock also sketches the development of life, first anaerobic and then aerobic. [5] He further articulates the central role of sunlight in evolution's progress through three stages. [6]
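The self-regulation Lovelock describes can be illustrated with a heavily simplified, Daisyworld-style toy model (Daisyworld is Lovelock's own well-known demonstration of Gaian feedback; the code below is a hypothetical sketch, with constants chosen for illustration rather than taken from his published model):

```python
# Simplified Daisyworld-style toy: white daisies (albedo 0.75) cool the
# planet, black daisies (albedo 0.25) warm it, bare ground sits at 0.5.
# Growth peaks near 22.5 C, so the daisy mix shifts to counteract
# changes in solar output. All constants are illustrative.

SIGMA = 5.67e-8   # Stefan-Boltzmann constant
FLUX = 917.0      # nominal solar flux chosen for this toy model

def planet_temp(lum, albedo):
    """Radiative-balance temperature (C) for a given luminosity and albedo."""
    return (lum * FLUX * (1.0 - albedo) / SIGMA) ** 0.25 - 273.15

def growth(temp_c):
    """Parabolic growth rate, peaking at 22.5 C and zero outside 5-40 C."""
    return max(0.0, 1.0 - ((22.5 - temp_c) / 17.5) ** 2)

def step(white, black, lum, dt=0.05, death=0.3, q=20.0):
    bare = max(0.0, 1.0 - white - black)
    a_planet = 0.75 * white + 0.25 * black + 0.5 * bare
    t_p = planet_temp(lum, a_planet)
    # Daisies see a local temperature offset from the planetary mean:
    # white patches run cooler than average, black patches warmer.
    g_white = growth(t_p + q * (a_planet - 0.75))
    g_black = growth(t_p + q * (a_planet - 0.25))
    white += dt * white * (bare * g_white - death)
    black += dt * black * (bare * g_black - death)
    # Keep a tiny seed population so daisies can re-establish.
    return max(white, 1e-4), max(black, 1e-4), t_p

def equilibrium(lum, steps=4000):
    """Run the model to a steady state and return the planetary temperature."""
    white, black, t_p = 0.01, 0.01, planet_temp(lum, 0.5)
    for _ in range(steps):
        white, black, t_p = step(white, black, lum)
    return t_p
```

With daisies present the equilibrium temperature stays inside the habitable band across a range of luminosities: `equilibrium(1.2)` comes out markedly cooler than the lifeless-planet value `planet_temp(1.2, 0.5)`, and `equilibrium(0.8)` markedly warmer, which is the self-regulating behaviour the hypothesis attributes to Gaia.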
Lovelock also articulates his views that reason is overvalued compared to intuition, arguing that step-by-step logic cannot explain all mechanisms. [4] According to Lovelock, human language is a curse that forces causal and linear vertical thinking at the expense of intuition. [1]
Lovelock dates the start of the Anthropocene, a proposed geological epoch characterized by humanity's ability to shape the environment to its own needs, to 1712, the year Newcomen's atmospheric engine was invented, a vital catalyst for the later Industrial Revolution. Lovelock proposes a successor to the Anthropocene dubbed the Novacene, an epoch that will see the rise of super-intelligent robotic agents (referred to as 'cyborgs' by Lovelock). These electronic lifeforms would be capable of thinking exponentially faster than humans and would likewise mould their surroundings for their own sustenance. [6]
Lovelock emphasizes that the evolution of the Anthropocene was propelled by market forces, stressing that profitability was a crucial feature of inventions such as Newcomen's engine: it is a technology's economic significance that ensures its development. [6]
Cyborgs would be intelligent enough to rapidly improve themselves and correct their own faults, a form of intentional selection that Lovelock considers superior to evolution's slow and arbitrary natural selection. As an example of self-learning AI he cites DeepMind's AlphaZero, which taught itself chess by playing against itself. Combined with their rapid processing speed, such systems would greatly surpass human intelligence; in Lovelock's words, they may see us the way we see plants: passive and slow. He further suggests these cyborgs may tap natural resources for their sustenance, much as plants and animals rely on sunlight through photosynthesis or on energy stored in organic food such as fruit. [6]
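The contrast between blind Darwinian variation and directed, intentional improvement can be sketched with a toy search problem (a hypothetical illustration, not from the book; the 20-dimensional quadratic fitness function and all parameters are arbitrary choices for demonstration):

```python
import random

# "Darwinian" search relies on blind variation filtered by selection,
# while "intentional" search computes which direction helps and moves
# straight uphill. Both optimize the same arbitrary fitness function.

TARGET = [1.0] * 20  # an arbitrary 20-dimensional goal

def fitness(v):
    return -sum((x - t) ** 2 for x, t in zip(v, TARGET))

def darwinian(steps, rng):
    v = [0.0] * 20
    for _ in range(steps):
        mutant = list(v)
        i = rng.randrange(20)
        mutant[i] += rng.gauss(0.0, 0.1)   # blind random variation...
        if fitness(mutant) > fitness(v):   # ...kept only if it helps
            v = mutant
    return v

def intentional(steps, lr=0.1):
    v = [0.0] * 20
    for _ in range(steps):
        # The gradient of fitness is 2*(t - x): step directly uphill.
        v = [x + lr * 2.0 * (t - x) for x, t in zip(v, TARGET)]
    return v
```

Given the same number of steps, the directed search lands essentially on the optimum while the mutation-and-selection search is still far from it, loosely mirroring Lovelock's point that self-correcting cyborgs would out-pace natural selection.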
Lovelock argues that a future AI takeover will save both the planet and the human race from catastrophic climate change: the cyborgs will themselves recognize the danger of global heating and act to stop the warming of the planet. [4] Contrary to Max Tegmark and others who fear existential risk from advanced artificial intelligence, Lovelock argues that robots will need organic life to keep the planet from overheating, and that robots will therefore want to keep humanity alive, perhaps as pets. Lovelock goes on to argue that humans might be happier under robotic domination. [7]
Turning to more primitive technology, Lovelock condemns autonomous weapon systems capable of killing without human intervention. He also expresses horror at nuclear weapons, but remains a proponent of nuclear energy itself. [6]
In Nature, science journalist Tim Radford praises both Lovelock's career and the book, stating that Novacene and Lovelock's other books are "written persuasively". Radford reserves judgement on whether Lovelock's predictions will come true. [4] In The Guardian, author Steven Poole also praises the writing style, but believes that despite Lovelock's "speculation" there may remain "reasonable cause for alarm" in the event of an AI takeover. He also dismisses Lovelock's "ropey" criticism of logical reasoning, but overall considers Lovelock's "infectious, almost absurdist optimism" a welcome relief from environmental techno-pessimism. [7] In The Daily Telegraph, journalist Roger Lewis gives only two out of five stars to Lovelock's "rambling optimism". [8] Skeptics have categorized Lovelock's predictions as overconfident. [9]