Calum Chace

Calum Chace (born 20 March 1959) is an English writer and speaker focusing on artificial intelligence. [1]

He is the author of Surviving AI, The Economic Singularity, and the philosophical science fiction novels Pandora's Brain, [2] and its sequel, Pandora's Oracle.

Education

Chace studied at Maidstone Grammar School in Kent, England, and later read philosophy, politics, and economics (PPE) at Oxford University. His interest in AI stems from his reading of science fiction, which he describes as "philosophy in fancy dress". [3]

Career

Prior to becoming a full-time writer and speaker in 2012, Chace had a 30-year career in journalism and business. He trained as a journalist with the BBC and later wrote a column for the Financial Times. [4] He then moved into business, running a media practice at KPMG [3] before serving as a director and CEO of a number of entrepreneurial businesses. [6] He is now a contributor to Forbes magazine. [5]

He has published five books on artificial intelligence. [7]

In 2017, Chace co-founded the Economic Singularity Club, "a loose group of technologists, academics and writers who think the threat of mass technological unemployment is worth taking seriously". [8] In January 2019 the group published Stories from 2045, a collection of short stories by some of its members speculating on what the world might look like in 2045.

Publications

Book | Year published | Author(s)
The Internet Consumer Bible [9] | 2000 | Tess Read, Calum Chace & Simon Rowe
The Internet Start-Up Bible [10] | 2000 | Tess Read, Calum Chace & Simon Rowe
Pandora's Brain [11] [12] [13] | 2014 | Calum Chace
Surviving AI: The Promise and Peril of Artificial Intelligence [12] [13] [14] | 2015 | Calum Chace
The Economic Singularity: Artificial intelligence and the death of capitalism [15] [16] | 2016 | Calum Chace
Artificial Intelligence and the Two Singularities [17] | 2018 | Calum Chace
Stories from 2045 [18] | 2019 | Calum Chace (editor)
Pandora's Oracle [19] | 2021 | Calum Chace

Talks

In July 2019, Chace was listed among the top 50 futurist speakers in the world. [20]

Economic singularity

Chace describes the economic singularity as the time when technological unemployment becomes a reality. He argues that "it is at least a serious possibility that within a generation, many or even most people will be unemployable because machines will be able to do whatever they could do for money better, cheaper and faster. We should be taking this possibility seriously and working out what we would do about it." [21]

"In the past, automation hasn't caused lasting unemployment and has raised the level of wealth in the economy and created new jobs, but past examples of automation have replaced our muscle power and we had our cognitive abilities." So what will happen when robots automate our cognitive work? [22] "When they start seeing cars driving around with no one driving them, people will realise how impressive computers are. If we don't have a plan, people will panic." [23]

"I think our best hope going forward is figuring out how to live in an economy of radical abundance, where machines do all the work, and we basically play." [24] "A world where machines do all the jobs could be a world where humans do more important things, like playing, learning and having fun, but paying for that is going to be tricky." [25]

References

  1. "Life in 2028: how advances in AI could change our lives for the better - and worse". The National. Retrieved 2018-08-28.
  2. "Calum Chace : Author and speaker on artificial intelligence". 21stcentury.co.uk. Retrieved July 25, 2016.
  3. Hayes, Dawn (2003-01-20). "School Daze". The Guardian. Retrieved 2018-08-28.
  4. Chace, Calum (7 December 2004). "You win some, you lose some..." Financial Times. Retrieved 2019-11-14.
  5. "Calum Chace - COGNITIVE WORLD". Forbes. Archived from the original on May 12, 2019. Retrieved 2019-11-14.
  6. "Weeding Technologies Limited - Company Profile - Endole". suite.endole.co.uk. Retrieved 2019-11-14.
  7. "Artificial Intelligence Book Of September 2016".
  8. Thornhill, John (28 January 2019). "Preparing for the D-Day of technological change will be vital". Financial Times. Retrieved 2019-02-14.
  9. Read, Tess; Chace, Calum; Rowe, Simon (2000-12-07). The Internet Consumer Bible. London: Random House Business Books. ISBN 9780712671972.
  10. Read, Tess; Chace, Calum; Rowe, Simon (2000-05-04). The Internet Start-Up Bible. London: Random House Business Books. ISBN 9780712669665.
  11. Chace, Calum (2014-02-04). Pandora's Brain. Three Cs. ISBN 9780993211607.
  12. Arthur, Charles (2015-11-07). "Artificial intelligence: 'Homo sapiens will be split into a handful of gods and the rest of us'". The Guardian. Retrieved 2018-08-28.
  13. Kleinman, Zoe (2017-07-21). "AI demo picks out recipes from food photos". BBC News. Retrieved 2018-08-28.
  14. Chace, Calum. Surviving AI: The Promise and Peril of Artificial Intelligence. Three Cs. Retrieved 2018-08-28.
  15. Chace, Calum (2016-07-18). The Economic Singularity: Artificial intelligence and the death of capitalism. Three Cs. ISBN 9780993211645.
  16. "MPs want pupils to learn to rival robots – they should be equipped for a work-free world instead". New Statesman. 12 October 2016. Retrieved 2018-08-28.
  17. Chace, Calum (2018-04-28). Artificial Intelligence and the Two Singularities (1st ed.). Chapman and Hall/CRC. ISBN 9780815368533.
  18. Stories from 2045.
  19. Pandora's Oracle. Three Cs. 18 February 2021. Retrieved 2021-03-19 via www.amazon.com.
  20. "The Top Futurist Speakers to Have at Your Conference". ReadWrite. 2019-07-02. Retrieved 2019-11-07.
  21. Kelion, Leo (2018-04-02). "AI 'poses less risk to jobs than feared'". BBC News. Retrieved 2019-08-22.
  22. "MPs want pupils to learn to rival robots – they should be equipped for a work-free world instead". www.newstatesman.com. 12 October 2016. Retrieved 2019-08-22.
  23. Kleinman, Zoe (2017-05-26). "Workers' rights v robo jobs". BBC News. Retrieved 2019-08-22.
  24. Arthur, Charles (2015-11-07). "Artificial intelligence: 'Homo sapiens will be split into a handful of gods and the rest of us'". The Observer. ISSN 0029-7712. Retrieved 2019-08-22.
  25. "Life in 2028: how advances in AI could change our lives for the better - and worse". The National. 7 April 2018. Retrieved 2019-08-22.