Centre for the Study of Existential Risk

Formation: 2012
Founders: Huw Price, Martin Rees, Jaan Tallinn
Purpose: Existential risk studies
Headquarters: Cambridge, England
Parent organization: University of Cambridge
Website: cser.ac.uk

The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology.[1] The centre's co-founders are Huw Price (Bertrand Russell Professor of Philosophy at Cambridge), Martin Rees (the Astronomer Royal and former President of the Royal Society) and Jaan Tallinn (co-founder of Skype and an early investor in Anthropic).[2]

Areas of focus

Managing extreme technological risks

CSER studies extreme risks associated with emerging and future technological advances and with the impacts of human activity. Managing these extreme technological risks is an urgent task, but one that poses particular difficulties and has been comparatively neglected in academia.[3]

Global catastrophic biological risks

Extreme risks and the global environment

Risks from advanced artificial intelligence

Media coverage

CSER has received coverage in many newspapers, particularly in the United Kingdom, across a range of topics.[29][30][31] It was profiled on the front cover of Wired,[32] and in the special Frankenstein issue of Science in 2018.[33]

Advisors

CSER's advisors include Cambridge academics as well as advisors from outside the university.

See also

Related Research Articles

<span class="mw-page-title-main">Eliezer Yudkowsky</span> American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

<span class="mw-page-title-main">Nick Bostrom</span> Philosopher and writer (born 1973)

Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.

<span class="mw-page-title-main">Stuart J. Russell</span> British computer scientist and author (born 1962)

Stuart Jonathan Russell is a British computer scientist known for his contributions to artificial intelligence (AI). He is a professor of computer science at the University of California, Berkeley, where he holds the Smith-Zadeh Chair in Engineering, and from 2008 to 2011 was an adjunct professor of neurological surgery at the University of California, San Francisco. He founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley. With Peter Norvig, he co-authored the authoritative AI textbook Artificial Intelligence: A Modern Approach, used in more than 1,500 universities in 135 countries.

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), by contrast, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

<span class="mw-page-title-main">Human extinction</span> Hypothetical end of the human species

Human extinction is the hypothetical end of the human species, either by population decline due to extraneous natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction), for example by sub-replacement fertility.

<span class="mw-page-title-main">Jaan Tallinn</span> Estonian programmer and investor

Jaan Tallinn is an Estonian billionaire computer programmer and investor known for his participation in the development of Skype and the file-sharing application FastTrack/Kazaa.

<span class="mw-page-title-main">Partha Dasgupta</span> British economist (born 1942)

Sir Partha Sarathi Dasgupta is an Indian-British economist who is Frank Ramsey Professor Emeritus of Economics at the University of Cambridge, United Kingdom, and a fellow of St John's College, Cambridge.

<span class="mw-page-title-main">Future of Humanity Institute</span> Defunct Oxford interdisciplinary research centre

The Future of Humanity Institute (FHI) was an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director was philosopher Nick Bostrom, and its research staff included futurist Anders Sandberg and Giving What We Can founder Toby Ord.

The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.

<span class="mw-page-title-main">Global catastrophic risk</span> Hypothetical global-scale disaster risk

A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk".

Huw Price is an Australian philosopher, formerly the Bertrand Russell Professor in the Faculty of Philosophy, Cambridge, and a Fellow of Trinity College, Cambridge.

<span class="mw-page-title-main">Future of Life Institute</span> International nonprofit research institute

The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.

Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

The Leverhulme Centre for the Future of Intelligence (CFI) is an interdisciplinary research centre within the University of Cambridge that studies artificial intelligence. It is funded by the Leverhulme Trust.

<span class="mw-page-title-main">Olle Häggström</span> Swedish mathematician

Olle Häggström is a professor of mathematical statistics at Chalmers University of Technology. He earned his doctorate at Chalmers in 1994 under the supervision of Jeffrey Steif, became an associate professor at the same university in 1997, and was appointed professor of mathematical statistics at the University of Gothenburg in 2000 before returning to Chalmers as professor in 2002. His research focuses mainly on probability theory, including Markov chains, percolation theory, and other models from statistical mechanics.

Beth Victoria Lois Singler, born Beth Victoria White, is a British anthropologist specialising in artificial intelligence. She is known for her digital ethnographic research on the impact of apocalyptic stories on the conception of AI and robots, her comments on the societal implications of AI, as well as her public engagement work. The latter includes a series of four documentaries on whether robots could feel pain, human-robot companionship, AI ethics, and AI consciousness. She is currently the Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge.

<span class="mw-page-title-main">Risk of astronomical suffering</span> Risks of astronomical suffering

Risks of astronomical suffering, also called suffering risks or s-risks, are risks involving much more suffering than all that has occurred on Earth so far. They are sometimes categorized as a subclass of existential risks.

<span class="mw-page-title-main">Global catastrophe scenarios</span> Scenarios in which a global catastrophe creates harm

Scenarios in which a global catastrophic risk creates harm have been widely discussed. Some sources of catastrophic risk are anthropogenic, such as global warming, environmental degradation, and nuclear war. Others are non-anthropogenic or natural, such as meteor impacts or supervolcanoes. The impact of these scenarios can vary widely, depending on the cause and the severity of the event, ranging from temporary economic disruption to human extinction. Many societal collapses have already happened throughout human history.

AI safety is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to ensure AI systems are moral and beneficial, as well as monitoring AI systems for risks and enhancing their reliability. The field is particularly concerned with existential risks posed by advanced AI models.

References

  1. Biba, Erin (1 June 2015). "Meet the Co-Founder of an Apocalypse Think Tank". Scientific American. 312 (6): 26. doi:10.1038/scientificamerican0615-26. PMID 26336680.
  2. Lewsey, Fred (25 November 2012). "Humanity's last invention and our uncertain future". Research News. Retrieved 24 December 2012.
  3. "Managing Extreme Technological Risks".
  4. "Existential Risk Research Network | X-Risk Research Network | www.x-risk.net".
  5. "Cambridge Conference on Catastrophic Risk 2016".
  6. "Cambridge Conference on Catastrophic Risk 2018".
  7. "Latest news | Humans for Survival".
  8. "The B. John Garrick Institute for the Risk Sciences".
  9. "PAIS researchers secure prestigious Leverhulme funding".
  10. "Appg-future-gens".
  11. "Events".
  12. "CSER Cambridge". YouTube. Retrieved 6 April 2019.
  13. "Biological Weapons Convention: Where Next?".
  14. Wintle, Bonnie C.; Boehm, Christian R.; Rhodes, Catherine; Molloy, Jennifer C.; Millett, Piers; Adam, Laura; Breitling, Rainer; Carlson, Rob; Casagrande, Rocco; Dando, Malcolm; Doubleday, Robert; Drexler, Eric; Edwards, Brett; Ellis, Tom; Evans, Nicholas G.; Hammond, Richard; Haseloff, Jim; Kahl, Linda; Kuiken, Todd; Lichman, Benjamin R.; Matthewman, Colette A.; Napier, Johnathan A.; Óhéigeartaigh, Seán S.; Patron, Nicola J.; Perello, Edward; Shapira, Philip; Tait, Joyce; Takano, Eriko; Sutherland, William J. (2017). "A transatlantic perspective on 20 emerging issues in biological engineering". eLife. 6. doi:10.7554/eLife.30247. PMC 5685469. PMID 29132504.
  15. "BWC Press Conference".
  16. "Talk to Organisation for the Prohibition of Chemical Weapons".
  17. University of California (24 September 2015). "A 'Parking Lot Pitch' to the Pope". Retrieved 6 April 2019 via YouTube.
  18. Dasgupta, Partha; Raven, Peter; McIvor, Anna, eds. (2019). Biological Extinction. Cambridge University Press. doi:10.1017/9781108668675. ISBN 9781108668675. S2CID 241969653.
  19. Amano, Tatsuya; Székely, Tamás; Sandel, Brody; Nagy, Szabolcs; Mundkur, Taej; Langendoen, Tom; Blanco, Daniel; Soykan, Candan U.; Sutherland, William J. (2017). "Successful conservation of global waterbird populations depends on effective governance" (PDF). Nature. 553 (7687): 199–202. doi:10.1038/nature25139. PMID 29258291. S2CID 205262876.
  20. Balmford, Andrew; Amano, Tatsuya; Bartlett, Harriet; Chadwick, Dave; Collins, Adrian; Edwards, David; Field, Rob; Garnsworthy, Philip; Green, Rhys; Smith, Pete; Waters, Helen; Whitmore, Andrew; Broom, Donald M.; Chara, Julian; Finch, Tom; Garnett, Emma; Gathorne-Hardy, Alfred; Hernandez-Medrano, Juan; Herrero, Mario; Hua, Fangyuan; Latawiec, Agnieszka; Misselbrook, Tom; Phalan, Ben; Simmons, Benno I.; Takahashi, Taro; Vause, James; Zu Ermgassen, Erasmus; Eisner, Rowan (2018). "The environmental costs and benefits of high-yield farming". Nature Sustainability. 1 (9): 477–485. doi:10.1038/s41893-018-0138-5. PMC 6237269. PMID 30450426.
  21. Currie, Adrian (2018). "Geoengineering tensions" (PDF). Futures. 102: 78–88. doi:10.1016/j.futures.2018.02.002. hdl:10871/35739. S2CID 240258929.
  22. "Business School Rankings for the 21st Century".
  23. Berwick, Isabel (27 January 2019). "As business schools rethink what they do, so must the FT". Financial Times.
  24. McMillan, Robert (16 January 2015). "AI Has Arrived, and That Really Worries the World's Brightest Minds". Wired. Retrieved 24 April 2015.
  25. "Leverhulme Centre for the Future of Intelligence".
  26. "Decision & AI".
  27. maliciousaireport.com
  28. "Best Paper Award – Aies Conference".
  29. Connor, Steve (14 September 2013). "Can We Survive?". The New Zealand Herald.
  30. "CSER media coverage". Centre for the Study of Existential Risk. Archived from the original on 30 June 2014. Retrieved 19 June 2014.
  31. "Humanity's Last Invention and Our Uncertain Future". University of Cambridge Research News. 25 November 2012.
  32. Benson, Richard (12 February 2017). "Meet Earth's Guardians, the real-world X-men and women saving us from existential threats". Wired UK.
  33. Kupferschmidt, Kai (12 January 2018). "Taming the monsters of tomorrow". Science. 359 (6372): 152–155. Bibcode:2018Sci...359..152K. doi:10.1126/science.359.6372.152. PMID 29326256.
  34. "Team".