Centre for the Study of Existential Risk

Formation: 2012
Founders: Huw Price, Martin Rees, Jaan Tallinn
Purpose: The study and mitigation of existential risk
Headquarters: Cambridge, England
Parent organization: University of Cambridge
Website: cser.ac.uk

The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology. [1] The co-founders of the centre are Huw Price (Bertrand Russell Professor of Philosophy at Cambridge), Martin Rees (the Astronomer Royal and former President of the Royal Society) and Jaan Tallinn (co-founder of Skype and early investor in Anthropic). [2]


Areas of focus

Managing extreme technological risks

Extreme risks are associated with emerging and future technological advances and with the impacts of human activity. Managing these extreme technological risks is an urgent task, but one that poses particular difficulties and has been comparatively neglected in academia. [3]

Global catastrophic biological risks

Extreme risks and the global environment

Risks from advanced artificial intelligence

Media coverage

CSER has received coverage in many newspapers, particularly in the United Kingdom, [29] [30] [31] addressing a range of its research topics. CSER was profiled on the front cover of Wired, [32] and in the special Frankenstein issue of Science in 2018. [33]

Advisors

CSER's advisors include Cambridge academics as well as external advisors based at other institutions.


Related Research Articles

Eliezer Yudkowsky: American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

Nick Bostrom: Swedish philosopher and writer (born 1973)

Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He is the founding director of the Future of Humanity Institute at Oxford University.

Stuart J. Russell: British computer scientist and author (born 1962)

Stuart Jonathan Russell is a British computer scientist known for his contributions to artificial intelligence (AI). He is a professor of computer science at the University of California, Berkeley, and was from 2008 to 2011 an adjunct professor of neurological surgery at the University of California, San Francisco. He holds the Smith-Zadeh Chair in Engineering at the University of California, Berkeley, where he founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI). Russell is the co-author, with Peter Norvig, of the field's authoritative textbook, Artificial Intelligence: A Modern Approach, which is used in more than 1,500 universities in 135 countries.

Artificial general intelligence: Hypothetical human-level or stronger AI

An artificial general intelligence (AGI) is a hypothetical type of intelligent agent. If realized, an AGI could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks. Creating AGI is a primary goal of some artificial intelligence research and of companies such as OpenAI, DeepMind, and Anthropic. AGI is a common topic in science fiction and futures studies.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

Human extinction: Hypothetical end of the human species

Human extinction is the hypothetical end of the human species, either by population decline due to extraneous natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction), for example by sub-replacement fertility.

Jaan Tallinn: Estonian programmer and investor

Jaan Tallinn is an Estonian billionaire computer programmer and investor known for his participation in the development of Skype and the file-sharing application FastTrack/Kazaa. Tallinn is a leading figure in the field of existential risk, having co-founded both the Centre for the Study of Existential Risk (CSER) at the University of Cambridge in the United Kingdom and the Future of Life Institute in Cambridge, Massachusetts, in the United States. He was an early investor and board member at DeepMind and various other artificial intelligence companies.

Partha Dasgupta: British economist (born 1942)

Sir Partha Sarathi Dasgupta is an Indian-British economist who is Frank Ramsey Professor Emeritus of Economics at the University of Cambridge, United Kingdom, and a fellow of St John's College, Cambridge.

Future of Humanity Institute: Oxford interdisciplinary research centre

The Future of Humanity Institute (FHI) is an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director is philosopher Nick Bostrom, and its research staff include futurist Anders Sandberg and Giving What We Can founder Toby Ord.

Global catastrophic risk: Potentially harmful worldwide events

A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk."

Huw Price is an Australian philosopher, formerly the Bertrand Russell Professor in the Faculty of Philosophy, Cambridge, and a Fellow of Trinity College, Cambridge.

Future of Life Institute: International nonprofit research institute

The Future of Life Institute (FLI) is an international nonprofit research institute, based in Cambridge, Massachusetts and co-founded by Jaan Tallinn, that works to reduce extreme large-scale risks facing humanity, including existential risk from advanced artificial intelligence.

Existential risk from artificial general intelligence: Hypothesized risk to human existence

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or an irreversible global catastrophe.

The Leverhulme Centre for the Future of Intelligence (CFI) is an interdisciplinary research centre within the University of Cambridge that studies artificial intelligence. It is funded by the Leverhulme Trust.

Biotechnology risk is a form of existential risk from biological sources, such as genetically engineered biological agents. The accidental or deliberate release of such high-consequence pathogens could constitute a global catastrophic risk.

Olle Häggström: Swedish mathematician

Olle Häggström is a professor of mathematical statistics at Chalmers University of Technology. Häggström earned his doctorate in 1994 at Chalmers University of Technology with Jeffrey Steif as supervisor. He became an associate professor at the same university in 1997, and professor of mathematical statistics at the University of Gothenburg in 2000. In 2002 he returned to Chalmers University of Technology as professor. His research focuses mainly on probability theory, including Markov chains, percolation theory, and other models in statistical mechanics.

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and in supra-national bodies such as the IEEE and the OECD. Since 2016, a wave of AI ethics guidelines has been published in order to maintain social control over the technology. Regulation is considered necessary both to encourage AI and to manage the associated risks. In addition to regulation, organizations deploying AI need to play a central role in creating and deploying trustworthy AI, in line with trustworthy-AI principles, and to take accountability for mitigating the risks. Regulation of AI through mechanisms such as review boards can also be seen as a social means of approaching the AI control problem.

Beth Victoria Lois Singler, born Beth Victoria White, is a British anthropologist specialising in artificial intelligence. She is known for her digital ethnographic research on the impact of apocalyptic stories on the conception of AI and robots, her comments on the societal implications of AI, as well as her public engagement work. The latter includes a series of four documentaries on whether robots could feel pain, human-robot companionship, AI ethics, and AI consciousness. She is currently the Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge.

Suffering risks: Risks of astronomical suffering

Suffering risks, or s-risks, are risks involving an astronomical amount of suffering, far more than all the suffering that has occurred on Earth to date. They are sometimes categorized as a subclass of existential risks.

Global catastrophe scenarios: Scenarios in which a global catastrophe creates harm

Scenarios in which a global catastrophic risk creates harm have been widely discussed. Some sources of catastrophic risk are anthropogenic, such as global warming, environmental degradation, and nuclear war. Others are non-anthropogenic or natural, such as meteor impacts or supervolcanoes. The impact of these scenarios can vary widely, depending on the cause and the severity of the event, ranging from temporary economic disruption to human extinction. Many societal collapses have already happened throughout human history.

References

  1. Biba, Erin (1 June 2015). "Meet the Co-Founder of an Apocalypse Think Tank". Scientific American. 312 (6): 26. doi:10.1038/scientificamerican0615-26. PMID 26336680.
  2. Lewsey, Fred (25 November 2012). "Humanity's last invention and our uncertain future". Research News. Retrieved 24 December 2012.
  3. "Managing Extreme Technological Risks".
  4. "Existential Risk Research Network | X-Risk Research Network | www.x-risk.net".
  5. "Cambridge Conference on Catastrophic Risk 2016".
  6. "Cambridge Conference on Catastrophic Risk 2018".
  7. "Latest news | Humans for Survival".
  8. "The B. John Garrick Institute for the Risk Sciences".
  9. "PAIS researchers secure prestigious Leverhulme funding".
  10. "Appg-future-gens".
  11. "Events".
  12. "CSER Cambridge". YouTube. Retrieved 6 April 2019.
  13. "Biological Weapons Convention: Where Next?".
  14. Wintle, Bonnie C.; Boehm, Christian R.; Rhodes, Catherine; Molloy, Jennifer C.; Millett, Piers; Adam, Laura; Breitling, Rainer; Carlson, Rob; Casagrande, Rocco; Dando, Malcolm; Doubleday, Robert; Drexler, Eric; Edwards, Brett; Ellis, Tom; Evans, Nicholas G.; Hammond, Richard; Haseloff, Jim; Kahl, Linda; Kuiken, Todd; Lichman, Benjamin R.; Matthewman, Colette A.; Napier, Johnathan A.; Óhéigeartaigh, Seán S.; Patron, Nicola J.; Perello, Edward; Shapira, Philip; Tait, Joyce; Takano, Eriko; Sutherland, William J. (2017). "A transatlantic perspective on 20 emerging issues in biological engineering". eLife. 6. doi:10.7554/eLife.30247. PMC 5685469. PMID 29132504.
  15. "BWC Press Conference".
  16. "Talk to Organisation for the Prohibition of Chemical Weapons".
  17. University of California (24 September 2015). "A 'Parking Lot Pitch' to the Pope" . Retrieved 6 April 2019 via YouTube.
  18. Dasgupta, Partha; Raven, Peter; McIvor, Anna, eds. (2019). Biological Extinction. Cambridge University Press. doi:10.1017/9781108668675. ISBN 9781108668675. S2CID 241969653.
  19. Amano, Tatsuya; Székely, Tamás; Sandel, Brody; Nagy, Szabolcs; Mundkur, Taej; Langendoen, Tom; Blanco, Daniel; Soykan, Candan U.; Sutherland, William J. (2017). "Successful conservation of global waterbird populations depends on effective governance" (PDF). Nature. 553 (7687): 199–202. doi:10.1038/nature25139. PMID 29258291. S2CID 205262876.
  20. Balmford, Andrew; Amano, Tatsuya; Bartlett, Harriet; Chadwick, Dave; Collins, Adrian; Edwards, David; Field, Rob; Garnsworthy, Philip; Green, Rhys; Smith, Pete; Waters, Helen; Whitmore, Andrew; Broom, Donald M.; Chara, Julian; Finch, Tom; Garnett, Emma; Gathorne-Hardy, Alfred; Hernandez-Medrano, Juan; Herrero, Mario; Hua, Fangyuan; Latawiec, Agnieszka; Misselbrook, Tom; Phalan, Ben; Simmons, Benno I.; Takahashi, Taro; Vause, James; Zu Ermgassen, Erasmus; Eisner, Rowan (2018). "The environmental costs and benefits of high-yield farming". Nature Sustainability. 1 (9): 477–485. doi:10.1038/s41893-018-0138-5. PMC 6237269. PMID 30450426.
  21. Currie, Adrian (2018). "Geoengineering tensions" (PDF). Futures. 102: 78–88. doi:10.1016/j.futures.2018.02.002. hdl:10871/35739. S2CID 240258929.
  22. "Business School Rankings for the 21st Century".
  23. Berwick, Isabel (27 January 2019). "As business schools rethink what they do, so must the FT". Financial Times.
  24. McMillan, Robert (16 January 2015). "AI Has Arrived, and That Really Worries the World's Brightest Minds". Wired . Retrieved 24 April 2015.
  25. "Leverhulme Centre for the Future of Intelligence".
  26. "Decision & AI".
  27. maliciousaireport.com
  28. "Best Paper Award – Aies Conference".
  29. Connor, Steve (14 September 2013). "Can We Survive?". The New Zealand Herald.
  30. "CSER media coverage". Centre for the Study of Existential Risk. Archived from the original on 30 June 2014. Retrieved 19 June 2014.
  31. "Humanity's Last Invention and Our Uncertain Future". University of Cambridge Research News. 25 November 2012.
  32. Benson, Richard (12 February 2017). "Meet Earth's Guardians, the real-world X-men and women saving us from existential threats". Wired UK.
  33. Kupferschmidt, Kai (12 January 2018). "Taming the monsters of tomorrow". Science. 359 (6372): 152–155. Bibcode:2018Sci...359..152K. doi:10.1126/science.359.6372.152. PMID 29326256.
  34. "Team".