| Formation | 2012 |
|---|---|
| Founders | Huw Price, Martin Rees, Jaan Tallinn |
| Purpose | Existential risk studies |
| Headquarters | Cambridge, England |
| Parent organization | University of Cambridge |
| Website | cser.ac.uk |
The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology. [1] The co-founders of the centre are Huw Price (Bertrand Russell Professor of Philosophy at Cambridge), Martin Rees (the Astronomer Royal and former President of the Royal Society), and Jaan Tallinn (co-founder of Skype and an early investor in Anthropic). [2]
CSER's research focuses on risks associated with emerging and future technological advances and with the impacts of human activity. The centre argues that managing these extreme technological risks is an urgent task, but one that poses particular difficulties and has been comparatively neglected in academia. [3]
CSER has received coverage in many newspapers, particularly in the United Kingdom. [29] [30] [31] It was profiled on the front cover of Wired, [32] and in the special Frankenstein issue of Science in 2018. [33]
CSER's advisors include Cambridge academics such as Partha Dasgupta and Beth Singler, as well as external advisors such as Eliezer Yudkowsky, Nick Bostrom, Stuart Russell, and Olle Häggström.
Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.
Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.
Stuart Jonathan Russell is a British computer scientist known for his contributions to artificial intelligence (AI). He is a professor of computer science at the University of California, Berkeley, where he holds the Smith-Zadeh Chair in Engineering, and from 2008 to 2011 he was an adjunct professor of neurological surgery at the University of California, San Francisco. He founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley. With Peter Norvig, Russell co-authored the authoritative textbook of the field of AI, Artificial Intelligence: A Modern Approach, used in more than 1,500 universities in 135 countries.
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.
Human extinction is the hypothetical end of the human species, either by population decline due to exogenous natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction), for example by sub-replacement fertility.
Jaan Tallinn is an Estonian billionaire computer programmer and investor known for his participation in the development of Skype and file-sharing application FastTrack/Kazaa.
Sir Partha Sarathi Dasgupta is an Indian-British economist who is Frank Ramsey Professor Emeritus of Economics at the University of Cambridge, United Kingdom, and a fellow of St John's College, Cambridge.
The Future of Humanity Institute (FHI) was an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director was philosopher Nick Bostrom, and its research staff included futurist Anders Sandberg and Giving What We Can founder Toby Ord.
The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.
A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk".
Huw Price is an Australian philosopher, formerly the Bertrand Russell Professor in the Faculty of Philosophy, Cambridge, and a Fellow of Trinity College, Cambridge.
The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.
Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
The Leverhulme Centre for the Future of Intelligence (CFI) is an interdisciplinary research centre within the University of Cambridge that studies artificial intelligence. It is funded by the Leverhulme Trust.
Olle Häggström is a professor of mathematical statistics at Chalmers University of Technology. He earned his doctorate in 1994 at Chalmers under the supervision of Jeffrey Steif, became an associate professor at the same university in 1997, was appointed professor of mathematical statistics at the University of Gothenburg in 2000, and returned to Chalmers as professor in 2002. His research mainly concerns probability theory, including Markov chains, percolation theory, and other models in statistical mechanics.
Beth Victoria Lois Singler, born Beth Victoria White, is a British anthropologist specialising in artificial intelligence. She is known for her digital ethnographic research on the impact of apocalyptic stories on the conception of AI and robots, her comments on the societal implications of AI, as well as her public engagement work. The latter includes a series of four documentaries on whether robots could feel pain, human-robot companionship, AI ethics, and AI consciousness. She is currently the Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge.
Risks of astronomical suffering, also called suffering risks or s-risks, are risks involving much more suffering than all that has occurred on Earth so far. They are sometimes categorized as a subclass of existential risks.
Scenarios in which a global catastrophic risk creates harm have been widely discussed. Some sources of catastrophic risk are anthropogenic, such as global warming, environmental degradation, and nuclear war. Others are non-anthropogenic or natural, such as meteor impacts or supervolcanoes. The impact of these scenarios can vary widely, depending on the cause and the severity of the event, ranging from temporary economic disruption to human extinction. Many societal collapses have already happened throughout human history.
AI safety is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to ensure AI systems are moral and beneficial, as well as monitoring AI systems for risks and enhancing their reliability. The field is particularly concerned with existential risks posed by advanced AI models.