| Seth Baum | |
|---|---|
| Born | October 17, 1980 |
| Education | Ph.D. in Geography |
| Alma mater | Pennsylvania State University |
| Occupation | Researcher |
| Years active | 2001–present |
| Known for | Global Catastrophic Risk Institute; existential risk research |
Seth Baum is an American researcher in the field of global catastrophic risk. He is the executive director of the Global Catastrophic Risk Institute (GCRI), a think tank focused on existential risk. [1] He is also affiliated with the Blue Marble Space Institute of Science and the Columbia University Center for Research on Environmental Decisions. [2]
Baum obtained his BS in optics and mathematics from the University of Rochester in 2003, [3] followed by an MS in electrical engineering from Northeastern University in 2006. [4]
In 2012, he obtained his PhD in Geography from Pennsylvania State University with a dissertation on climate change policy, "Discounting Across Space and Time in Climate Change Assessment". [5] He later completed a post-doctoral fellowship with the Columbia University Center for Research on Environmental Decisions. Baum then turned his research toward astrophysics and global risks, including global warming and nuclear war, [6] and the development of effective solutions for reducing them. [6] [7]
He is also a Fellow of the Society for Risk Analysis.
As a graduate student at Northeastern University in Boston, Baum contributed to What's Up magazine (now Spare Change News) from 2004 to 2007.
In 2011, Baum co-founded GCRI with Tony Barrett, with the mission to "develop the best ways to confront humanity's gravest threats". The institute has since grown rapidly, publishing in peer-reviewed academic journals and media outlets. [8] As of 2016, its main work is the "Integrated Assessment Project", which assesses global catastrophic risks collectively in order to make the findings available for societal learning and decision-making processes. GCRI is funded by "a mix of grants, private donations, and occasional consulting work". [9] [10]
Two years later, Baum began hosting a regular blog on Scientific American [11] and was interviewed about his work and research on the History Channel [12] and The O'Reilly Factor, [13] where he was asked about studying possible human contact with extraterrestrial life and the ethics involved. [14] He also began contributing regularly to The Huffington Post, writing about the Russo-Ukrainian War and the Syrian Civil War as possible scenarios for nuclear war. [15]
In 2016, after receiving a $100,000 grant from the Future of Life Institute, [16] his research interests shifted to AI safety [17] and the ethics of outer space. [7] That same year, he wrote a monthly column for the Bulletin of the Atomic Scientists, in which he discussed AI threats, biological weapons, and the risks of nuclear deterrence failure. [18]
Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.
Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests such as fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensuring it is adequately constrained.
Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.
James J. Hughes is an American sociologist, bioethicist and futurist. He is the Executive Director of the Institute for Ethics and Emerging Technologies and is the Associate Provost for institutional research, assessment, and planning at University of Massachusetts Boston. He is the author of Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future and is currently writing a book on secular Buddhism and moral bioenhancement tentatively titled Cyborg Buddha: Using Neurotechnology to Become Better People.
The Institute for Ethics and Emerging Technologies (IEET) is a technoprogressive think tank that seeks to "promote ideas about how technological progress can increase freedom, happiness, and human flourishing in democratic societies." It was incorporated in the United States in 2004, as a non-profit 501(c)(3) organization, by philosopher Nick Bostrom and bioethicist James Hughes.
Human extinction or omnicide is the hypothetical end of the human species, either by population decline due to external natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction), for example by sub-replacement fertility.
The Future of Humanity Institute (FHI) was an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director was philosopher Nick Bostrom, and its research staff included futurist Anders Sandberg and Giving What We Can founder Toby Ord.
The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.
A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk".
Effective altruism (EA) is a 21st-century philosophical and social movement that advocates impartially calculating benefits and prioritizing causes to provide the greatest good. It is motivated by "using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis". People who pursue the goals of effective altruism, who are sometimes called effective altruists, follow a variety of approaches proposed by the movement, such as donating to selected charities and choosing careers with the aim of maximizing positive impact. The movement has achieved significant popularity outside of academia, spurring the creation of university-based institutes, research centers, advisory organizations and charities, which, collectively, have donated several hundreds of millions of dollars.
The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology. The co-founders of the centre are Huw Price, Martin Rees and Jaan Tallinn.
Alan Robock is an American climatologist. He is currently a Distinguished Professor in the Department of Environmental Sciences at Rutgers University, New Jersey. He advocates nuclear disarmament and, in 2010 and 2011, met with Fidel Castro during lecture trips to Cuba to discuss the dangers of nuclear weapons. Alan Robock was a 2007 IPCC author, a member of the organisation when it was awarded the Nobel Peace Prize, "for their efforts to build up and disseminate greater knowledge about man-made climate change, and to lay the foundations for the measures that are needed to counteract such change".
Our Final Invention: Artificial Intelligence and the End of the Human Era is a 2013 non-fiction book by the American author James Barrat. The book discusses the potential benefits and possible risks of human-level (AGI) or super-human (ASI) artificial intelligence. Those supposed risks include extermination of the human race.
Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe is a 2014 book by David Denkenberger and Joshua M. Pearce, published by Elsevier under its Academic Press imprint.
Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
Global Catastrophic Risks is a 2008 non-fiction book edited by philosopher Nick Bostrom and astronomer Milan M. Ćirković. The book is a collection of essays from 26 academics written about various global catastrophic and existential risks.
End Times: A Brief Guide to the End of the World is a 2019 non-fiction book by journalist Bryan Walsh. The book discusses various risks of human extinction, including asteroids, volcanoes, nuclear war, global warming, pathogens, biotech, AI, and extraterrestrial intelligence. The book includes interviews with astronomers, anthropologists, biologists, climatologists, geologists, and other scholars. The book advocates strongly for greater action.
Risks of astronomical suffering, also called suffering risks or s-risks, are risks involving much more suffering than all that has occurred on Earth so far. They are sometimes categorized as a subclass of existential risks.
Longtermism is the ethical view that positively influencing the long-term future is a key moral priority of our time. It is an important concept in effective altruism and a primary motivation for efforts that aim to reduce existential risks to humanity.
Scenarios in which a global catastrophic risk creates harm have been widely discussed. Some sources of catastrophic risk are anthropogenic, such as global warming, environmental degradation, and nuclear war. Others are non-anthropogenic or natural, such as meteor impacts or supervolcanoes. The impact of these scenarios can vary widely, depending on the cause and the severity of the event, ranging from temporary economic disruption to human extinction. Many societal collapses have already happened throughout human history.