Seth Baum | |
---|---|
Born | October 17, 1980 |
Education | Ph.D. in Geography |
Alma mater | Pennsylvania State University |
Occupation | Researcher |
Years active | 2001–present |
Known for | Global Catastrophic Risk Institute; existential risk research |
Seth Baum is an American risk researcher. He is the executive director of the Global Catastrophic Risk Institute (GCRI), a think tank focused on existential risk.[1] He is also affiliated with the Blue Marble Space Institute of Science and the Columbia University Center for Research on Environmental Decisions.[2]
Baum obtained his BS in optics and mathematics from the University of Rochester in 2003,[3] followed by an MS in electrical engineering from Northeastern University in 2006.[4]
In 2012, he obtained his PhD in geography from Pennsylvania State University with a dissertation on climate change policy, "Discounting Across Space and Time in Climate Change Assessment".[5] He later completed a post-doctoral fellowship with the Columbia University Center for Research on Environmental Decisions. Baum then turned his research toward astrophysics and global risks, including global warming and nuclear war,[6] and the development of effective solutions for reducing them.[6][7]
He is also a Fellow of the Society for Risk Analysis.
As a graduate student at Northeastern University in Boston, Baum contributed to What's Up magazine (now Spare Change News) from 2004 to 2007.
In 2011, Baum co-founded GCRI with Tony Barrett, with the mission to "develop the best ways to confront humanity's gravest threats". The institute has since grown rapidly, publishing in peer-reviewed academic journals and media outlets.[8] As of 2016, its main work is the "Integrated Assessment Project", which assesses the full range of global catastrophic risks in order to inform societal learning and decision-making processes. GCRI is funded by "a mix of grants, private donations, and occasional consulting work".[9][10]
Two years later, Baum began hosting a regular blog on Scientific American[11] and has been interviewed about his work and research on the History Channel[12] and The O'Reilly Factor,[13] where he was asked about studying possible human contact with extraterrestrial life and the ethics involved.[14] He also began contributing regularly to The Huffington Post, writing about the Russo-Ukrainian War and the Syrian Civil War as possible scenarios for nuclear war.[15]
In 2016, after receiving a $100,000 grant from the Future of Life Institute,[16] his research interests shifted to AI safety[17] and the ethics of outer space.[7] That same year, he began writing a monthly column for the Bulletin of the Atomic Scientists, discussing AI threats, biological weapons, and the risks of nuclear deterrence failure.[18]
Eliezer Shlomo Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.
Friendly artificial intelligence refers to hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.
Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Program on the Impacts of Future Technology, and is the founding director of the Future of Humanity Institute at Oxford University. In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list.
The Institute for Ethics and Emerging Technologies (IEET) is a technoprogressive think tank that seeks to "promote ideas about how technological progress can increase freedom, happiness, and human flourishing in democratic societies." It was incorporated in the United States in 2004, as a non-profit 501(c)(3) organization, by philosopher Nick Bostrom and bioethicist James Hughes.
Human extinction is the hypothetical end of the human species, due either to natural causes such as population decline from sub-replacement fertility, an asteroid impact, or large-scale volcanism, or to anthropogenic (human) causes.
Jaan Tallinn is an Estonian billionaire computer programmer and investor known for his participation in the development of Skype and the file-sharing application FastTrack/Kazaa. He is a leading figure in the field of existential risk, having co-founded both the Centre for the Study of Existential Risk (CSER) and the Future of Life Institute.
The Future of Humanity Institute (FHI) is an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director is philosopher Nick Bostrom, and its research staff include futurist Anders Sandberg and Giving What We Can founder Toby Ord.
The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI.
A global catastrophic risk or a doomsday scenario is a hypothetical future event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's potential is known as an "existential risk."
The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology. The co-founders of the centre are Huw Price, Martin Rees and Jaan Tallinn.
Our Final Invention: Artificial Intelligence and the End of the Human Era is a 2013 non-fiction book by the American author James Barrat. The book discusses the potential benefits and possible risks of human-level or super-human artificial intelligence. Those risks include the extermination of the human race.
Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe is a 2014 book by David Denkenberger and Joshua M. Pearce, published by Elsevier under its Academic Press imprint.
Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe.
Global Catastrophic Risks is a 2008 non-fiction book edited by philosopher Nick Bostrom and astronomer Milan M. Ćirković. The book is a collection of essays from 26 academics written about various global catastrophic and existential risks.
End Times: A Brief Guide to the End of the World is a 2019 non-fiction book by journalist Bryan Walsh. The book discusses various risks of human extinction, including asteroids, volcanoes, nuclear war, global warming, pathogens, biotech, AI, and extraterrestrial intelligence. The book includes interviews with astronomers, anthropologists, biologists, climatologists, geologists, and other scholars. The book advocates strongly for greater action.
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and in supra-national bodies like the IEEE, OECD and others. Since 2016, a wave of AI ethics guidelines has been published in order to maintain social control over the technology. Regulation is considered necessary to both encourage AI and manage associated risks. In addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks. Regulation of AI through mechanisms such as review boards can also be seen as a social means to approach the AI control problem.
Suffering risks, known as s-risks for short, are future events with the potential capacity to produce a huge amount of suffering. These events may generate more suffering than has ever existed on Earth, in the entirety of its existence. Sources of possible s-risks include embodied artificial intelligence and superintelligence, as well as space colonization, which could potentially lead to "constant and catastrophic wars" and an immense increase in wild animal suffering by introducing wild animals, who "generally lead short, miserable lives full of sometimes the most brutal suffering", to other planets, either intentionally, or inadvertently.
Longtermism is an ethical stance which gives priority to improving the long-term future. It is an important concept in effective altruism and serves as a primary motivation for efforts that claim to reduce existential risks to humanity.
Scenarios in which a global catastrophic risk creates harm have been widely discussed. Some sources of catastrophic risk are anthropogenic, such as global warming, environmental degradation, engineered pandemics, and nuclear war. Others are non-anthropogenic or natural, such as meteor impacts or supervolcanoes. The impact of these scenarios can vary widely, depending on the cause and the severity of the event, ranging from temporary economic disruption to human extinction. Many societal collapses have already happened throughout human history.
What We Owe the Future is a 2022 book by the Scottish philosopher and ethicist William MacAskill, an associate professor in philosophy at the University of Oxford. It argues for effective altruism and the philosophy of longtermism, which MacAskill defines as "the idea that positively influencing the long-term future is a key moral priority of our time."