Seth Baum

Born: October 17, 1980
Education: Ph.D. in Geography
Alma mater: Pennsylvania State University
Occupation: Researcher
Years active: 2001–present
Known for: Global Catastrophic Risk Institute; existential risk research

Seth Baum is an American researcher working on global catastrophic risk. He is the executive director of the Global Catastrophic Risk Institute (GCRI), a think tank focused on existential risk.[1] He is also affiliated with the Blue Marble Space Institute of Science and the Columbia University Center for Research on Environmental Decisions.[2]


Academic career

Baum obtained a BS in optics and mathematics from the University of Rochester in 2003,[3] followed by an MS in electrical engineering from Northeastern University in 2006.[4]

In 2012, he obtained his PhD in geography from Pennsylvania State University with a dissertation on climate change policy, "Discounting Across Space and Time in Climate Change Assessment".[5] He later completed a post-doctoral fellowship with the Columbia University Center for Research on Environmental Decisions. Baum then steered his research toward astrophysics and global risks, including global warming and nuclear war,[6] and the development of effective solutions for reducing them.[6][7]

He is also a Fellow of the Society for Risk Analysis.

Work

As a graduate student at Northeastern University in Boston, Baum contributed to What's Up Magazine (now Spare Change News) from 2004 to 2007.

In 2011, Baum co-founded GCRI with Tony Barrett, with the mission to "develop the best ways to confront humanity's gravest threats". The institute has since grown rapidly, publishing in peer-reviewed academic journals and media outlets.[8] As of 2016, its main work is the "Integrated Assessment Project", which assesses global catastrophic risks in an integrated way to inform societal learning and decision-making. GCRI is funded by "a mix of grants, private donations, and occasional consulting work".[9][10]

Two years later, Baum began hosting a regular blog on Scientific American,[11] and he has been interviewed about his work and research on the History Channel[12] and The O'Reilly Factor,[13] where he was asked about studying possible human contact with extraterrestrial life and the ethics involved.[14] He also began contributing regularly to The Huffington Post, writing about the Russo-Ukrainian War and the Syrian Civil War as possible scenarios for nuclear war.[15]

In 2016, after he received a $100,000 grant from the Future of Life Institute,[16] Baum's research interests shifted toward AI safety[17] and the ethics of outer space.[7] That same year, he wrote a monthly column for the Bulletin of the Atomic Scientists, discussing AI threats, biological weapons, and the risks of nuclear deterrence failure.[18]

See also

Eliezer Yudkowsky
Friendly artificial intelligence
Nick Bostrom
James Hughes (sociologist)
Institute for Ethics and Emerging Technologies
Human extinction
Future of Humanity Institute
Ethics of artificial intelligence
Global catastrophic risk
Effective altruism
Centre for the Study of Existential Risk
Alan Robock
Our Final Invention
Feeding Everyone No Matter What
Existential risk from artificial intelligence
Global Catastrophic Risks (book)
End Times (book)
Risk of astronomical suffering
Longtermism
Global catastrophe scenarios

References

  1. "I am Seth Baum, AMA! - Effective Altruism Forum". effective-altruism.com. Retrieved 2016-09-02.
  2. "People | Global Catastrophic Risk Institute". gcrinstitute.org. 9 September 2012. Retrieved 2016-09-06.
  3. "Seth D. Baum" (PDF). sethbaum.com. Retrieved 2020-01-12.
  4. "Seth Baum - Center for Research on Environmental Decisions". cred.columbia.edu. Retrieved 2016-10-12.
  5. Seth D. Baum, 2012. Discounting Across Space and Time in Climate Change Assessment. Doctor of Philosophy Dissertation, Department of Geography, The Pennsylvania State University.
  6. Baum, Seth D. (2015-09-01). "Confronting the threat of nuclear winter". Futures. Confronting Future Catastrophic Threats To Humanity. 72: 69–79. doi:10.1016/j.futures.2015.03.004. S2CID 18895356.
  7. Baum, Seth D. (2016-05-20). "The Ethics of Outer Space: A Consequentialist Perspective". Rochester, NY: Social Science Research Network. SSRN 2807362.
  8. "Publications | Global Catastrophic Risk Institute". gcrinstitute.org. 26 November 2012. Retrieved 2016-10-17.
  9. "About". Global Catastrophic Risk Institute. http://gcrinstitute.org/about/
  10. "Integrated Assessment Project | Global Catastrophic Risk Institute". gcrinstitute.org. Retrieved 2016-10-17.
  11. Baum, Seth. "When Global Catastrophes Collide: The Climate Engineering Double Catastrophe". Scientific American Blog Network. Retrieved 2016-10-17.
  12. "IEET Affiliate Scholar Seth Baum interviewed on the History Channel". ieet.org. Retrieved 2016-10-12.
  13. "Alien Invasion Over Global Warming?". Fox News. 2011-09-06. Retrieved 2016-10-12.
  14. Roberts, Sam (2012-02-12). "NEWS ANALYSIS; What Do You Say to an Alien?". The New York Times. ISSN 0362-4331. Retrieved 2016-10-18.
  15. Baum, Seth (2014-03-07). "Best And Worst Case Scenarios for Ukraine Crisis: World Peace And Nuclear War". The Huffington Post. Retrieved 2016-10-17.
  16. "First AI Grant Recipients - Future of Life Institute". Future of Life Institute. Retrieved 2016-10-17.
  17. Barrett, Anthony M.; Baum, Seth D. (2016-05-23). "A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis". Journal of Experimental & Theoretical Artificial Intelligence. 29 (2): 397–414. arXiv:1607.07730. doi:10.1080/0952813X.2016.1186228. ISSN 0952-813X. S2CID 928824.
  18. "Breaking down the risk of nuclear deterrence failure". 2015-07-27. Retrieved 2016-09-06.