Seth Baum

Born: October 17, 1980
Education: Ph.D. in Geography
Alma mater: Pennsylvania State University
Occupation: Researcher
Years active: 2001–present
Known for: Global Catastrophic Risk Institute; existential risks research

Seth Baum is an American risk researcher. He is the executive director of the Global Catastrophic Risk Institute (GCRI), a think tank focused on existential risk. [1] He is also affiliated with the Blue Marble Space Institute of Science and the Columbia University Center for Research on Environmental Decisions. [2]

Academic career

Baum obtained a BS in optics and mathematics from the University of Rochester in 2003, [3] followed by an MS in electrical engineering from Northeastern University in 2006. [4]

In 2012, he obtained a PhD in Geography from Pennsylvania State University with a dissertation on climate change policy, "Discounting Across Space and Time in Climate Change Assessment". [5] He later completed a post-doctoral fellowship with the Columbia University Center for Research on Environmental Decisions. Baum then turned his research interests toward astrophysics and global risks, including global warming and nuclear war, [6] and the development of effective solutions for reducing them. [6] [7]

He is also a Fellow of the Society for Risk Analysis.

Work

As a graduate student at Northeastern University in Boston, Baum contributed to What's Up magazine (now Spare Change News) from 2004 to 2007.

In 2011, Baum co-founded GCRI with Tony Barrett, with the mission to "develop the best ways to confront humanity's gravest threats". The institute has since grown rapidly, publishing in peer-reviewed academic journals and media outlets. [8] As of 2016, its main work is the "Integrated Assessment Project", which assesses the full range of global catastrophic risks in order to inform societal learning and decision-making. GCRI is funded by "a mix of grants, private donations, and occasional consulting work". [9] [10]

Two years later, Baum began hosting a regular blog on Scientific American [11] and has been interviewed about his work and research on the History Channel [12] and The O'Reilly Factor, [13] where he was asked about studying possible human contact with extraterrestrial life and the ethics involved. [14] He also started contributing regularly to The Huffington Post, writing about the Russo-Ukrainian War and the Syrian Civil War as possible scenarios for nuclear war. [15]

In 2016, after he received a $100,000 grant from the Future of Life Institute, [16] his research interests shifted to AI safety [17] and the ethics of outer space. [7] That same year, he wrote a monthly column for the Bulletin of the Atomic Scientists, discussing AI threats, biological weapons, and the risks of nuclear deterrence failure. [18]

Related Research Articles

<span class="mw-page-title-main">Eliezer Yudkowsky</span> American AI researcher and writer (born 1979)

Eliezer Shlomo Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

<span class="mw-page-title-main">Friendly artificial intelligence</span> AI to benefit humanity

Friendly artificial intelligence refers to hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure that it is adequately constrained.

<span class="mw-page-title-main">Nick Bostrom</span> Swedish philosopher and writer

Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Program on the Impacts of Future Technology, and is the founding director of the Future of Humanity Institute at Oxford University. In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list.

<span class="mw-page-title-main">Institute for Ethics and Emerging Technologies</span> Technoprogressive think tank

The Institute for Ethics and Emerging Technologies (IEET) is a technoprogressive think tank that seeks to "promote ideas about how technological progress can increase freedom, happiness, and human flourishing in democratic societies." It was incorporated in the United States in 2004, as a non-profit 501(c)(3) organization, by philosopher Nick Bostrom and bioethicist James Hughes.

<span class="mw-page-title-main">Human extinction</span> Hypothetical end of the human species

Human extinction is the hypothetical end of the human species, due either to natural causes such as population decline from sub-replacement fertility, an asteroid impact, or large-scale volcanism, or to anthropogenic (human) causes.

<span class="mw-page-title-main">Jaan Tallinn</span> Estonian programmer and investor

Jaan Tallinn is an Estonian billionaire computer programmer and investor known for his participation in the development of Skype and the file-sharing application FastTrack/Kazaa. Tallinn is a leading figure in the field of existential risk, having co-founded both the Centre for the Study of Existential Risk (CSER) and the Future of Life Institute.

<span class="mw-page-title-main">Future of Humanity Institute</span> Oxford interdisciplinary research centre

The Future of Humanity Institute (FHI) is an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director is philosopher Nick Bostrom, and its research staff include futurist Anders Sandberg and Giving What We Can founder Toby Ord.

<span class="mw-page-title-main">Ethics of artificial intelligence</span> Ethical issues specific to AI

The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI.

<span class="mw-page-title-main">Global catastrophic risk</span> Potentially harmful worldwide events

A global catastrophic risk or a doomsday scenario is a hypothetical future event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's potential is known as an "existential risk."

The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology. The co-founders of the centre are Huw Price, Martin Rees and Jaan Tallinn.

Our Final Invention – 2013 book by James Barrat

Our Final Invention: Artificial Intelligence and the End of the Human Era is a 2013 non-fiction book by the American author James Barrat. The book discusses the potential benefits and possible risks of human-level or super-human artificial intelligence. Those supposed risks include extermination of the human race.

Feeding Everyone No Matter What – Book on crop-destroying catastrophes (2014)

Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe is a 2014 book by David Denkenberger and Joshua M. Pearce and published by Elsevier under their Academic Press.

<span class="mw-page-title-main">Existential risk from artificial general intelligence</span> Hypothesized risk to human existence

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe.

Global Catastrophic Risks (book) – 2008 non-fiction book

Global Catastrophic Risks is a 2008 non-fiction book edited by philosopher Nick Bostrom and astronomer Milan M. Ćirković. The book is a collection of essays from 26 academics written about various global catastrophic and existential risks.

<span class="mw-page-title-main">End Times (book)</span> 2019 book by Bryan Walsh

End Times: A Brief Guide to the End of the World is a 2019 non-fiction book by journalist Bryan Walsh. The book discusses various risks of human extinction, including asteroids, volcanoes, nuclear war, global warming, pathogens, biotech, AI, and extraterrestrial intelligence. The book includes interviews with astronomers, anthropologists, biologists, climatologists, geologists, and other scholars. The book advocates strongly for greater action.

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and in supra-national bodies like the IEEE, OECD and others. Since 2016, a wave of AI ethics guidelines have been published in order to maintain social control over the technology. Regulation is considered necessary to both encourage AI and manage associated risks. In addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks. Regulation of AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.

<span class="mw-page-title-main">Suffering risks</span> Risks of astronomical suffering

Suffering risks, known as s-risks for short, are future events with the potential capacity to produce a huge amount of suffering. These events may generate more suffering than has ever existed on Earth, in the entirety of its existence. Sources of possible s-risks include embodied artificial intelligence and superintelligence, as well as space colonization, which could potentially lead to "constant and catastrophic wars" and an immense increase in wild animal suffering by introducing wild animals, who "generally lead short, miserable lives full of sometimes the most brutal suffering", to other planets, either intentionally, or inadvertently.

<span class="mw-page-title-main">Longtermism</span> Philosophical view which prioritises the long-term future

Longtermism is an ethical stance which gives priority to improving the long-term future. It is an important concept in effective altruism and serves as a primary motivation for efforts that claim to reduce existential risks to humanity.

<span class="mw-page-title-main">Global catastrophe scenarios</span> Scenarios in which a global catastrophe creates harm

Scenarios in which a global catastrophic risk creates harm have been widely discussed. Some sources of catastrophic risk are anthropogenic, such as global warming, environmental degradation, engineered pandemics, and nuclear war. Others are non-anthropogenic or natural, such as meteor impacts or supervolcanoes. The impact of these scenarios can vary widely, depending on the cause and the severity of the event, ranging from temporary economic disruption to human extinction. Many societal collapses have already happened throughout human history.

What We Owe the Future – 2022 book about longtermism by William MacAskill

What We Owe the Future is a 2022 book by the Scottish philosopher and ethicist William MacAskill, an associate professor in philosophy at the University of Oxford. It argues for effective altruism and the philosophy of longtermism, which MacAskill defines as "the idea that positively influencing the long-term future is a key moral priority of our time."

References

  1. "I am Seth Baum, AMA! - Effective Altruism Forum". effective-altruism.com. Retrieved 2016-09-02.
  2. "People | Global Catastrophic Risk Institute". gcrinstitute.org. 9 September 2012. Retrieved 2016-09-06.
  3. "Seth D. Baum" (PDF). sethbaum.com. Retrieved 2020-01-12.
  4. "Seth Baum - Center for Research on Environmental Decisions". cred.columbia.edu. Retrieved 2016-10-12.
  5. Seth D. Baum, 2012. Discounting Across Space and Time in Climate Change Assessment. Doctor of Philosophy Dissertation, Department of Geography, The Pennsylvania State University.
  6. Baum, Seth D. (2015-09-01). "Confronting the threat of nuclear winter". Futures. Confronting Future Catastrophic Threats To Humanity. 72: 69–79. doi:10.1016/j.futures.2015.03.004. S2CID 18895356.
  7. Baum, Seth D. (2016-05-20). "The Ethics of Outer Space: A Consequentialist Perspective". Rochester, NY: Social Science Research Network. SSRN 2807362.
  8. "Publications | Global Catastrophic Risk Institute". gcrinstitute.org. 26 November 2012. Retrieved 2016-10-17.
  9. "About". Global Catastrophic Risk Institute. http://gcrinstitute.org/about/.
  10. "Integrated Assessment Project | Global Catastrophic Risk Institute". gcrinstitute.org. Retrieved 2016-10-17.
  11. Baum, Seth. "When Global Catastrophes Collide: The Climate Engineering Double Catastrophe". Scientific American Blog Network. Retrieved 2016-10-17.
  12. "IEET Affiliate Scholar Seth Baum interviewed on the History Channel". ieet.org. Retrieved 2016-10-12.
  13. "Alien Invasion Over Global Warming?". Fox News. 2011-09-06. Retrieved 2016-10-12.
  14. Roberts, Sam (2012-02-12). "NEWS ANALYSIS; What Do You Say to an Alien?". The New York Times. ISSN 0362-4331. Retrieved 2016-10-18.
  15. Baum, Seth (2014-03-07). "Best And Worst Case Scenarios for Ukraine Crisis: World Peace And Nuclear War". The Huffington Post. Retrieved 2016-10-17.
  16. "First AI Grant Recipients - Future of Life Institute". Future of Life Institute. Retrieved 2016-10-17.
  17. Barrett, Anthony M.; Baum, Seth D. (2016-05-23). "A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis". Journal of Experimental & Theoretical Artificial Intelligence. 29 (2): 397–414. arXiv:1607.07730. doi:10.1080/0952813X.2016.1186228. ISSN 0952-813X. S2CID 928824.
  18. "Breaking down the risk of nuclear deterrence failure". 2015-07-27. Retrieved 2016-09-06.