Future of Humanity Institute

Future of Humanity Institute
Formation: 2005
Dissolved: 16 April 2024
Purpose: Research big-picture questions about humanity and its prospects
Headquarters: Oxford, England
Director: Nick Bostrom
Parent organization: Faculty of Philosophy, University of Oxford
Website: futureofhumanityinstitute.org

The Future of Humanity Institute (FHI) was an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. [1] Its director was philosopher Nick Bostrom, and its research staff included futurist Anders Sandberg and Giving What We Can founder Toby Ord. [2]

Sharing an office and working closely with the Centre for Effective Altruism, the institute's stated objective was to focus research where it could make the greatest positive difference for humanity in the long term. [3] [4] It engaged in a mix of academic and outreach activities, seeking to promote informed discussion and public engagement in government, businesses, universities, and other organizations. The centre's largest research funders included Amlin, Elon Musk, the European Research Council, the Future of Life Institute, and the Leverhulme Trust. [5]

The Institute was closed down on 16 April 2024, having "faced increasing administrative headwinds within the Faculty of Philosophy". [6] [7]

History

Nick Bostrom established the institute in November 2005 as part of the Oxford Martin School, then the James Martin 21st Century School. [1] Between 2008 and 2010, FHI hosted the Global Catastrophic Risks conference, wrote 22 academic journal articles, and published 34 chapters in academic volumes. FHI researchers were mentioned over 5,000 times in the media [8] and gave policy advice at the World Economic Forum, to the private and non-profit sectors (such as the MacArthur Foundation and the World Health Organization), and to governmental bodies in Sweden, Singapore, Belgium, the United Kingdom, and the United States.

Bostrom and bioethicist Julian Savulescu also published the book Human Enhancement in March 2009. [9] In its later years, FHI focused increasingly on the dangers of advanced artificial intelligence (AI). In 2014, its researchers published several books on AI risk, including Stuart Armstrong's Smarter Than Us and Bostrom's Superintelligence: Paths, Dangers, Strategies. [10] [11]

In 2018, Open Philanthropy recommended a grant of up to approximately £13.4 million to FHI over three years, with a large portion conditional on successful hiring. [12]

Existential risk

The largest topic FHI explored was global catastrophic risk, and in particular existential risk. In a 2002 paper, Bostrom defined an "existential risk" as one "where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential". [13] This includes scenarios where humanity is not directly harmed but fails to colonize space and make use of the observable universe's available resources in humanly valuable projects, as discussed in Bostrom's 2003 paper, "Astronomical Waste: The Opportunity Cost of Delayed Technological Development". [14]

Bostrom and Milan Ćirković's 2008 book Global Catastrophic Risks collects essays on a variety of such risks, both natural and anthropogenic. Possible catastrophic risks from nature include supervolcanism, impact events, and energetic astronomical events such as gamma-ray bursts, cosmic rays, solar flares, and supernovae. These dangers are characterized as relatively small and relatively well understood, though pandemics may be an exception, as they are more common and dovetail with technological trends. [15] [4]

FHI gave more attention to synthetic pandemics caused by weaponized biological agents. Technological outcomes of particular interest to the institute included anthropogenic climate change, nuclear warfare and nuclear terrorism, molecular nanotechnology, and artificial general intelligence. In expecting the largest risks to stem from future technologies, and from advanced artificial intelligence in particular, FHI agreed with other existential risk reduction organizations, such as the Centre for the Study of Existential Risk and the Machine Intelligence Research Institute. [16] [17] FHI researchers also studied the impact of technological progress on social and institutional risks, such as totalitarianism, automation-driven unemployment, and information hazards. [18]

In 2020, FHI Senior Research Fellow Toby Ord published his book The Precipice: Existential Risk and the Future of Humanity, in which he argues that safeguarding humanity's future is among the most important moral issues of our time. [19] [20]

Anthropic reasoning

FHI devoted much of its attention to exotic threats that have been little explored by other organizations, and to methodological considerations that inform existential risk reduction and forecasting. The institute particularly emphasized anthropic reasoning in its research, as an under-explored area with general epistemological implications.

Anthropic arguments FHI studied include the doomsday argument, which claims that humanity is likely to go extinct relatively soon, because an observer is unlikely to find itself at an extremely early point in human history; instead, present-day humans are more likely to be near the middle of the distribution of all humans who will ever live. [15] Bostrom has also popularized the simulation argument.
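
The reasoning can be illustrated with a simplified, Gott-style calculation (a sketch for illustration only; the modelling assumption and the figures below are not drawn from FHI publications). If one's birth rank r among all N humans who will ever exist is modelled as a uniform random draw, then

\[
  \Pr\!\left(r \ge 0.05\,N \mid N\right) = 0.95
  \quad\Longrightarrow\quad
  \Pr\!\left(N \le 20\,r\right) = 0.95 .
\]

Taking r on the order of 10^11 (a rough estimate of the number of humans born to date) would bound N below about 2 × 10^12 at 95% confidence. Bayesian variants of the argument add a prior over N, but rest on the same self-sampling step, whose validity is the real point of contention.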

A recurring theme in FHI's research is the Fermi paradox, the surprising absence of observable alien civilizations. Robin Hanson has argued that there must be a "Great Filter", some barrier that stops nearly all life from developing into space-colonizing civilizations, to account for the paradox. That filter may lie in the past, if intelligence is much rarer than current biology would predict, or it may lie in the future, if existential risks are even larger than is currently recognized.
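
A back-of-the-envelope expectation calculation (with purely illustrative numbers, not taken from Hanson's or FHI's work) suggests why the silence points to at least one very improbable step. If reaching a detectable, space-colonizing stage requires k roughly independent transitions, each succeeding with per-planet probability p, across S candidate planets, then the expected number of such civilizations is

\[
  \mathbb{E}[\text{civilizations}] \;=\; S\,p^{k} .
\]

With S of order 10^22 and the requirement that this expectation be at most of order one, p^k must be below about 10^-22; if, say, k = 9 steps were equally hard, each would have to succeed with probability under roughly 4 × 10^-3. The open question, as framed above, is whether humanity has already passed the improbable steps or whether they still lie ahead.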

Human enhancement and rationality

Closely linked to FHI's work on risk assessment, astronomical waste, and the dangers of future technologies is its work on the promise and risks of human enhancement. The modifications in question may be biological, digital, or sociological, and an emphasis is placed on the most radical hypothesized changes, rather than on the likeliest short-term innovations. FHI's bioethics research focuses on the potential consequences of gene therapy, life extension, brain implants and brain–computer interfaces, and mind uploading. [21]

FHI also focused on methods for assessing and enhancing human intelligence and rationality, as a way of shaping the speed and direction of technological and social progress. Its work on human irrationality, as exemplified in cognitive heuristics and biases, included a collaboration with Amlin to study the systemic risk arising from biases in modeling. [22] [23]

Related Research Articles

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a positive feedback loop of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which ultimately results in a powerful superintelligence that qualitatively far surpasses all human intelligence.

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.

Nick Bostrom (philosopher and writer, born 1973)

Nick Bostrom is a philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the Future of Humanity Institute at Oxford University.

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that can perform as well or better than humans on a wide range of cognitive tasks. This is in contrast to narrow AI, which is designed for specific tasks. AGI is considered one of various definitions of strong AI.

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

AI takeover (hypothetical artificial intelligence scenario)

An AI takeover is a scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, as computer programs or robots effectively take control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI, and the popular notion of a robot uprising. Stories of AI takeovers are popular throughout science fiction. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

The Great Filter is the idea that, in the development of life from the earliest stages of abiogenesis to reaching the highest levels of development on the Kardashev scale, there is a barrier to development that makes detectable extraterrestrial life exceedingly rare. The Great Filter is one possible resolution of the Fermi paradox.

Human extinction (hypothetical end of the human species)

Human extinction is the hypothetical end of the human species, either by population decline due to extraneous natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction), for example by sub-replacement fertility.

Anders Sandberg (Swedish computer scientist, futurist, transhumanist, and philosopher)

Anders Sandberg is a Swedish researcher, futurist and transhumanist. He holds a PhD in computational neuroscience from Stockholm University, and is a former senior research fellow at the Future of Humanity Institute at the University of Oxford.

Differential technological development is a strategy of technology governance aiming to decrease risks from emerging technologies by influencing the sequence in which they are developed. On this strategy, societies would strive to delay the development of harmful technologies and their applications, while accelerating the development of beneficial technologies, especially those that offer protection against the harmful ones.

Global catastrophic risk (potentially harmful worldwide events)

A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk."

In futurology, a singleton is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy. The term was first defined by Nick Bostrom.

Toby Ord (Australian philosopher, born 1979)

Toby David Godfrey Ord is an Australian philosopher. In 2009 he founded Giving What We Can, an international society whose members pledge to donate at least 10% of their income to effective charities. He is a key figure in the effective altruism movement, which promotes using reason and evidence to help the lives of others as much as possible.

Future of Life Institute (international nonprofit research institute)

The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.

Superintelligence: Paths, Dangers, Strategies (2014 book by Nick Bostrom)

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.

Existential risk from artificial general intelligence is the idea that substantial progress in artificial general intelligence (AGI) could result in human extinction or an irreversible global catastrophe.

Global Catastrophic Risks (2008 non-fiction book)

Global Catastrophic Risks is a 2008 non-fiction book edited by philosopher Nick Bostrom and astronomer Milan M. Ćirković. The book is a collection of essays from 26 academics written about various global catastrophic and existential risks.

The Precipice: Existential Risk and the Future of Humanity (2020 book about existential risks by Toby Ord)

The Precipice: Existential Risk and the Future of Humanity is a 2020 non-fiction book by the Australian philosopher Toby Ord, a senior research fellow at the Future of Humanity Institute in Oxford. It argues that humanity faces unprecedented risks over the next few centuries and examines the moral significance of safeguarding humanity's future.

Longtermism (philosophical view which prioritises the long-term future)

Longtermism is the ethical view that positively influencing the long-term future is a key moral priority of our time. It is an important concept in effective altruism and serves as a primary motivation for efforts that claim to reduce existential risks to humanity.

Global catastrophe scenarios (scenarios in which a global catastrophe creates harm)

Scenarios in which a global catastrophic risk creates harm have been widely discussed. Some sources of catastrophic risk are anthropogenic, such as global warming, environmental degradation, and nuclear war. Others are non-anthropogenic or natural, such as meteor impacts or supervolcanoes. The impact of these scenarios can vary widely, depending on the cause and the severity of the event, ranging from temporary economic disruption to human extinction. Many societal collapses have already happened throughout human history.

References

  1. "Humanity's Future: Future of Humanity Institute". Oxford Martin School. Archived from the original on 17 March 2014. Retrieved 28 March 2014.
  2. "Staff". Future of Humanity Institute. Retrieved 28 March 2014.
  3. "About FHI". Future of Humanity Institute. Archived from the original on 1 December 2015. Retrieved 28 March 2014.
  4. Ross Andersen (25 February 2013). "Omens". Aeon Magazine. Retrieved 28 March 2014.
  5. "Support FHI". Future of Humanity Institute. 2021. Archived from the original on 20 October 2021. Retrieved 23 July 2022.
  6. "Future of Humanity Institute". web.archive.org. 17 April 2024. Retrieved 17 April 2024.
  7. Maiberg, Emanuel (17 April 2024). "Institute That Pioneered AI 'Existential Risk' Research Shuts Down". 404 Media. Retrieved 17 April 2024.
  8. "Google News". Google News. Retrieved 30 March 2015.
  9. Nick Bostrom (18 July 2007). Achievements Report: 2008-2010 (PDF) (Report). Future of Humanity Institute. Archived from the original (PDF) on 21 December 2012. Retrieved 31 March 2014.
  10. Mark Piesing (17 May 2012). "AI uprising: humans will be outsourced, not obliterated". Wired. Retrieved 31 March 2014.
  11. Coughlan, Sean (24 April 2013). "How are humans going to become extinct?". BBC News. Retrieved 29 March 2014.
  12. Open Philanthropy Project (July 2018). "Future of Humanity Institute — Work on Global Catastrophic Risks".
  13. Nick Bostrom (March 2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". Journal of Evolution and Technology. 15 (3): 308–314. Retrieved 31 March 2014.
  14. Nick Bostrom (November 2003). "Astronomical Waste: The Opportunity Cost of Delayed Technological Development". Utilitas. 15 (3): 308–314. CiteSeerX 10.1.1.429.2849. doi:10.1017/s0953820800004076. S2CID 15860897. Retrieved 31 March 2014.
  15. Ross Andersen (6 March 2012). "We're Underestimating the Risk of Human Extinction". The Atlantic. Retrieved 29 March 2014.
  16. Kate Whitehead (16 March 2014). "Cambridge University study centre focuses on risks that could annihilate mankind". South China Morning Post. Retrieved 29 March 2014.
  17. Jenny Hollander (September 2012). "Oxford Future of Humanity Institute knows what will make us extinct". Bustle. Retrieved 31 March 2014.
  18. Nick Bostrom. "Information Hazards: A Typology of Potential Harms from Knowledge" (PDF). Future of Humanity Institute. Retrieved 31 March 2014.
  19. Ord, Toby. "The Precipice: Existential Risk and the Future of Humanity". The Precipice Website. Retrieved 18 October 2020.
  20. Chivers, Tom (7 March 2020). "How close is humanity to destroying itself?". The Spectator. Retrieved 18 October 2020.
  21. Anders Sandberg and Nick Bostrom. "Whole Brain Emulation: A Roadmap" (PDF). Future of Humanity Institute. Retrieved 31 March 2014.
  22. "Amlin and Oxford University launch major research project into the Systemic Risk of Modelling" (Press release). Amlin. 11 February 2014. Archived from the original on 13 April 2014. Retrieved 31 March 2014.
  23. "Amlin and Oxford University to collaborate on modelling risk study". Continuity, Insurance & Risk Magazine. 11 February 2014. Retrieved 31 March 2014.