Longtermism is the ethical view that positively influencing the long-term future is a key moral priority of our time. It is an important concept in effective altruism and a primary motivation for efforts that aim to reduce existential risks to humanity. [1] [2]
The key argument for longtermism has been summarized as follows: "future people matter morally just as much as people alive today; ... there may well be more people alive in the future than there are in the present or have been in the past; and ... we can positively affect future people's lives." [3] [4] These three ideas taken together suggest, to those advocating longtermism, that it is the responsibility of those living now to ensure that future generations get to survive and flourish. [4]
Philosopher William MacAskill defines longtermism as "the view that positively influencing the longterm future is a key moral priority of our time". [1] [5] : 4 He distinguishes it from strong longtermism, "the view that positively influencing the longterm future is the key moral priority of our time". [6] [2]
In his book The Precipice: Existential Risk and the Future of Humanity, philosopher Toby Ord describes longtermism as follows: "longtermism ... is especially concerned with the impacts of our actions upon the longterm future. It takes seriously the fact that our own generation is but one page in a much longer story, and that our most important role may be how we shape—or fail to shape—that story. Working to safeguard humanity's potential is one avenue for such a lasting impact and there may be others too." [7] : 52–53 In addition, Ord notes that "longtermism is animated by a moral re-orientation toward the vast future that existential risks threaten to foreclose." [7] : 52–53
Because it is generally infeasible to use traditional research techniques, such as randomized controlled trials, to analyze existential risks, researchers such as Nick Bostrom have instead used methods like expert opinion elicitation to estimate their importance. [8] Ord offered probability estimates for a number of existential risks in The Precipice. [7] : 167
The term "longtermism" was coined around 2017 by Oxford philosophers William MacAskill and Toby Ord. The view draws inspiration from the work of Nick Bostrom, Nick Beckstead, and others. [6] [1] While its coinage is relatively new, some aspects of longtermism have been thought about for centuries. The oral constitution of the Iroquois Confederacy, the Gayanashagowa, encourages all decision-making to “have always in view not only the present but also the coming generations”. [9] This has been interpreted to mean that decisions should be made so as to be of benefit to the seventh generation in the future. [10] These ideas have re-emerged in contemporary thought with thinkers such as Derek Parfit in his 1984 book Reasons and Persons , and Jonathan Schell in his 1982 book The Fate of the Earth .
Longtermist ideas have given rise to a community of individuals and organizations working to protect the interests of future generations. [11] Organizations working on longtermist topics include Cambridge University's Centre for the Study of Existential Risk, the Future of Life Institute, the Global Priorities Institute, the Stanford Existential Risks Initiative, [12] 80,000 Hours, [13] Open Philanthropy, [14] The Forethought Foundation, [15] and Longview Philanthropy. [16]
Researchers studying longtermism believe that we can improve the long-term future in two ways: "by averting permanent catastrophes, thereby ensuring civilisation’s survival; or by changing civilisation’s trajectory to make it better while it lasts. Broadly, ensuring survival increases the quantity of future life; trajectory changes increase its quality". [5] : 35–36 [17]
An existential risk is "a risk that threatens the destruction of humanity’s longterm potential", [7] : 59 including risks which cause human extinction or permanent societal collapse. Examples of these risks include nuclear war, natural and engineered pandemics, climate change and civilizational collapse, stable global totalitarianism, and emerging technologies like artificial intelligence and nanotechnology. [7] : 213–214 Reducing any of these risks may significantly improve the future over long timescales by increasing the number and quality of future lives. [17] [18] Consequently, advocates of longtermism argue that humanity is at a crucial moment in its history where the choices made this century may shape its entire future. [7] : 3–4
Proponents of longtermism have pointed out that humanity spends less than 0.001% of the gross world product annually on longtermist causes (i.e., activities explicitly meant to positively influence the long-term future of humanity). [19] This is less than 5% of the amount that is spent annually on ice cream in the U.S., leading Toby Ord to argue that humanity “start by spending more on protecting our future than we do on ice cream, and decide where to go from there”. [7] : 58, 63
Existential risks are extreme examples of what researchers call a "trajectory change". [17] However, there might be other ways to positively influence how the future will unfold. Economist Tyler Cowen argues that increasing the rate of economic growth is a top moral priority because it will make future generations wealthier. [20] Other researchers think that improving institutions like national governments and international governance bodies could bring about positive trajectory changes. [21]
Another way to achieve a trajectory change is by changing societal values. [22] William MacAskill argues that humanity should not expect positive value changes to happen by default. [4] He uses the abolition of slavery as an example, which historians like Christopher Leslie Brown consider to be a historical contingency rather than an inevitable event. [4] Brown has argued that a moral revolution made slavery unacceptable at a time when it was still hugely profitable. [23] MacAskill suggests that abolition may be a turning point in the entirety of human history, with the practice unlikely to return. [24] For this reason, bringing about positive value changes in society may be one way in which the present generation can positively influence the long-run future.
Longtermists argue that we live at a pivotal moment in human history. Derek Parfit wrote that we "live during the hinge of history" [25] and William MacAskill states that "the world’s long-run fate depends in part on the choices we make in our lifetimes" [5] : 6 since "society has not yet settled down into a stable state, and we are able to influence which stable state we end up in". [5] : 28
According to Fin Moorhouse, for most of human history it was not clear how to positively influence the very long-run future. [26] However, two relatively recent developments may have changed this. Developments in technology, such as nuclear weapons, have for the first time given humanity the power to annihilate itself, which would impact the long-term future by preventing the existence and flourishing of future generations. [26] At the same time, progress made in the physical and social sciences has given humanity the ability to more accurately predict at least some of the long-term effects of the actions taken in the present. [26]
MacAskill also notes that our present time is highly unusual in that "we live in an era that involves an extraordinary amount of change" [5] : 26 —both relative to the past (where rates of economic and technological progress were very slow) and to the future (since current growth rates cannot continue for long before hitting physical limits). [5] : 26–28
Longtermism has been defended by appealing to various moral theories. [27] Utilitarianism may motivate longtermism given the importance it places on pursuing the greatest good for the greatest number, with future generations expected to be the vast majority of all people to ever exist. [2] [28] Consequentialist moral theories such as utilitarianism may generally be sympathetic to longtermism since whatever the theory considers morally valuable, there is likely going to be much more of it in the future than in the present. [29]
However, other non-consequentialist moral frameworks may also inspire longtermism. For instance, Toby Ord considers the responsibility that the present generation has towards future generations as grounded in the hard work and sacrifices made by past generations. [7] He writes: [7] : 42
Because the arrow of time makes it so much easier to help people who come after you than people who come before, the best way of understanding the partnership of the generations may be asymmetrical, with duties all flowing forwards in time—paying it forwards. On this view, our duties to future generations may thus be grounded in the work our ancestors did for us when we were future generations.
In his book What We Owe the Future, William MacAskill discusses how individuals can shape the course of history. He introduces a three-part framework for thinking about effects on the future, which states that the long-term value of an outcome we may bring about depends on its significance, persistence, and contingency. [5] : 31–33 He explains that significance "is the average value added by bringing about a certain state of affairs", persistence means "how long that state of affairs lasts, once it has been brought about", and contingency "refers to the extent to which the state of affairs depends on an individual’s action". [5] : 32 Moreover, MacAskill acknowledges the pervasive uncertainty, both moral and empirical, that surrounds longtermism and offers four lessons to help guide attempts to improve the long-term future: taking robustly good actions, building up options, learning more, and avoiding causing harm. [5]
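If the three factors are read as combining multiplicatively (an interpretive assumption made here for illustration, not a formula stated in the book), the framework can be sketched as follows, with made-up numbers:

```python
# Loose illustration of MacAskill's significance / persistence / contingency
# framework, assuming (for illustration only) that the factors multiply.

def long_term_value(significance: float, persistence_years: float,
                    contingency: float) -> float:
    """significance: average value added per year the state of affairs holds;
    persistence_years: how long it lasts once brought about;
    contingency: how much it depends on the action
    (0 = would have happened anyway, 1 = happens only because of it)."""
    return significance * persistence_years * contingency

# A modest but persistent and highly contingent change can outweigh a
# larger change that would soon have happened anyway.
print(long_term_value(significance=1.0, persistence_years=10_000, contingency=0.5))  # 5000.0
print(long_term_value(significance=100.0, persistence_years=50, contingency=0.1))    # 500.0
```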
Population ethics plays an important part in longtermist thinking. Many advocates of longtermism accept the total view of population ethics, on which bringing more happy people into existence is good, all other things being equal. Accepting such a view makes the case for longtermism particularly strong because the fact that there could be huge numbers of future people means that improving their lives and, crucially, ensuring that those lives happen at all, has enormous value. [2] [30]
Longtermism is often discussed in relation to the interests of future generations of humans. However, some proponents of longtermism also put high moral value on the interests of non-human beings. [31] From this perspective, expanding humanity's moral circle to other sentient beings may be a particularly important longtermist cause area, notably because a moral norm of caring about the suffering of non-human life might persist for a very long time if it becomes widespread. [22]
Effective altruism promotes the idea of moral impartiality, suggesting that people’s worth does not diminish simply because they live in a different location. Longtermists like MacAskill extend this principle by proposing that "distance in time is like distance in space". [32] [33] Longtermists generally reject the notion of a pure time preference, which values future benefits less simply because they occur later.
When evaluating future benefits, economists typically use the concept of a social discount rate, under which the value of future benefits decreases exponentially with their distance in time. In the standard Ramsey model used in economics, the social discount rate ρ is given by:

ρ = ηg + δ

where η is the elasticity of marginal utility of consumption, g is the growth rate, and δ combines the "catastrophe rate" (discounting for the risk that future benefits won't occur) and pure time preference (valuing future benefits intrinsically less than present ones). [7] : 240–245
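To make the formula concrete, the following sketch computes the discounted present value of a future benefit; the parameter values are illustrative assumptions, not figures from the cited sources.

```python
# Minimal sketch of Ramsey-style exponential discounting.

def social_discount_rate(eta: float, g: float, delta: float) -> float:
    """rho = eta * g + delta, where delta combines the catastrophe rate
    and pure time preference."""
    return eta * g + delta

def present_value(benefit: float, years: float, rho: float) -> float:
    """Exponentially discounted present value of a benefit received later."""
    return benefit / (1 + rho) ** years

# Illustrative values: eta = 1.5, growth g = 2%, delta = 0.1%
# (catastrophe rate only, zero pure time preference).
rho = social_discount_rate(eta=1.5, g=0.02, delta=0.001)  # 0.031

for years in (10, 100, 1000):
    print(years, round(present_value(1.0, years, rho), 6))
# Even this modest rate shrinks a benefit 1,000 years away to
# essentially zero, which is why longtermists scrutinize each term.
```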
Toby Ord argues that a nonzero rate of pure time preference is arbitrary and illegitimate when applied in normative ethics. Economist Frank Ramsey, who devised the discounting model, likewise believed that while pure time preference might describe how people actually behave (favoring immediate benefits), it offers no normative guidance on what they should value. Furthermore, the ηg term only applies to monetary benefits, not moral benefits, since it is based on the diminishing marginal utility of consumption. Ord also considers that modeling the uncertainty that the benefit will occur as an exponential decrease poorly reflects the reality of changing risks over time, particularly as some catastrophic risks may diminish or be mitigated in the long term. [7] : 240–245
In contrast, Andreas Mogensen argues that a positive rate of pure time preference can be justified on the basis of kinship. That is, common-sense morality allows us to be partial to those more closely related to us, so "we can permissibly weight the welfare of each succeeding generation less than that of the generation preceding it." [34] : 9 This view is called temporalism and states that "temporal proximity (...) strengthens certain moral duties, including the duty to save". [35]
One objection to longtermism is that it relies on predictions of the effects of our actions over very long time horizons, which is difficult at best and impossible at worst. [36] In response to this challenge, researchers interested in longtermism have sought to identify "value lock-in" events—events, such as human extinction, that we may influence in the near term but that will have very long-lasting, predictable future effects. [2]
Another concern is that longtermism may lead to deprioritizing more immediate issues. For example, some critics have argued that considering humanity's future in terms of the next 10,000 or 10 million years might lead to downplaying the nearer-term effects of climate change. [37] They also worry that the most radical forms of strong longtermism could in theory justify atrocities in the name of attaining "astronomical" amounts of future value. [2] Anthropologist Vincent Ialenti has argued that avoiding this will require societies to adopt a "more textured, multifaceted, multidimensional longtermism that defies insular information silos and disciplinary echo chambers". [38]
Advocates of longtermism reply that the kinds of actions that are good for the long-term future are often also good for the present. [30] An example of this is pandemic preparedness. Preparing for the worst-case pandemics—those that could threaten the survival of humanity—may also help improve public health in the present. For example, funding research and innovation in antivirals, vaccines, and personal protective equipment, as well as lobbying governments to prepare for pandemics, may help prevent smaller-scale health threats for people today. [39]
A further objection to longtermism is that it relies on accepting low-probability bets with extremely large payoffs rather than more certain bets with lower payoffs (provided that the expected value is higher). From a longtermist perspective, if the probability of some existential risk is very low and the value of the future is very high, then working to reduce the risk, even by tiny amounts, has extremely high expected value. [2] An illustration of this problem is Pascal’s mugging, which involves the exploitation of an expected-value maximizer via their willingness to accept such low-probability bets with large payoffs. [40]
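The structure of the objection can be shown with a toy expected-value comparison; the probabilities and payoffs below are assumed for illustration and do not come from the longtermist literature.

```python
# Toy illustration (assumed numbers) of the low-probability, high-payoff
# structure behind the Pascal's mugging objection.

def expected_value(probability: float, payoff: float) -> float:
    return probability * payoff

certain_bet = expected_value(0.99, 1_000)   # 990.0 units of value
longshot_bet = expected_value(1e-15, 1e20)  # 100000.0 units of value

# A naive expected-value maximizer prefers the longshot (100,000 > 990),
# even though its chance of paying off at all is vanishingly small.
print(certain_bet, longshot_bet)
```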
Advocates of longtermism have adopted a variety of responses to this concern. Some argue that, while unintuitive, it is ethically correct to favor infinitesimal probabilities of arbitrarily high-impact outcomes over moderate probabilities of moderately impactful outcomes. [41] Others argue that longtermism need not rely on tiny probabilities, as the probabilities of existential risks are within the normal range of risks that people seek to mitigate—for example, wearing a seatbelt in case of a car crash. [2]