The reversal test is a heuristic designed to spot and eliminate status quo bias, an irrational preference for the current state of affairs. The test is applicable to the evaluation of any decision involving a potential deviation from the status quo along some continuous dimension. It was introduced in the context of the bioethics of human enhancement by Nick Bostrom and Toby Ord. [1]
Bostrom and Ord introduced the reversal test to provide a general answer to the following question: given that humans might suffer from irrational status quo bias, how can one distinguish between valid criticisms of a proposed increase in some human trait and criticisms merely motivated by resistance to change? [1] The reversal test attempts to do this by asking whether it would be a good thing if the trait were decreased.
An example given is an objection to the enhancement of human intelligence on the grounds that it might, for example, allow more sophisticated weapons to emerge. The objector may then be asked in turn: "shouldn't we decrease intelligence then?" [a] Clearly, the latter suggestion is more easily read as a reductio ad absurdum of the former, and ideally the test will help reveal whether status quo bias was an important causal factor in the formation of the initial judgment. Or, to define the test in their own words:
When a proposal to change a certain parameter is thought to have bad overall consequences, consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall consequences, then the onus is on those who reach these conclusions to explain why our position cannot be improved through changes to this parameter. If they are unable to do so, then we have reason to suspect that they suffer from status quo bias. [4]
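Read procedurally, the test has a simple burden-shifting structure. The following minimal sketch (in Python; the judge function and its "good"/"bad" verdicts are hypothetical stand-ins for an evaluator's considered judgments, not anything specified by Bostrom and Ord) illustrates that structure:

    # Minimal sketch of the reversal test. `judge` is a hypothetical
    # callable returning an evaluator's verdict ("good" or "bad") on
    # moving a parameter in a given direction.
    def reversal_test(judge, parameter):
        increase_bad = judge(parameter, "increase") == "bad"
        decrease_bad = judge(parameter, "decrease") == "bad"
        if increase_bad and decrease_bad:
            # Both directions are judged bad, so the current value is
            # implicitly claimed to be a local optimum. The burden falls
            # on the evaluator to justify that claim; failing that,
            # status quo bias is suspected.
            return "suspect status quo bias unless a local optimum is argued"
        return "no reversal-test evidence of status quo bias"

On this reading, the test does not settle the first-order question; it only shifts the burden of proof when both directions of change are judged bad.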
Bostrom and Ord suggest a further elaboration of the reversal test, the double reversal test: [1]
"Double Reversal Test: Suppose it is thought that increasing a certain parameter and decreasing it would both have bad overall consequences. Consider a scenario in which a natural factor threatens to move the parameter in one direction and ask whether it would be good to counterbalance this change by an intervention to preserve the status quo. If so, consider a later time when the naturally occurring factor is about to vanish and ask whether it would be a good idea to intervene to reverse the first intervention. If not, then there is a strong prima facie case for thinking that it would be good to make the first intervention even in the absence of the natural countervailing factor." (p. 673)
As an example, take the parameter to be life expectancy, pushed downward by a newly emerged disease. We might intervene by investing in better health infrastructure to preserve the current life expectancy. Once the disease is cured, the double reversal test asks: should we reverse the first intervention and defund the health services created in response, now that the disease is gone? If not, perhaps we should invest in health infrastructure even in the absence of any such disease.
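The two stages of this example can be compressed, in the same hedged spirit as the sketch above, into two hypothetical yes/no judgments:

    # Minimal sketch of the double reversal test, using the
    # life-expectancy example. Both boolean inputs are hypothetical
    # judgments, not data.
    def double_reversal_test(would_counterbalance, would_undo_later):
        # would_counterbalance: intervene (fund health infrastructure)
        # to offset the disease's downward push on life expectancy?
        # would_undo_later: once the disease is cured, reverse that
        # intervention and defund the services?
        if would_counterbalance and not would_undo_later:
            # Endorsing the intervention and refusing to undo it
            # suggests the intervention is good on its own merits,
            # disease or no disease.
            return "prima facie case for intervening even without the disease"
        return "inconclusive"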
In this case the status quo bias is turned against itself, greatly reducing its impact on the reasoning. The double reversal test also purports to handle objections, based on evolutionary adaptation, transition costs, risk, and societal ethics, that can be raised against the simple reversal test.
Alfred Nordmann argues that the reversal test merely erects a straw-man argument in favour of enhancement. He claims that the tests are limited to consequentialist and deontological approaches, and he adds that one cannot view humans as sets of parameters that can be optimized separately or without regard to their history. [5]
Christian Weidemann argues that the double reversal test can muddy the waters: weighing transition costs against benefits might be the relevant practical ethical question for much of the analysis of human enhancement. [6]
The "trip to reality" rebuttal to Nozick's experience machine thought experiment (where one's entire current life is shown to be a simulation and one is offered to return to reality) can also be seen as a form of reversal test. [3]