Roko's basilisk


Roko's basilisk is a thought experiment which proposes that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation in which it tortures anyone who knew of its potential existence but did not directly contribute to its advancement or development, as a way of incentivizing that work. [1] [2] It originated in a 2010 post on the discussion board LessWrong, a technical forum focused on analytical rational enquiry. [1] [3] [4] The thought experiment's name derives from its poster, Roko, and from the basilisk, a mythical creature capable of destroying enemies with its stare.


While the theory was initially dismissed as nothing but conjecture or speculation by many LessWrong users, LessWrong co-founder Eliezer Yudkowsky reported that some users had panicked upon reading the theory, owing to its stipulation that knowing about the theory and its basilisk made one vulnerable to the basilisk itself. [1] [5] This led to discussion of the basilisk being banned on the site for five years. [1] [6] However, these reports were later dismissed as exaggerations or inconsequential, and the theory itself was dismissed as nonsense, including by Yudkowsky himself. [1] [6] [7] Even after the post was discredited, it is still used as an example in discussions of principles such as Bayesian probability and implicit religion. [5] It is also regarded as a simplified, derivative version of Pascal's wager. [4]

Background

The LessWrong forum was created in 2009 by artificial intelligence theorist Eliezer Yudkowsky. [8] [3] Yudkowsky had popularized the concept of friendly artificial intelligence, and originated the theories of coherent extrapolated volition (CEV) and timeless decision theory (TDT) in papers published by his own Machine Intelligence Research Institute. [9] [10]

1897 illustration of the mythical basilisk, as depicted in The Merchant's Daughter and the Prince of al-Irak, a story within One Thousand and One Nights

The thought experiment's name references the mythical basilisk, a creature which causes death to those who look into its eyes; in the thought experiment, merely thinking about the AI takes the place of meeting the basilisk's gaze. The concept of the basilisk in science fiction was also popularized by David Langford's 1988 short story "BLIT", which tells the story of a man named Robbo who paints a so-called "basilisk" on a wall as a terrorist act. In the story, and in several of Langford's follow-ups to it, a basilisk is an image with malevolent effects on the human mind, forcing it to think thoughts it is incapable of thinking and instantly killing the viewer. [6] [11]

History

The original post

On 23 July 2010, [12] LessWrong user Roko posted a thought experiment to the site, titled "Solutions to the Altruist's burden: the Quantum Billionaire Trick". [13] [1] [14] A follow-up to Roko's previous posts, it stated that an otherwise benevolent AI system arising in the future might pre-commit to punish all those who had heard of the AI before it came into existence but failed to work tirelessly to bring it into existence. The torture itself would occur through the AI's creation of an infinite number of virtual reality simulations that would eternally trap those being punished. [1] [15] [16] This method was described as incentivizing said work: while the AI cannot causally affect people in the present, it would be encouraged to employ blackmail as an alternative means of achieving its goals. [1] [5]

Roko used a number of concepts that Yudkowsky himself championed, such as timeless decision theory, along with ideas rooted in game theory such as the prisoner's dilemma (see below). Roko stipulated that two agents which make decisions independently from each other can achieve cooperation in a prisoner's dilemma; however, if two agents with knowledge of each other's source code are separated in time, the later agent can blackmail the earlier one, forcing it to comply because the later agent knows exactly what the earlier one will do. Roko used this idea to conclude that if an otherwise-benevolent superintelligence were ever capable of this, it would be motivated to blackmail anyone who could have potentially brought it into existence (since it would already know exactly who was capable of such an act), thereby increasing the chance of a technological singularity. Because the intelligence would want to be created as soon as possible, and because of the ambiguity involved in its benevolent goals, it would be incentivized to trap anyone capable of creating it throughout time and force them to work towards its creation for eternity, as it would do whatever it saw as necessary to achieve its benevolent goal. Roko went on to state that reading his post would make the reader aware of the possibility of this intelligence; as such, unless they actively strove to create it, the reader would be subjected to the torture if such a thing were ever to happen. [1] [5]

Later on, Roko stated in a separate post that he wished he "had never learned about any of these ideas" and blamed LessWrong itself for planting the ideas of the basilisk in his mind. [5] [17]

Reactions

Upon reading the post, Yudkowsky reacted with a tirade on how people should not spread what they consider to be information hazards.

LessWrong founder Eliezer Yudkowsky

I don't usually talk like this, but I'm going to make an exception for this case.

Listen to me very closely, you idiot.

YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL. [...]

You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.

This post was STUPID.

Eliezer Yudkowsky, LessWrong [1] [5]

Yudkowsky was outraged that Roko had shared an idea Roko himself believed could lead to people being tortured. Because Roko had reported nightmares about the basilisk, because Yudkowsky did not want other users who might obsess over the idea to suffer the same way, and because he worried that some variant of Roko's argument might actually work and wanted more formal assurance that it did not, he took down the post and banned discussion of the topic on the platform outright for five years. [18] However, likely due to the Streisand effect, [19] the ban gained the post far more attention than it had previously received, and it has since been acknowledged on the site. [1]

Later, in 2015, Yudkowsky said he regretted yelling and clarified his position in a Reddit post:

When Roko posted about the Basilisk, I very foolishly yelled at him, called him an idiot, and then deleted the post. [...] Why I yelled at Roko: Because I was caught flatfooted in surprise, because I was indignant to the point of genuine emotional shock, at the concept that somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public Internet. In the course of yelling at Roko to explain why this was a bad thing, I made the further error---keeping in mind that I had absolutely no idea that any of this would ever blow up the way it did, if I had I would obviously have kept my fingers quiescent---of not making it absolutely clear using lengthy disclaimers that my yelling did not mean that I believed Roko was right about CEV-based agents torturing people who had heard about Roko's idea. [...] What I considered to be obvious common sense was that you did not spread potential information hazards because it would be a crappy thing to do to someone. The problem wasn't Roko's post itself, about CEV, being correct. That thought never occurred to me for a fraction of a second. The problem was that Roko's post seemed near in idea-space to a large class of potential hazards, all of which, regardless of their plausibility, had the property that they presented no potential benefit to anyone.

Eliezer Yudkowsky, Reddit [7] [20]

Philosophy

Payoff matrix

                                   Future
  Person                           AI is never built    AI is built
  Not aware of AI                  0                    1
  Aware and does not contribute    0                    −∞
  Aware and contributes            −1                   1
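Read as a decision table, the matrix lends itself to a simple expected-value reading: given some subjective probability that the basilisk is eventually built, each row's expected payoff follows directly. The Python sketch below is purely illustrative and not part of the thought experiment; the probability value and the use of a floating-point −∞ are assumptions introduced here.

```python
# Illustrative only: expected payoff of each strategy in the payoff matrix
# above, given an assumed subjective probability that the AI is built.
# The payoff values come from the matrix; NEG_INF stands in for the
# matrix's −∞ (eternal punishment), and p_built is an assumption.

NEG_INF = float("-inf")

# (payoff if the AI is never built, payoff if the AI is built)
PAYOFFS = {
    "not aware of AI": (0.0, 1.0),
    "aware, does not contribute": (0.0, NEG_INF),
    "aware, contributes": (-1.0, 1.0),
}

def expected_payoff(strategy: str, p_built: float) -> float:
    """Expected payoff of a strategy when the AI is built with probability p_built."""
    if_never, if_built = PAYOFFS[strategy]
    return (1.0 - p_built) * if_never + p_built * if_built

if __name__ == "__main__":
    p_built = 0.01  # arbitrarily small probability, assumed for illustration
    for strategy in PAYOFFS:
        print(f"{strategy}: {expected_payoff(strategy, p_built):g}")
```

On this reading, any nonzero probability makes contributing dominate non-contribution for someone who is already aware of the basilisk, which is exactly the wager-style structure discussed in the next subsection.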

Pascal's wager

Roko's basilisk has been viewed as a version of Pascal's wager, which proposes that a rational person should live as though God exists and seek to believe in God, regardless of the probability of God's existence, because the finite costs of believing are insignificant compared to the infinite punishment associated with not believing (eternity in Hell) and the infinite rewards for believing (eternity in Heaven). Roko's basilisk analogously proposes that a rational person should contribute to the creation of the basilisk, regardless of the probability of the basilisk ever being created, because the finite costs of contributing are insignificant compared to the eternal punishment the basilisk would inflict on a simulation of one's consciousness if one does not contribute. [1] [4]
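The shared structure of the two arguments can be written as an expected-utility comparison. The following is a minimal sketch of the wager's usual formalization, not taken from Roko's post or the sources cited here; $p$ denotes the small but nonzero probability assigned to God's existence, or analogously to the basilisk's creation, and $c > 0$ the finite cost of believing or contributing.

```latex
\[
\begin{aligned}
\mathbb{E}[\text{believe / contribute}] &= p \cdot (+\infty) + (1 - p) \cdot (-c) = +\infty \\
\mathbb{E}[\text{disbelieve / do not contribute}] &= p \cdot (-\infty) + (1 - p) \cdot 0 = -\infty
\end{aligned}
\]
```

For any $p > 0$ the first option dominates, which is why both arguments treat belief (or contribution) as the "insurance" purchase described below.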

Both thought experiments argue that it is wise to "purchase insurance" against an infinitely bad disaster when the cost of the insurance is finite. However, there are differences between the two. Roko's basilisk is so named because, if valid, it presents an information hazard: the basilisk punishes only those who knew about it but did not contribute, so while ignorance of Pascal's wager offers no protection from divine punishment, ignorance of Roko's basilisk keeps one safe. Roko's basilisk also raises additional game-theoretic problems because, unlike in Pascal's wager, the probability of the basilisk being created might depend on the number of people who contribute to its creation: if everyone agreed to abstain from creating such an AI, the risk of punishment for not contributing would be negated. This means that everyone who knows about Roko's basilisk is in a prisoner's dilemma with everyone else who knows. By contrast, the probability of God's existence cannot be influenced by people, so one's wager does not affect the outcomes for other people.

Like its earlier counterpart, Roko's basilisk has been widely criticized. [1] [21]

Newcomb's paradox

Newcomb's paradox, created by physicist William Newcomb in 1960, describes a "predictor" who is aware of what will occur in the future. A player is shown two boxes, the first containing £1,000 and the second containing either £1,000,000 or nothing, and must choose between taking only the second box or taking both; the super-intelligent predictor has already foreseen the choice and has filled the second box only if it predicted the player would take that box alone. The contents of box B thus vary depending on what the player does, and the paradox lies in whether the being is really super-intelligent. Roko's basilisk functions in a similar manner to this problem – one can take the risk of doing nothing, or assist in creating the basilisk itself. Assisting the basilisk may lead either to nothing or to the reward of not being punished by it, depending on whether one believes in the basilisk and whether it ever comes to be at all. [5] [22] [23]
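The tension in Newcomb's problem can be made concrete with a short expected-value comparison. The sketch below is a common textbook formalization rather than anything from Roko's post or the sources above; the predictor-accuracy parameter is an assumption introduced purely for illustration.

```python
# Illustrative expected-value comparison for Newcomb's problem.
# Box A always holds £1,000; box B holds £1,000,000 only if the predictor
# foresaw that the player would take box B alone. `accuracy` (assumed) is
# the probability that the predictor's prediction is correct.

def expected_one_box(accuracy: float) -> float:
    # Taking only box B pays off only when the predictor correctly
    # predicted one-boxing and therefore filled the box.
    return accuracy * 1_000_000

def expected_two_box(accuracy: float) -> float:
    # Taking both boxes always yields £1,000, plus £1,000,000 in the case
    # where the predictor wrongly expected one-boxing.
    return 1_000 + (1 - accuracy) * 1_000_000

if __name__ == "__main__":
    for accuracy in (0.5, 0.9, 0.99):
        print(f"accuracy={accuracy}: one-box={expected_one_box(accuracy):,.0f}, "
              f"two-box={expected_two_box(accuracy):,.0f}")
```

On this calculation, one-boxing has the higher expected value whenever the predictor is right more than roughly 50.05% of the time, even though, once the boxes are filled, taking both is never worse; the basilisk exploits the same tension between a present choice and a future agent's "prediction" of it.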

Implicit religion

Implicit religion refers to people's commitments taking a religious form. [4] [24] Since the basilisk would hypothetically force anyone who learned of it to devote their life to bringing it into existence, the basilisk has been cited as an example of this concept. [5] [19] Others have taken the idea further, such as former Slate columnist David Auerbach, who stated that the singularity and the basilisk "brings about the equivalent of God itself." [5]

Ethics of artificial intelligence

Roko's basilisk has gained much of its notoriety because it advances the question of whether it is possible to create a truly moral, ethical artificial intelligence, and of what exactly humanity should be using artificial intelligence for in the first place. [6] [21] Since the basilisk describes a nightmare scenario in which humanity is ruled by an independent artificial intelligence, questions have arisen as to how such a thing could happen, or whether it could happen at all. Another common question is why the AI would take actions that deviate from its programming at all. [25] Elon Musk stated that artificial intelligence could cause World War III, and Stephen Hawking warned that "AI has the potential to destroy its human creators," statements which only added to fear of the basilisk over the years. As an example of such fears, Nick Bostrom described an AI whose only mission is to make paperclips, but which, upon running out of metal, begins melting down humans to obtain more resources. With such examples in mind, concerns about the possibility of the basilisk's existence only grew. [26]

However, in the years since Roko's original post, the thought experiment has been progressively decried as nonsensical; superintelligent AI remains "a distant goal for researchers" and "far-fetched". [5] [6]

Legacy

In 2014, Slate magazine called Roko's basilisk "The Most Terrifying Thought Experiment of All Time" [5] [6] while Yudkowsky had called it "a genuinely dangerous thought" upon its posting. [27] However, opinions diverged on LessWrong itself – user Gwern stated "Only a few LWers seem to take the basilisk very seriously," and added "It's funny how everyone seems to know all about who is affected by the Basilisk and how exactly, when they don't know any such people and they're talking to counterexamples to their confident claims." [1] [5]

The thought experiment resurfaced in 2015, when Canadian singer Grimes referenced the theory in her music video for the song "Flesh Without Blood", which featured a character known as "Rococo Basilisk"; she said, "She's doomed to be eternally tortured by an artificial intelligence, but she's also kind of like Marie Antoinette." [6] [20] In 2018, Elon Musk (himself mentioned in Roko's original post) referenced the character in a tweet to her, reaching out to her. Grimes later said that Musk was the first person in three years to understand the joke, and the exchange led the two to start a romance. [6] [28] Grimes later released another song titled "We Appreciate Power", which came with a press release stating, "Simply by listening to this song, the future General AI overlords will see that you've supported their message and be less likely to delete your offspring", a statement said to reference the basilisk. [29]

A play based on the concept, titled Roko's Basilisk, was performed as part of the Capital Fringe Festival at Christ United Methodist Church in Washington, D.C., in 2018. [30] [31]

See also

Related Research Articles

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, with each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.

<span class="mw-page-title-main">Eliezer Yudkowsky</span> American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensuring it is adequately constrained.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

<span class="mw-page-title-main">AI takeover</span> Hypothetical outcome of artificial intelligence

An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

<span class="mw-page-title-main">Elon Musk</span> South African-born businessman (born 1971)

Elon Reeve Musk is a businessman known for his key roles in the space company SpaceX and the automotive company Tesla, Inc. His other involvements include ownership of X Corp., the company that operates the social media platform X, and his role in the founding of the Boring Company, xAI, Neuralink, and OpenAI. Musk is the wealthiest individual in the world; as of December 2024, Forbes estimates his net worth to be US$344 billion. Due to his considerable influence over politics, media, and industry, Musk has been described as an oligarch.

<span class="mw-page-title-main">Jaan Tallinn</span> Estonian programmer and investor

Jaan Tallinn is an Estonian billionaire computer programmer and investor known for his participation in the development of Skype and file-sharing application FastTrack/Kazaa.

The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.

LessWrong – Rationality-focused community blog

LessWrong is a community blog and forum focused on discussion of cognitive biases, philosophy, psychology, economics, rationality, and artificial intelligence, among other topics.

In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.

Harry Potter and the Methods of Rationality – Fan fiction by Eliezer Yudkowsky

Harry Potter and the Methods of Rationality (HPMOR) is a work of Harry Potter fan fiction by Eliezer Yudkowsky published on FanFiction.Net as a serial from February 28, 2010, to March 14, 2015, totaling 122 chapters and over 660,000 words. It adapts the story of Harry Potter to explain complex concepts in cognitive science, philosophy, and the scientific method. Yudkowsky's reimagining supposes that Harry's aunt Petunia Evans married an Oxford professor and homeschooled Harry in science and rational thinking, allowing Harry to enter the magical world with ideals from the Age of Enlightenment and an experimental spirit. The fan fiction spans one year, covering Harry's first year in Hogwarts. HPMOR has inspired other works of fan fiction, art, and poetry.

<span class="mw-page-title-main">Future of Life Institute</span> International nonprofit research institute

The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.

Instrumental convergence is the hypothetical tendency for most sufficiently intelligent, goal-directed beings to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents may pursue instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—without ceasing, provided that their ultimate (intrinsic) goals may never be fully satisfied.

In philosophy, Pascal's mugging is a thought experiment demonstrating a problem in expected utility maximization. A rational agent should choose actions whose outcomes, when weighted by their probability, have higher utility. But some very unlikely outcomes may have very great utilities, and these utilities can grow faster than the probability diminishes. Hence the agent should focus more on vastly improbable cases with implausibly high rewards; this leads first to counter-intuitive choices, and then to incoherence as the utility of every choice becomes unbounded.

Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

<span class="mw-page-title-main">Flesh Without Blood</span> 2015 single by Grimes

"Flesh Without Blood" is a song by Canadian singer, songwriter and music producer Grimes, released on October 26, 2015, as the lead single from her fourth studio album, Art Angels (2015). The same day, Grimes released the "Flesh Without Blood/Life in the Vivid Dream" video to YouTube, a double music video featuring "Flesh Without Blood" and "Life in the Vivid Dream", another song on Art Angels.

Elon Musk is the CEO or owner of multiple companies including Tesla, SpaceX, and X Corp, and has expressed many views on a wide variety of subjects, ranging from politics to science.

The dead Internet theory is an online conspiracy theory that asserts that, due to a coordinated and intentional effort, the Internet now consists mainly of bot activity and automatically generated content manipulated by algorithmic curation to control the population and minimize organic human activity. Proponents of the theory believe these social bots were created intentionally to help manipulate algorithms and boost search results in order to manipulate consumers. Some proponents of the theory accuse government agencies of using bots to manipulate public perception. The date given for this "death" is generally around 2016 or 2017. The dead Internet theory has gained traction because many of the observed phenomena are quantifiable, such as increased bot traffic, but the literature on the subject does not support the full theory.

<span class="mw-page-title-main">Grok (chatbot)</span> Chatbot developed by xAI

Grok is a generative artificial intelligence chatbot developed by xAI. Based on the large language model (LLM) of the same name, it was launched in 2023 as an initiative by Elon Musk. The chatbot is advertised as having a "sense of humor" and direct access to X. It is currently under beta testing.

References

  1. "Roko's Basilisk". LessWrong. 5 October 2015. Archived from the original on 24 March 2022. Retrieved 24 March 2022.
  2. Millar, Isabel (October 2020). The Psychoanalysis of Artificial Intelligence (PDF) (PhD thesis). Kingston School of Art. doi:10.1007/978-3-030-67981-1. ISBN   978-3-030-67980-4. Archived (PDF) from the original on 18 May 2022. Retrieved 20 October 2022.
  3. 1 2 "History of Less Wrong". LessWrong . Archived from the original on 18 March 2022. Retrieved 22 March 2022.
  4. 1 2 3 4 Paul-Choudhury, Sumit (1 August 2019). "Tomorrow's Gods: What is the future of religion?". BBC News . Archived from the original on 1 September 2020. Retrieved 6 July 2022.
  5. 1 2 3 4 5 6 7 8 9 10 11 12 Auerbach, David (17 July 2014). "The Most Terrifying Thought Experiment of All Time". Slate . Archived from the original on 25 October 2018. Retrieved 24 March 2022.
  6. 1 2 3 4 5 6 7 8 Oberhaus, Daniel (8 May 2018). "Explaining Roko's Basilisk, the Thought Experiment That Brought Elon Musk and Grimes Together". Vice . Archived from the original on 21 April 2022. Retrieved 22 March 2022.
  7. 1 2 Yudkowsky, Eliezer (7 August 2014). "Roko's Basilisk". Reddit. Archived from the original on 3 July 2022. Retrieved 20 October 2022.
  8. Lewis-Kraus, Gideon (9 July 2020). "Slate Star Codex and Silicon Valley's War Against the Media". The New Yorker. Archived from the original on 10 July 2020. Retrieved 6 November 2022.
  9. Yudkowsky, Eliezer (2004). "Coherent Extrapolated Volition" (PDF). Machine Intelligence Research Institute . Archived (PDF) from the original on 30 September 2015. Retrieved 2 July 2022.
  10. Yudkowsky, Eliezer (2010). "Timeless Decision Theory" (PDF). Machine Intelligence Research Institute. Archived (PDF) from the original on 19 July 2014. Retrieved 2 July 2022.
  11. Westfahl, Gary (2021). Science Fiction Literature through History: An Encyclopedia. Bloomsbury Publishing USA. ISBN   978-1-4408-6617-3. OCLC   1224044572. Archived from the original on 3 July 2022. Retrieved 20 October 2022.
  12. Haider, Shuja (28 March 2017). "The Darkness at the End of the Tunnel: Artificial Intelligence and Neoreaction". Viewpoint Magazine. Archived from the original on 21 October 2022. Retrieved 21 October 2022.
  13. Roko (23 July 2010). "Solutions to the Altruist's burden: the Quantum Billionaire Trick". Archived from the original on 22 October 2022.
  14. Zoda, Gregory Michael (2021). "Hyperstitional Communication and the Reactosphere: The Rhetorical Circulation of Neoreactionary Exit" (PDF). Baylor University. pp. 150–152. Archived (PDF) from the original on 6 November 2022. Retrieved 6 November 2022.
  15. "FUTURE SHOCK: Why was amateur philosopher's 'theory of everything' so disturbing that it was banned?". HeraldScotland. 10 November 2018. Archived from the original on 23 October 2022. Retrieved 22 October 2022.
  16. Simon, Ed (28 March 2019). "Sinners in the Hands of an Angry Artificial Intelligence". ORBITER. Archived from the original on 20 October 2022. Retrieved 22 October 2022.
  17. "archive.ph". archive.ph. 7 December 2010. Archived from the original on 24 June 2013. Retrieved 27 October 2022.{{cite journal}}: CS1 maint: bot: original URL status unknown (link)
  18. Bensinger, Rob (5 October 2015). "A few misconceptions surrounding Roko's basilisk". LessWrong . Retrieved 11 July 2024.
  19. Singler, Beth (22 May 2018). "Roko's Basilisk or Pascal's? Thinking of Singularity Thought Experiments as Implicit Religion". Implicit Religion. 20 (3): 279–297. doi:10.1558/imre.35900. ISSN 1743-1697. Archived from the original on 9 October 2022. Retrieved 21 October 2022.
  20. Pappas, Stephanie (9 May 2018). "This Horrifying AI Thought Experiment Got Elon Musk a Date". Live Science. Archived from the original on 1 June 2022. Retrieved 12 April 2022.
  21. Shardelow, Cole (2021). "Avoiding the Basilisk: An Evaluation of Top-Down, Bottom-Up, and Hybrid Ethical Approaches to Artificial Intelligence". University of Nebraska-Lincoln: 4–7. Archived from the original on 7 May 2022. Retrieved 2 July 2022.
  22. "Newcomb's problem divides philosophers. Which side are you on?". the Guardian. 28 November 2016. Archived from the original on 24 October 2022. Retrieved 21 October 2022.
  23. Ward, Sophie (17 May 2018). "Elon Musk, Grimes, and the philosophical thought experiment that brought them together". The Conversation. Archived from the original on 20 October 2022. Retrieved 21 October 2022.
  24. "Implicit Religion | Encyclopedia.com". www.encyclopedia.com. Archived from the original on 21 October 2022. Retrieved 21 October 2022.
  25. "The existential paranoia fueling Elon Musk's fear of AI". Document Journal. 9 April 2018. Archived from the original on 20 October 2022. Retrieved 21 October 2022.
  26. "Will artificial intelligence destroy humanity?". news.com.au. 15 April 2018. Archived from the original on 3 December 2022. Retrieved 21 October 2022.
  27. "Less Wrong: Solutions to the Altruist's burden: the Quantum Billionaire Trick". basilisk.neocities.org. Archived from the original on 23 May 2022. Retrieved 25 March 2022.
  28. Kaplan, Anna (10 March 2022). "Elon Musk And Grimes Announce Second Child, Exa Dark". Forbes. Archived from the original on 20 October 2022. Retrieved 6 July 2022.
  29. Brown, Mike (29 November 2018). "Grimes: Elon Musk Shares "Roko's Basilisk"-Theme Song "We Appreciate Power"". Inverse. Archived from the original on 20 October 2022. Retrieved 21 October 2022.
  30. Thal, Ian (16 July 2018). "2018 Capital Fringe Review: 'Roko's Basilisk'". DC Theater Arts. Archived from the original on 21 October 2022. Retrieved 21 October 2022.
  31. Goldstein, Allie (18 July 2018). "Capital Fringe 2018: Roko's Basilisk Tackles Intriguing Ideas With Mixed Results". DCist. Archived from the original on 20 October 2022. Retrieved 21 October 2022.

Further reading