LessWrong

Type of site: Internet forum, blog
Available in: English
Created by: Eliezer Yudkowsky
URL: LessWrong.com
Registration: Optional, but required for contributing content
Launched: February 1, 2009
Current status: Active
Written in: JavaScript, CSS (powered by React and GraphQL)

LessWrong (also written Less Wrong) is a community blog and forum focused on discussion of cognitive biases, philosophy, psychology, economics, rationality, and artificial intelligence, among other topics. [1] [2]

Purpose

LessWrong promotes lifestyle changes believed by its community to lead to increased rationality and self-improvement. Posts often focus on avoiding biases related to decision-making and the evaluation of evidence. One suggestion is the use of Bayes' theorem as a decision-making tool. [2] There is also a focus on psychological barriers that prevent good decision-making, including fear conditioning and cognitive biases that have been studied by the psychologist Daniel Kahneman. [3]
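
As an illustration of this kind of Bayesian updating, the following is a minimal sketch, not drawn from LessWrong itself; the prior and likelihood values are hypothetical:

    # Bayes' theorem as a belief-updating tool; all numbers are illustrative.
    def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
        """Return P(H | E) from P(H), P(E | H) and P(E | not H)."""
        p_evidence = (p_evidence_given_h * prior
                      + p_evidence_given_not_h * (1 - prior))
        return p_evidence_given_h * prior / p_evidence

    # Example: a hypothesis starts at 10% credence, and the observed evidence
    # is four times as likely if the hypothesis is true as if it is false.
    posterior = bayes_update(prior=0.10,
                             p_evidence_given_h=0.8,
                             p_evidence_given_not_h=0.2)
    print(round(posterior, 3))  # 0.308 -- credence rises from 10% to about 31%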

LessWrong is also concerned with transhumanism, existential threats and the singularity. The New York Observer noted that "Despite describing itself as a forum on 'the art of human rationality,' the New York Less Wrong group ... is fixated on a branch of futurism that would seem more at home in a 3D multiplex than a graduate seminar: the dire existential threat—or, with any luck, utopian promise—known as the technological Singularity ... Branding themselves as 'rationalists,' as the Less Wrong crew has done, makes it a lot harder to dismiss them as a 'doomsday cult'." [4]

History

[Image: Eliezer Yudkowsky at Stanford University in 2006]

LessWrong developed from Overcoming Bias, an earlier group blog focused on human rationality, which began in November 2006, with artificial intelligence researcher Eliezer Yudkowsky and economist Robin Hanson as the principal contributors. In February 2009, Yudkowsky's posts were used as the seed material to create the community blog LessWrong, and Overcoming Bias became Hanson's personal blog. [5] In 2013, a significant portion of the rationalist community shifted focus to Scott Alexander's Slate Star Codex. [6]

LessWrong and its surrounding movement are the subjects of the 2019 book The AI Does Not Hate You, written by former BuzzFeed science correspondent Tom Chivers. [7] [8] [9]

Roko's basilisk

In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures people who heard of the AI before it came into existence and failed to work tirelessly to bring it into existence, in order to incentivise said work. Using Yudkowsky's "timeless decision theory", the post claimed doing so would be beneficial for the AI even though it cannot causally affect people in the present. This idea came to be known as "Roko's basilisk", based on Roko's idea that merely hearing about the idea would give the hypothetical AI system stronger incentives to employ blackmail. Yudkowsky deleted Roko's posts on the topic, saying that posting it was "stupid" as the dissemination of information that can be harmful to even be aware of is itself a harmful act, and that the idea, while critically flawed, represented a space of thinking that could contain "a genuinely dangerous thought," something considered an information hazard. Discussion of Roko's basilisk was banned on LessWrong for several years because Yudkowsky had stated that it caused some readers to have nervous breakdowns. [10] [11] [4] The ban was lifted in October 2015. [12]

David Auerbach wrote in Slate, "the combination of messianic ambitions, being convinced of your own infallibility, and a lot of cash never works out well, regardless of ideology, and I don't expect Yudkowsky and his cohorts to be an exception. I worry less about Roko's Basilisk than about people who believe themselves to have transcended conventional morality." [11]

Roko's basilisk was referenced in Canadian musician Grimes's music video for her 2015 song "Flesh Without Blood" through a character named "Rococo Basilisk" who was described by Grimes as "doomed to be eternally tortured by an artificial intelligence, but she's also kind of like Marie Antoinette". After thinking of this pun and finding that Grimes had already made it, Elon Musk contacted Grimes, which led to them dating. [13] [14] The concept was also referenced in an episode of Silicon Valley titled "Facial Recognition". [15]

The Basilisk has been compared to Pascal's wager. [16]
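
To make the comparison concrete, the shared decision-theoretic structure can be sketched as an expected-value calculation; this is a hedged illustration, and all probabilities and payoffs are invented for the example:

    # Illustrative only: the decision-matrix form shared by Pascal's wager and
    # basilisk-style arguments. All probabilities and payoffs are made up.
    def expected_value(outcomes):
        """outcomes: iterable of (probability, payoff) pairs."""
        return sum(p * payoff for p, payoff in outcomes)

    # A tiny probability attached to an enormous payoff or penalty dominates
    # the sum, which is the feature critics of both arguments point to.
    comply  = expected_value([(0.001, 10**9), (0.999, -1)])
    abstain = expected_value([(0.001, -10**9), (0.999, 0)])
    print(comply > abstain)  # True -- the huge low-probability term decides it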

Effective altruism

LessWrong played a significant role in the development of the effective altruism (EA) movement, [17] and the two communities are closely intertwined. [18] :227 In a survey of LessWrong users in 2016, 664 out of 3,060 respondents, or 21.7%, identified as "effective altruists". A separate survey of effective altruists in 2014 revealed that 31% of respondents had first heard of EA through LessWrong, [18] though that number had fallen to 8.2% by 2020. [19] Two early proponents of effective altruism, Toby Ord and William MacAskill, met transhumanist philosopher Nick Bostrom at Oxford University. Bostrom's research influenced many effective altruists to work on existential risk reduction. [18]

Related Research Articles

<span class="mw-page-title-main">Eliezer Yudkowsky</span> American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.

<span class="mw-page-title-main">Nick Bostrom</span> Philosopher and writer (born 1973)

Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

<span class="mw-page-title-main">AI takeover</span> Hypothetical artificial intelligence scenario

An AI takeover is a scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, as computer programs or robots effectively take control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI, and the popular notion of a robot uprising. Stories of AI takeovers are popular throughout science fiction. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

<span class="mw-page-title-main">Elon Musk</span> South Africa–born businessman (born 1971)

Elon Reeve Musk is a businessman and investor. He is the founder, chairman, CEO, and CTO of SpaceX; angel investor, CEO, product architect, and former chairman of Tesla, Inc.; owner, executive chairman, and CTO of X Corp.; founder of the Boring Company and xAI; co-founder of Neuralink and OpenAI; and president of the Musk Foundation. He is one of the wealthiest people in the world; as of April 2024, Forbes estimates his net worth to be $178 billion.

<span class="mw-page-title-main">Future of Humanity Institute</span> Oxford interdisciplinary research centre

The Future of Humanity Institute (FHI) was an interdisciplinary research centre at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School. Its director was philosopher Nick Bostrom, and its research staff included futurist Anders Sandberg and Giving What We Can founder Toby Ord.

Harry Potter and the Methods of Rationality – Fan fiction by Eliezer Yudkowsky

Harry Potter and the Methods of Rationality (HPMOR) is a work of Harry Potter fan fiction by Eliezer Yudkowsky published on FanFiction.Net as a serial from February 28, 2010, to March 14, 2015, totaling 122 chapters and over 660,000 words. It adapts the story of Harry Potter to explain complex concepts in cognitive science, philosophy, and the scientific method. Yudkowsky's reimagining supposes that Harry's aunt Petunia Evans married an Oxford professor and homeschooled Harry in science and rational thinking, allowing Harry to enter the magical world with ideals from the Age of Enlightenment and an experimental spirit. The fan fiction spans one year, covering Harry's first year at Hogwarts. HPMOR has inspired other works of fan fiction, art, and poetry.

Effective altruism (EA) is a 21st-century philosophical and social movement that advocates "using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis". People who pursue the goals of effective altruism, sometimes called effective altruists, may choose careers based on the amount of good that they expect the career to achieve or donate to charities based on the goal of maximising positive impact. They may work on the prioritization of scientific projects, entrepreneurial ventures, and policy initiatives estimated to save the most lives or reduce the most suffering.

<span class="mw-page-title-main">William MacAskill</span> Scottish philosopher and ethicist (born 1987)

William David MacAskill is a Scottish philosopher and author, as well as one of the originators of the effective altruism movement. He is an associate professor at the Global Priorities Institute at the University of Oxford and is Director of the Forethought Foundation for Global Priorities Research. He was a Research Fellow at the Global Priorities Institute at the University of Oxford, co-founded Giving What We Can, the Centre for Effective Altruism and 80,000 Hours, and is the author of Doing Good Better (2015) and What We Owe the Future (2022), and the co-author of Moral Uncertainty (2020).

<span class="mw-page-title-main">Future of Life Institute</span> International nonprofit research institute

The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.

Superintelligence: Paths, Dangers, Strategies – 2014 book by Nick Bostrom

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the philosopher Nick Bostrom. It explores how superintelligence could be created and what its features and motivations might be. It argues that superintelligence, if created, would be difficult to control, and that it could take over the world in order to accomplish its goals. The book also presents strategies to help make superintelligences whose goals benefit humanity. It was particularly influential for raising concerns about existential risk from artificial intelligence.

Instrumental convergence is the hypothetical tendency for most sufficiently intelligent beings to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents may pursue instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—without ceasing, provided that their ultimate (intrinsic) goals may never be fully satisfied.

Existential risk from artificial general intelligence is the idea that substantial progress in artificial general intelligence (AGI) could result in human extinction or an irreversible global catastrophe.

<span class="mw-page-title-main">Flesh Without Blood</span> 2015 single by Grimes

"Flesh Without Blood" is a song by Canadian singer, songwriter and music producer Grimes, released on October 26, 2015, as the lead single from her fourth studio album, Art Angels (2015). The same day, Grimes released the "Flesh Without Blood/Life in the Vivid Dream" video to YouTube, a double music video featuring "Flesh Without Blood" and "Life in the Vivid Dream", another song on Art Angels.

<span class="mw-page-title-main">Centre for Effective Altruism</span> Non-profit effective altruist organization

The Centre for Effective Altruism (CEA) is an Oxford-based organisation that builds and supports the effective altruism community. It was founded in 2012 by William MacAskill and Toby Ord, both philosophers at the University of Oxford. CEA is part of Effective Ventures, a federation of projects working to have a large positive impact in the world.

<span class="mw-page-title-main">Effective Altruism Global</span> Recurring effective altruism conference

Effective Altruism Global, abbreviated EA Global or EAG, is a series of philanthropy conferences that focuses on the effective altruism movement. The conferences are run by the Centre for Effective Altruism. Huffington Post editor Nico Pitney described the events as a gathering of "nerd altruists", which was "heavy on people from technology, science, and analytical disciplines".

<span class="mw-page-title-main">Igor Kurganov</span> Russian-born German poker player (born 1988)

Igor Kurganov is a Russian-born German professional poker player, angel investor and philanthropist. He is the co-founder of Raising for Effective Giving, a philanthropic organisation that promotes a rational approach to philanthropy often referred to as effective altruism, and provides advice on choosing charities based on certain criteria.

Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said advancement. It originated in a 2010 post at discussion board LessWrong, a technical forum focused on analytical rational enquiry. The thought experiment's name derives from the poster of the article (Roko) and the basilisk, a mythical creature capable of destroying enemies with its stare.

Émile P. Torres is an American philosopher, intellectual historian, author, and postdoctoral researcher at Case Western Reserve University. Their research focuses on eschatology, existential risk, and human extinction. They are also a critic of what they and computer scientist Timnit Gebru have dubbed the "TESCREAL" philosophies: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism.

References

  1. "Less Wrong FAQ". LessWrong. Archived from the original on 30 April 2019. Retrieved 25 March 2014.
  2. 1 2 Miller, James (28 July 2011). "You Can Learn How To Become More Rational". Business Insider . Archived from the original on 10 August 2018. Retrieved 25 March 2014.
  3. Burkeman, Oliver (9 March 2012). "This column will change your life: asked a tricky question? Answer an easier one". The Guardian . Archived from the original on 26 March 2014. Retrieved 25 March 2014.
  4. 1 2 Tiku, Nitasha (25 July 2012). "Faith, Hope, and Singularity: Entering the Matrix with New York's Futurist Set". Observer . Archived from the original on 12 April 2019. Retrieved 12 April 2019.
  5. "Where did Less Wrong come from? (LessWrong FAQ)". Archived from the original on 30 April 2019. Retrieved 25 March 2014.
  6. Lewis-Kraus, Gideon (9 July 2020). "Slate Star Codex and Silicon Valley's War Against the Media". The New Yorker . Archived from the original on 10 July 2020. Retrieved 4 August 2020.
  7. Cowdrey, Katherine (21 September 2017). "W&N wins Buzzfeed science reporter's debut after auction". The Bookseller . Archived from the original on 27 November 2018. Retrieved 21 September 2017.
  8. Chivers, Tom (2019). The AI Does Not Hate You. Weidenfeld & Nicolson. ISBN   978-1474608770.
  9. Marriott, James (31 May 2019). "The AI Does Not Hate You by Tom Chivers review — why the nerds are nervous". The Times . ISSN   0140-0460. Archived from the original on 23 April 2020. Retrieved 3 May 2020.
  10. Love, Dylan (6 August 2014). "WARNING: Just Reading About This Thought Experiment Could Ruin Your Life". Business Insider . Archived from the original on 18 November 2018. Retrieved 6 December 2014.
  11. 1 2 Auerbach, David (17 July 2014). "The Most Terrifying Thought Experiment of All Time". Slate . Archived from the original on 25 October 2018. Retrieved 18 July 2014.
  12. RobbBB (5 October 2015). "A few misconceptions surrounding Roko's basilisk". LessWrong. Archived from the original on 15 March 2018. Retrieved 10 April 2016. The Roko's basilisk ban isn't in effect anymore
  13. Paez, Danny (5 August 2018). "Elon Musk and Grimes: "Rococo Basilisk" Links the Two on Twitter". Inverse . Archived from the original on 24 July 2020. Retrieved 24 July 2020.
  14. Oberhaus, Daniel (8 May 2018). "Explaining Roko's Basilisk, the Thought Experiment That Brought Elon Musk and Grimes Together". Vice . Archived from the original on 25 July 2020. Retrieved 24 July 2020.
  15. Burch, Sean (23 April 2018). "'Silicon Valley' Fact Check: That Thought Experiment Is Real and Horrifying". TheWrap . Archived from the original on 12 November 2020. Retrieved 12 November 2020.
  16. Paul-Choudhury, Sumit (2 August 2019). "Tomorrow's Gods: What is the future of religion?". BBC . Archived from the original on 1 September 2020. Retrieved 28 August 2020.
  17. de Lazari-Radek, Katarzyna; Singer, Peter (27 September 2017). Utilitarianism: A Very Short Introduction. Oxford University Press. p. 110. ISBN   9780198728795.{{cite book}}: CS1 maint: date and year (link)
  18. 1 2 3 Chivers, Tom (2019). "Chapter 38: The Effective Altruists". The AI Does Not Hate You. Weidenfeld & Nicolson. ISBN   978-1474608770.
  19. Moss, David (20 May 2021). "EA Survey 2020: How People Get Involved in EA". Effective Altruism Forum. Archived from the original on 28 July 2021. Retrieved 28 July 2021.