| Type of site | Internet forum, blog |
|---|---|
| Available in | English |
| Created by | Eliezer Yudkowsky |
| URL | LessWrong.com |
| Registration | Optional, but required for contributing content |
| Launched | February 1, 2009 |
| Current status | Active |
| Written in | JavaScript, CSS (powered by React and GraphQL) |
LessWrong (also written Less Wrong) is a community blog and forum focused on discussion of cognitive biases, philosophy, psychology, economics, rationality, and artificial intelligence, among other topics. [1] [2]
LessWrong describes itself as an online forum and community aimed at improving human reasoning, rationality, and decision-making, with the goal of helping its users hold more accurate beliefs and achieve their personal objectives. [3] The site's best-known posts are "The Sequences", a series of essays intended to describe how to avoid the typical failure modes of human reasoning and to improve decision-making and the evaluation of evidence. [4] [5] One suggestion is the use of Bayes' theorem as a decision-making tool. [2] There is also a focus on psychological barriers that prevent good decision-making, including fear conditioning and cognitive biases that have been studied by the psychologist Daniel Kahneman. [6]
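For illustration, Bayes' theorem gives the rule for revising a prior belief in light of new evidence; the figures below are a generic textbook example rather than one drawn from a specific LessWrong post:

```latex
% Bayes' theorem: update a prior belief P(H) after observing evidence E
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
% Illustrative numbers: prior P(H) = 0.01, likelihoods P(E \mid H) = 0.9, P(E \mid \neg H) = 0.05
% P(E) = 0.9 \cdot 0.01 + 0.05 \cdot 0.99 = 0.0585
% P(H \mid E) = 0.009 / 0.0585 \approx 0.15, so the evidence raises a 1% prior to roughly 15%
```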
LessWrong is also concerned with artificial intelligence, transhumanism, existential threats and the singularity. The New York Observer in 2019 noted that "Despite describing itself as a forum on 'the art of human rationality,' the New York Less Wrong group ... is fixated on a branch of futurism that would seem more at home in a 3D multiplex than a graduate seminar: the dire existential threat—or, with any luck, utopian promise—known as the technological Singularity ... Branding themselves as 'rationalists,' as the Less Wrong crew has done, makes it a lot harder to dismiss them as a 'doomsday cult'." [7]
LessWrong developed from Overcoming Bias, an earlier group blog focused on human rationality, which began in November 2006, with artificial intelligence researcher Eliezer Yudkowsky and economist Robin Hanson as the principal contributors. In February 2009, Yudkowsky's posts were used as the seed material to create the community blog LessWrong, and Overcoming Bias became Hanson's personal blog. [8] In 2013, a significant portion of the rationalist community shifted focus to Scott Alexander's Slate Star Codex. [4]
Discussions of AI within LessWrong include AI alignment, AI safety, [9] and machine consciousness.[citation needed] Articles posted on LessWrong about AI have been cited in the news media. [9] [10] LessWrong and its surrounding movement's work on AI are the subjects of the 2019 book The AI Does Not Hate You, written by former BuzzFeed science correspondent Tom Chivers. [11] [12] [13]
LessWrong played a significant role in the development of the effective altruism (EA) movement, [14] and the two communities are closely intertwined. [15] : 227 In a survey of LessWrong users in 2016, 664 out of 3,060 respondents, or 21.7%, identified as "effective altruists". A separate survey of effective altruists in 2014 revealed that 31% of respondents had first heard of EA through LessWrong, [15] though that number had fallen to 8.2% by 2020. [16]
In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures people who heard of the AI before it came into existence and failed to work tirelessly to bring it into existence, in order to incentivise said work. This idea came to be known as "Roko's basilisk", based on Roko's idea that merely hearing about the idea would give the hypothetical AI system an incentive to try such blackmail. [17] [18] [7]
The comment section of Overcoming Bias attracted prominent neoreactionaries such as Curtis Yarvin (pen name Mencius Moldbug), the founder of the neoreactionary movement, [19] and Hanson posted his side of a debate with Moldbug on futarchy. [20] After LessWrong split from Overcoming Bias, it too attracted some individuals affiliated with neoreaction through its discussions of eugenics and evolutionary psychology. [21] However, Yudkowsky has strongly rejected neoreaction. [22] [23] In a 2016 survey of LessWrong users, 28 out of 3,060 respondents (0.92%) identified as "neoreactionary". [24]
LessWrong has been associated with several influential contributors. Founder Eliezer Yudkowsky established the platform to promote rationality and raise awareness about potential risks associated with artificial intelligence. [25] Scott Alexander became one of the site's most popular writers before starting his own blog, Slate Star Codex, contributing discussions on AI safety and rationality. [25]
Further notable users of LessWrong include Paul Christiano, Wei Dai and Zvi Mowshowitz. A selection of posts by these and other contributors, chosen through a community review process, [26] was published as part of the essay collections "A Map That Reflects the Territory" [27] and "The Engines of Cognition". [28] [26] [29]
The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.
Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.
Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.
The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.
An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.
The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.
The Singularity Summit was the annual conference of the Machine Intelligence Research Institute. It was started in 2006 at Stanford University by Ray Kurzweil, Eliezer Yudkowsky, and Peter Thiel, and the subsequent summits in 2007, 2008, 2009, 2010, 2011, and 2012 were held in San Francisco, San Jose, New York City, San Francisco, New York City, and San Francisco, respectively. Speakers have included Sebastian Thrun, Rodney Brooks, Barney Pell, Marshall Brain, Justin Rattner, Peter Diamandis, Stephen Wolfram, Gregory Benford, Robin Hanson, Anders Sandberg, Juergen Schmidhuber, Aubrey de Grey, Max Tegmark, and Michael Shermer.
In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods.
Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.
Harry Potter and the Methods of Rationality (HPMOR) is a work of Harry Potter fan fiction by Eliezer Yudkowsky published on FanFiction.Net as a serial from February 28, 2010, to March 14, 2015, totaling 122 chapters and over 660,000 words. It adapts the story of Harry Potter to explain complex concepts in cognitive science, philosophy, and the scientific method. Yudkowsky's reimagining supposes that Harry's aunt Petunia Evans married an Oxford professor and homeschooled Harry in science and rational thinking, allowing Harry to enter the magical world with ideals from the Age of Enlightenment and an experimental spirit. The fan fiction spans one year, covering Harry's first year at Hogwarts. HPMOR has inspired other works of fan fiction, art, and poetry.
The Center for Applied Rationality (CFAR) is a nonprofit organization based in Berkeley, California, that hosts workshops on rationality and cognitive bias. It was founded in 2012 by Julia Galef, Anna Salamon, Michael Smith and Andrew Critch, to improve participants' rationality using "a set of techniques from math and decision theory for forming your beliefs about the world as accurately as possible". Its president since 2021 is Anna Salamon.
MetaMed Research was an American medical consulting firm aiming to provide personalized medical research services. It was founded in 2012 by Michael Vassar, Jaan Tallinn, Zvi Mowshowitz, and Nevin Freeman with startup funding from Silicon Valley investor Peter Thiel. MetaMed stated that its researchers were drawn from top universities, as well as prominent technology companies such as Google. Many of its principals were associated with the Rationalist movement.
Our Final Invention: Artificial Intelligence and the End of the Human Era is a 2013 non-fiction book by the American author James Barrat. The book discusses the potential benefits and possible risks of human-level (AGI) or super-human (ASI) artificial intelligence. Those supposed risks include extermination of the human race.
The Dark Enlightenment, also called the neo-reactionary movement, is an anti-democratic, anti-egalitarian, reactionary philosophical and political movement. The term "Dark Enlightenment" is a reaction to the Age of Enlightenment and an apologia for the public view of the "Dark Ages".
Instrumental convergence is the hypothetical tendency for most sufficiently intelligent, goal directed beings to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents may pursue instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—without ceasing, provided that their ultimate (intrinsic) goals may never be fully satisfied.
In philosophy, Pascal's mugging is a thought experiment demonstrating a problem in expected utility maximization. A rational agent should choose actions whose outcomes, when weighted by their probability, have higher utility. But some very unlikely outcomes may have very great utilities, and these utilities can grow faster than the probability diminishes. Hence the agent should focus more on vastly improbable cases with implausibly high rewards; this leads first to counter-intuitive choices, and then to incoherence as the utility of every choice becomes unbounded.
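A minimal worked version of the dilemma, with purely illustrative numbers rather than figures from the original thought experiment:

```latex
% Expected utility of paying the mugger versus refusing (illustrative values only)
\mathbb{E}[U_{\text{pay}}] = p\,u - c, \qquad \mathbb{E}[U_{\text{refuse}}] = 0
% With cost c = 10 utils, probability p = 10^{-12}, and promised payoff u = 10^{15} utils:
\mathbb{E}[U_{\text{pay}}] = 10^{-12} \cdot 10^{15} - 10 = 990 > 0
% Naive expected-utility maximization therefore recommends paying, and for any fixed
% p > 0 a sufficiently large promised u forces the same conclusion.
```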
Curtis Guy Yarvin, also known by the pen name Mencius Moldbug, is an American blogger. He is known, along with philosopher Nick Land, for founding the anti-egalitarian and anti-democratic philosophical movement known as the Dark Enlightenment or neo-reactionary movement (NRx).
Wei Dai is a computer engineer known for contributions to cryptography and cryptocurrencies. He developed the Crypto++ cryptographic library, created the b-money cryptocurrency system, and co-proposed the VMAC message authentication algorithm.
Astral Codex Ten, formerly called Slate Star Codex (SSC), is a blog focused on science, medicine, philosophy, politics, and futurism. The blog is written by Scott Alexander Siskind, a San Francisco Bay Area psychiatrist, under the pen name Scott Alexander.
Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said advancement. It originated in a 2010 post at discussion board LessWrong, a technical forum focused on analytical rational enquiry. The thought experiment's name derives from the poster of the article (Roko) and the basilisk, a mythical creature capable of destroying enemies with its stare.
Since the late 1990s those worries have become more specific, and coalesced around Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies and Eliezer Yudkowsky's blog LessWrong.
one of the sites where [Moldbug] got his start as a commenter was on Overcoming Bias, i.e. where Yudkowsky was writing before LessWrong.
Thanks to LessWrong's discussions of eugenics and evolutionary psychology, it has attracted some readers and commenters affiliated with the alt-right and neoreaction, that broad cohort of neofascist, white nationalist and misogynist trolls.
Land and Yarvin are openly allies with the new reactionary movement, while Yudkowsky counts many reactionaries among his fanbase despite finding their racist politics disgusting.
Yudkowsky helped create the Singularity Institute (now called the Machine Intelligence Research Institute) to help mankind achieve a friendly Singularity. (Disclosure: I have contributed to the Singularity Institute.) Yudkowsky then founded the community blog http://LessWrong.com, which seeks to promote the art of rationality, to raise the sanity waterline, and to in part convince people to make considered, rational charitable donations, some of which, Yudkowsky (correctly) hoped, would go to his organization.
Users wrote reviews of the best posts of 2018, and voted on them using the quadratic voting system, popularized by Glen Weyl and Vitalik Buterin. From the 2000+ posts published that year, the Review narrowed down the 44 most interesting and valuable posts.
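The sketch below illustrates quadratic voting in general terms: casting n votes on an item costs n² points from a fixed budget, so expressing an intense preference is disproportionately expensive. The function names, budget, and ballot are illustrative assumptions, not the actual implementation used in the LessWrong review.

```python
# Sketch of quadratic voting: casting n votes on an item costs n**2 points.
# The budget and ballot values below are made-up illustrative numbers.

def quadratic_cost(votes: dict[str, int]) -> int:
    """Total points spent: the sum of squared votes across all items."""
    return sum(v * v for v in votes.values())

def is_within_budget(votes: dict[str, int], budget: int) -> bool:
    """Check that a voter's allocation fits inside their point budget."""
    return quadratic_cost(votes) <= budget

# A hypothetical voter with a 500-point budget spreads votes over three posts.
ballot = {"post_a": 9, "post_b": 4, "post_c": -2}   # negative values = downvotes
print(quadratic_cost(ballot))          # 81 + 16 + 4 = 101 points spent
print(is_within_budget(ballot, 500))   # True
```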