Superintelligence ban

Superintelligence ban refers to proposed legal, ethical, or policy measures intended to restrict or prohibit the development of artificial superintelligence,[1] AI systems that would surpass human cognitive abilities in nearly all domains.[2][3] The idea arises from concerns that such systems could become uncontrollable, potentially posing existential threats to humanity or causing severe social and economic disruption.[2][4]

Background

The concept of limiting or banning superintelligence research has roots in early 21st-century debates on artificial general intelligence (AGI) safety. Thinkers such as Nick Bostrom and Eliezer Yudkowsky warned that self-improving AI could rapidly exceed human oversight.[4][5] As advanced models like large-scale language models and autonomous agents began demonstrating complex reasoning abilities, policymakers and ethicists increasingly discussed the need for legal constraints on the creation of systems capable of recursive self-improvement.[6]

In October 2025, the Future of Life Institute published a statement[7] calling for "a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in." Signatories included public figures such as Richard Branson and Steve Wozniak, as well as AI researchers such as Yoshua Bengio and Geoffrey Hinton.[2]

Rationale

Supporters of a superintelligence ban argue that once AI systems surpass human intelligence, traditional containment, alignment, and control methods may fail. They contend that even limited experimentation with such systems could lead to irreversible outcomes, including loss of human decision-making power or unintended global harm. Some propose international treaties modeled after the nuclear non-proliferation framework to prevent a competitive AI arms race.[2][8]

Opponents argue that a ban would be difficult to define and enforce, given the lack of a precise threshold distinguishing advanced AGI from superintelligence. They also warn that excessive restriction could slow scientific progress, hinder beneficial automation, and encourage unregulated underground research.[2][9]

Global discussion

Although no government has enacted an explicit superintelligence ban, the idea has been debated within the European Union, United Nations, and several independent AI safety organizations. The Future of Life Institute, Center for AI Safety, and other organizations have called for international cooperation to manage risks associated with the pursuit of superintelligent systems.[citation needed] In 2024 and 2025, proposals for a temporary moratorium on frontier AI research were circulated among major technology firms and research institutes, reflecting growing public concern over the trajectory of AI capabilities.[2]

References

  1. Scammell, Robert (October 22, 2025). "Prince Harry, Steve Bannon, and will.i.am join tech pioneers calling for an AI superintelligence ban". Business Insider. Retrieved 2025-10-23.
  2. Butts, Dylan (October 22, 2025). "Hundreds of public figures, including Apple co-founder Steve Wozniak and Virgin's Richard Branson urge AI 'superintelligence' ban". CNBC. Retrieved 2025-10-23.
  3. Ray, Siladitya. "Harry And Meghan, Steve Bannon And More Sign Petition Urging AI 'Superintelligence' Ban". Forbes. Retrieved 2025-10-23.
  4. "Are we living in a computer simulation? Philosopher behind the theory issues an eerie new warning on AI: 'We would find different bases for our self-worth'". The Economic Times. September 10, 2025. Retrieved 2025-10-23.
  5. Adams, Tim (June 12, 2016). "Artificial intelligence: 'We're like children playing with a bomb'". The Guardian. ISSN 0261-3077. Retrieved 2025-10-23.
  6. "Many big names in group of unlikely allies seeking ban, for now, on AI "superintelligence"". CBS News. 2025-10-22. Retrieved 2025-10-23.
  7. "Statement on Superintelligence". superintelligence-statement.org. Retrieved 2026-01-25.
  8. Perrigo, Billy; Pillay, Tharin (October 22, 2025). "Open Letter Calls for Ban on Superintelligent AI Development". Time. Retrieved 2025-10-23.
  9. "Google's ex-CEO Eric Schmidt says America is ill-equipped to address AI's 'biggest challenge'". The Times of India. ISSN 0971-8257. Retrieved 2025-10-23.