A superintelligence ban refers to proposed legal, ethical, or policy measures intended to restrict or prohibit the development of artificial superintelligence,[1] AI systems that would surpass human cognitive abilities in nearly all domains.[2][3] The idea arises from concerns that such systems could become uncontrollable, potentially posing existential threats to humanity or causing severe social and economic disruption.[2][4]
The concept of limiting or banning superintelligence research has roots in early 21st-century debates on artificial general intelligence (AGI) safety. Thinkers such as Nick Bostrom and Eliezer Yudkowsky warned that self-improving AI could rapidly exceed human oversight.[4][5] As advanced systems such as large language models and autonomous agents began demonstrating complex reasoning abilities, policymakers and ethicists increasingly discussed the need for legal constraints on the creation of systems capable of recursive self-improvement.[6]
In October 2025, the Future of Life Institute published a statement[7] calling for "a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in." Signatories included public figures such as Richard Branson and Steve Wozniak, as well as AI researchers such as Yoshua Bengio and Geoffrey Hinton.[2]
Supporters of a superintelligence ban argue that once AI systems surpass human intelligence, traditional containment, alignment, and control methods may fail. They contend that even limited experimentation with such systems could lead to irreversible outcomes, including the loss of human decision-making power or unintended global harm. Some propose international treaties modeled on the nuclear non-proliferation framework to prevent a competitive AI arms race.[2][8]
Opponents argue that a ban would be difficult to define and enforce, given the lack of a precise threshold distinguishing advanced AGI from superintelligence. They also warn that excessive restriction could slow scientific progress, hinder beneficial automation, and encourage unregulated underground research.[2][9]
Although no government has enacted an explicit superintelligence ban, the idea has been debated within the European Union, the United Nations, and several independent AI safety organizations. The Future of Life Institute, the Center for AI Safety, and other organizations have called for international cooperation to manage the risks associated with the pursuit of superintelligent systems.[citation needed] In 2024 and 2025, proposals for a temporary moratorium on frontier AI research circulated among major technology firms and research institutes, reflecting growing public concern over the trajectory of AI capabilities.[2]