The first independent International AI Safety Report was published on 29 January 2025. [1] The report assesses a wide range of risks posed by general-purpose AI and how to mitigate them. [2] [3] [4] Commissioned after the 2023 AI Safety Summit at Bletchley Park in the United Kingdom, the report was intended to inform discussion at the 2025 AI Action Summit in Paris, France. [5] [2] It was produced by a cohort of 96 artificial intelligence experts led by Canadian machine learning pioneer Yoshua Bengio, often referred to as one of the "godfathers" of AI. [4] [2] [6]
In examining what general-purpose AI can do, the report recognised that its capabilities have increased rapidly and that the pace of further advances is highly unpredictable. [4] [2] Policymakers thus face an "evidence dilemma" and could end up introducing mitigation measures that prove ineffective, unnecessary, or poorly timed. [4]
The report identified several concrete harms from AI, including the violation of privacy; the enablement of scams; and the creation of deepfakes with sexual content, which expose women and children in particular to potential violence and abuse. [3] [2] Other harms include discriminatory outcomes due to biased models, as well as problems caused by hallucinations and the unreliability of AI systems. [2]