Artificial Intelligence Act

Regulation (EU) 2024/1689
European Union regulation
Text with EEA relevance
Title: Artificial Intelligence Act [1]
Made by: European Parliament and Council
Journal reference: OJ L, 2024/1689, 12.7.2024
History
European Parliament vote: 13 March 2024
Council vote: 21 May 2024
Entry into force: 1 August 2024
Preparative texts
Commission proposal: 2021/206
Other legislation
Amends: Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828
Status: Current legislation

The Artificial Intelligence Act (AI Act) [1] is a European Union regulation concerning artificial intelligence (AI). It establishes a common regulatory and legal framework for AI within the European Union (EU). [2] The regulation entered into force on 1 August 2024, [3] with its provisions becoming applicable gradually over the following 6 to 36 months. [4]

It covers most AI systems across a wide range of sectors, with exemptions for AI used solely for military, national security, or scientific research purposes, or for non-professional use. [5] As a form of product regulation, it does not create individual rights; instead, it places duties on AI providers and on organisations that use AI in a professional context. [6] [7]

The Act classifies non-exempt AI applications by their risk of causing harm. There are four levels – unacceptable, high, limited, minimal – plus an additional category for general-purpose AI. [8]

For general-purpose AI, transparency requirements are imposed, with reduced requirements for open source models, and additional evaluations for high-capability models. [9] [10]

The Act also creates a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation. [11] Like the EU's General Data Protection Regulation, the Act can apply extraterritorially to providers from outside the EU if they have users within the EU. [7]

Proposed by the European Commission on 21 April 2021, [12] it passed the European Parliament on 13 March 2024, [13] and was unanimously approved by the EU Council on 21 May 2024. [14] The draft Act was revised to address the rise in popularity of generative artificial intelligence systems, such as ChatGPT, whose general-purpose capabilities did not fit the main framework. [15]

Provisions

Risk categories

There are different risk categories depending on the type of application, with a specific category dedicated to general-purpose generative AI.

The risk-based scheme follows a product-safety model in which regulatory duties are assigned to the providers and deployers of AI systems, and these duties become more demanding as the potential impact on health, safety, or fundamental rights increases. This structure is meant to ensure that oversight focuses on systems likely to create significant risks while allowing lighter approaches for uses considered less sensitive. [24] Some legal scholars also argue that, in practice, the Act frames "trustworthy AI" as systems that can show compliance with these safety and risk thresholds. [25] According to an initial appraisal by the European Parliamentary Research Service, the Commission's impact assessment drew on stakeholder consultations and a wide range of existing research when comparing policy options for this risk-based framework. [26]

Added in 2023, the general-purpose AI category includes foundation models (for example, ChatGPT) that can perform a wide range of tasks. If a model's weights and design are made open source, developers must publish a training data summary and a copyright policy; closed-source models must meet broader transparency requirements. High-impact models that pose systemic risks (those requiring more than 10^25 floating-point operations to train) [27] must undergo additional evaluation. [6] [10] A General-Purpose AI Code of Practice, published on 10 July 2025, outlines three main chapters on transparency, copyright, and safety and security to help providers demonstrate compliance with the AI Act. Participation in the code is voluntary. [28]
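The compute threshold above is the Act's presumption trigger for systemic risk. A minimal sketch of the rule (the constant and function names are illustrative; in practice, designation can also follow from a Commission decision rather than compute alone):

```python
# Training-compute presumption for systemic risk under the AI Act:
# a general-purpose model trained with more than 10^25 floating-point
# operations is presumed to pose systemic risk. Names are illustrative.
SYSTEMIC_RISK_FLOPS = 10**25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a model crosses the Act's compute presumption."""
    return training_flops > SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(2.1e25))  # True: above 10^25 FLOPs
print(presumed_systemic_risk(5e24))    # False: below the threshold
```

The threshold is a presumption, not a ceiling: a provider whose model crosses it takes on the extra evaluation duties unless the presumption is rebutted.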

Beyond these basic transparency duties, the Act sets a common list of obligations for providers of general-purpose AI models. They must publish a summary of the training data, adopt a policy to comply with copyright law, and provide technical documentation to downstream providers and supervisory authorities. Models that are designated as posing systemic risk must also carry out model evaluations and adversarial testing, assess and mitigate risks such as bias and security failures, report serious incidents, and ensure an adequate level of cybersecurity. [29]

Exemptions

Articles 2.3 and 2.6 exempt AI systems used for military or national security purposes or pure scientific research and development from the AI Act. [16] In particular, the Regulation does not apply where AI systems are used exclusively for military, defence or national security purposes, or to systems developed and put into service solely for scientific research and development; if such systems are later used for other purposes (for example, civilian or law-enforcement uses), the Act applies. [1] These activities remain governed by separate EU and national rules on defence, security, and intelligence rather than by the AI Act itself. [30]

Article 5.2 bans algorithmic video surveillance of people ("the use of 'real-time' remote biometric identification systems in publicly accessible spaces" [31]) only when it is conducted in real time. Exceptions permit real-time remote biometric identification for certain law-enforcement purposes, including "a real and present or real and foreseeable threat of terrorist attack". [16]

Recital 31 of the Act states that it aims to prohibit "AI systems providing social scoring of natural persons by public or private actors", but allows for "lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law." [32] La Quadrature du Net interprets this exemption as permitting sector-specific social scoring systems, [16] such as the suspicion score used by the French family payments agency Caisse d'allocations familiales. [33] [16]

Governance

The AI Act establishes various new bodies in Article 64 and the following articles. These bodies are tasked with implementing and enforcing the Act. The approach combines EU-level coordination with national implementation, involving both public authorities and private sector participation.

The following new bodies will be established: [34] [35]

  1. AI Office: attached to the European Commission, this authority will coordinate the implementation of the AI Act in all Member States and oversee the compliance of general-purpose AI providers. It can also request information or open investigations when serious issues are suspected. [6]
  2. European Artificial Intelligence Board: composed of one representative from each Member State, the Board will advise and assist the Commission and Member States to facilitate the consistent and effective application of the AI Act. Its tasks include gathering and sharing technical and regulatory expertise, providing recommendations, written opinions, and other advice.
  3. Advisory Forum: established to advise and provide technical expertise to the Board and the Commission, this forum will represent a balanced selection of stakeholders, including industry, start-ups, small and medium-sized enterprises, civil society, and academia, ensuring that a broad spectrum of opinions is represented during the implementation and application process.
  4. Scientific Panel of Independent Experts: this panel will provide technical advice and input to the AI Office and national authorities, enforce rules for general-purpose AI models (notably by launching qualified alerts of possible risks to the AI Office), and ensure that the rules and implementations of the AI Act correspond to the latest scientific findings.

While the establishment of new bodies is planned at the EU level, Member States will have to designate "national competent authorities." [note 1] These authorities will be responsible for ensuring the application and implementation of the AI Act, and for conducting "market surveillance." [36] They will verify that AI systems comply with the regulation, notably by checking the proper performance of conformity assessments and by appointing third parties to carry out external conformity assessments.

Once harmonised standards for the AI Act are published in the Official Journal, compliant products are presumed to conform with the regulation; these standards are being drafted by CEN/CENELEC Joint Technical Committee 21 (JTC 21). [37]

Enforcement

The Act regulates entry to the EU internal market using the New Legislative Framework. It contains essential requirements that all AI systems must meet to access the EU market. These essential requirements are passed on to European Standardisation Organisations, which develop technical standards that further detail these requirements. [38] These standards are developed by CEN/CENELEC JTC 21. [39]

The Act mandates that Member States establish notifying authorities, which designate the notified bodies that carry out third-party conformity assessment. Conformity assessments verify whether AI systems comply with the standards set out in the AI Act. [40] This assessment can be done in two ways: through self-assessment, where the AI system provider checks conformity itself, or through third-party conformity assessment, conducted by a notified body. [23] Notified bodies also have the authority to carry out audits to ensure proper conformity assessments. [41]

Each EU Member State must also designate a national supervisory authority to monitor compliance, handle complaints, and share information with the European AI Office, which coordinates enforcement across the Union. [6]

Together, these actors form a multi-level supervision system. National market surveillance and supervisory authorities carry out checks and enforce the rules on high-risk AI systems and other regulated uses, while EU-level bodies such as the AI Office and the European Artificial Intelligence Board support implementation of the Act and help ensure consistent application, in particular for general-purpose AI models. [24]

Criticism has arisen regarding the fact that many high-risk AI systems do not require third-party conformity assessments. [42] [43] [44] Some commentators argue that independent third-party assessments are necessary for high-risk AI systems to ensure safety before deployment. Legal scholars have suggested that AI systems capable of generating deepfakes for political misinformation or creating non-consensual intimate imagery should be classified as high-risk and subjected to stricter regulation. [45] However, the Act does not explicitly mandate specific deepfake detection mechanisms, nor does it comprehensively address the broader human rights implications stemming from the proliferation of such synthetic media. [46]

Penalties

Non-compliance with the prohibitions in Article 5 is subject to administrative fines of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover, whichever is higher. Other operator obligations may be sanctioned with fines of up to EUR 15,000,000 or 3% of worldwide annual turnover, whichever is higher; providing incorrect, incomplete or misleading information may be fined up to EUR 7,500,000 or 1% of worldwide annual turnover. For SMEs, including start-ups, the applicable cap is the lower of the relevant percentage or the fixed amount. For providers of general-purpose AI models, the Commission may impose separate fines up to EUR 15,000,000 or 3% of worldwide annual turnover in the specific cases listed in Article 101. [1]
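The cap arithmetic described above can be sketched as follows. This is an illustrative summary only: the tier keys are labels chosen here, not terms from the Act, and the actual fine within the cap is set case by case.

```python
# Administrative fine caps under the AI Act, as described above.
# Each tier pairs a fixed EUR ceiling with a share of worldwide annual
# turnover; ordinarily the HIGHER of the two applies, but for SMEs and
# start-ups the LOWER applies. Tier keys are illustrative labels.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),    # Article 5 violations
    "other_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def fine_cap(tier: str, turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum administrative fine for a given tier and annual turnover."""
    fixed, share = FINE_TIERS[tier]
    pick = min if is_sme else max
    return pick(fixed, share * turnover_eur)

# Large firm, Article 5 breach: 7% of EUR 2bn exceeds the EUR 35m floor.
print(fine_cap("prohibited_practices", 2_000_000_000))        # 140000000.0
# Start-up with EUR 4m turnover: the lower of EUR 15m and 3% applies.
print(fine_cap("other_obligations", 4_000_000, is_sme=True))  # 120000.0
```

The `pick = min if is_sme else max` line captures the SME carve-out in one place: the same two candidate caps are computed, only the selection rule flips.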

Regulatory sandboxes

The Act provides for "regulatory sandboxes", i.e., controlled testing environments run by national competent authorities, where developers can trial AI systems under supervision before placing them on the market, to support innovation while maintaining safeguards. Member States are encouraged to set up such sandboxes under coordinated EU rules, and small and medium-sized enterprises (SMEs) and start-ups may receive priority access to test systems in a compliant setting. [47]

Legislative procedure

In February 2020, the European Commission published "White Paper on Artificial Intelligence – A European approach to excellence and trust". [48] In October 2020, EU leaders debated the topic in the European Council. On 21 April 2021, the AI Act was officially proposed by the Commission. [12] On 6 December 2022, the Council of the EU adopted its general approach, allowing negotiations with the European Parliament to begin. On 9 December 2023, after three days of "marathon" talks, the Council and Parliament concluded an agreement. [49] [50]

The law was passed in the European Parliament on 13 March 2024, when MEPs adopted the final text in plenary session by a vote of 523 for, 46 against, and 49 abstaining. [21] It was approved by the EU Council on 21 May 2024. [14] It entered into force on 1 August 2024, [3] 20 days after being published in the Official Journal on 12 July 2024. [13] [51] Its provisions become applicable after transition periods that depend on the type of application: 6 months for bans on "unacceptable risk" AI systems, 9 months for codes of practice, 12 months for general-purpose AI systems, 24 months for most other provisions, and 36 months for some obligations related to "high-risk" AI systems. [51] [21]
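That timetable can be sketched as simple month offsets from the entry-into-force date. This is a rough illustration, ignoring the Act's exact day-counting rules, which fix the precise application dates:

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    # Shift a date forward by whole months; every offset here lands on
    # the 1st of a month, so no day-of-month clamping is needed.
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, d.day)

# Transition periods stated in the Act, in months after entry into force.
TRANSITIONS = {
    "bans on 'unacceptable risk' AI systems": 6,
    "codes of practice": 9,
    "general-purpose AI obligations": 12,
    "most remaining provisions": 24,
    "some 'high-risk' obligations": 36,
}

for provision, months in sorted(TRANSITIONS.items(), key=lambda kv: kv[1]):
    print(f"~{add_months(ENTRY_INTO_FORCE, months)}: {provision}")
```

Running this prints the approximate milestones in order, from early 2025 (prohibitions) through mid-2027 (the last high-risk obligations).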

Legal commentators have described the AI Act as a first-of-its-kind, comprehensive framework for regulating AI across multiple sectors, and as a likely reference point for companies and regulators outside the EU when they design their own approaches to AI governance. [52]

Reactions

Experts have argued that although the law's jurisdiction is European, it could have far-reaching implications for international companies that plan to expand to Europe. [53] Anu Bradford of Columbia Law School has argued that the law provides significant momentum to the worldwide movement to regulate AI technologies. [53]

Amnesty International stated that, while EU policymakers present the AI Act as a global model for AI regulation, the legislation fails to take basic human rights principles into account and offers only limited protections to impacted and marginalised people. It argued that the Act does not ban the reckless use and export of what it calls "draconian AI technologies", fails to ensure equal protection for migrants, refugees and asylum seekers, and lacks adequate accountability and transparency provisions, which in its view risks exacerbating human rights abuses. [54]

Some tech watchdogs have argued that there were major loopholes in the law that would allow large tech monopolies to entrench their advantage in AI, or to lobby to weaken rules. [55] [56] Some startups welcomed the clarification the Act provides, while others argued the additional regulation would make European startups uncompetitive compared with American and Chinese startups. [56] Legal analysis has also suggested that, while the Act may serve as an early reference point for AI governance frameworks, its detailed requirements could add significant compliance complexity for providers operating across multiple jurisdictions. [52]

La Quadrature du Net (LQDN) described the AI Act as "tailor-made for the tech industry, European police forces as well as other large bureaucracies eager to automate social control", arguing that the Act's reliance on self-regulation and exemptions renders it "largely incapable of standing in the way of the social, political and environmental damage linked to the proliferation of AI." [16] Commentary on national security has also noted that the Act's sectoral exclusion for security and defence sits alongside separate security frameworks, raising questions about how these exemptions will interact with future security and defence policy. [30]

Academic commentary has described the Act as a "medley" of product-safety and fundamental-rights frameworks and has cautioned that its product-safety approach relies heavily on provider self-assessment, raising concerns about effective enforcement. [23] [44] An initial appraisal of the Commission's impact assessment by the European Parliamentary Research Service likewise noted that much of the analysis is qualitative and that some estimates, including for compliance costs under the different options, depend on modelling assumptions and available evidence. [26] Some scholars also note that linking "trustworthy AI" to compliance with these risk thresholds can make the underlying policy choices about what levels of risk are acceptable less visible. [25] Separate proposals for algorithmic impact assessments, such as a framework developed by the AI Now Institute for public agencies, argue that impact assessments should map affected communities, examine fairness, bias and due process, and allow ongoing public and researcher scrutiny of automated decision systems. [20]

A broad civil society coalition coordinated by European Digital Rights (EDRi) published a joint analysis comparing the final AI Act with its earlier demands and concluded that the law falls short of what is needed to protect rights to privacy, equality, non-discrimination and the presumption of innocence. The coalition also called for stronger safeguards for people on the move and for people with disabilities, arguing that the final rules leave significant gaps in these areas. [57]

Organisations representing authors, performers, publishers and producers have also criticised the implementation of the Act, particularly its rules for general-purpose AI. A joint statement from 38 global creators' organisations, reported by Le Monde, argued that the Code of Practice, the guidelines for general-purpose AI and the template for training-data summaries under Article 53 do not adequately protect intellectual property rights or ensure sufficient transparency about the data used to train generative AI models, describing the outcome as a missed opportunity to deliver on the Act's stated goals. This criticism has been discussed as part of wider debates about how the AI Act and its general-purpose AI provisions may affect the position of cultural and creative industries in the EU. [58]

Building on these critiques, scholars have raised concerns in particular about the Act's approach to regulating the secondary uses of trained AI models, which may have significant societal impacts. [59] [60] They argue that the Act's narrow focus on deployment contexts and reliance on providers to self-declare intended purposes creates opportunities for misinterpretation and insufficient oversight. [60] Additionally, the Act often exempts open-source models and neglects critical lifecycle phases, such as the reuse of trained models. [59] Trained models store decision-mappings as parameters that approximate patterns from the training data. This "model data" is distinct from the original training data and is typically classified as non-personal, as it often cannot be traced back to individual data subjects. Consequently, it falls outside the scope of other regulations like the GDPR. [59] Some scholars also criticize the AI Act for not sufficiently regulating the reuse of model data, warning of potentially harmful consequences for individual privacy, social equity, and democratic processes. [59] Other commentators have also questioned the democratic legitimacy, transparency, and practical enforceability of the new rules for general-purpose AI, noting that their impact will depend on how oversight bodies apply them over time. [29]

Notes

  1. Defined in the regulation [1] in Article 3, §48: "'national competent authority' means a notifying authority or a market surveillance authority; as regards AI systems put into service or used by Union institutions, agencies, offices and bodies, references to national competent authorities or market surveillance authorities in this Regulation shall be construed as references to the European Data Protection Supervisor"

References

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)
  2. "Proposal for a Regulation laying down harmonised rules on artificial intelligence: Shaping Europe's digital future". digital-strategy.ec.europa.eu. 21 April 2021. Archived from the original on 4 January 2023. Retrieved 6 October 2024.
  3. "AI Act enters into force" (Press release). Brussels: European Commission. 1 August 2024. Retrieved 5 August 2024.
  4. "Timeline of Developments". artificialintelligenceact.eu. Future of Life Institute. Retrieved 13 July 2024.
  5. "Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world". Council of the EU. 9 December 2023. Archived from the original on 10 January 2024. Retrieved 6 January 2024.
  6. "EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act". Retrieved 27 October 2025.
  7. Mueller, Benjamin (4 May 2021). "The Artificial Intelligence Act: A Quick Explainer". Center for Data Innovation. Archived from the original on 14 October 2022. Retrieved 6 January 2024.
  8. Lilkov, Dimitar (2021). "Regulating artificial intelligence in the EU: A risky game". European View. 20 (2): 166–174. doi:10.1177/17816858211059248.
  9. Espinoza, Javier (9 December 2023). "EU agrees landmark rules on artificial intelligence". Financial Times. Archived from the original on 29 December 2023. Retrieved 6 January 2024.
  10. "EU AI Act: first regulation on artificial intelligence". European Parliament News. Archived from the original on 10 January 2024. Retrieved 6 January 2024.
  11. MacCarthy, Mark; Propp, Kenneth (4 May 2021). "Machines learn that Brussels writes the rules: The EU's new AI regulation". Brookings. Archived from the original on 27 October 2022. Retrieved 7 September 2021.
  12. Proposal for a Regulation laying down harmonised rules on artificial intelligence
  13. "World's first major act to regulate AI passed by European lawmakers". CNBC. 14 March 2024. Archived from the original on 13 March 2024. Retrieved 13 March 2024.
  14. Browne, Ryan (21 May 2024). "World's first major law for artificial intelligence gets final EU green light". CNBC. Archived from the original on 21 May 2024. Retrieved 22 May 2024.
  15. Coulter, Martin (7 December 2023). "What is the EU AI Act and when will regulation come into effect?". Reuters. Archived from the original on 10 December 2023. Retrieved 11 January 2024.
  16. With the AI Act adopted, the techno-solutionist gold-rush can continue, La Quadrature du Net, 22 May 2024, Wikidata Q126064181, archived from the original on 23 May 2024.
  17. Henning, Maximilian (27 August 2025). "Is your AI trying to make you fall in love with it?". Euractiv. Archived from the original on 29 August 2025. Retrieved 4 September 2025.
  18. Mantelero, Alessandro (2024). "The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template". Computer Law & Security Review. 54: 106020. arXiv:2411.15149. doi:10.1016/j.clsr.2024.106020. ISSN 0267-3649.
  19. Mantelero, Alessandro (2022). Beyond Data: Human Rights, Ethical and Social Impact Assessment in AI. Information Technology and Law Series, vol. 36. The Hague: Springer-T.M.C. Asser Press. doi:10.1007/978-94-6265-531-7. ISBN 978-94-6265-533-1.
  20. AI Now Institute (9 April 2018). "Algorithmic Impact Assessments Report: A Practical Framework for Public Agency Accountability". AI Now Institute. Retrieved 29 November 2025.
  21. "Artificial Intelligence Act: MEPs adopt landmark law". European Parliament. 13 March 2024. Archived from the original on 15 March 2024. Retrieved 14 March 2024.
  22. Liboreiro, Jorge (21 April 2021). "'Higher risk, stricter rules': EU's new artificial intelligence rules". Euronews. Archived from the original on 6 January 2024. Retrieved 6 January 2024.
  23. Veale, Michael; Borgesius, Frederik Zuiderveen (1 August 2021). "Demystifying the Draft EU Artificial Intelligence Act — Analysing the good, the bad, and the unclear elements of the proposed approach". Computer Law Review International. 22 (4): 97–112. arXiv:2107.03721. doi:10.9785/cri-2021-220402. ISSN 2194-4164. S2CID 235765823.
  24. "Artificial intelligence act: EU Legislation in Progress". Archived from the original on 5 October 2025. Retrieved 29 November 2025.
  25. Laux, Johann; Wachter, Sandra; Mittelstadt, Brent (2024). "Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk". Regulation & Governance. 18 (1): 3–32. doi:10.1111/rego.12512. ISSN 1748-5991. PMC 10903109. PMID 38435808.
  26. "Artificial intelligence act: Initial Appraisal of a European Commission Impact Assessment". Archived from the original on 19 February 2025. Retrieved 29 November 2025.
  27. Bertuzzi, Luca (7 December 2023). "AI Act: EU policymakers nail down rules on AI models, butt heads on law enforcement". Euractiv. Archived from the original on 8 January 2024. Retrieved 6 January 2024.
  28. "The General-Purpose AI Code of Practice | Shaping Europe's digital future". digital-strategy.ec.europa.eu. Retrieved 27 October 2025.
  29. Gstrein, Oskar J.; Haleem, Noman; Zwitter, Andrej (1 August 2024). "General-purpose AI regulation and the European Union AI Act". policyreview.info. Retrieved 29 November 2025.
  30. Powell, Rosamund (31 July 2024). "The EU AI Act: National Security Implications". The Alan Turing Institute | Centre for Emerging Technology and Security. Retrieved 29 November 2025.
  31. "Article 5: Prohibited AI Practices". artificialintelligenceact.eu. Retrieved 2 May 2025.
  32. Artificial Intelligence Act, [1] Recital 31.
  33. Notation des allocataires : la CAF étend sa surveillance à l'analyse des revenus en temps réel (in French), La Quadrature du Net, 13 March 2024, Wikidata Q126066451, archived from the original on 1 April 2024.
  34. Bertuzzi, Luca (21 November 2023). "EU lawmakers to discuss AI rulebook's revised governance structure". Euractiv. Archived from the original on 22 May 2024. Retrieved 18 April 2024.
  35. Friedl, Paul; Gasiola, Gustavo Gil (7 February 2024). "Examining the EU's Artificial Intelligence Act". Verfassungsblog. doi:10.59704/789d6ad759d0a40b. Archived from the original on 22 May 2024. Retrieved 16 April 2024.
  36. "Artificial Intelligence – Questions and Answers". European Commission. 12 December 2023. Archived from the original on 6 April 2024. Retrieved 17 April 2024.
  37. "Artificial Intelligence". CEN-CENELEC. Retrieved 27 October 2025.
  38. Tartaro, Alessio (2023). "Regulating by standards: current progress and main challenges in the standardisation of Artificial Intelligence in support of the AI Act". European Journal of Privacy Law and Technologies. 1 (1). Archived from the original on 3 December 2023. Retrieved 10 December 2023.
  39. "With the AI Act, we need to mind the standards gap". CEPS. 25 April 2023. Retrieved 15 September 2024.
  40. Commission Staff Working Document: Impact Assessment Accompanying the Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.
  41. Casarosa, Federica (1 June 2022). "Cybersecurity certification of Artificial Intelligence: a missed opportunity to coordinate between the Artificial Intelligence Act and the Cybersecurity Act". International Cybersecurity Law Review. 3 (1): 115–130. doi:10.1365/s43439-021-00043-6. hdl:1814/77775. ISSN 2662-9739. S2CID 258697805.
  42. Smuha, Nathalie A.; Ahmed-Rengers, Emma; Harkens, Adam; Li, Wenlong; MacLaren, James; Piselli, Riccardo; Yeung, Karen (5 August 2021). "How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission's Proposal for an Artificial Intelligence Act". SSRN 3899991.
  43. Ebers, Martin; Hoch, Veronica R. S.; Rosenkranz, Frank; Ruschemeier, Hannah; Steinrötter, Björn (December 2021). "The European Commission's Proposal for an Artificial Intelligence Act—A Critical Assessment by Members of the Robotics and AI Law Society (RAILS)". J: Multidisciplinary Scientific Journal. 4 (4): 589–603. doi:10.3390/j4040043. ISSN 2571-8800.
  44. Almada, Marco; Petit, Nicolas (27 October 2023). "The EU AI Act: Between Product Safety and Fundamental Rights". SSRN 4308072.
  45. Romero-Moreno, Felipe (29 March 2024). "Generative AI and deepfakes: a human rights approach to tackling harmful content". International Review of Law, Computers & Technology. 39 (2): 297–326. doi:10.1080/13600869.2024.2324540. hdl:2299/20431. ISSN 1360-0869.
  46. Romero-Moreno, Felipe (23 June 2025). "Deepfake detection in generative AI: A legal framework proposal to protect human rights". Computer Law & Security Review. 58: 106162. doi:10.1016/j.clsr.2025.106162.
  47. "Artificial intelligence act and regulatory sandboxes | Think Tank | European Parliament". Archived from the original on 3 October 2025. Retrieved 27 October 2025.
  48. "White Paper on Artificial Intelligence – a European approach to excellence and trust". European Commission. 19 February 2020. Archived from the original on 5 January 2024. Retrieved 6 January 2024.
  49. Procedure 2021/0106/COD
  50. "Timeline – Artificial intelligence". European Council. 9 December 2023. Archived from the original on 6 January 2024. Retrieved 6 January 2024.
  51. David, Emilia (14 December 2023). "The EU AI Act passed — now comes the waiting". The Verge. Archived from the original on 10 January 2024. Retrieved 6 January 2024.
  52. "The First of its Kind: the EU AI Act and What it Means for the Future of AI". Fordham Journal of Corporate and Financial Law. 23 April 2024. Retrieved 29 November 2025.
  53. "Europe agreed on world-leading AI rules. How do they work and will they affect people everywhere?". AP News. 11 December 2023. Retrieved 31 May 2024.
  54. "EU: Artificial Intelligence rulebook fails to stop proliferation of abusive technologies". Amnesty International. 13 March 2024. Retrieved 29 November 2025.
  55. "EU parliament greenlights landmark artificial intelligence regulations". Al Jazeera. Retrieved 31 May 2024.
  56. "The EU passed the first AI law. Tech experts say it's 'bittersweet'". Euronews. 16 March 2024. Retrieved 31 May 2024.
  57. "EU's AI Act fails to set gold standard for human rights". European Digital Rights (EDRi). Retrieved 29 November 2025.
  58. "AI Act: 38 global creators' organizations condemn 'betrayal' of Europe's stated goals". Le Monde. 2 August 2025. Retrieved 30 November 2025.
  59. Mühlhoff, Rainer; Ruschemeier, Hannah (2024). "Regulating AI with Purpose Limitation for Models". Journal of AI Law and Regulation. 1 (1): 24–39. doi:10.21552/aire/2024/1/5.
  60. Mühlhoff, Rainer; Ruschemeier, Hannah (2024). "Updating Purpose Limitation for AI: A Normative Approach from Law and Philosophy". SSRN 4711621.