Artificial Intelligence Act

European Union regulation

  - Title: Artificial Intelligence Act [lower-alpha 1]
  - Made by: European Parliament and Council
  - European Parliament vote: 13 March 2024
  - Council vote: 21 May 2024
  - Preparative text: Commission proposal 2021/206
  - Status: Current legislation

The Artificial Intelligence Act (AI Act) [lower-alpha 1] is a European Union regulation concerning artificial intelligence (AI).

It establishes a common regulatory and legal framework for AI within the European Union (EU). [1] Proposed by the European Commission on 21 April 2021, [2] it passed the European Parliament on 13 March 2024, [3] and was unanimously approved by the EU Council on 21 May 2024. [4] The Act also creates a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation. [5] Like the EU's General Data Protection Regulation, the Act can apply extraterritorially to providers from outside the EU if they have users within the EU. [6]

It covers all types of AI across a broad range of sectors, with exceptions for AI systems used solely for military, national security, research and non-professional purposes. [7] As a piece of product regulation, it does not confer rights on individuals, but regulates the providers of AI systems and entities using AI in a professional context. [6] The draft Act was revised to address the rise in popularity of generative artificial intelligence systems, such as ChatGPT, whose general-purpose capabilities did not fit the main framework. [8] More restrictive regulations are planned for powerful generative AI systems with systemic impact. [9]

The Act classifies non-exempted AI applications by their risk of causing harm. There are four levels—unacceptable, high, limited, minimal—plus an additional category for general-purpose AI. Applications with unacceptable risks are banned. High-risk applications must comply with security, transparency and quality obligations and undergo conformity assessments. Limited-risk applications only have transparency obligations, while minimal-risk applications are not regulated. For general-purpose AI, transparency requirements are imposed, with additional evaluations for high-capability models. [9] [10]

Provisions

Risk categories

There are different risk categories depending on the type of application, with a specific category dedicated to general-purpose generative AI:

  - Unacceptable risk: these applications are banned outright.
  - High risk: these applications must comply with security, transparency and quality obligations and undergo conformity assessments.
  - Limited risk: these applications are subject only to transparency obligations.
  - Minimal risk: these applications are not regulated.
  - General-purpose AI: transparency requirements apply, with additional evaluations for high-capability models deemed to pose systemic risk.
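As an illustration only, the tiered structure can be read as a mapping from risk level to the obligations summarised above. The following Python sketch is not part of the Act; the enum names and obligation strings are paraphrases of the article text, not legal terms.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"
    GENERAL_PURPOSE = "general-purpose AI"

# Paraphrased obligations attached to each tier (illustrative, not legal text).
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["banned from the EU market"],
    RiskLevel.HIGH: [
        "security, transparency and quality obligations",
        "conformity assessment before being placed on the market",
    ],
    RiskLevel.LIMITED: ["transparency obligations"],
    RiskLevel.MINIMAL: [],  # not regulated
    RiskLevel.GENERAL_PURPOSE: [
        "transparency requirements",
        "additional evaluations for high-capability (systemic-risk) models",
    ],
}

def obligations_for(level: RiskLevel) -> list[str]:
    """Return the paraphrased obligations for a given risk tier."""
    return OBLIGATIONS[level]

if __name__ == "__main__":
    for level in RiskLevel:
        print(f"{level.value}: {obligations_for(level) or ['no specific obligations']}")
```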

Exemptions

Articles 2.3 and 2.6 exempt AI systems used for military or national security purposes or pure scientific research and development from the AI Act. [11]

Article 5.2 bans algorithmic video surveillance only if it is conducted in real time. Exceptions allow real-time algorithmic video surveillance for certain policing aims, including "a real and present or real and foreseeable threat of terrorist attack". [11]

Recital 31 of the act prohibits "AI systems providing social scoring of natural persons by public or private actors", but allows for "lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law." [17] La Quadrature du Net interprets this exemption as permitting sector-specific social scoring systems, [11] such as the suspicion score used by the French family payments agency Caisse d'allocations familiales. [18] [11]

Institutional governance

The AI Act, per the European Parliament Legislative Resolution of 13 March 2024, establishes various new institutions in Article 64 and the following articles. These institutions are tasked with implementing and enforcing the AI Act. The approach combines centralised and decentralised, as well as public and private, enforcement, with various institutions and actors interacting at both EU and national levels.

The following new institutions will be established: [19] [20]

  1. AI Office: attached to the European Commission, this authority will coordinate the implementation of the AI Act in all Member States and oversee the compliance of general-purpose AI providers.
  2. European Artificial Intelligence Board: composed of one representative from each Member State, the Board will advise and assist the Commission and Member States to facilitate the consistent and effective application of the AI Act. Its tasks include gathering and sharing technical and regulatory expertise, providing recommendations, written opinions, and other advice.
  3. Advisory Forum: established to advise and provide technical expertise to the Board and the Commission, this forum will represent a balanced selection of stakeholders, including industry, start-ups, small and medium-sized enterprises, civil society, and academia, ensuring that a broad spectrum of opinions is represented during the implementation and application process.
  4. Scientific Panel of Independent Experts: this panel will provide technical advice and input to the AI Office and national authorities, enforce rules for general-purpose AI models (notably by launching qualified alerts of possible risks to the AI Office), and ensure that the rules and implementations of the AI Act correspond to the latest scientific findings.

While the establishment of new institutions is planned at the EU level, Member States will have to designate "national competent authorities". [21] These authorities will be responsible for ensuring the application and implementation of the AI Act, and for conducting "market surveillance". [22] They will verify that AI systems comply with the regulation, notably by checking the proper performance of conformity assessments and by appointing third parties to carry out external conformity assessments.

Enforcement

The Act regulates the entry to the EU internal market using the New Legislative Framework. It contains essential requirements that all AI systems must meet to access the EU market. These essential requirements are passed on to European Standardisation Organisations, which develop technical standards that further detail these requirements. [23]

The Act mandates that member states establish their own notifying authorities, which designate and monitor the notified bodies that perform third-party assessments. Conformity assessments are conducted to verify whether AI systems comply with the standards set out in the AI Act. [24] This assessment can be done in two ways: either through self-assessment, where the AI system provider checks conformity, or through third-party conformity assessment, where a notified body conducts the assessment. [25] Notified bodies also have the authority to carry out audits to ensure proper conformity assessments. [26]

Criticism has arisen regarding the fact that many high-risk AI systems do not require third-party conformity assessments. [27] [28] [29] Some commentators argue that independent third-party assessments are necessary for high-risk AI systems to ensure safety before deployment. Legal scholars have suggested that AI systems capable of generating deepfakes for political misinformation or creating non-consensual intimate imagery should be classified as high-risk and subjected to stricter regulation. [30]

Legislative procedure

In February 2020, the European Commission published the "White Paper on Artificial Intelligence – A European approach to excellence and trust". [31] In October 2020, debates between EU leaders took place in the European Council. On 21 April 2021, the AI Act was officially proposed by the Commission. On 6 December 2022, the Council of the EU adopted its general approach, allowing negotiations to begin with the European Parliament. On 9 December 2023, after three days of "marathon" talks, the EU Council and Parliament concluded an agreement. [32]

The law was passed in the European Parliament on 13 March 2024, by a vote of 523 for, 46 against, and 49 abstaining. [33] It was approved by the EU Council on 21 May 2024. [4] It will come into force 20 days after its publication in the Official Journal, expected at the end of the legislative term in May. [3] [34] After coming into force, there will be a delay before it becomes applicable, which depends on the type of application: 6 months for bans on "unacceptable risk" AI systems, 9 months for codes of practice, 12 months for general-purpose AI systems, 24 months for everything else, and 36 months for some obligations related to "high-risk" AI systems. [34] [33]
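To make the staggered timetable concrete, the sketch below adds each month offset from the paragraph above to an entry-into-force date. The date used here is a placeholder assumption (the actual date depends on publication in the Official Journal); only the month offsets come from the sources cited above.

```python
from datetime import date

# Placeholder assumption: an illustrative entry-into-force date.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Month offsets taken from the paragraph above.
OFFSETS_MONTHS = {
    "bans on 'unacceptable risk' AI systems": 6,
    "codes of practice": 9,
    "rules for general-purpose AI systems": 12,
    "most remaining obligations": 24,
    "some obligations for 'high-risk' AI systems": 36,
}

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of months (same day of month)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

for item, months in OFFSETS_MONTHS.items():
    print(f"{item}: applicable from {add_months(ENTRY_INTO_FORCE, months)}")
```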

Reactions

La Quadrature du Net (LQDN) described the AI Act as "tailor-made for the tech industry, European police forces as well as other large bureaucracies eager to automate social control". LQDN argued that the act's reliance on self-regulation and its exemptions render it "largely incapable of standing in the way of the social, political and environmental damage linked to the proliferation of AI". [11]

Notes

  1. Officially the Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828

References

  1. "Proposal for a Regulation laying down harmonised rules on artificial intelligence: Shaping Europe's digital future". digital-strategy.ec.europa.eu. 21 April 2021. Archived from the original on 4 January 2023. Retrieved 9 January 2023.
  2. "EUR-Lex – 52021PC0206 – EN – EUR-Lex". eur-lex.europa.eu. Archived from the original on 23 August 2021. Retrieved 7 September 2021.
  3. 1 2 "World's first major act to regulate AI passed by European lawmakers". CNBC. 14 March 2024. Archived from the original on 13 March 2024. Retrieved 13 March 2024.
  4. 1 2 Browne, Ryan (21 May 2024). "World's first major law for artificial intelligence gets final EU green light". CNBC. Archived from the original on 21 May 2024. Retrieved 22 May 2024.
  5. MacCarthy, Mark; Propp, Kenneth (4 May 2021). "Machines learn that Brussels writes the rules: The EU's new AI regulation". Brookings. Archived from the original on 27 October 2022. Retrieved 7 September 2021.
  6. 1 2 3 Mueller, Benjamin (4 May 2021). "The Artificial Intelligence Act: A Quick Explainer". Center for Data Innovation. Archived from the original on 14 October 2022. Retrieved 6 January 2024.
  7. "Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world". Council of the EU. 9 December 2023. Archived from the original on 10 January 2024. Retrieved 6 January 2024.
  8. Coulter, Martin (7 December 2023). "What is the EU AI Act and when will regulation come into effect?". Reuters. Archived from the original on 10 December 2023. Retrieved 11 January 2024.
  9. 1 2 Espinoza, Javier (9 December 2023). "EU agrees landmark rules on artificial intelligence". Financial Times. Archived from the original on 29 December 2023. Retrieved 6 January 2024.
  10. 1 2 3 4 "EU AI Act: first regulation on artificial intelligence". European Parliament News. Archived from the original on 10 January 2024. Retrieved 6 January 2024.
  11. 1 2 3 4 5 6 With the AI Act adopted, the techno-solutionist gold-rush can continue, La Quadrature du Net, 22 May 2024, Wikidata   Q126064181, archived from the original on 23 May 2024
  12. Mantelero, Alessandro (2022), Beyond Data. Human Rights, Ethical and Social Impact Assessment in AI, Information Technology and Law Series, vol. 36, The Hague: Springer-T.M.C. Asser Press, doi: 10.1007/978-94-6265-531-7 , ISBN   978-94-6265-533-1
  13. 1 2 Bertuzzi, Luca (7 December 2023). "AI Act: EU policymakers nail down rules on AI models, butt heads on law enforcement". Euractiv. Archived from the original on 8 January 2024. Retrieved 6 January 2024.
  14. "Regulating Chatbots and Deepfakes". mhc.ie. Mason Hayes & Curran. Archived from the original on 9 January 2024. Retrieved 11 January 2024.
  15. Liboreiro, Jorge (21 April 2021). "'Higher risk, stricter rules': EU's new artificial intelligence rules". Euronews. Archived from the original on 6 January 2024. Retrieved 6 January 2024.
  16. Veale, Michael (2021). "Demystifying the Draft EU Artificial Intelligence Act". Computer Law Review International. 22 (4). arXiv: 2107.03721 . doi:10.31235/osf.io/38p5f. S2CID   241559535.
  17. "European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD))". Archived from the original on 21 May 2024. Retrieved 24 May 2024.
  18. Notation des allocataires : la CAF étend sa surveillance à l'analyse des revenus en temps réel (in French), La Quadrature du Net, 13 March 2024, Wikidata   Q126066451, archived from the original on 1 April 2024
  19. Bertuzzi, Luca (21 November 2023). "EU lawmakers to discuss AI rulebook's revised governance structure". Euractiv. Archived from the original on 22 May 2024. Retrieved 18 April 2024.
  20. Friedl, Paul; Gasiola, Gustavo Gil (7 February 2024). "Examining the EU's Artificial Intelligence Act". Verfassungsblog. Archived from the original on 22 May 2024. Retrieved 16 April 2024.
  21. "Artificial Intelligence Act". European Parliament. 13 March 2024. Archived from the original on 18 April 2024. Retrieved 18 April 2024. Article 3 – definitions. Excerpt: "'national competent authority' means the national supervisory authority, the notifying authority and the market surveillance authority;"
  22. "Artificial Intelligence – Questions and Answers". European Commission. 12 December 2023. Archived from the original on 6 April 2024. Retrieved 17 April 2024.
  23. Tartaro, Alessio (2023). "Regulating by standards: current progress and main challenges in the standardisation of Artificial Intelligence in support of the AI Act". European Journal of Privacy Law and Technologies. 1 (1). Archived from the original on 3 December 2023. Retrieved 10 December 2023.
  24. "EUR-Lex – 52021SC0084 – EN – EUR-Lex". eur-lex.europa.eu. Archived from the original on 17 April 2023. Retrieved 17 April 2023.
  25. Veale, Michael; Borgesius, Frederik Zuiderveen (1 August 2021). "Demystifying the Draft EU Artificial Intelligence Act — Analysing the good, the bad, and the unclear elements of the proposed approach". Computer Law Review International. 22 (4): 97–112. arXiv: 2107.03721 . doi:10.9785/cri-2021-220402. ISSN   2194-4164. S2CID   235765823.
  26. Casarosa, Federica (1 June 2022). "Cybersecurity certification of Artificial Intelligence: a missed opportunity to coordinate between the Artificial Intelligence Act and the Cybersecurity Act". International Cybersecurity Law Review. 3 (1): 115–130. doi:10.1365/s43439-021-00043-6. ISSN   2662-9739. S2CID   258697805.
  27. Smuha, Nathalie A.; Ahmed-Rengers, Emma; Harkens, Adam; Li, Wenlong; MacLaren, James; Piselli, Riccardo; Yeung, Karen (5 August 2021). "How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission's Proposal for an Artificial Intelligence Act". doi:10.2139/ssrn.3899991. S2CID   239717302. SSRN   3899991. Archived from the original on 26 February 2024. Retrieved 14 March 2024.
  28. Ebers, Martin; Hoch, Veronica R. S.; Rosenkranz, Frank; Ruschemeier, Hannah; Steinrötter, Björn (December 2021). "The European Commission's Proposal for an Artificial Intelligence Act—A Critical Assessment by Members of the Robotics and AI Law Society (RAILS)". J. 4 (4): 589–603. doi: 10.3390/j4040043 . ISSN   2571-8800.
  29. Almada, Marco; Petit, Nicolas (27 October 2023). "The EU AI Act: Between Product Safety and Fundamental Rights". Robert Schuman Centre for Advanced Studies Research Paper No. 2023/59. doi:10.2139/ssrn.4308072. S2CID   255388310. SSRN   4308072. Archived from the original on 17 April 2023. Retrieved 14 March 2024.
  30. Romero-Moreno, Felipe (29 March 2024). "Generative AI and deepfakes: a human rights approach to tackling harmful content". International Review of Law, Computers & Technology. 39 (2): 1–30. doi: 10.1080/13600869.2024.2324540 . hdl: 2299/20431 . ISSN   1360-0869.
  31. "White Paper on Artificial Intelligence – a European approach to excellence and trust". European Commission. 19 February 2020. Archived from the original on 5 January 2024. Retrieved 6 January 2024.
  32. "Timeline – Artificial intelligence". European Council. 9 December 2023. Archived from the original on 6 January 2024. Retrieved 6 January 2024.
  33. 1 2 "Artificial Intelligence Act: MEPs adopt landmark law". European Parliament. 13 March 2024. Archived from the original on 15 March 2024. Retrieved 14 March 2024.
  34. 1 2 David, Emilia (14 December 2023). "The EU AI Act passed — now comes the waiting". The Verge. Archived from the original on 10 January 2024. Retrieved 6 January 2024.