| European Union regulation | Text with EEA relevance |
|---|---|
| Title | Artificial Intelligence Act [1] |
| Made by | European Parliament and Council |
| Journal reference | OJ L, 2024/1689, 12.7.2024 |
| **History** | |
| European Parliament vote | 13 March 2024 |
| Council vote | 21 May 2024 |
| Entry into force | 1 August 2024 |
| **Preparative texts** | |
| Commission proposal | 2021/206 |
| **Other legislation** | |
| Amends | Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144, and Directive 2014/90/EU |
| **Status** | Current legislation |
The Artificial Intelligence Act (AI Act) [1] is a European Union regulation concerning artificial intelligence (AI). It establishes a common regulatory and legal framework for AI within the European Union (EU). [2] It came into force on 1 August 2024, [3] with its provisions becoming applicable gradually over the following 6 to 36 months. [4]
It covers all types of AI across a broad range of sectors, with exceptions for AI systems used solely for military, national security, research and non-professional purposes. [5] As a piece of product regulation, it does not confer rights on individuals, but regulates the providers of AI systems and entities using AI in a professional context. [6]
The Act classifies non-exempt AI applications by their risk of causing harm. There are four levels – unacceptable, high, limited, minimal – plus an additional category for general-purpose AI. [7]
For general-purpose AI, transparency requirements are imposed, with reduced requirements for open source models, and additional evaluations for high-capability models. [8] [9]
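For scale, the Act presumes the highest-capability general-purpose models, those trained with more than 10^25 floating-point operations, to pose systemic risk and subjects them to the additional evaluations mentioned above. A minimal sketch of that threshold check follows, using the common 6 × parameters × tokens rule of thumb for training compute; the rule of thumb and the example figures are illustrative assumptions, not part of the Act:

```python
# Check a hypothetical training run against the AI Act's 10^25 FLOP
# presumption of "systemic risk" for general-purpose models.
THRESHOLD_FLOP = 1e25

def training_flop(params: float, tokens: float) -> float:
    """Estimate training compute with the common 6 * N * D rule of thumb
    (an approximation from the scaling-law literature, not from the Act)."""
    return 6 * params * tokens

# Illustrative figures: a 70-billion-parameter model on 15 trillion tokens.
flop = training_flop(params=70e9, tokens=15e12)
print(f"{flop:.2e} FLOP -> systemic risk presumed: {flop > THRESHOLD_FLOP}")
# 6.30e+24 FLOP -> systemic risk presumed: False
```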
The Act also creates a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation. [10] Like the EU's General Data Protection Regulation, the Act can apply extraterritorially to providers from outside the EU if they have users within the EU. [6]
Proposed by the European Commission on 21 April 2021, [11] it passed the European Parliament on 13 March 2024, [12] and was unanimously approved by the EU Council on 21 May 2024. [13] The draft Act was revised to address the rise in popularity of generative artificial intelligence systems, such as ChatGPT, whose general-purpose capabilities did not fit the main framework. [14]
There are different risk categories depending on the type of application, with a specific category dedicated to general-purpose generative AI (a schematic model of the tiers follows the list):

- Unacceptable risk: applications in this category are banned, subject to specific exemptions. They include AI that manipulates human behaviour, real-time remote biometric identification (such as facial recognition) in public spaces, and social scoring.
- High risk: applications expected to pose significant threats to health, safety, or the fundamental rights of persons, such as AI used in health, education, recruitment, critical-infrastructure management, law enforcement or justice. These are subject to quality, transparency, human-oversight and safety obligations and must undergo conformity assessment.
- General-purpose AI: this category, added in 2023, covers foundation models such as ChatGPT. They face transparency requirements, and the most capable models deemed to pose systemic risks face additional evaluations.
- Limited risk: these systems are subject to transparency obligations, such as informing users that they are interacting with an AI system.
- Minimal risk: this category includes, for example, AI used in video games and spam filters. These applications are not regulated under the Act.
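As a concrete illustration of the tiering, here is a minimal sketch of how a compliance tool might model the categories; the RiskTier enum, the obligations helper, and the duty lists are hypothetical simplifications of the Act, not its text:

```python
from enum import Enum

class RiskTier(Enum):
    """The Act's four risk levels (hypothetical model; names paraphrase the Act)."""
    UNACCEPTABLE = "unacceptable"  # banned, with narrow exemptions
    HIGH = "high"                  # heaviest obligations, conformity assessment
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # not regulated under the Act

def obligations(tier: RiskTier, general_purpose: bool = False) -> list[str]:
    """Return a simplified list of duties for a system in a given tier.

    General-purpose AI is an additional category layered on top of the
    tiers, carrying its own transparency requirements.
    """
    duties = {
        RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
        RiskTier.HIGH: ["conformity assessment", "quality and transparency duties",
                        "human oversight"],
        RiskTier.LIMITED: ["inform users they are interacting with an AI system"],
        RiskTier.MINIMAL: [],
    }[tier]
    if general_purpose:
        duties = duties + ["general-purpose AI transparency requirements"]
    return duties

print(obligations(RiskTier.HIGH))
```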
Articles 2.3 and 2.6 exempt AI systems used for military or national security purposes or pure scientific research and development from the AI Act. [15]
Article 5.2 bans algorithmic video surveillance only if it is conducted in real time. Exceptions allowing real-time algorithmic video surveillance include policing aims including "a real and present or real and foreseeable threat of terrorist attack". [15]
Recital 31 of the act states that it aims to prohibit "AI systems providing social scoring of natural persons by public or private actors", but allows for "lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law." [20] La Quadrature du Net interprets this exemption as permitting sector-specific social scoring systems, [15] such as the suspicion score used by the French family payments agency Caisse d'allocations familiales. [21] [15]
The AI Act establishes various new bodies in Article 64 and the following articles. These bodies are tasked with implementing and enforcing the Act. The approach combines EU-level coordination with national implementation, involving both public authorities and private sector participation.
The following new bodies will be established: [22] [23]

1. AI Office: attached to the European Commission, it coordinates the implementation of the Act across member states and supervises providers of general-purpose AI.
2. European Artificial Intelligence Board: composed of one representative from each member state, it advises and assists the Commission and member states to ensure the consistent application of the Act.
3. Advisory Forum: a body of stakeholders from industry, startups, SMEs, civil society and academia, providing technical expertise to the Board and the Commission.
4. Scientific Panel of Independent Experts: provides technical advice and may issue alerts about risks posed by general-purpose AI models.
While the establishment of new bodies is planned at the EU level, member states will have to designate "national competent authorities". [24] These authorities will be responsible for ensuring the application and implementation of the AI Act, and for conducting "market surveillance". [25] They will verify that AI systems comply with the regulation, notably by checking the proper performance of conformity assessments and by appointing third parties to carry out external conformity assessments.
The Act regulates entry into the EU internal market using the New Legislative Framework. It contains essential requirements that all AI systems must meet to access the EU market. These essential requirements are passed on to European Standardisation Organisations, which develop technical standards that detail them further. [26] These standards are developed by CEN/CENELEC JTC 21. [27]
The Act mandates that member states establish their own notifying bodies. Conformity assessments are conducted to verify whether AI systems comply with the standards set out in the AI Act. [28] This assessment can be done in two ways: either through self-assessment, where the AI system provider checks conformity, or through third-party conformity assessment, where the notifying body conducts the assessment. [19] Notifying bodies also have the authority to carry out audits to ensure proper conformity assessments. [29]
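A highly simplified decision sketch of the two routes described above; the actual allocation between self-assessment and notified-body assessment in the Act is more granular, and the function name and parameters here are hypothetical:

```python
def assessment_route(high_risk: bool, harmonised_standards_applied: bool) -> str:
    """Hypothetical, simplified choice between the two conformity routes.

    Assumption for this sketch: providers applying harmonised standards may
    self-assess, while others need a third-party assessment. The Act's real
    allocation also depends on the type of AI system involved.
    """
    if not high_risk:
        return "no conformity assessment required"
    if harmonised_standards_applied:
        return "internal control: self-assessment by the provider"
    return "third-party assessment by a notified body"

print(assessment_route(high_risk=True, harmonised_standards_applied=False))
```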
Criticism has arisen regarding the fact that many high-risk AI systems do not require third-party conformity assessments. [30] [31] [32] Some commentators argue that independent third-party assessments are necessary for high-risk AI systems to ensure safety before deployment. Legal scholars have suggested that AI systems capable of generating deepfakes for political misinformation or creating non-consensual intimate imagery should be classified as high-risk and subjected to stricter regulation. [33]
In February 2020, the European Commission published "White Paper on Artificial Intelligence – A European approach to excellence and trust". [34] In October 2020, debates between EU leaders took place in the European Council. On 21 April 2021, the AI Act was officially proposed by the Commission. [11] On 6 December 2022, the Council of the EU adopted its general approach, allowing negotiations to begin with the European Parliament. On 9 December 2023, after three days of "marathon" talks, the EU Council and Parliament concluded an agreement. [35] [36]
The law was passed in the European Parliament on 13 March 2024, by a vote of 523 for, 46 against, and 49 abstaining. [37] It was approved by the EU Council on 21 May 2024. [13] It entered into force on 1 August 2024, [3] 20 days after being published in the Official Journal on 12 July 2024. [12] [38] Its provisions become applicable only after a delay that depends on the type of application: 6 months for bans on "unacceptable risk" AI systems, 9 months for codes of practice, 12 months for general-purpose AI systems, 36 months for some obligations related to "high-risk" AI systems, and 24 months for everything else. [38] [37]
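Counting those delays forward from the entry into force gives the approximate applicability dates, as in the sketch below (naive calendar arithmetic; the Act itself pins exact dates, e.g. 2 February 2025 for the bans, one day later than the naive sum):

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Naive calendar-month addition (safe here since day 1 exists in every month)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Staggered delays from the text, in months after entry into force.
delays = {
    "bans on 'unacceptable risk' systems": 6,
    "codes of practice": 9,
    "general-purpose AI obligations": 12,
    "most remaining provisions": 24,
    "certain 'high-risk' obligations": 36,
}

for provision, months in delays.items():
    print(f"{provision}: ~{add_months(ENTRY_INTO_FORCE, months).isoformat()}")
# e.g. bans on 'unacceptable risk' systems: ~2025-02-01
```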
Experts have argued that though the jurisdiction of the law is European, it could have far-reaching implications for international companies that plan to expand to Europe. [39] Anu Bradford, a law professor at Columbia, has argued that the law provides significant momentum to the worldwide movement to regulate AI technologies. [39]
Amnesty International criticized the AI Act for not completely banning real-time facial recognition, which it said could damage "human rights, civil space and rule of law" in the European Union. It also criticized the absence of a ban on exporting AI technologies that can harm human rights. [39]
Some tech watchdogs have argued that there were major loopholes in the law that would allow large tech monopolies to entrench their advantage in AI, or to lobby to weaken rules. [40] [41] Some startups welcomed the clarification the act provides, while others argued the additional regulation would make European startups uncompetitive compared to American and Chinese startups. [41] La Quadrature du Net (LQDN) described the AI Act as "tailor-made for the tech industry, European police forces as well as other large bureaucracies eager to automate social control". LQDN said that the role of self-regulation and exemptions in the act renders it "largely incapable of standing in the way of the social, political and environmental damage linked to the proliferation of AI". [15]
The presence of the CE marking on commercial products indicates that the manufacturer or importer affirms the goods' conformity with European health, safety, and environmental protection standards. It is not a quality indicator or a certification mark. The CE marking is required for goods sold in the European Economic Area (EEA); goods sold elsewhere may also carry the mark.
A medical device is any device intended to be used for medical purposes. Significant potential for hazards is inherent when using a device for medical purposes, and thus medical devices must be proven safe and effective with reasonable assurance before regulating governments allow their marketing. As a general rule, as the associated risk of the device increases, the amount of testing required to establish safety and efficacy also increases. Further, as associated risk increases, the potential benefit to the patient must also increase.
Know your customer (KYC) guidelines and regulations in financial services require professionals to verify the identity, suitability, and risks involved in maintaining a business relationship with a customer. The procedures fit within the broader scope of anti-money laundering (AML) and counter-terrorism financing (CTF) regulations.
Type approval or certificate of conformity is granted to a product that meets a minimum set of regulatory, technical and safety requirements. Generally, type approval is required before a product is allowed to be sold in a particular country, so the requirements for a given product will vary around the world. Processes and certifications known as type approval in English are often called homologation, or some cognate expression, in other European languages.
Export control is legislation that regulates the export of goods, software and technology. Some items could potentially be useful for purposes that are contrary to the interest of the exporting country. These items are considered to be controlled, and their export is regulated to restrict their harmful use. Many governments implement export controls. Typically, legislation lists and classifies the controlled items, classifies the destinations, and requires exporters to apply for a licence from a local government department.
The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.
The CLP Regulation is a European Union regulation from 2008, which aligns the European Union system of classification, labelling and packaging of chemical substances and mixtures to the Globally Harmonised System (GHS). It is expected to facilitate global trade and the harmonised communication of hazard information of chemicals and to promote regulatory efficiency. It complements the 2006 Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) Regulation and replaces an older system contained in the Dangerous Substances Directive (67/548/EEC) and the Dangerous Preparations Directive (1999/45/EC).
The regulation of chemicals is the legislative intent of a variety of national laws or international initiatives such as agreements, strategies or conventions. These international initiatives define the policy of further regulations to be implemented locally as well as exposure or emission limits. Often, regulatory agencies oversee the enforcement of these laws.
The European Banking Authority (EBA) is a regulatory agency of the European Union headquartered in La Défense, Île-de-France. Its activities include conducting stress tests on European banks to increase transparency in the European financial system and identifying weaknesses in banks' capital structures.
Regulation No. 305/2011 of the European Parliament and of the Council of the European Union is a regulation of 9 March 2011 which lays down harmonised conditions for the marketing of construction products, replacing the earlier (1989) Construction Products Directive (89/106/EEC). This EU regulation is designed to simplify and clarify the existing framework for the placing on the market of construction products.
The General Data Protection Regulation, abbreviated GDPR (French: RGPD), is a European Union regulation on information privacy in the European Union (EU) and the European Economic Area (EEA). The GDPR is an important component of EU privacy law and human rights law, in particular Article 8(1) of the Charter of Fundamental Rights of the European Union. It also governs the transfer of personal data outside the EU and EEA. The GDPR's goals are to enhance individuals' control and rights over their personal information and to simplify the regulations for international business. It supersedes the Data Protection Directive 95/46/EC and, among other things, simplifies the terminology.
The migration and asylum policy of the European Union is within the area of freedom, security and justice, established to develop and harmonise principles and measures used by member countries of the European Union to regulate migration processes and to manage issues concerning asylum and refugee status in the European Union.
Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules and public sector policies for the promotion and regulation of algorithms, particularly in artificial intelligence and machine learning. For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union. Regulation of AI is considered necessary both to encourage AI and to manage the associated risks, but it is challenging. The regulation of blockchain algorithms is another emerging topic, often discussed alongside the regulation of AI algorithms. Many countries have enacted regulations of high-frequency trading, which is shifting into the realm of AI algorithms as technology progresses.
Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD.
The Digital Services Act (DSA) is an EU regulation adopted in 2022 that addresses illegal content, transparent advertising and disinformation. It updates the Electronic Commerce Directive 2000 in EU law, and was proposed alongside the Digital Markets Act (DMA).
The Digital Markets Act (DMA) is an EU regulation that aims to make the digital economy fairer and more contestable. The regulation entered into force on 1 November 2022 and became applicable, for the most part, on 2 May 2023.
The Data Act is a European Union regulation which aims to facilitate and promote the exchange and use of data within the European Economic Area.
The Cyber Resilience Act (CRA) is an EU regulation proposed on 15 September 2022 by the European Commission for improving cybersecurity and cyber resilience in the EU through common cybersecurity standards for products with digital elements in the EU, such as required incident reports and automatic security updates. Products with digital elements mainly are hardware and software whose "intended and foreseeable use includes direct or indirect data connection to a device or network".
Discussions on regulation of artificial intelligence in the United States have included topics such as the timeliness of regulating AI, the nature of the federal regulatory framework to govern and promote AI, including what agency should lead, the regulatory and governing powers of that agency, and how to update regulations in the face of rapidly changing technology, as well as the roles of state governments and courts.